@@ -0,0 +1,72 b'' | |||||
|
1 | = Mercurial 6.8rc0 = | |||
|
2 | ||||
|
3 | /!\ This is a tentative release; any and all notes below are subject to change or removal. | |||
|
4 | ||||
|
5 | As usual, a *lot* of patches don't make it to this list. | |||
|
6 | ||||
|
7 | == New Features or performance improvements == | |||
|
8 | ||||
|
9 | * Phases have been reworked to improve their general performance | |||
|
10 | * revset: stop serializing node when using "%ln" | |||
|
11 | * phases: convert remote phase root to node while reading them | |||
|
12 | * phases: use revision number in new_heads | |||
|
13 | * phases: use revision number in analyze_remote_phases | |||
|
14 | * phases: stop using `repo.set` in `remotephasessummary` | |||
|
15 | * phases: move RemotePhasesSummary to revision number | |||
|
16 | * phases: use revision number in `_pushdiscoveryphase` | |||
|
17 | * phases: introduce an efficient way to access revisions in a set | |||
|
18 | * phases: rework the logic of _pushdiscoveryphase to bound complexity | |||
|
19 | * The Rust working copy code is now used in more places: | |||
|
20 | * matchers: support patternmatcher in rust | |||
|
21 | * dirstate: remove the python-side whitelist of allowed matchers | |||
|
22 | * stream-clone: disable gc for `_entries_walk` duration | |||
|
23 | * stream-clone: disable gc for the initial section for the v3 format | |||
|
24 | * postincoming: avoid computing branchhead if no report will be posted | |||
|
25 | * stream-clone: disable gc for the entry listing section for the v2 format | |||
|
26 | * perf: allow profiling of more than one run | |||
|
27 | * perf: run the gc before each run | |||
|
28 | * perf: start recording total time after warming | |||
|
29 | * perf: clear vfs audit_cache before each run | |||
|
30 | * outgoing: rework the handling of the `missingroots` case to be faster | |||
|
31 | * outgoing: add a simple fastpath when there is no common | |||
|
32 | * tags-cache: skip the filternode step if we are not going to use it | |||
|
33 | * tags-cache: directly operate on rev-num warming hgtagsfnodescache | |||
|
34 | * tags-cache: directly perform a minimal walk for hgtagsfnodescache warming | |||
|
35 | * exchange: improve computation of relevant markers for large repos | |||
|
36 | ||||
|
37 | ||||
|
38 | == New Experimental Features == | |||
|
39 | ||||
|
40 | * Introduce a new experimental branch cache "v3": | |||
|
41 | * branchcache: add more test for the logic around obsolescence and branch heads | |||
|
42 | * branchcache: skip entries that are topological heads in the on disk file | |||
|
43 | * branchcache: add a "pure topological head" fast path | |||
|
44 | * branchcache: allow to detect "pure topological case" for branchmap | |||
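The new format is opt-in; a minimal sketch of enabling it, using the experimental knob exercised by the branch cache tests included in this series:

    [experimental]
    branch-cache-v3 = yes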
|
45 | ||||
|
46 | ||||
|
47 | == Bug Fixes == | |||
|
48 | ||||
|
49 | * perf-stream-locked-section: actually use v1 generation when requested | |||
|
50 | * perf-stream-locked-section: fix the call to the v3 generator | |||
|
51 | * perf-stream-locked-section: advertise the right version key in the help | |||
|
52 | * stream: in v3, skip the "size" fast path if the entries have some unknown size | |||
|
53 | * stream-clone: stop getting the file size of all file in v3 | |||
|
54 | * streamclone: stop listing files for entries that have no volatile files | |||
|
55 | * perf-stream-consume: use the source repository config when applying | |||
|
56 | * bundle: do not check the changegroup version if no changegroup is included | |||
|
57 | * perf: create the temporary target next to the source in stream-consume | |||
|
58 | * bundlespec: fix the "streamv2" and "streamv3-exp" variant | |||
|
59 | * push: rework the computation of fallbackheads to be correct | |||
|
60 | * profiler: flush after writing the profiler output | |||
|
61 | * base-revsets: use an author that actually exercises a lot of changesets | |||
|
62 | * hgrc: search XDG_CONFIG_HOME on mac | |||
|
63 | * clonebundles: add missing newline to legacy response | |||
|
64 | * narrow: add a test for linkrev computation done during widen | |||
|
65 | ||||
|
66 | == Backwards Compatibility Changes == | |||
|
67 | ||||
|
68 | == Internal API Changes == | |||
|
69 | ||||
|
70 | == Miscellaneous == | |||
|
71 | ||||
|
72 | * obsolete: quote the feature name |
\ No newline at end of file
@@ -0,0 +1,563 b'' | |||||
|
1 | ================================================================ | |||
|
2 | test the interaction of the branch cache with obsolete changeset | |||
|
3 | ================================================================ | |||
|
4 | ||||
|
5 | Some corner cases have been covered by unrelated tests (like rebase ones); this | |||
|
6 | file is meant to gather explicit testing of those. | |||
|
7 | ||||
|
8 | See also: test-obsolete-checkheads.t | |||
|
9 | ||||
|
10 | #testcases v2 v3 | |||
|
11 | ||||
|
12 | $ cat >> $HGRCPATH << EOF | |||
|
13 | > [phases] | |||
|
14 | > publish = false | |||
|
15 | > [experimental] | |||
|
16 | > evolution = all | |||
|
17 | > server.allow-hidden-access = * | |||
|
18 | > EOF | |||
|
19 | ||||
|
20 | #if v3 | |||
|
21 | $ cat <<EOF >> $HGRCPATH | |||
|
22 | > [experimental] | |||
|
23 | > branch-cache-v3=yes | |||
|
24 | > EOF | |||
|
25 | $ CACHE_PREFIX=branch3-exp | |||
|
26 | #else | |||
|
27 | $ cat <<EOF >> $HGRCPATH | |||
|
28 | > [experimental] | |||
|
29 | > branch-cache-v3=no | |||
|
30 | > EOF | |||
|
31 | $ CACHE_PREFIX=branch2 | |||
|
32 | #endif | |||
|
33 | ||||
|
34 | $ show_cache() { | |||
|
35 | > for cache_file in .hg/cache/$CACHE_PREFIX*; do | |||
|
36 | > echo "##### $cache_file" | |||
|
37 | > cat $cache_file | |||
|
38 | > done | |||
|
39 | > } | |||
|
40 | ||||
|
41 | Setup graph | |||
|
42 | ############# | |||
|
43 | ||||
|
44 | $ . $RUNTESTDIR/testlib/common.sh | |||
|
45 | ||||
|
46 | graph with a single branch | |||
|
47 | -------------------------- | |||
|
48 | ||||
|
49 | We want some branching and some obsolescence | |||
|
50 | ||||
|
51 | $ hg init main-single-branch | |||
|
52 | $ cd main-single-branch | |||
|
53 | $ mkcommit root | |||
|
54 | $ mkcommit A_1 | |||
|
55 | $ mkcommit A_2 | |||
|
56 | $ hg update 'desc("A_2")' --quiet | |||
|
57 | $ mkcommit B_1 | |||
|
58 | $ mkcommit B_2 | |||
|
59 | $ mkcommit B_3 | |||
|
60 | $ mkcommit B_4 | |||
|
61 | $ hg update 'desc("A_2")' --quiet | |||
|
62 | $ mkcommit A_3 | |||
|
63 | created new head | |||
|
64 | $ mkcommit A_4 | |||
|
65 | $ hg up null --quiet | |||
|
66 | $ hg clone --noupdate . ../main-single-branch-pre-ops | |||
|
67 | $ hg log -r 'desc("A_1")' -T '{node}' > ../main-single-branch-node_A1 | |||
|
68 | $ hg log -r 'desc("A_2")' -T '{node}' > ../main-single-branch-node_A2 | |||
|
69 | $ hg log -r 'desc("A_3")' -T '{node}' > ../main-single-branch-node_A3 | |||
|
70 | $ hg log -r 'desc("A_4")' -T '{node}' > ../main-single-branch-node_A4 | |||
|
71 | $ hg log -r 'desc("B_1")' -T '{node}' > ../main-single-branch-node_B1 | |||
|
72 | $ hg log -r 'desc("B_2")' -T '{node}' > ../main-single-branch-node_B2 | |||
|
73 | $ hg log -r 'desc("B_3")' -T '{node}' > ../main-single-branch-node_B3 | |||
|
74 | $ hg log -r 'desc("B_4")' -T '{node}' > ../main-single-branch-node_B4 | |||
|
75 | ||||
|
76 | (double check the heads are right before we obsolete) | |||
|
77 | ||||
|
78 | $ hg log -R ../main-single-branch-pre-ops -G -T '{desc}\n' | |||
|
79 | o A_4 | |||
|
80 | | | |||
|
81 | o A_3 | |||
|
82 | | | |||
|
83 | | o B_4 | |||
|
84 | | | | |||
|
85 | | o B_3 | |||
|
86 | | | | |||
|
87 | | o B_2 | |||
|
88 | | | | |||
|
89 | | o B_1 | |||
|
90 | |/ | |||
|
91 | o A_2 | |||
|
92 | | | |||
|
93 | o A_1 | |||
|
94 | | | |||
|
95 | o root | |||
|
96 | ||||
|
97 | $ hg log -G -T '{desc}\n' | |||
|
98 | o A_4 | |||
|
99 | | | |||
|
100 | o A_3 | |||
|
101 | | | |||
|
102 | | o B_4 | |||
|
103 | | | | |||
|
104 | | o B_3 | |||
|
105 | | | | |||
|
106 | | o B_2 | |||
|
107 | | | | |||
|
108 | | o B_1 | |||
|
109 | |/ | |||
|
110 | o A_2 | |||
|
111 | | | |||
|
112 | o A_1 | |||
|
113 | | | |||
|
114 | o root | |||
|
115 | ||||
|
116 | ||||
|
117 | #if v2 | |||
|
118 | $ show_cache | |||
|
119 | ##### .hg/cache/branch2-served | |||
|
120 | 3d808bbc94408ea19da905596d4079357a1f28be 8 | |||
|
121 | 63ba7cd843d1e95aac1a24435befeb1909c53619 o default | |||
|
122 | 3d808bbc94408ea19da905596d4079357a1f28be o default | |||
|
123 | #else | |||
|
124 | $ show_cache | |||
|
125 | ##### .hg/cache/branch3-exp-served | |||
|
126 | tip-node=3d808bbc94408ea19da905596d4079357a1f28be tip-rev=8 topo-mode=pure | |||
|
127 | default | |||
|
128 | #endif | |||
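Aside: the v2 cache shown above stores a `tipnode tiprev [hash]` header followed by one `node state branch` line per head, while the v3 header is a single line of space-separated key=value pairs (tip-node, tip-rev, topo-mode, and optionally filtered-hash/obsolete-hash). A minimal Python sketch of reading the v3 header under that assumption (illustrative helper, not Mercurial's API):

    def parse_v3_header(line):
        # "tip-node=3d80... tip-rev=8 topo-mode=pure" -> {"tip-node": "3d80...", ...}
        return dict(item.split("=", 1) for item in line.split())

    header = parse_v3_header(
        "tip-node=3d808bbc94408ea19da905596d4079357a1f28be tip-rev=8 topo-mode=pure"
    )
    assert header["topo-mode"] == "pure"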
|
129 | $ hg log -T '{desc}\n' --rev 'head()' | |||
|
130 | B_4 | |||
|
131 | A_4 | |||
|
132 | ||||
|
133 | Obsolete a couple of changesets | |||
|
134 | ||||
|
135 | $ for d in B2 B3 B4 A4; do | |||
|
136 | > hg debugobsolete --record-parents `cat ../main-single-branch-node_$d`; | |||
|
137 | > done | |||
|
138 | 1 new obsolescence markers | |||
|
139 | obsoleted 1 changesets | |||
|
140 | 2 new orphan changesets | |||
|
141 | 1 new obsolescence markers | |||
|
142 | obsoleted 1 changesets | |||
|
143 | 1 new obsolescence markers | |||
|
144 | obsoleted 1 changesets | |||
|
145 | 1 new obsolescence markers | |||
|
146 | obsoleted 1 changesets | |||
|
147 | ||||
|
148 | (double check the result is okay) | |||
|
149 | ||||
|
150 | $ hg log -G -T '{desc}\n' | |||
|
151 | o A_3 | |||
|
152 | | | |||
|
153 | | o B_1 | |||
|
154 | |/ | |||
|
155 | o A_2 | |||
|
156 | | | |||
|
157 | o A_1 | |||
|
158 | | | |||
|
159 | o root | |||
|
160 | ||||
|
161 | $ hg heads -T '{desc}\n' | |||
|
162 | A_3 | |||
|
163 | B_1 | |||
|
164 | #if v2 | |||
|
165 | $ show_cache | |||
|
166 | ##### .hg/cache/branch2-served | |||
|
167 | 7c29ff2453bf38c75ee8982935739103c38a9284 7 f8006d64a10d35c011a5c5fa88be1e25c5929514 | |||
|
168 | 550bb31f072912453ccbb503de1d554616911e88 o default | |||
|
169 | 7c29ff2453bf38c75ee8982935739103c38a9284 o default | |||
|
170 | #else | |||
|
171 | $ show_cache | |||
|
172 | ##### .hg/cache/branch3-exp-served | |||
|
173 | filtered-hash=f8006d64a10d35c011a5c5fa88be1e25c5929514 tip-node=7c29ff2453bf38c75ee8982935739103c38a9284 tip-rev=7 topo-mode=pure | |||
|
174 | default | |||
|
175 | #endif | |||
|
176 | $ cd .. | |||
|
177 | ||||
|
178 | ||||
|
179 | Actual testing | |||
|
180 | ############## | |||
|
181 | ||||
|
182 | Revealing obsolete changeset | |||
|
183 | ---------------------------- | |||
|
184 | ||||
|
185 | Check that revealing obsolete changesets does not confuse branch computation and checks | |||
|
186 | ||||
|
187 | Revealing tipmost changeset | |||
|
188 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |||
|
189 | ||||
|
190 | ||||
|
191 | $ cp -R ./main-single-branch tmp-repo | |||
|
192 | $ cd tmp-repo | |||
|
193 | $ hg update --hidden --rev 'desc("A_4")' --quiet | |||
|
194 | updated to hidden changeset 3d808bbc9440 | |||
|
195 | (hidden revision '3d808bbc9440' is pruned) | |||
|
196 | $ hg log -G -T '{desc}\n' | |||
|
197 | @ A_4 | |||
|
198 | | | |||
|
199 | o A_3 | |||
|
200 | | | |||
|
201 | | o B_1 | |||
|
202 | |/ | |||
|
203 | o A_2 | |||
|
204 | | | |||
|
205 | o A_1 | |||
|
206 | | | |||
|
207 | o root | |||
|
208 | ||||
|
209 | $ hg heads -T '{desc}\n' | |||
|
210 | A_3 | |||
|
211 | B_1 | |||
|
212 | #if v2 | |||
|
213 | $ show_cache | |||
|
214 | ##### .hg/cache/branch2 | |||
|
215 | 3d808bbc94408ea19da905596d4079357a1f28be 8 a943c3355ad9e93654d58b1c934c7c4329a5d1d4 | |||
|
216 | 550bb31f072912453ccbb503de1d554616911e88 o default | |||
|
217 | 7c29ff2453bf38c75ee8982935739103c38a9284 o default | |||
|
218 | ##### .hg/cache/branch2-served | |||
|
219 | 3d808bbc94408ea19da905596d4079357a1f28be 8 a943c3355ad9e93654d58b1c934c7c4329a5d1d4 | |||
|
220 | 550bb31f072912453ccbb503de1d554616911e88 o default | |||
|
221 | 7c29ff2453bf38c75ee8982935739103c38a9284 o default | |||
|
222 | #else | |||
|
223 | $ show_cache | |||
|
224 | ##### .hg/cache/branch3-exp | |||
|
225 | obsolete-hash=b6d2b1f5b70f09c25c835edcae69be35f681605c tip-node=3d808bbc94408ea19da905596d4079357a1f28be tip-rev=8 | |||
|
226 | 7c29ff2453bf38c75ee8982935739103c38a9284 o default | |||
|
227 | ##### .hg/cache/branch3-exp-served | |||
|
228 | filtered-hash=f8006d64a10d35c011a5c5fa88be1e25c5929514 obsolete-hash=ac5282439f301518f362f37547fcd52bcc670373 tip-node=3d808bbc94408ea19da905596d4079357a1f28be tip-rev=8 | |||
|
229 | 7c29ff2453bf38c75ee8982935739103c38a9284 o default | |||
|
230 | #endif | |||
|
231 | ||||
|
232 | Even when computing branches from scratch | |||
|
233 | ||||
|
234 | $ rm -rf .hg/cache/branch* | |||
|
235 | $ rm -rf .hg/wcache/branch* | |||
|
236 | $ hg heads -T '{desc}\n' | |||
|
237 | A_3 | |||
|
238 | B_1 | |||
|
239 | #if v2 | |||
|
240 | $ show_cache | |||
|
241 | ##### .hg/cache/branch2-served | |||
|
242 | 3d808bbc94408ea19da905596d4079357a1f28be 8 a943c3355ad9e93654d58b1c934c7c4329a5d1d4 | |||
|
243 | 550bb31f072912453ccbb503de1d554616911e88 o default | |||
|
244 | 7c29ff2453bf38c75ee8982935739103c38a9284 o default | |||
|
245 | #else | |||
|
246 | $ show_cache | |||
|
247 | ##### .hg/cache/branch3-exp-served | |||
|
248 | filtered-hash=f8006d64a10d35c011a5c5fa88be1e25c5929514 obsolete-hash=ac5282439f301518f362f37547fcd52bcc670373 tip-node=3d808bbc94408ea19da905596d4079357a1f28be tip-rev=8 | |||
|
249 | 7c29ff2453bf38c75ee8982935739103c38a9284 o default | |||
|
250 | #endif | |||
|
251 | ||||
|
252 | And we can get back to normal | |||
|
253 | ||||
|
254 | $ hg update null --quiet | |||
|
255 | $ hg heads -T '{desc}\n' | |||
|
256 | A_3 | |||
|
257 | B_1 | |||
|
258 | #if v2 | |||
|
259 | $ show_cache | |||
|
260 | ##### .hg/cache/branch2-served | |||
|
261 | 7c29ff2453bf38c75ee8982935739103c38a9284 7 f8006d64a10d35c011a5c5fa88be1e25c5929514 | |||
|
262 | 550bb31f072912453ccbb503de1d554616911e88 o default | |||
|
263 | 7c29ff2453bf38c75ee8982935739103c38a9284 o default | |||
|
264 | #else | |||
|
265 | $ show_cache | |||
|
266 | ##### .hg/cache/branch3-exp-served | |||
|
267 | filtered-hash=f8006d64a10d35c011a5c5fa88be1e25c5929514 tip-node=7c29ff2453bf38c75ee8982935739103c38a9284 tip-rev=7 topo-mode=pure | |||
|
268 | default | |||
|
269 | #endif | |||
|
270 | ||||
|
271 | $ cd .. | |||
|
272 | $ rm -rf tmp-repo | |||
|
273 | ||||
|
274 | Revealing changeset in the middle of the changelog | |||
|
275 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |||
|
276 | ||||
|
277 | Check that revealing an obsolete changeset does not confuse branch computation and checks | |||
|
278 | ||||
|
279 | $ cp -R ./main-single-branch tmp-repo | |||
|
280 | $ cd tmp-repo | |||
|
281 | $ hg update --hidden --rev 'desc("B_3")' --quiet | |||
|
282 | updated to hidden changeset 9c996d7674bb | |||
|
283 | (hidden revision '9c996d7674bb' is pruned) | |||
|
284 | $ hg log -G -T '{desc}\n' | |||
|
285 | o A_3 | |||
|
286 | | | |||
|
287 | | @ B_3 | |||
|
288 | | | | |||
|
289 | | x B_2 | |||
|
290 | | | | |||
|
291 | | o B_1 | |||
|
292 | |/ | |||
|
293 | o A_2 | |||
|
294 | | | |||
|
295 | o A_1 | |||
|
296 | | | |||
|
297 | o root | |||
|
298 | ||||
|
299 | $ hg heads -T '{desc}\n' | |||
|
300 | A_3 | |||
|
301 | B_1 | |||
|
302 | #if v2 | |||
|
303 | $ show_cache | |||
|
304 | ##### .hg/cache/branch2 | |||
|
305 | 3d808bbc94408ea19da905596d4079357a1f28be 8 a943c3355ad9e93654d58b1c934c7c4329a5d1d4 | |||
|
306 | 550bb31f072912453ccbb503de1d554616911e88 o default | |||
|
307 | 7c29ff2453bf38c75ee8982935739103c38a9284 o default | |||
|
308 | ##### .hg/cache/branch2-served | |||
|
309 | 7c29ff2453bf38c75ee8982935739103c38a9284 7 f8006d64a10d35c011a5c5fa88be1e25c5929514 | |||
|
310 | 550bb31f072912453ccbb503de1d554616911e88 o default | |||
|
311 | 7c29ff2453bf38c75ee8982935739103c38a9284 o default | |||
|
312 | #else | |||
|
313 | $ show_cache | |||
|
314 | ##### .hg/cache/branch3-exp | |||
|
315 | obsolete-hash=b6d2b1f5b70f09c25c835edcae69be35f681605c tip-node=3d808bbc94408ea19da905596d4079357a1f28be tip-rev=8 | |||
|
316 | 7c29ff2453bf38c75ee8982935739103c38a9284 o default | |||
|
317 | ##### .hg/cache/branch3-exp-served | |||
|
318 | filtered-hash=f1456c0d675980582dda9b8edc7f13f503ce544f obsolete-hash=3e74f5349008671629e39d13d7e00d9ba94c74f7 tip-node=7c29ff2453bf38c75ee8982935739103c38a9284 tip-rev=7 | |||
|
319 | 550bb31f072912453ccbb503de1d554616911e88 o default | |||
|
320 | #endif | |||
|
321 | ||||
|
322 | Even when computing branches from scratch | |||
|
323 | ||||
|
324 | $ rm -rf .hg/cache/branch* | |||
|
325 | $ rm -rf .hg/wcache/branch* | |||
|
326 | $ hg heads -T '{desc}\n' | |||
|
327 | A_3 | |||
|
328 | B_1 | |||
|
329 | #if v2 | |||
|
330 | $ show_cache | |||
|
331 | ##### .hg/cache/branch2-served | |||
|
332 | 7c29ff2453bf38c75ee8982935739103c38a9284 7 f8006d64a10d35c011a5c5fa88be1e25c5929514 | |||
|
333 | 550bb31f072912453ccbb503de1d554616911e88 o default | |||
|
334 | 7c29ff2453bf38c75ee8982935739103c38a9284 o default | |||
|
335 | #else | |||
|
336 | $ show_cache | |||
|
337 | ##### .hg/cache/branch3-exp-served | |||
|
338 | filtered-hash=f1456c0d675980582dda9b8edc7f13f503ce544f obsolete-hash=3e74f5349008671629e39d13d7e00d9ba94c74f7 tip-node=7c29ff2453bf38c75ee8982935739103c38a9284 tip-rev=7 | |||
|
339 | 550bb31f072912453ccbb503de1d554616911e88 o default | |||
|
340 | #endif | |||
|
341 | ||||
|
342 | And we can get back to normal | |||
|
343 | ||||
|
344 | $ hg update null --quiet | |||
|
345 | $ hg heads -T '{desc}\n' | |||
|
346 | A_3 | |||
|
347 | B_1 | |||
|
348 | #if v2 | |||
|
349 | $ show_cache | |||
|
350 | ##### .hg/cache/branch2-served | |||
|
351 | 7c29ff2453bf38c75ee8982935739103c38a9284 7 f8006d64a10d35c011a5c5fa88be1e25c5929514 | |||
|
352 | 550bb31f072912453ccbb503de1d554616911e88 o default | |||
|
353 | 7c29ff2453bf38c75ee8982935739103c38a9284 o default | |||
|
354 | #else | |||
|
355 | $ show_cache | |||
|
356 | ##### .hg/cache/branch3-exp-served | |||
|
357 | filtered-hash=f8006d64a10d35c011a5c5fa88be1e25c5929514 tip-node=7c29ff2453bf38c75ee8982935739103c38a9284 tip-rev=7 topo-mode=pure | |||
|
358 | default | |||
|
359 | #endif | |||
|
360 | ||||
|
361 | $ cd .. | |||
|
362 | $ rm -rf tmp-repo | |||
|
363 | ||||
|
364 | Getting the obsolescence marker after the fact for the tip rev | |||
|
365 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |||
|
366 | ||||
|
367 | $ cp -R ./main-single-branch-pre-ops tmp-repo | |||
|
368 | $ cd tmp-repo | |||
|
369 | $ hg update --hidden --rev 'desc("A_4")' --quiet | |||
|
370 | $ hg log -G -T '{desc}\n' | |||
|
371 | @ A_4 | |||
|
372 | | | |||
|
373 | o A_3 | |||
|
374 | | | |||
|
375 | | o B_4 | |||
|
376 | | | | |||
|
377 | | o B_3 | |||
|
378 | | | | |||
|
379 | | o B_2 | |||
|
380 | | | | |||
|
381 | | o B_1 | |||
|
382 | |/ | |||
|
383 | o A_2 | |||
|
384 | | | |||
|
385 | o A_1 | |||
|
386 | | | |||
|
387 | o root | |||
|
388 | ||||
|
389 | $ hg heads -T '{desc}\n' | |||
|
390 | A_4 | |||
|
391 | B_4 | |||
|
392 | $ hg pull --rev `cat ../main-single-branch-node_A4` --remote-hidden | |||
|
393 | pulling from $TESTTMP/main-single-branch | |||
|
394 | no changes found | |||
|
395 | 1 new obsolescence markers | |||
|
396 | obsoleted 1 changesets | |||
|
397 | ||||
|
398 | branch heads are okay | |||
|
399 | ||||
|
400 | $ hg heads -T '{desc}\n' | |||
|
401 | A_3 | |||
|
402 | B_4 | |||
|
403 | #if v2 | |||
|
404 | $ show_cache | |||
|
405 | ##### .hg/cache/branch2-served | |||
|
406 | 3d808bbc94408ea19da905596d4079357a1f28be 8 ac5282439f301518f362f37547fcd52bcc670373 | |||
|
407 | 63ba7cd843d1e95aac1a24435befeb1909c53619 o default | |||
|
408 | 7c29ff2453bf38c75ee8982935739103c38a9284 o default | |||
|
409 | #else | |||
|
410 | $ show_cache | |||
|
411 | ##### .hg/cache/branch3-exp-served | |||
|
412 | obsolete-hash=ac5282439f301518f362f37547fcd52bcc670373 tip-node=3d808bbc94408ea19da905596d4079357a1f28be tip-rev=8 | |||
|
413 | 7c29ff2453bf38c75ee8982935739103c38a9284 o default | |||
|
414 | #endif | |||
|
415 | ||||
|
416 | Even when computing branches from scratch | |||
|
417 | ||||
|
418 | $ rm -rf .hg/cache/branch* | |||
|
419 | $ rm -rf .hg/wcache/branch* | |||
|
420 | $ hg heads -T '{desc}\n' | |||
|
421 | A_3 | |||
|
422 | B_4 | |||
|
423 | #if v2 | |||
|
424 | $ show_cache | |||
|
425 | ##### .hg/cache/branch2-served | |||
|
426 | 3d808bbc94408ea19da905596d4079357a1f28be 8 ac5282439f301518f362f37547fcd52bcc670373 | |||
|
427 | 63ba7cd843d1e95aac1a24435befeb1909c53619 o default | |||
|
428 | 7c29ff2453bf38c75ee8982935739103c38a9284 o default | |||
|
429 | #else | |||
|
430 | $ show_cache | |||
|
431 | ##### .hg/cache/branch3-exp-served | |||
|
432 | obsolete-hash=ac5282439f301518f362f37547fcd52bcc670373 tip-node=3d808bbc94408ea19da905596d4079357a1f28be tip-rev=8 | |||
|
433 | 7c29ff2453bf38c75ee8982935739103c38a9284 o default | |||
|
434 | #endif | |||
|
435 | ||||
|
436 | And we can get back to normal | |||
|
437 | ||||
|
438 | $ hg update null --quiet | |||
|
439 | $ hg heads -T '{desc}\n' | |||
|
440 | A_3 | |||
|
441 | B_4 | |||
|
442 | #if v2 | |||
|
443 | $ show_cache | |||
|
444 | ##### .hg/cache/branch2-served | |||
|
445 | 7c29ff2453bf38c75ee8982935739103c38a9284 7 | |||
|
446 | 63ba7cd843d1e95aac1a24435befeb1909c53619 o default | |||
|
447 | 7c29ff2453bf38c75ee8982935739103c38a9284 o default | |||
|
448 | #else | |||
|
449 | $ show_cache | |||
|
450 | ##### .hg/cache/branch3-exp-served | |||
|
451 | tip-node=7c29ff2453bf38c75ee8982935739103c38a9284 tip-rev=7 topo-mode=pure | |||
|
452 | default | |||
|
453 | #endif | |||
|
454 | ||||
|
455 | $ cd .. | |||
|
456 | $ rm -rf tmp-repo | |||
|
457 | ||||
|
458 | Getting the obsolescence marker after the fact for another rev | |||
|
459 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ | |||
|
460 | ||||
|
461 | $ cp -R ./main-single-branch-pre-ops tmp-repo | |||
|
462 | $ cd tmp-repo | |||
|
463 | $ hg update --hidden --rev 'desc("B_3")' --quiet | |||
|
464 | $ hg log -G -T '{desc}\n' | |||
|
465 | o A_4 | |||
|
466 | | | |||
|
467 | o A_3 | |||
|
468 | | | |||
|
469 | | o B_4 | |||
|
470 | | | | |||
|
471 | | @ B_3 | |||
|
472 | | | | |||
|
473 | | o B_2 | |||
|
474 | | | | |||
|
475 | | o B_1 | |||
|
476 | |/ | |||
|
477 | o A_2 | |||
|
478 | | | |||
|
479 | o A_1 | |||
|
480 | | | |||
|
481 | o root | |||
|
482 | ||||
|
483 | $ hg heads -T '{desc}\n' | |||
|
484 | A_4 | |||
|
485 | B_4 | |||
|
486 | #if v2 | |||
|
487 | $ show_cache | |||
|
488 | ##### .hg/cache/branch2-served | |||
|
489 | 3d808bbc94408ea19da905596d4079357a1f28be 8 | |||
|
490 | 63ba7cd843d1e95aac1a24435befeb1909c53619 o default | |||
|
491 | 3d808bbc94408ea19da905596d4079357a1f28be o default | |||
|
492 | #else | |||
|
493 | $ show_cache | |||
|
494 | ##### .hg/cache/branch3-exp-served | |||
|
495 | tip-node=3d808bbc94408ea19da905596d4079357a1f28be tip-rev=8 topo-mode=pure | |||
|
496 | default | |||
|
497 | #endif | |||
|
498 | ||||
|
499 | $ hg pull --rev `cat ../main-single-branch-node_B4` --remote-hidden | |||
|
500 | pulling from $TESTTMP/main-single-branch | |||
|
501 | no changes found | |||
|
502 | 3 new obsolescence markers | |||
|
503 | obsoleted 3 changesets | |||
|
504 | ||||
|
505 | branch heads are okay | |||
|
506 | ||||
|
507 | $ hg heads -T '{desc}\n' | |||
|
508 | A_4 | |||
|
509 | B_1 | |||
|
510 | #if v2 | |||
|
511 | $ show_cache | |||
|
512 | ##### .hg/cache/branch2-served | |||
|
513 | 3d808bbc94408ea19da905596d4079357a1f28be 8 f8006d64a10d35c011a5c5fa88be1e25c5929514 | |||
|
514 | 550bb31f072912453ccbb503de1d554616911e88 o default | |||
|
515 | 3d808bbc94408ea19da905596d4079357a1f28be o default | |||
|
516 | #else | |||
|
517 | $ show_cache | |||
|
518 | ##### .hg/cache/branch3-exp-served | |||
|
519 | filtered-hash=f1456c0d675980582dda9b8edc7f13f503ce544f obsolete-hash=3e74f5349008671629e39d13d7e00d9ba94c74f7 tip-node=3d808bbc94408ea19da905596d4079357a1f28be tip-rev=8 | |||
|
520 | 550bb31f072912453ccbb503de1d554616911e88 o default | |||
|
521 | #endif | |||
|
522 | ||||
|
523 | Even when computing branches from scratch | |||
|
524 | ||||
|
525 | $ rm -rf .hg/cache/branch* | |||
|
526 | $ rm -rf .hg/wcache/branch* | |||
|
527 | $ hg heads -T '{desc}\n' | |||
|
528 | A_4 | |||
|
529 | B_1 | |||
|
530 | #if v2 | |||
|
531 | $ show_cache | |||
|
532 | ##### .hg/cache/branch2-served | |||
|
533 | 3d808bbc94408ea19da905596d4079357a1f28be 8 f8006d64a10d35c011a5c5fa88be1e25c5929514 | |||
|
534 | 550bb31f072912453ccbb503de1d554616911e88 o default | |||
|
535 | 3d808bbc94408ea19da905596d4079357a1f28be o default | |||
|
536 | #else | |||
|
537 | $ show_cache | |||
|
538 | ##### .hg/cache/branch3-exp-served | |||
|
539 | filtered-hash=f1456c0d675980582dda9b8edc7f13f503ce544f obsolete-hash=3e74f5349008671629e39d13d7e00d9ba94c74f7 tip-node=3d808bbc94408ea19da905596d4079357a1f28be tip-rev=8 | |||
|
540 | 550bb31f072912453ccbb503de1d554616911e88 o default | |||
|
541 | #endif | |||
|
542 | ||||
|
543 | And we can get back to normal | |||
|
544 | ||||
|
545 | $ hg update null --quiet | |||
|
546 | $ hg heads -T '{desc}\n' | |||
|
547 | A_4 | |||
|
548 | B_1 | |||
|
549 | #if v2 | |||
|
550 | $ show_cache | |||
|
551 | ##### .hg/cache/branch2-served | |||
|
552 | 3d808bbc94408ea19da905596d4079357a1f28be 8 f8006d64a10d35c011a5c5fa88be1e25c5929514 | |||
|
553 | 550bb31f072912453ccbb503de1d554616911e88 o default | |||
|
554 | 3d808bbc94408ea19da905596d4079357a1f28be o default | |||
|
555 | #else | |||
|
556 | $ show_cache | |||
|
557 | ##### .hg/cache/branch3-exp-served | |||
|
558 | filtered-hash=f8006d64a10d35c011a5c5fa88be1e25c5929514 tip-node=3d808bbc94408ea19da905596d4079357a1f28be tip-rev=8 topo-mode=pure | |||
|
559 | default | |||
|
560 | #endif | |||
|
561 | ||||
|
562 | $ cd .. | |||
|
563 | $ rm -rf tmp-repo |
@@ -0,0 +1,627 b'' | |||||
|
1 | ============================================================================================== | |||
|
2 | Test the computation of linkrevs that is needed when sending file content after their changesets | |||
|
3 | ============================================================================================== | |||
|
4 | ||||
|
5 | Setup | |||
|
6 | ===== | |||
|
7 | ||||
|
8 | tree/flat testcases make the hashes unstable and are annoying; reinstate them later. | |||
|
9 | .. #testcases tree flat | |||
|
10 | $ . "$TESTDIR/narrow-library.sh" | |||
|
11 | ||||
|
12 | .. #if tree | |||
|
13 | .. $ cat << EOF >> $HGRCPATH | |||
|
14 | .. > [experimental] | |||
|
15 | .. > treemanifest = 1 | |||
|
16 | .. > EOF | |||
|
17 | .. #endif | |||
|
18 | ||||
|
19 | $ hg init server | |||
|
20 | $ cd server | |||
|
21 | ||||
|
22 | We build a non-linear history with some filenames that exist in parallel. | |||
|
23 | ||||
|
24 | $ echo foo > readme.txt | |||
|
25 | $ hg add readme.txt | |||
|
26 | $ hg ci -m 'root' | |||
|
27 | $ mkdir dir_x | |||
|
28 | $ echo foo > dir_x/f1 | |||
|
29 | $ echo fo0 > dir_x/f2 | |||
|
30 | $ echo f0o > dir_x/f3 | |||
|
31 | $ mkdir dir_y | |||
|
32 | $ echo bar > dir_y/f1 | |||
|
33 | $ echo 8ar > dir_y/f2 | |||
|
34 | $ echo ba9 > dir_y/f3 | |||
|
35 | $ hg add dir_x dir_y | |||
|
36 | adding dir_x/f1 | |||
|
37 | adding dir_x/f2 | |||
|
38 | adding dir_x/f3 | |||
|
39 | adding dir_y/f1 | |||
|
40 | adding dir_y/f2 | |||
|
41 | adding dir_y/f3 | |||
|
42 | $ hg ci -m 'rev_a_' | |||
|
43 | ||||
|
44 | $ hg update 'desc("rev_a_")' | |||
|
45 | 0 files updated, 0 files merged, 0 files removed, 0 files unresolved | |||
|
46 | $ echo foo-01 > dir_x/f1 | |||
|
47 | $ hg ci -m 'rev_b_0_' | |||
|
48 | ||||
|
49 | $ hg update 'desc("rev_b_0_")' | |||
|
50 | 0 files updated, 0 files merged, 0 files removed, 0 files unresolved | |||
|
51 | $ echo foo-02 > dir_x/f1 | |||
|
52 | $ hg ci -m 'rev_b_1_' | |||
|
53 | ||||
|
54 | $ hg update 'desc("rev_a_")' | |||
|
55 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved | |||
|
56 | $ mkdir dir_z | |||
|
57 | $ echo bar-01 > dir_y/f1 | |||
|
58 | $ echo 8ar-01 > dir_y/f2 | |||
|
59 | $ echo babar > dir_z/f1 | |||
|
60 | $ hg add dir_z | |||
|
61 | adding dir_z/f1 | |||
|
62 | $ hg ci -m 'rev_c_0_' | |||
|
63 | created new head | |||
|
64 | ||||
|
65 | $ hg update 'desc("rev_c_0_")' | |||
|
66 | 0 files updated, 0 files merged, 0 files removed, 0 files unresolved | |||
|
67 | $ echo celeste > dir_z/f2 | |||
|
68 | $ echo zephir > dir_z/f1 | |||
|
69 | $ hg add dir_z | |||
|
70 | adding dir_z/f2 | |||
|
71 | $ hg ci -m 'rev_c_1_' | |||
|
72 | ||||
|
73 | $ hg update 'desc("rev_b_1_")' | |||
|
74 | 3 files updated, 0 files merged, 2 files removed, 0 files unresolved | |||
|
75 | $ echo fo0-01 > dir_x/f2 | |||
|
76 | $ mkdir dir_z | |||
|
77 | $ ls dir_z | |||
|
78 | $ echo babar > dir_z/f1 | |||
|
79 | $ echo celeste > dir_z/f2 | |||
|
80 | $ echo foo > dir_z/f3 | |||
|
81 | $ hg add dir_z | |||
|
82 | adding dir_z/f1 | |||
|
83 | adding dir_z/f2 | |||
|
84 | adding dir_z/f3 | |||
|
85 | $ hg ci -m 'rev_b_2_' | |||
|
86 | ||||
|
87 | $ hg update 'desc("rev_b_2_")' | |||
|
88 | 0 files updated, 0 files merged, 0 files removed, 0 files unresolved | |||
|
89 | $ echo f0o-01 > dir_x/f3 | |||
|
90 | $ echo zephir > dir_z/f1 | |||
|
91 | $ echo arthur > dir_z/f2 | |||
|
92 | $ hg ci -m 'rev_b_3_' | |||
|
93 | ||||
|
94 | $ hg update 'desc("rev_c_1_")' | |||
|
95 | 6 files updated, 0 files merged, 1 files removed, 0 files unresolved | |||
|
96 | $ echo bar-02 > dir_y/f1 | |||
|
97 | $ echo ba9-01 > dir_y/f3 | |||
|
98 | $ echo bar > dir_z/f4 | |||
|
99 | $ hg add dir_z/ | |||
|
100 | adding dir_z/f4 | |||
|
101 | $ echo arthur > dir_z/f2 | |||
|
102 | $ hg ci -m 'rev_c_2_' | |||
|
103 | ||||
|
104 | $ hg update 'desc("rev_b_3_")' | |||
|
105 | 7 files updated, 0 files merged, 1 files removed, 0 files unresolved | |||
|
106 | $ hg merge 'desc("rev_c_2_")' | |||
|
107 | 4 files updated, 0 files merged, 0 files removed, 0 files unresolved | |||
|
108 | (branch merge, don't forget to commit) | |||
|
109 | $ echo flore > dir_z/f1 | |||
|
110 | $ echo foo-04 > dir_x/f1 | |||
|
111 | $ echo foo-01 > dir_z/f3 | |||
|
112 | $ hg ci -m 'rev_d_0_' | |||
|
113 | $ echo alexandre > dir_z/f1 | |||
|
114 | $ echo bar-01 > dir_z/f4 | |||
|
115 | $ echo bar-04 > dir_y/f1 | |||
|
116 | $ hg ci -m 'rev_d_1_' | |||
|
117 | $ hg status | |||
|
118 | $ hg status -A | |||
|
119 | C dir_x/f1 | |||
|
120 | C dir_x/f2 | |||
|
121 | C dir_x/f3 | |||
|
122 | C dir_y/f1 | |||
|
123 | C dir_y/f2 | |||
|
124 | C dir_y/f3 | |||
|
125 | C dir_z/f1 | |||
|
126 | C dir_z/f2 | |||
|
127 | C dir_z/f3 | |||
|
128 | C dir_z/f4 | |||
|
129 | C readme.txt | |||
|
130 | $ hg up null | |||
|
131 | 0 files updated, 0 files merged, 11 files removed, 0 files unresolved | |||
|
132 | ||||
|
133 | Resulting graph | |||
|
134 | ||||
|
135 | $ hg log -GT "{rev}:{node|short}: {desc}\n {files}\n" | |||
|
136 | o 10:71e6a9c7a6a2: rev_d_1_ | |||
|
137 | | dir_y/f1 dir_z/f1 dir_z/f4 | |||
|
138 | o 9:b0a0cbe5ce57: rev_d_0_ | |||
|
139 | |\ dir_x/f1 dir_z/f1 dir_z/f3 | |||
|
140 | | o 8:d04e01dcc82d: rev_c_2_ | |||
|
141 | | | dir_y/f1 dir_y/f3 dir_z/f2 dir_z/f4 | |||
|
142 | o | 7:fc05b303b551: rev_b_3_ | |||
|
143 | | | dir_x/f3 dir_z/f1 dir_z/f2 | |||
|
144 | o | 6:17fd34adb43b: rev_b_2_ | |||
|
145 | | | dir_x/f2 dir_z/f1 dir_z/f2 dir_z/f3 | |||
|
146 | | o 5:fa05dbe8eed1: rev_c_1_ | |||
|
147 | | | dir_z/f1 dir_z/f2 | |||
|
148 | | o 4:59b4258b00dc: rev_c_0_ | |||
|
149 | | | dir_y/f1 dir_y/f2 dir_z/f1 | |||
|
150 | o | 3:328f8ced5276: rev_b_1_ | |||
|
151 | | | dir_x/f1 | |||
|
152 | o | 2:0ccce83dd29b: rev_b_0_ | |||
|
153 | |/ dir_x/f1 | |||
|
154 | o 1:63f468a0fdac: rev_a_ | |||
|
155 | | dir_x/f1 dir_x/f2 dir_x/f3 dir_y/f1 dir_y/f2 dir_y/f3 | |||
|
156 | o 0:4978c5c7386b: root | |||
|
157 | readme.txt | |||
|
158 | ||||
|
159 | Save useful nodes: | |||
|
160 | ||||
|
161 | $ hg log -T '{node}' > ../rev_c_2_ --rev 'desc("rev_c_2_")' | |||
|
162 | $ hg log -T '{node}' > ../rev_b_3_ --rev 'desc("rev_b_3_")' | |||
|
163 | ||||
|
164 | Reference output | |||
|
165 | ||||
|
166 | Since we have the same file content on each side, we should get a limited number | |||
|
167 | of file revisions (and the associated linkrevs). | |||
|
168 | ||||
|
169 | These shared file revisions and the associated linkrev computation are | |||
|
170 | what fuels the complexity tested in this file. | |||
|
171 | ||||
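For context: a linkrev is the changelog revision a filelog revision points back to; when identical file content is introduced on several branches (as rev_b_* and rev_c_* do for dir_z), the stored linkrev must be the earliest changeset that introduced the file revision. A toy Python sketch of that expectation (illustrative only, not Mercurial's data structures; candidate revisions taken from the graph above):

    # dir_z/f1 node 360afd990eef is introduced both by rev_c_0_ (rev 4) and
    # rev_b_2_ (rev 6); the canonical linkrev is the smallest candidate.
    candidates = {"360afd990eef": [4, 6], "7054ee088631": [5, 7]}

    def expected_linkrev(file_node):
        return min(candidates[file_node])

    assert expected_linkrev("360afd990eef") == 4
    assert expected_linkrev("7054ee088631") == 5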
|
172 | $ cat > ../linkrev-check.sh << EOF | |||
|
173 | > echo '# expected linkrev for dir_z/f1' | |||
|
174 | > hg log -T '0 {rev}\n' --rev 'min(desc(rev_b_2_) or desc(rev_c_0_))' | |||
|
175 | > hg log -T '1 {rev}\n' --rev 'min(desc(rev_b_3_) or desc(rev_c_1_))' | |||
|
176 | > hg log -T '2 {rev}\n' --rev 'min(desc(rev_d_0_))' | |||
|
177 | > hg log -T '3 {rev}\n' --rev 'min(desc(rev_d_1_))' | |||
|
178 | > hg debugindex dir_z/f1 | |||
|
179 | > # rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
180 | > # 0 4 360afd990eef 000000000000 000000000000 | |||
|
181 | > # 1 5 7054ee088631 360afd990eef 000000000000 | |||
|
182 | > # 2 9 6bb290463f21 7054ee088631 000000000000 | |||
|
183 | > # 3 10 91fec784ff86 6bb290463f21 000000000000 | |||
|
184 | > echo '# expected linkrev for dir_z/f2' | |||
|
185 | > hg log -T '0 {rev}\n' --rev 'min(desc(rev_c_1_) or desc(rev_b_2_))' | |||
|
186 | > hg log -T '1 {rev}\n' --rev 'min(desc(rev_c_2_) or desc(rev_b_3_))' | |||
|
187 | > hg debugindex dir_z/f2 | |||
|
188 | > # rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
189 | > # 0 5 093bb0f8a0fb 000000000000 000000000000 | |||
|
190 | > # 1 7 0f47e254cb19 093bb0f8a0fb 000000000000 | |||
|
191 | > if hg files --rev tip | grep dir_z/f3 > /dev/null; then | |||
|
192 | > echo '# expected linkrev for dir_z/f3' | |||
|
193 | > hg log -T '0 {rev}\n' --rev 'desc(rev_b_2_)' | |||
|
194 | > hg log -T '1 {rev}\n' --rev 'desc(rev_d_0_)' | |||
|
195 | > hg debugindex dir_z/f3 | |||
|
196 | > # rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
197 | > # 0 6 2ed2a3912a0b 000000000000 000000000000 | |||
|
198 | > # 1 9 7c6d649320ae 2ed2a3912a0b 000000000000 | |||
|
199 | > fi | |||
|
200 | > if hg files --rev tip | grep dir_z/f4 > /dev/null; then | |||
|
201 | > echo '# expected linkrev for dir_z/f4' | |||
|
202 | > hg log -T '0 {rev}\n' --rev 'desc(rev_c_2_)' | |||
|
203 | > hg log -T '1 {rev}\n' --rev 'desc(rev_d_1_)' | |||
|
204 | > hg debugindex dir_z/f4 | |||
|
205 | > # rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
206 | > # 0 8 b004912a8510 000000000000 000000000000 | |||
|
207 | > # 1 10 9f85b3b95e70 b004912a8510 000000000000 | |||
|
208 | > fi | |||
|
209 | > echo '# verify the repository' | |||
|
210 | > hg verify | |||
|
211 | > EOF | |||
|
212 | $ sh ../linkrev-check.sh | |||
|
213 | # expected linkrev for dir_z/f1 | |||
|
214 | 0 4 | |||
|
215 | 1 5 | |||
|
216 | 2 9 | |||
|
217 | 3 10 | |||
|
218 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
219 | 0 4 360afd990eef 000000000000 000000000000 | |||
|
220 | 1 5 7054ee088631 360afd990eef 000000000000 | |||
|
221 | 2 9 6bb290463f21 7054ee088631 000000000000 | |||
|
222 | 3 10 91fec784ff86 6bb290463f21 000000000000 | |||
|
223 | # expected linkrev for dir_z/f2 | |||
|
224 | 0 5 | |||
|
225 | 1 7 | |||
|
226 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
227 | 0 5 093bb0f8a0fb 000000000000 000000000000 | |||
|
228 | 1 7 0f47e254cb19 093bb0f8a0fb 000000000000 | |||
|
229 | # expected linkrev for dir_z/f3 | |||
|
230 | 0 6 | |||
|
231 | 1 9 | |||
|
232 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
233 | 0 6 2ed2a3912a0b 000000000000 000000000000 | |||
|
234 | 1 9 7c6d649320ae 2ed2a3912a0b 000000000000 | |||
|
235 | # expected linkrev for dir_z/f4 | |||
|
236 | 0 8 | |||
|
237 | 1 10 | |||
|
238 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
239 | 0 8 b004912a8510 000000000000 000000000000 | |||
|
240 | 1 10 9f85b3b95e70 b004912a8510 000000000000 | |||
|
241 | # verify the repository | |||
|
242 | checking changesets | |||
|
243 | checking manifests | |||
|
244 | crosschecking files in changesets and manifests | |||
|
245 | checking files | |||
|
246 | checking dirstate | |||
|
247 | checked 11 changesets with 27 changes to 11 files | |||
|
248 | ||||
|
249 | $ cd .. | |||
|
250 | ||||
|
251 | Test linkrev computation for various widening scenarios | |||
|
252 | ======================================================= | |||
|
253 | ||||
|
254 | Cloning all revisions initially | |||
|
255 | ------------------------------- | |||
|
256 | ||||
|
257 | $ hg clone --narrow ssh://user@dummy/server --include dir_x --include dir_y client_xy_rev_all --noupdate | |||
|
258 | requesting all changes | |||
|
259 | adding changesets | |||
|
260 | adding manifests | |||
|
261 | adding file changes | |||
|
262 | added 11 changesets with 16 changes to 6 files | |||
|
263 | new changesets 4978c5c7386b:71e6a9c7a6a2 | |||
|
264 | $ cd client_xy_rev_all | |||
|
265 | $ hg log -GT "{rev}:{node|short}: {desc}\n {files}\n" | |||
|
266 | o 10:71e6a9c7a6a2: rev_d_1_ | |||
|
267 | | dir_y/f1 dir_z/f1 dir_z/f4 | |||
|
268 | o 9:b0a0cbe5ce57: rev_d_0_ | |||
|
269 | |\ dir_x/f1 dir_z/f1 dir_z/f3 | |||
|
270 | | o 8:d04e01dcc82d: rev_c_2_ | |||
|
271 | | | dir_y/f1 dir_y/f3 dir_z/f2 dir_z/f4 | |||
|
272 | o | 7:fc05b303b551: rev_b_3_ | |||
|
273 | | | dir_x/f3 dir_z/f1 dir_z/f2 | |||
|
274 | o | 6:17fd34adb43b: rev_b_2_ | |||
|
275 | | | dir_x/f2 dir_z/f1 dir_z/f2 dir_z/f3 | |||
|
276 | | o 5:fa05dbe8eed1: rev_c_1_ | |||
|
277 | | | dir_z/f1 dir_z/f2 | |||
|
278 | | o 4:59b4258b00dc: rev_c_0_ | |||
|
279 | | | dir_y/f1 dir_y/f2 dir_z/f1 | |||
|
280 | o | 3:328f8ced5276: rev_b_1_ | |||
|
281 | | | dir_x/f1 | |||
|
282 | o | 2:0ccce83dd29b: rev_b_0_ | |||
|
283 | |/ dir_x/f1 | |||
|
284 | o 1:63f468a0fdac: rev_a_ | |||
|
285 | | dir_x/f1 dir_x/f2 dir_x/f3 dir_y/f1 dir_y/f2 dir_y/f3 | |||
|
286 | o 0:4978c5c7386b: root | |||
|
287 | readme.txt | |||
|
288 | ||||
|
289 | $ hg tracked --addinclude dir_z | |||
|
290 | comparing with ssh://user@dummy/server | |||
|
291 | searching for changes | |||
|
292 | adding changesets | |||
|
293 | adding manifests | |||
|
294 | adding file changes | |||
|
295 | added 0 changesets with 10 changes to 4 files | |||
|
296 | $ sh ../linkrev-check.sh | |||
|
297 | # expected linkrev for dir_z/f1 | |||
|
298 | 0 4 | |||
|
299 | 1 5 | |||
|
300 | 2 9 | |||
|
301 | 3 10 | |||
|
302 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
303 | 0 4 360afd990eef 000000000000 000000000000 | |||
|
304 | 1 5 7054ee088631 360afd990eef 000000000000 | |||
|
305 | 2 9 6bb290463f21 7054ee088631 000000000000 | |||
|
306 | 3 10 91fec784ff86 6bb290463f21 000000000000 | |||
|
307 | # expected linkrev for dir_z/f2 | |||
|
308 | 0 5 | |||
|
309 | 1 7 | |||
|
310 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
311 | 0 5 093bb0f8a0fb 000000000000 000000000000 | |||
|
312 | 1 7 0f47e254cb19 093bb0f8a0fb 000000000000 | |||
|
313 | # expected linkrev for dir_z/f3 | |||
|
314 | 0 6 | |||
|
315 | 1 9 | |||
|
316 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
317 | 0 6 2ed2a3912a0b 000000000000 000000000000 | |||
|
318 | 1 9 7c6d649320ae 2ed2a3912a0b 000000000000 | |||
|
319 | # expected linkrev for dir_z/f4 | |||
|
320 | 0 8 | |||
|
321 | 1 10 | |||
|
322 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
323 | 0 8 b004912a8510 000000000000 000000000000 | |||
|
324 | 1 10 9f85b3b95e70 b004912a8510 000000000000 | |||
|
325 | # verify the repository | |||
|
326 | checking changesets | |||
|
327 | checking manifests | |||
|
328 | crosschecking files in changesets and manifests | |||
|
329 | checking files | |||
|
330 | checking dirstate | |||
|
331 | checked 11 changesets with 26 changes to 10 files | |||
|
332 | $ cd .. | |||
|
333 | ||||
|
334 | ||||
|
335 | Cloning only branch b | |||
|
336 | --------------------- | |||
|
337 | ||||
|
338 | $ hg clone --narrow ssh://user@dummy/server --rev `cat ./rev_b_3_` --include dir_x --include dir_y client_xy_rev_from_b_only --noupdate | |||
|
339 | adding changesets | |||
|
340 | adding manifests | |||
|
341 | adding file changes | |||
|
342 | added 6 changesets with 10 changes to 6 files | |||
|
343 | new changesets 4978c5c7386b:fc05b303b551 | |||
|
344 | $ cd client_xy_rev_from_b_only | |||
|
345 | $ hg log -GT "{rev}:{node|short}: {desc}\n {files}\n" | |||
|
346 | o 5:fc05b303b551: rev_b_3_ | |||
|
347 | | dir_x/f3 dir_z/f1 dir_z/f2 | |||
|
348 | o 4:17fd34adb43b: rev_b_2_ | |||
|
349 | | dir_x/f2 dir_z/f1 dir_z/f2 dir_z/f3 | |||
|
350 | o 3:328f8ced5276: rev_b_1_ | |||
|
351 | | dir_x/f1 | |||
|
352 | o 2:0ccce83dd29b: rev_b_0_ | |||
|
353 | | dir_x/f1 | |||
|
354 | o 1:63f468a0fdac: rev_a_ | |||
|
355 | | dir_x/f1 dir_x/f2 dir_x/f3 dir_y/f1 dir_y/f2 dir_y/f3 | |||
|
356 | o 0:4978c5c7386b: root | |||
|
357 | readme.txt | |||
|
358 | ||||
|
359 | $ hg tracked --addinclude dir_z | |||
|
360 | comparing with ssh://user@dummy/server | |||
|
361 | searching for changes | |||
|
362 | adding changesets | |||
|
363 | adding manifests | |||
|
364 | adding file changes | |||
|
365 | added 0 changesets with 5 changes to 3 files | |||
|
366 | $ sh ../linkrev-check.sh | |||
|
367 | # expected linkrev for dir_z/f1 | |||
|
368 | 0 4 | |||
|
369 | 1 5 | |||
|
370 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
371 | 0 4 360afd990eef 000000000000 000000000000 | |||
|
372 | 1 5 7054ee088631 360afd990eef 000000000000 | |||
|
373 | # expected linkrev for dir_z/f2 | |||
|
374 | 0 4 | |||
|
375 | 1 5 | |||
|
376 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
377 | 0 4 093bb0f8a0fb 000000000000 000000000000 | |||
|
378 | 1 5 0f47e254cb19 093bb0f8a0fb 000000000000 | |||
|
379 | # expected linkrev for dir_z/f3 | |||
|
380 | 0 4 | |||
|
381 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
382 | 0 4 2ed2a3912a0b 000000000000 000000000000 | |||
|
383 | # verify the repository | |||
|
384 | checking changesets | |||
|
385 | checking manifests | |||
|
386 | crosschecking files in changesets and manifests | |||
|
387 | checking files | |||
|
388 | checking dirstate | |||
|
389 | checked 6 changesets with 15 changes to 9 files | |||
|
390 | $ cd .. | |||
|
391 | ||||
|
392 | ||||
|
393 | Cloning only branch c | |||
|
394 | --------------------- | |||
|
395 | ||||
|
396 | $ hg clone --narrow ssh://user@dummy/server --rev `cat ./rev_c_2_` --include dir_x --include dir_y client_xy_rev_from_c_only --noupdate | |||
|
397 | adding changesets | |||
|
398 | adding manifests | |||
|
399 | adding file changes | |||
|
400 | added 5 changesets with 10 changes to 6 files | |||
|
401 | new changesets 4978c5c7386b:d04e01dcc82d | |||
|
402 | $ cd client_xy_rev_from_c_only | |||
|
403 | $ hg log -GT "{rev}:{node|short}: {desc}\n {files}\n" | |||
|
404 | o 4:d04e01dcc82d: rev_c_2_ | |||
|
405 | | dir_y/f1 dir_y/f3 dir_z/f2 dir_z/f4 | |||
|
406 | o 3:fa05dbe8eed1: rev_c_1_ | |||
|
407 | | dir_z/f1 dir_z/f2 | |||
|
408 | o 2:59b4258b00dc: rev_c_0_ | |||
|
409 | | dir_y/f1 dir_y/f2 dir_z/f1 | |||
|
410 | o 1:63f468a0fdac: rev_a_ | |||
|
411 | | dir_x/f1 dir_x/f2 dir_x/f3 dir_y/f1 dir_y/f2 dir_y/f3 | |||
|
412 | o 0:4978c5c7386b: root | |||
|
413 | readme.txt | |||
|
414 | ||||
|
415 | $ hg tracked --addinclude dir_z | |||
|
416 | comparing with ssh://user@dummy/server | |||
|
417 | searching for changes | |||
|
418 | adding changesets | |||
|
419 | adding manifests | |||
|
420 | adding file changes | |||
|
421 | added 0 changesets with 5 changes to 3 files | |||
|
422 | $ sh ../linkrev-check.sh | |||
|
423 | # expected linkrev for dir_z/f1 | |||
|
424 | 0 2 | |||
|
425 | 1 3 | |||
|
426 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
427 | 0 2 360afd990eef 000000000000 000000000000 | |||
|
428 | 1 3 7054ee088631 360afd990eef 000000000000 | |||
|
429 | # expected linkrev for dir_z/f2 | |||
|
430 | 0 3 | |||
|
431 | 1 4 | |||
|
432 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
433 | 0 3 093bb0f8a0fb 000000000000 000000000000 | |||
|
434 | 1 4 0f47e254cb19 093bb0f8a0fb 000000000000 | |||
|
435 | # expected linkrev for dir_z/f4 | |||
|
436 | 0 4 | |||
|
437 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
438 | 0 4 b004912a8510 000000000000 000000000000 | |||
|
439 | # verify the repository | |||
|
440 | checking changesets | |||
|
441 | checking manifests | |||
|
442 | crosschecking files in changesets and manifests | |||
|
443 | checking files | |||
|
444 | checking dirstate | |||
|
445 | checked 5 changesets with 15 changes to 9 files | |||
|
446 | $ cd .. | |||
|
447 | ||||
|
448 | Cloning branch b first | |||
|
449 | ---------------------- | |||
|
450 | ||||
|
451 | $ hg clone --narrow ssh://user@dummy/server --rev `cat ./rev_b_3_` --include dir_x --include dir_y client_xy_rev_from_b_first --noupdate | |||
|
452 | adding changesets | |||
|
453 | adding manifests | |||
|
454 | adding file changes | |||
|
455 | added 6 changesets with 10 changes to 6 files | |||
|
456 | new changesets 4978c5c7386b:fc05b303b551 | |||
|
457 | $ cd client_xy_rev_from_b_first | |||
|
458 | $ hg pull | |||
|
459 | pulling from ssh://user@dummy/server | |||
|
460 | searching for changes | |||
|
461 | adding changesets | |||
|
462 | adding manifests | |||
|
463 | adding file changes | |||
|
464 | added 5 changesets with 6 changes to 4 files | |||
|
465 | new changesets 59b4258b00dc:71e6a9c7a6a2 | |||
|
466 | (run 'hg update' to get a working copy) | |||
|
467 | $ hg log -GT "{rev}:{node|short}: {desc}\n {files}\n" | |||
|
468 | o 10:71e6a9c7a6a2: rev_d_1_ | |||
|
469 | | dir_y/f1 dir_z/f1 dir_z/f4 | |||
|
470 | o 9:b0a0cbe5ce57: rev_d_0_ | |||
|
471 | |\ dir_x/f1 dir_z/f1 dir_z/f3 | |||
|
472 | | o 8:d04e01dcc82d: rev_c_2_ | |||
|
473 | | | dir_y/f1 dir_y/f3 dir_z/f2 dir_z/f4 | |||
|
474 | | o 7:fa05dbe8eed1: rev_c_1_ | |||
|
475 | | | dir_z/f1 dir_z/f2 | |||
|
476 | | o 6:59b4258b00dc: rev_c_0_ | |||
|
477 | | | dir_y/f1 dir_y/f2 dir_z/f1 | |||
|
478 | o | 5:fc05b303b551: rev_b_3_ | |||
|
479 | | | dir_x/f3 dir_z/f1 dir_z/f2 | |||
|
480 | o | 4:17fd34adb43b: rev_b_2_ | |||
|
481 | | | dir_x/f2 dir_z/f1 dir_z/f2 dir_z/f3 | |||
|
482 | o | 3:328f8ced5276: rev_b_1_ | |||
|
483 | | | dir_x/f1 | |||
|
484 | o | 2:0ccce83dd29b: rev_b_0_ | |||
|
485 | |/ dir_x/f1 | |||
|
486 | o 1:63f468a0fdac: rev_a_ | |||
|
487 | | dir_x/f1 dir_x/f2 dir_x/f3 dir_y/f1 dir_y/f2 dir_y/f3 | |||
|
488 | o 0:4978c5c7386b: root | |||
|
489 | readme.txt | |||
|
490 | ||||
|
491 | $ hg tracked --addinclude dir_z | |||
|
492 | comparing with ssh://user@dummy/server | |||
|
493 | searching for changes | |||
|
494 | adding changesets | |||
|
495 | adding manifests | |||
|
496 | adding file changes | |||
|
497 | added 0 changesets with 10 changes to 4 files | |||
|
498 | $ sh ../linkrev-check.sh | |||
|
499 | # expected linkrev for dir_z/f1 | |||
|
500 | 0 4 | |||
|
501 | 1 5 | |||
|
502 | 2 9 | |||
|
503 | 3 10 | |||
|
504 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
505 | 0 6 360afd990eef 000000000000 000000000000 (known-bad-output !) | |||
|
506 | 0 4 360afd990eef 000000000000 000000000000 (missing-correct-output !) | |||
|
507 | 1 7 7054ee088631 360afd990eef 000000000000 (known-bad-output !) | |||
|
508 | 1 5 7054ee088631 360afd990eef 000000000000 (missing-correct-output !) | |||
|
509 | 2 9 6bb290463f21 7054ee088631 000000000000 | |||
|
510 | 3 10 91fec784ff86 6bb290463f21 000000000000 | |||
|
511 | # expected linkrev for dir_z/f2 | |||
|
512 | 0 4 | |||
|
513 | 1 5 | |||
|
514 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
515 | 0 7 093bb0f8a0fb 000000000000 000000000000 (known-bad-output !) | |||
|
516 | 0 4 093bb0f8a0fb 000000000000 000000000000 (missing-correct-output !) | |||
|
517 | 1 5 0f47e254cb19 093bb0f8a0fb 000000000000 | |||
|
518 | # expected linkrev for dir_z/f3 | |||
|
519 | 0 4 | |||
|
520 | 1 9 | |||
|
521 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
522 | 0 4 2ed2a3912a0b 000000000000 000000000000 | |||
|
523 | 1 9 7c6d649320ae 2ed2a3912a0b 000000000000 | |||
|
524 | # expected linkrev for dir_z/f4 | |||
|
525 | 0 8 | |||
|
526 | 1 10 | |||
|
527 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
528 | 0 8 b004912a8510 000000000000 000000000000 | |||
|
529 | 1 10 9f85b3b95e70 b004912a8510 000000000000 | |||
|
530 | # verify the repository | |||
|
531 | checking changesets | |||
|
532 | checking manifests | |||
|
533 | crosschecking files in changesets and manifests | |||
|
534 | checking files | |||
|
535 | checking dirstate | |||
|
536 | checked 11 changesets with 26 changes to 10 files | |||
|
537 | $ cd .. | |||
|
538 | ||||
|
539 | ||||
|
540 | Cloning branch c first | |||
|
541 | ---------------------- | |||
|
542 | ||||
|
543 | $ hg clone --narrow ssh://user@dummy/server --rev `cat ./rev_c_2_` --include dir_x --include dir_y client_xy_rev_from_c_first --noupdate | |||
|
544 | adding changesets | |||
|
545 | adding manifests | |||
|
546 | adding file changes | |||
|
547 | added 5 changesets with 10 changes to 6 files | |||
|
548 | new changesets 4978c5c7386b:d04e01dcc82d | |||
|
549 | $ cd client_xy_rev_from_c_first | |||
|
550 | $ hg pull | |||
|
551 | pulling from ssh://user@dummy/server | |||
|
552 | searching for changes | |||
|
553 | adding changesets | |||
|
554 | adding manifests | |||
|
555 | adding file changes | |||
|
556 | added 6 changesets with 6 changes to 4 files | |||
|
557 | new changesets 0ccce83dd29b:71e6a9c7a6a2 | |||
|
558 | (run 'hg update' to get a working copy) | |||
|
559 | $ hg log -GT "{rev}:{node|short}: {desc}\n {files}\n" | |||
|
560 | o 10:71e6a9c7a6a2: rev_d_1_ | |||
|
561 | | dir_y/f1 dir_z/f1 dir_z/f4 | |||
|
562 | o 9:b0a0cbe5ce57: rev_d_0_ | |||
|
563 | |\ dir_x/f1 dir_z/f1 dir_z/f3 | |||
|
564 | | o 8:fc05b303b551: rev_b_3_ | |||
|
565 | | | dir_x/f3 dir_z/f1 dir_z/f2 | |||
|
566 | | o 7:17fd34adb43b: rev_b_2_ | |||
|
567 | | | dir_x/f2 dir_z/f1 dir_z/f2 dir_z/f3 | |||
|
568 | | o 6:328f8ced5276: rev_b_1_ | |||
|
569 | | | dir_x/f1 | |||
|
570 | | o 5:0ccce83dd29b: rev_b_0_ | |||
|
571 | | | dir_x/f1 | |||
|
572 | o | 4:d04e01dcc82d: rev_c_2_ | |||
|
573 | | | dir_y/f1 dir_y/f3 dir_z/f2 dir_z/f4 | |||
|
574 | o | 3:fa05dbe8eed1: rev_c_1_ | |||
|
575 | | | dir_z/f1 dir_z/f2 | |||
|
576 | o | 2:59b4258b00dc: rev_c_0_ | |||
|
577 | |/ dir_y/f1 dir_y/f2 dir_z/f1 | |||
|
578 | o 1:63f468a0fdac: rev_a_ | |||
|
579 | | dir_x/f1 dir_x/f2 dir_x/f3 dir_y/f1 dir_y/f2 dir_y/f3 | |||
|
580 | o 0:4978c5c7386b: root | |||
|
581 | readme.txt | |||
|
582 | ||||
|
583 | $ hg tracked --addinclude dir_z | |||
|
584 | comparing with ssh://user@dummy/server | |||
|
585 | searching for changes | |||
|
586 | adding changesets | |||
|
587 | adding manifests | |||
|
588 | adding file changes | |||
|
589 | added 0 changesets with 10 changes to 4 files | |||
|
590 | $ sh ../linkrev-check.sh | |||
|
591 | # expected linkrev for dir_z/f1 | |||
|
592 | 0 2 | |||
|
593 | 1 3 | |||
|
594 | 2 9 | |||
|
595 | 3 10 | |||
|
596 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
597 | 0 2 360afd990eef 000000000000 000000000000 | |||
|
598 | 1 3 7054ee088631 360afd990eef 000000000000 | |||
|
599 | 2 9 6bb290463f21 7054ee088631 000000000000 | |||
|
600 | 3 10 91fec784ff86 6bb290463f21 000000000000 | |||
|
601 | # expected linkrev for dir_z/f2 | |||
|
602 | 0 3 | |||
|
603 | 1 4 | |||
|
604 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
605 | 0 3 093bb0f8a0fb 000000000000 000000000000 | |||
|
606 | 1 8 0f47e254cb19 093bb0f8a0fb 000000000000 (known-bad-output !) | |||
|
607 | 1 4 0f47e254cb19 093bb0f8a0fb 000000000000 (missing-correct-output !) | |||
|
608 | # expected linkrev for dir_z/f3 | |||
|
609 | 0 7 | |||
|
610 | 1 9 | |||
|
611 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
612 | 0 7 2ed2a3912a0b 000000000000 000000000000 | |||
|
613 | 1 9 7c6d649320ae 2ed2a3912a0b 000000000000 | |||
|
614 | # expected linkrev for dir_z/f4 | |||
|
615 | 0 4 | |||
|
616 | 1 10 | |||
|
617 | rev linkrev nodeid p1-nodeid p2-nodeid | |||
|
618 | 0 4 b004912a8510 000000000000 000000000000 | |||
|
619 | 1 10 9f85b3b95e70 b004912a8510 000000000000 | |||
|
620 | # verify the repository | |||
|
621 | checking changesets | |||
|
622 | checking manifests | |||
|
623 | crosschecking files in changesets and manifests | |||
|
624 | checking files | |||
|
625 | checking dirstate | |||
|
626 | checked 11 changesets with 26 changes to 10 files | |||
|
627 | $ cd .. |
@@ -46,8 +46,8 b' p1()' | |||||
46 | # Used in revision c1546d7400ef |
|
46 | # Used in revision c1546d7400ef | |
47 | min(0::) |
|
47 | min(0::) | |
48 | # Used in revision 546fa6576815 |
|
48 | # Used in revision 546fa6576815 | |
49 | author(lmoscovicz) or author(olivia) |
|
49 | author(lmoscovicz) or author("pierre-yves") | |
50 | author(olivia) or author(lmoscovicz) |
|
50 | author("pierre-yves") or author(lmoscovicz) | |
51 | # Used in revision 9bfe68357c01 |
|
51 | # Used in revision 9bfe68357c01 | |
52 | public() and id("d82e2223f132") |
|
52 | public() and id("d82e2223f132") | |
53 | # Used in revision ba89f7b542c9 |
|
53 | # Used in revision ba89f7b542c9 | |
@@ -100,7 +100,7 b' draft()' | |||||
100 | draft() and ::tip |
|
100 | draft() and ::tip | |
101 | ::tip and draft() |
|
101 | ::tip and draft() | |
102 | author(lmoscovicz) |
|
102 | author(lmoscovicz) | |
103 | author(olivia) |
|
103 | author("pierre-yves") | |
104 | ::p1(p1(tip)):: |
|
104 | ::p1(p1(tip)):: | |
105 | public() |
|
105 | public() | |
106 | :10000 and public() |
|
106 | :10000 and public() | |
@@ -130,7 +130,7 b' roots((:42) + (tip~42:))' | |||||
130 | head() |
|
130 | head() | |
131 | head() - public() |
|
131 | head() - public() | |
132 | draft() and head() |
|
132 | draft() and head() | |
133 | head() and author("olivia") |
|
133 | head() and author("pierre-yves") | |
134 |
|
134 | |||
135 | # testing the mutable phases set |
|
135 | # testing the mutable phases set | |
136 | draft() |
|
136 | draft() |
@@ -25,9 +25,9 b' draft() and ::tip' | |||||
25 | 0::tip |
|
25 | 0::tip | |
26 | roots(0::tip) |
|
26 | roots(0::tip) | |
27 | author(lmoscovicz) |
|
27 | author(lmoscovicz) | |
28 | author(olivia) |
|
28 | author("pierre-yves") | |
29 | author(lmoscovicz) or author(olivia) |
|
29 | author(lmoscovicz) or author("pierre-yves") | |
30 | author(olivia) or author(lmoscovicz) |
|
30 | author("pierre-yves") or author(lmoscovicz) | |
31 | tip:0 |
|
31 | tip:0 | |
32 | 0:: |
|
32 | 0:: | |
33 | # those two `roots(...)` inputs are close to what phase movement use. |
|
33 | # those two `roots(...)` inputs are close to what phase movement use. |
@@ -20,7 +20,10 b' Configurations' | |||||
20 |
|
20 | |||
21 | ``profile-benchmark`` |
|
21 | ``profile-benchmark`` | |
22 | Enable profiling for the benchmarked section. |
|
22 | Enable profiling for the benchmarked section. | |
23 | (The first iteration is benchmarked) |
|
23 | (by default, the first iteration is benchmarked) | |
|
24 | ||||
|
25 | ``profiled-runs`` | |||
|
26 | list of iterations to profile (starting from 0) | |||
24 |
|
27 | |||
25 | ``run-limits`` |
|
28 | ``run-limits`` | |
26 | Control the number of runs each benchmark will perform. The option value |
|
29 | Control the number of runs each benchmark will perform. The option value | |
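Taken together, `profile-benchmark` and the new `profiled-runs` knob let chosen iterations be profiled rather than only the first one. A configuration sketch (values illustrative):

    [perf]
    profile-benchmark = yes
    profiled-runs = 0 2

The list holds zero-based run indices; per the default visible in the code below (`ui.configlist(b"perf", b"profiled-runs", [0])`), only run 0 is profiled when the option is unset.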
@@ -318,6 +321,11 b' try:' | |||||
318 | ) |
|
321 | ) | |
319 | configitem( |
|
322 | configitem( | |
320 | b'perf', |
|
323 | b'perf', | |
|
324 | b'profiled-runs', | |||
|
325 | default=mercurial.configitems.dynamicdefault, | |||
|
326 | ) | |||
|
327 | configitem( | |||
|
328 | b'perf', | |||
321 | b'run-limits', |
|
329 | b'run-limits', | |
322 | default=mercurial.configitems.dynamicdefault, |
|
330 | default=mercurial.configitems.dynamicdefault, | |
323 | experimental=True, |
|
331 | experimental=True, | |
@@ -354,7 +362,7 b' except TypeError:' | |||||
354 | ) |
|
362 | ) | |
355 | configitem( |
|
363 | configitem( | |
356 | b'perf', |
|
364 | b'perf', | |
357 | b'profile-benchmark', |
|
365 | b'profiled-runs', | |
358 | default=mercurial.configitems.dynamicdefault, |
|
366 | default=mercurial.configitems.dynamicdefault, | |
359 | ) |
|
367 | ) | |
360 | configitem( |
|
368 | configitem( | |
@@ -491,9 +499,12 b' def gettimer(ui, opts=None):' | |||||
491 | limits = DEFAULTLIMITS |
|
499 | limits = DEFAULTLIMITS | |
492 |
|
500 | |||
493 | profiler = None |
|
501 | profiler = None | |
|
502 | profiled_runs = set() | |||
494 | if profiling is not None: |
|
503 | if profiling is not None: | |
495 | if ui.configbool(b"perf", b"profile-benchmark", False): |
|
504 | if ui.configbool(b"perf", b"profile-benchmark", False): | |
496 | profiler = profiling.profile(ui) |
|
505 | profiler = lambda: profiling.profile(ui) | |
|
506 | for run in ui.configlist(b"perf", b"profiled-runs", [0]): | |||
|
507 | profiled_runs.add(int(run)) | |||
497 |
|
508 | |||
498 | prerun = getint(ui, b"perf", b"pre-run", 0) |
|
509 | prerun = getint(ui, b"perf", b"pre-run", 0) | |
499 | t = functools.partial( |
|
510 | t = functools.partial( | |
@@ -503,6 +514,7 b' def gettimer(ui, opts=None):' | |||||
503 | limits=limits, |
|
514 | limits=limits, | |
504 | prerun=prerun, |
|
515 | prerun=prerun, | |
505 | profiler=profiler, |
|
516 | profiler=profiler, | |
|
517 | profiled_runs=profiled_runs, | |||
506 | ) |
|
518 | ) | |
507 | return t, fm |
|
519 | return t, fm | |
508 |
|
520 | |||
@@ -547,27 +559,32 b' def _timer(' | |||||
547 | limits=DEFAULTLIMITS, |
|
559 | limits=DEFAULTLIMITS, | |
548 | prerun=0, |
|
560 | prerun=0, | |
549 | profiler=None, |
|
561 | profiler=None, | |
|
562 | profiled_runs=(0,), | |||
550 | ): |
|
563 | ): | |
551 | gc.collect() |
|
564 | gc.collect() | |
552 | results = [] |
|
565 | results = [] | |
553 | begin = util.timer() |
|
|||
554 | count = 0 |
|
566 | count = 0 | |
555 | if profiler is None: |
|
567 | if profiler is None: | |
556 | profiler = NOOPCTX |
|
568 | profiler = lambda: NOOPCTX | |
557 | for i in range(prerun): |
|
569 | for i in range(prerun): | |
558 | if setup is not None: |
|
570 | if setup is not None: | |
559 | setup() |
|
571 | setup() | |
560 | with context(): |
|
572 | with context(): | |
561 | func() |
|
573 | func() | |
|
574 | begin = util.timer() | |||
562 | keepgoing = True |
|
575 | keepgoing = True | |
563 | while keepgoing: |
|
576 | while keepgoing: | |
|
577 | if count in profiled_runs: | |||
|
578 | prof = profiler() | |||
|
579 | else: | |||
|
580 | prof = NOOPCTX | |||
564 | if setup is not None: |
|
581 | if setup is not None: | |
565 | setup() |
|
582 | setup() | |
566 | with context(): |
|
583 | with context(): | |
567 |

584 | gc.collect() | |
|
585 | with prof: | |||
568 | with timeone() as item: |
|
586 | with timeone() as item: | |
569 | r = func() |
|
587 | r = func() | |
570 | profiler = NOOPCTX |
|
|||
571 | count += 1 |
|
588 | count += 1 | |
572 | results.append(item[0]) |
|
589 | results.append(item[0]) | |
573 | cstop = util.timer() |
|
590 | cstop = util.timer() | |
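The _timer() changes above boil down to the loop below: the profiler argument becomes a factory, so a fresh profiling context can be entered for each selected run instead of profiling only the first run and then swapping in NOOPCTX. This is a simplified sketch with illustrative names (run_benchmark, profiler_factory), not the perf.py code itself:

    import contextlib

    NOOPCTX = contextlib.nullcontext()  # stand-in for perf.py's no-op context

    def run_benchmark(func, runs, profiler_factory=None,
                      profiled_runs=frozenset({0})):
        """Time `func` over `runs` iterations, profiling only selected ones."""
        for count in range(runs):
            if profiler_factory is not None and count in profiled_runs:
                prof = profiler_factory()  # fresh profiler for this run
            else:
                prof = NOOPCTX
            with prof:
                func()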
@@ -2029,6 +2046,19 b' def perfstartup(ui, repo, **opts):' | |||||
2029 | fm.end() |
|
2046 | fm.end() | |
2030 |
|
2047 | |||
2031 |
|
2048 | |||
|
2049 | def _clear_store_audit_cache(repo): | |||
|
2050 | vfs = getsvfs(repo) | |||
|
2051 | # unwrap the fncache proxy | |||
|
2052 | if not hasattr(vfs, "audit"): | |||
|
2053 | vfs = getattr(vfs, "vfs", vfs) | |||
|
2054 | auditor = vfs.audit | |||
|
2055 | if hasattr(auditor, "clear_audit_cache"): | |||
|
2056 | auditor.clear_audit_cache() | |||
|
2057 | elif hasattr(auditor, "audited"): | |||
|
2058 | auditor.audited.clear() | |||
|
2059 | auditor.auditeddir.clear() | |||
|
2060 | ||||
|
2061 | ||||
2032 | def _find_stream_generator(version): |
|
2062 | def _find_stream_generator(version): | |
2033 | """find the proper generator function for this stream version""" |
|
2063 | """find the proper generator function for this stream version""" | |
2034 | import mercurial.streamclone |
|
2064 | import mercurial.streamclone | |
@@ -2040,7 +2070,7 b' def _find_stream_generator(version):' | |||||
2040 | if generatev1 is not None: |
|
2070 | if generatev1 is not None: | |
2041 |
|
2071 | |||
2042 | def generate(repo): |
|
2072 | def generate(repo): | |
2043 |
entries, bytes, data = generatev |
|
2073 | entries, bytes, data = generatev1(repo, None, None, True) | |
2044 | return data |
|
2074 | return data | |
2045 |
|
2075 | |||
2046 | available[b'v1'] = generatev1 |
|
2076 | available[b'v1'] = generatev1 | |
@@ -2058,8 +2088,7 b' def _find_stream_generator(version):' | |||||
2058 | if generatev3 is not None: |
|
2088 | if generatev3 is not None: | |
2059 |
|
2089 | |||
2060 | def generate(repo): |
|
2090 | def generate(repo): | |
2061 |

2091 | return generatev3(repo, None, None, True) | |
2062 | return data |
|
|||
2063 |
|
2092 | |||
2064 | available[b'v3-exp'] = generate |
|
2093 | available[b'v3-exp'] = generate | |
2065 |
|
2094 | |||
@@ -2085,7 +2114,8 b' def _find_stream_generator(version):' | |||||
2085 | b'', |
|
2114 | b'', | |
2086 | b'stream-version', |
|
2115 | b'stream-version', | |
2087 | b'latest', |
|
2116 | b'latest', | |
2088 | b'stream version to use ("v1", "v2", "v3" |

2117 | b'stream version to use ("v1", "v2", "v3-exp" ' | |
|
2118 | b'or "latest", (the default))', | |||
2089 | ), |
|
2119 | ), | |
2090 | ] |
|
2120 | ] | |
2091 | + formatteropts, |
|
2121 | + formatteropts, | |
@@ -2102,6 +2132,9 b' def perf_stream_clone_scan(ui, repo, str' | |||||
2102 |
|
2132 | |||
2103 | def setupone(): |
|
2133 | def setupone(): | |
2104 | result_holder[0] = None |
|
2134 | result_holder[0] = None | |
|
2135 | # This is important for the full generation, even if it does not | |

2136 | # currently matter, it seems safer to also clear it here. | |
|
2137 | _clear_store_audit_cache(repo) | |||
2105 |
|
2138 | |||
2106 | generate = _find_stream_generator(stream_version) |
|
2139 | generate = _find_stream_generator(stream_version) | |
2107 |
|
2140 | |||
@@ -2120,7 +2153,8 b' def perf_stream_clone_scan(ui, repo, str' | |||||
2120 | b'', |
|
2153 | b'', | |
2121 | b'stream-version', |
|
2154 | b'stream-version', | |
2122 | b'latest', |
|
2155 | b'latest', | |
2123 | b'stream version to use ("v1", "v2" |

2156 | b'stream version to use ("v1", "v2", "v3-exp" ' | |
|
2157 | b'or "latest", (the default))', | |||
2124 | ), |
|
2158 | ), | |
2125 | ] |
|
2159 | ] | |
2126 | + formatteropts, |
|
2160 | + formatteropts, | |
@@ -2136,12 +2170,15 b' def perf_stream_clone_generate(ui, repo,' | |||||
2136 |
|
2170 | |||
2137 | generate = _find_stream_generator(stream_version) |
|
2171 | generate = _find_stream_generator(stream_version) | |
2138 |
|
2172 | |||
|
2173 | def setup(): | |||
|
2174 | _clear_store_audit_cache(repo) | |||
|
2175 | ||||
2139 | def runone(): |
|
2176 | def runone(): | |
2140 | # the lock is held for the duration of the initialisation |

2177 | # the lock is held for the duration of the initialisation | |
2141 | for chunk in generate(repo): |
|
2178 | for chunk in generate(repo): | |
2142 | pass |
|
2179 | pass | |
2143 |
|
2180 | |||
2144 | timer(runone, title=b"generate") |
|
2181 | timer(runone, setup=setup, title=b"generate") | |
2145 | fm.end() |
|
2182 | fm.end() | |
2146 |
|
2183 | |||
2147 |
|
2184 | |||
@@ -2187,10 +2224,18 b' def perf_stream_clone_consume(ui, repo, ' | |||||
2187 |
|
2224 | |||
2188 | run_variables = [None, None] |
|
2225 | run_variables = [None, None] | |
2189 |
|
2226 | |||
|
2227 | # we create the new repository next to the other one for two reasons: | |||
|
2228 | # - this way we use the same file system, which is relevant for benchmarks | |

2229 | # - if /tmp/ is small, the operation could overfill it. | |
|
2230 | source_repo_dir = os.path.dirname(repo.root) | |||
|
2231 | ||||
2190 | @contextlib.contextmanager |
|
2232 | @contextlib.contextmanager | |
2191 | def context(): |
|
2233 | def context(): | |
2192 | with open(filename, mode='rb') as bundle: |
|
2234 | with open(filename, mode='rb') as bundle: | |
2193 | with tempfile.TemporaryDirectory( |

2235 | with tempfile.TemporaryDirectory( | |
|
2236 | prefix=b'hg-perf-stream-consume-', | |||
|
2237 | dir=source_repo_dir, | |||
|
2238 | ) as tmp_dir: | |||
2194 | tmp_dir = fsencode(tmp_dir) |
|
2239 | tmp_dir = fsencode(tmp_dir) | |
2195 | run_variables[0] = bundle |
|
2240 | run_variables[0] = bundle | |
2196 | run_variables[1] = tmp_dir |
|
2241 | run_variables[1] = tmp_dir | |
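The two comments above carry the design rationale; as a standalone illustration, the same placement can be obtained with nothing but the standard library (temp_dir_next_to is a hypothetical helper):

    import os
    import tempfile

    def temp_dir_next_to(path, prefix='hg-perf-'):
        """Create a temporary directory on the same filesystem as `path`,
        keeping benchmark I/O comparable and leaving a small /tmp alone."""
        parent = os.path.dirname(os.path.abspath(path))
        return tempfile.TemporaryDirectory(prefix=prefix, dir=parent)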
@@ -2201,11 +2246,15 b' def perf_stream_clone_consume(ui, repo, ' | |||||
2201 | def runone(): |
|
2246 | def runone(): | |
2202 | bundle = run_variables[0] |
|
2247 | bundle = run_variables[0] | |
2203 | tmp_dir = run_variables[1] |
|
2248 | tmp_dir = run_variables[1] | |
|
2249 | ||||
|
2250 | # we actually want to copy all config to ensure the repo config is | |

2251 | # taken into account during the benchmark | |
|
2252 | new_ui = repo.ui.__class__(repo.ui) | |||
2204 | # only pass ui when no srcrepo |
|
2253 | # only pass ui when no srcrepo | |
2205 | localrepo.createrepository( |
|
2254 | localrepo.createrepository( | |
2206 |

2255 | new_ui, tmp_dir, requirements=repo.requirements | |
2207 | ) |
|
2256 | ) | |
2208 | target = hg.repository( |

2257 | target = hg.repository(new_ui, tmp_dir) | |
2209 | gen = exchange.readbundle(target.ui, bundle, bundle.name) |
|
2258 | gen = exchange.readbundle(target.ui, bundle, bundle.name) | |
2210 | # stream v1 |
|
2259 | # stream v1 | |
2211 | if util.safehasattr(gen, 'apply'): |
|
2260 | if util.safehasattr(gen, 'apply'): | |
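The new_ui line above relies on Mercurial's ui constructor accepting an existing ui instance as a copy source, so the benchmark target starts from the source repository's fully loaded configuration instead of a bare global one. Schematically (assuming a live repo object):

    # new_ui sees the same config (global + source-repo .hg/hgrc) as repo.ui,
    # so settings such as format options survive into the consume benchmark.
    new_ui = repo.ui.__class__(repo.ui)  # ui(src) copies src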
@@ -4205,15 +4254,24 b' def perfbranchmap(ui, repo, *filternames' | |||||
4205 | # add unfiltered |
|
4254 | # add unfiltered | |
4206 | allfilters.append(None) |
|
4255 | allfilters.append(None) | |
4207 |
|
4256 | |||
4208 | if util.safehasattr(branchmap.branchcache, 'fromfile'): |
|
4257 | old_branch_cache_from_file = None | |
|
4258 | branchcacheread = None | |||
|
4259 | if util.safehasattr(branchmap, 'branch_cache_from_file'): | |||
|
4260 | old_branch_cache_from_file = branchmap.branch_cache_from_file | |||
|
4261 | branchmap.branch_cache_from_file = lambda *args: None | |||
|
4262 | elif util.safehasattr(branchmap.branchcache, 'fromfile'): | |||
4209 | branchcacheread = safeattrsetter(branchmap.branchcache, b'fromfile') |
|
4263 | branchcacheread = safeattrsetter(branchmap.branchcache, b'fromfile') | |
4210 | branchcacheread.set(classmethod(lambda *args: None)) |
|
4264 | branchcacheread.set(classmethod(lambda *args: None)) | |
4211 | else: |
|
4265 | else: | |
4212 | # older versions |
|
4266 | # older versions | |
4213 | branchcacheread = safeattrsetter(branchmap, b'read') |
|
4267 | branchcacheread = safeattrsetter(branchmap, b'read') | |
4214 | branchcacheread.set(lambda *args: None) |
|
4268 | branchcacheread.set(lambda *args: None) | |
4215 | branchcachewrite = safeattrsetter(branchmap.branchcache, b'write') |
|
4269 | if util.safehasattr(branchmap, '_LocalBranchCache'): | |
4216 | branchcachewrite.set(lambda *args: None) |
|
4270 | branchcachewrite = safeattrsetter(branchmap._LocalBranchCache, b'write') | |
|
4271 | branchcachewrite.set(lambda *args: None) | |||
|
4272 | else: | |||
|
4273 | branchcachewrite = safeattrsetter(branchmap.branchcache, b'write') | |||
|
4274 | branchcachewrite.set(lambda *args: None) | |||
4217 | try: |
|
4275 | try: | |
4218 | for name in allfilters: |
|
4276 | for name in allfilters: | |
4219 | printname = name |
|
4277 | printname = name | |
@@ -4221,7 +4279,10 b' def perfbranchmap(ui, repo, *filternames' | |||||
4221 | printname = b'unfiltered' |
|
4279 | printname = b'unfiltered' | |
4222 | timer(getbranchmap(name), title=printname) |
|
4280 | timer(getbranchmap(name), title=printname) | |
4223 | finally: |
|
4281 | finally: | |
4224 | branchcacheread.restore() |
|
4282 | if old_branch_cache_from_file is not None: | |
|
4283 | branchmap.branch_cache_from_file = old_branch_cache_from_file | |||
|
4284 | if branchcacheread is not None: | |||
|
4285 | branchcacheread.restore() | |||
4225 | branchcachewrite.restore() |
|
4286 | branchcachewrite.restore() | |
4226 | fm.end() |
|
4287 | fm.end() | |
4227 |
|
4288 | |||
@@ -4303,6 +4364,19 b' def perfbranchmapupdate(ui, repo, base=(' | |||||
4303 | baserepo = repo.filtered(b'__perf_branchmap_update_base') |
|
4364 | baserepo = repo.filtered(b'__perf_branchmap_update_base') | |
4304 | targetrepo = repo.filtered(b'__perf_branchmap_update_target') |
|
4365 | targetrepo = repo.filtered(b'__perf_branchmap_update_target') | |
4305 |
|
4366 | |||
|
4367 | bcache = repo.branchmap() | |||
|
4368 | copy_method = 'copy' | |||
|
4369 | ||||
|
4370 | copy_base_kwargs = copy_target_kwargs = {} | |
|
4371 | if hasattr(bcache, 'copy'): | |||
|
4372 | if 'repo' in getargspec(bcache.copy).args: | |||
|
4373 | copy_base_kwargs = {"repo": baserepo} | |||
|
4374 | copy_target_kwargs = {"repo": targetrepo} | |||
|
4375 | else: | |||
|
4376 | copy_method = 'inherit_for' | |||
|
4377 | copy_base_kwargs = {"repo": baserepo} | |||
|
4378 | copy_target_kwargs = {"repo": targetrepo} | |||
|
4379 | ||||
4306 | # try to find an existing branchmap to reuse |
|
4380 | # try to find an existing branchmap to reuse | |
4307 | subsettable = getbranchmapsubsettable() |
|
4381 | subsettable = getbranchmapsubsettable() | |
4308 | candidatefilter = subsettable.get(None) |
|
4382 | candidatefilter = subsettable.get(None) | |
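The block above keeps perfbranchmapupdate working across Mercurial versions by picking the copy entry point at runtime: older branchcache objects expose copy() (sometimes taking a repo argument), newer caches expose inherit_for(repo). A sketch of the same dispatch (pick_copy_call is a hypothetical helper):

    import inspect

    def pick_copy_call(bcache, repo):
        """Return (method_name, kwargs) for duplicating a branchmap cache."""
        if hasattr(bcache, 'copy'):
            if 'repo' in inspect.getfullargspec(bcache.copy).args:
                return 'copy', {'repo': repo}
            return 'copy', {}
        return 'inherit_for', {'repo': repo}

    # method, kwargs = pick_copy_call(bcache, baserepo)
    # new_cache = getattr(bcache, method)(**kwargs)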
@@ -4311,7 +4385,7 b' def perfbranchmapupdate(ui, repo, base=(' | |||||
4311 | if candidatebm.validfor(baserepo): |
|
4385 | if candidatebm.validfor(baserepo): | |
4312 | filtered = repoview.filterrevs(repo, candidatefilter) |
|
4386 | filtered = repoview.filterrevs(repo, candidatefilter) | |
4313 | missing = [r for r in allbaserevs if r in filtered] |
|
4387 | missing = [r for r in allbaserevs if r in filtered] | |
4314 | base = candidatebm |

4388 | base = getattr(candidatebm, copy_method)(**copy_base_kwargs) | |
4315 | base.update(baserepo, missing) |
|
4389 | base.update(baserepo, missing) | |
4316 | break |
|
4390 | break | |
4317 | candidatefilter = subsettable.get(candidatefilter) |
|
4391 | candidatefilter = subsettable.get(candidatefilter) | |
@@ -4321,7 +4395,7 b' def perfbranchmapupdate(ui, repo, base=(' | |||||
4321 | base.update(baserepo, allbaserevs) |
|
4395 | base.update(baserepo, allbaserevs) | |
4322 |
|
4396 | |||
4323 | def setup(): |
|
4397 | def setup(): | |
4324 | x[0] = base.copy() |
|
4398 | x[0] = getattr(base, copy_method)(**copy_target_kwargs) | |
4325 | if clearcaches: |
|
4399 | if clearcaches: | |
4326 | unfi._revbranchcache = None |
|
4400 | unfi._revbranchcache = None | |
4327 | clearchangelog(repo) |
|
4401 | clearchangelog(repo) | |
@@ -4368,10 +4442,10 b' def perfbranchmapload(ui, repo, filter=b' | |||||
4368 |
|
4442 | |||
4369 | repo.branchmap() # make sure we have a relevant, up to date branchmap |
|
4443 | repo.branchmap() # make sure we have a relevant, up to date branchmap | |
4370 |
|
4444 | |||
4371 | try: |
|
4445 | fromfile = getattr(branchmap, 'branch_cache_from_file', None) | |
4372 | fromfile = branchmap.branchcache.fromfile |
|
4446 | if fromfile is None: | |
4373 | except AttributeError: |
|
4447 | fromfile = getattr(branchmap.branchcache, 'fromfile', None) | |
4374 | # older versions |
|
4448 | if fromfile is None: | |
4375 | fromfile = branchmap.read |
|
4449 | fromfile = branchmap.read | |
4376 |
|
4450 | |||
4377 | currentfilter = filter |
|
4451 | currentfilter = filter |
@@ -430,6 +430,7 b' def composestandinmatcher(repo, rmatcher' | |||||
430 | def composedmatchfn(f): |
|
430 | def composedmatchfn(f): | |
431 | return isstandin(f) and rmatcher.matchfn(splitstandin(f)) |
|
431 | return isstandin(f) and rmatcher.matchfn(splitstandin(f)) | |
432 |
|
432 | |||
|
433 | smatcher._was_tampered_with = True | |||
433 | smatcher.matchfn = composedmatchfn |
|
434 | smatcher.matchfn = composedmatchfn | |
434 |
|
435 | |||
435 | return smatcher |
|
436 | return smatcher | |
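The _was_tampered_with marker set here recurs in every largefiles hunk that follows: whenever the extension copies a matcher and rewires its internals, it flags the copy so that code with a Rust fast path (which only understands stock matchers) can detect the modification and fall back to the Python implementation. The pattern, reduced to a schematic sketch (tamper is an illustrative helper):

    import copy

    def tamper(match, new_matchfn):
        """Copy a matcher, mark it as modified, and override its matchfn."""
        m = copy.copy(match)
        m._was_tampered_with = True  # opt out of Rust-side matcher handling
        m.matchfn = new_matchfn
        return m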
@@ -716,6 +717,7 b' def updatestandinsbymatch(repo, match):' | |||||
716 | return match |
|
717 | return match | |
717 |
|
718 | |||
718 | lfiles = listlfiles(repo) |
|
719 | lfiles = listlfiles(repo) | |
|
720 | match._was_tampered_with = True | |||
719 | match._files = repo._subdirlfs(match.files(), lfiles) |
|
721 | match._files = repo._subdirlfs(match.files(), lfiles) | |
720 |
|
722 | |||
721 | # Case 2: user calls commit with specified patterns: refresh |
|
723 | # Case 2: user calls commit with specified patterns: refresh | |
@@ -746,6 +748,7 b' def updatestandinsbymatch(repo, match):' | |||||
746 | # user. Have to modify _files to prevent commit() from |
|
748 | # user. Have to modify _files to prevent commit() from | |
747 | # complaining "not tracked" for big files. |
|
749 | # complaining "not tracked" for big files. | |
748 | match = copy.copy(match) |
|
750 | match = copy.copy(match) | |
|
751 | match._was_tampered_with = True | |||
749 | origmatchfn = match.matchfn |
|
752 | origmatchfn = match.matchfn | |
750 |
|
753 | |||
751 | # Check both the list of largefiles and the list of |
|
754 | # Check both the list of largefiles and the list of |
@@ -71,6 +71,7 b' def composelargefilematcher(match, manif' | |||||
71 | """create a matcher that matches only the largefiles in the original |
|
71 | """create a matcher that matches only the largefiles in the original | |
72 | matcher""" |
|
72 | matcher""" | |
73 | m = copy.copy(match) |
|
73 | m = copy.copy(match) | |
|
74 | m._was_tampered_with = True | |||
74 | lfile = lambda f: lfutil.standin(f) in manifest |
|
75 | lfile = lambda f: lfutil.standin(f) in manifest | |
75 | m._files = [lf for lf in m._files if lfile(lf)] |
|
76 | m._files = [lf for lf in m._files if lfile(lf)] | |
76 | m._fileset = set(m._files) |
|
77 | m._fileset = set(m._files) | |
@@ -86,6 +87,7 b' def composenormalfilematcher(match, mani' | |||||
86 | excluded.update(exclude) |
|
87 | excluded.update(exclude) | |
87 |
|
88 | |||
88 | m = copy.copy(match) |
|
89 | m = copy.copy(match) | |
|
90 | m._was_tampered_with = True | |||
89 | notlfile = lambda f: not ( |
|
91 | notlfile = lambda f: not ( | |
90 | lfutil.isstandin(f) or lfutil.standin(f) in manifest or f in excluded |
|
92 | lfutil.isstandin(f) or lfutil.standin(f) in manifest or f in excluded | |
91 | ) |
|
93 | ) | |
@@ -442,6 +444,8 b' def overridelog(orig, ui, repo, *pats, *' | |||||
442 |
|
444 | |||
443 | pats.update(fixpats(f, tostandin) for f in p) |
|
445 | pats.update(fixpats(f, tostandin) for f in p) | |
444 |
|
446 | |||
|
447 | m._was_tampered_with = True | |||
|
448 | ||||
445 | for i in range(0, len(m._files)): |
|
449 | for i in range(0, len(m._files)): | |
446 | # Don't add '.hglf' to m.files, since that is already covered by '.' |
|
450 | # Don't add '.hglf' to m.files, since that is already covered by '.' | |
447 | if m._files[i] == b'.': |
|
451 | if m._files[i] == b'.': | |
@@ -849,6 +853,7 b' def overridecopy(orig, ui, repo, pats, o' | |||||
849 | newpats.append(pat) |
|
853 | newpats.append(pat) | |
850 | match = orig(ctx, newpats, opts, globbed, default, badfn=badfn) |
|
854 | match = orig(ctx, newpats, opts, globbed, default, badfn=badfn) | |
851 | m = copy.copy(match) |
|
855 | m = copy.copy(match) | |
|
856 | m._was_tampered_with = True | |||
852 | lfile = lambda f: lfutil.standin(f) in manifest |
|
857 | lfile = lambda f: lfutil.standin(f) in manifest | |
853 | m._files = [lfutil.standin(f) for f in m._files if lfile(f)] |
|
858 | m._files = [lfutil.standin(f) for f in m._files if lfile(f)] | |
854 | m._fileset = set(m._files) |
|
859 | m._fileset = set(m._files) | |
@@ -967,6 +972,7 b' def overriderevert(orig, ui, repo, ctx, ' | |||||
967 | opts = {} |
|
972 | opts = {} | |
968 | match = orig(mctx, pats, opts, globbed, default, badfn=badfn) |
|
973 | match = orig(mctx, pats, opts, globbed, default, badfn=badfn) | |
969 | m = copy.copy(match) |
|
974 | m = copy.copy(match) | |
|
975 | m._was_tampered_with = True | |||
970 |
|
976 | |||
971 | # revert supports recursing into subrepos, and though largefiles |
|
977 | # revert supports recursing into subrepos, and though largefiles | |
972 | # currently doesn't work correctly in that case, this match is |
|
978 | # currently doesn't work correctly in that case, this match is | |
@@ -1595,6 +1601,7 b' def scmutiladdremove(' | |||||
1595 | # confused state later. |
|
1601 | # confused state later. | |
1596 | if s.deleted: |
|
1602 | if s.deleted: | |
1597 | m = copy.copy(matcher) |
|
1603 | m = copy.copy(matcher) | |
|
1604 | m._was_tampered_with = True | |||
1598 |
|
1605 | |||
1599 | # The m._files and m._map attributes are not changed to the deleted list |
|
1606 | # The m._files and m._map attributes are not changed to the deleted list | |
1600 | # because that affects the m.exact() test, which in turn governs whether |
|
1607 | # because that affects the m.exact() test, which in turn governs whether | |
@@ -1721,6 +1728,7 b' def overridecat(orig, ui, repo, file1, *' | |||||
1721 | err = 1 |
|
1728 | err = 1 | |
1722 | notbad = set() |
|
1729 | notbad = set() | |
1723 | m = scmutil.match(ctx, (file1,) + pats, pycompat.byteskwargs(opts)) |
|
1730 | m = scmutil.match(ctx, (file1,) + pats, pycompat.byteskwargs(opts)) | |
|
1731 | m._was_tampered_with = True | |||
1724 | origmatchfn = m.matchfn |
|
1732 | origmatchfn = m.matchfn | |
1725 |
|
1733 | |||
1726 | def lfmatchfn(f): |
|
1734 | def lfmatchfn(f): |
@@ -181,6 +181,7 b' def reposetup(ui, repo):' | |||||
181 | return newfiles |
|
181 | return newfiles | |
182 |
|
182 | |||
183 | m = copy.copy(match) |
|
183 | m = copy.copy(match) | |
|
184 | m._was_tampered_with = True | |||
184 | m._files = tostandins(m._files) |
|
185 | m._files = tostandins(m._files) | |
185 |
|
186 | |||
186 | result = orig( |
|
187 | result = orig( | |
@@ -193,6 +194,7 b' def reposetup(ui, repo):' | |||||
193 | dirstate = self.dirstate |
|
194 | dirstate = self.dirstate | |
194 | return sf in dirstate or dirstate.hasdir(sf) |
|
195 | return sf in dirstate or dirstate.hasdir(sf) | |
195 |
|
196 | |||
|
197 | match._was_tampered_with = True | |||
196 | match._files = [f for f in match._files if sfindirstate(f)] |
|
198 | match._files = [f for f in match._files if sfindirstate(f)] | |
197 | # Don't waste time getting the ignored and unknown |
|
199 | # Don't waste time getting the ignored and unknown | |
198 | # files from lfdirstate |
|
200 | # files from lfdirstate |
@@ -2133,16 +2133,16 b' def pullrebase(orig, ui, repo, *args, **' | |||||
2133 | ) |
|
2133 | ) | |
2134 |
|
2134 | |||
2135 | revsprepull = len(repo) |
|
2135 | revsprepull = len(repo) | |
2136 | origpostincoming = c |

2136 | origpostincoming = cmdutil.postincoming | |
2137 |
|
2137 | |||
2138 | def _dummy(*args, **kwargs): |
|
2138 | def _dummy(*args, **kwargs): | |
2139 | pass |
|
2139 | pass | |
2140 |
|
2140 | |||
2141 | c |

2141 | cmdutil.postincoming = _dummy | |
2142 | try: |
|
2142 | try: | |
2143 | ret = orig(ui, repo, *args, **opts) |
|
2143 | ret = orig(ui, repo, *args, **opts) | |
2144 | finally: |
|
2144 | finally: | |
2145 | c |

2145 | cmdutil.postincoming = origpostincoming | |
2146 | revspostpull = len(repo) |
|
2146 | revspostpull = len(repo) | |
2147 | if revspostpull > revsprepull: |
|
2147 | if revspostpull > revsprepull: | |
2148 | # --rev option from pull conflict with rebase own --rev |
|
2148 | # --rev option from pull conflict with rebase own --rev |
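The rebase hunk above is the usual monkeypatch-and-restore pattern, now pointed at cmdutil.postincoming rather than commands.postincoming. Reduced to a schematic wrapper (with_postincoming_disabled is illustrative, not rebase.py code):

    from mercurial import cmdutil

    def with_postincoming_disabled(call):
        """Run `call` with the post-pull report suppressed, then restore it."""
        orig = cmdutil.postincoming
        cmdutil.postincoming = lambda *args, **kwargs: None
        try:
            return call()
        finally:
            cmdutil.postincoming = orig  # the override cannot leak past here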
@@ -34780,8 +34780,8 b' msgid "creating obsolete markers is not ' | |||||
34780 | msgstr "廃止マーカの作成機能は無効化されています" |
|
34780 | msgstr "廃止マーカの作成機能は無効化されています" | |
34781 |
|
34781 | |||
34782 | #, python-format |
|
34782 | #, python-format | |
34783 | msgid "obsolete feature not enabled but %i markers found!\n" |
|
34783 | msgid "\"obsolete\" feature not enabled but %i markers found!\n" | |
34784 | msgstr "obsolete 機能は無効ですが、 %i 個の廃止情報マーカが存在します!\n" |
|
34784 | msgstr "\"obsolete\" 機能は無効ですが、 %i 個の廃止情報マーカが存在します!\n" | |
34785 |
|
34785 | |||
34786 | #, python-format |
|
34786 | #, python-format | |
34787 | msgid "unknown key: %r" |
|
34787 | msgid "unknown key: %r" |
@@ -36049,9 +36049,9 b' msgstr ""' | |||||
36049 | "repositório" |
|
36049 | "repositório" | |
36050 |
|
36050 | |||
36051 | #, python-format |
|
36051 | #, python-format | |
36052 | msgid "obsolete feature not enabled but %i markers found!\n" |
|
36052 | msgid "\"obsolete\" feature not enabled but %i markers found!\n" | |
36053 | msgstr "" |
|
36053 | msgstr "" | |
36054 | "a funcionalidade obsolete não está habilitada, mas foram encontradas %i " |
|
36054 | "a funcionalidade \"obsolete\" não está habilitada, mas foram encontradas %i " | |
36055 | "marcações!\n" |
|
36055 | "marcações!\n" | |
36056 |
|
36056 | |||
36057 | #, python-format |
|
36057 | #, python-format |
@@ -15,6 +15,7 b' from .node import (' | |||||
15 | ) |
|
15 | ) | |
16 |
|
16 | |||
17 | from typing import ( |
|
17 | from typing import ( | |
|
18 | Any, | |||
18 | Callable, |
|
19 | Callable, | |
19 | Dict, |
|
20 | Dict, | |
20 | Iterable, |
|
21 | Iterable, | |
@@ -24,6 +25,7 b' from typing import (' | |||||
24 | TYPE_CHECKING, |
|
25 | TYPE_CHECKING, | |
25 | Tuple, |
|
26 | Tuple, | |
26 | Union, |
|
27 | Union, | |
|
28 | cast, | |||
27 | ) |
|
29 | ) | |
28 |
|
30 | |||
29 | from . import ( |
|
31 | from . import ( | |
@@ -59,7 +61,37 b' class BranchMapCache:' | |||||
59 |
|
61 | |||
60 | def __getitem__(self, repo): |
|
62 | def __getitem__(self, repo): | |
61 | self.updatecache(repo) |
|
63 | self.updatecache(repo) | |
62 |
|
|
64 | bcache = self._per_filter[repo.filtername] | |
|
65 | bcache._ensure_populated(repo) | |||
|
66 | assert bcache._filtername == repo.filtername, ( | |||
|
67 | bcache._filtername, | |||
|
68 | repo.filtername, | |||
|
69 | ) | |||
|
70 | return bcache | |||
|
71 | ||||
|
72 | def update_disk(self, repo, detect_pure_topo=False): | |||
|
73 | """ensure and up-to-date cache is (or will be) written on disk | |||
|
74 | ||||
|
75 | The cache for this repository view is updated if needed and written on | |||
|
76 | disk. | |||
|
77 | ||||
|
78 | If a transaction is in progress, the write is scheduled at transaction | |
|
79 | close. See the `BranchMapCache.write_dirty` method. | |||
|
80 | ||||
|
81 | This method exists independently of __getitem__ as it is sometimes useful | |

82 | to signal that we have no intent to use the data in memory yet. | |
|
83 | """ | |||
|
84 | self.updatecache(repo) | |||
|
85 | bcache = self._per_filter[repo.filtername] | |||
|
86 | assert bcache._filtername == repo.filtername, ( | |||
|
87 | bcache._filtername, | |||
|
88 | repo.filtername, | |||
|
89 | ) | |||
|
90 | if detect_pure_topo: | |||
|
91 | bcache._detect_pure_topo(repo) | |||
|
92 | tr = repo.currenttransaction() | |||
|
93 | if getattr(tr, 'finalized', True): | |||
|
94 | bcache.sync_disk(repo) | |||
63 |
|
95 | |||
64 | def updatecache(self, repo): |
|
96 | def updatecache(self, repo): | |
65 | """Update the cache for the given filtered view on a repository""" |
|
97 | """Update the cache for the given filtered view on a repository""" | |
@@ -72,7 +104,7 b' class BranchMapCache:' | |||||
72 | bcache = self._per_filter.get(filtername) |
|
104 | bcache = self._per_filter.get(filtername) | |
73 | if bcache is None or not bcache.validfor(repo): |
|
105 | if bcache is None or not bcache.validfor(repo): | |
74 | # cache object missing or cache object stale? Read from disk |
|
106 | # cache object missing or cache object stale? Read from disk | |
75 | bcache = branch |

107 | bcache = branch_cache_from_file(repo) | |
76 |
|
108 | |||
77 | revs = [] |
|
109 | revs = [] | |
78 | if bcache is None: |
|
110 | if bcache is None: | |
@@ -82,12 +114,13 b' class BranchMapCache:' | |||||
82 | subsetname = subsettable.get(filtername) |
|
114 | subsetname = subsettable.get(filtername) | |
83 | if subsetname is not None: |
|
115 | if subsetname is not None: | |
84 | subset = repo.filtered(subsetname) |
|
116 | subset = repo.filtered(subsetname) | |
85 |

117 | self.updatecache(subset) | |
|
118 | bcache = self._per_filter[subset.filtername].inherit_for(repo) | |||
86 | extrarevs = subset.changelog.filteredrevs - cl.filteredrevs |
|
119 | extrarevs = subset.changelog.filteredrevs - cl.filteredrevs | |
87 | revs.extend(r for r in extrarevs if r <= bcache.tiprev) |
|
120 | revs.extend(r for r in extrarevs if r <= bcache.tiprev) | |
88 | else: |
|
121 | else: | |
89 | # nothing to fall back on, start empty. |
|
122 | # nothing to fall back on, start empty. | |
90 | bcache = branchcache(repo) |
|
123 | bcache = new_branch_cache(repo) | |
91 |
|
124 | |||
92 | revs.extend(cl.revs(start=bcache.tiprev + 1)) |
|
125 | revs.extend(cl.revs(start=bcache.tiprev + 1)) | |
93 | if revs: |
|
126 | if revs: | |
@@ -118,7 +151,7 b' class BranchMapCache:' | |||||
118 |
|
151 | |||
119 | if rbheads: |
|
152 | if rbheads: | |
120 | rtiprev = max((int(clrev(node)) for node in rbheads)) |
|
153 | rtiprev = max((int(clrev(node)) for node in rbheads)) | |
121 | cache = branchcache( |
|
154 | cache = new_branch_cache( | |
122 | repo, |
|
155 | repo, | |
123 | remotebranchmap, |
|
156 | remotebranchmap, | |
124 | repo[rtiprev].node(), |
|
157 | repo[rtiprev].node(), | |
@@ -131,19 +164,26 b' class BranchMapCache:' | |||||
131 | for candidate in (b'base', b'immutable', b'served'): |
|
164 | for candidate in (b'base', b'immutable', b'served'): | |
132 | rview = repo.filtered(candidate) |
|
165 | rview = repo.filtered(candidate) | |
133 | if cache.validfor(rview): |
|
166 | if cache.validfor(rview): | |
|
167 | cache._filtername = candidate | |||
134 | self._per_filter[candidate] = cache |
|
168 | self._per_filter[candidate] = cache | |
|
169 | cache._state = STATE_DIRTY | |||
135 | cache.write(rview) |
|
170 | cache.write(rview) | |
136 | return |
|
171 | return | |
137 |
|
172 | |||
138 | def clear(self): |
|
173 | def clear(self): | |
139 | self._per_filter.clear() |
|
174 | self._per_filter.clear() | |
140 |
|
175 | |||
141 |
def write_d |
|
176 | def write_dirty(self, repo): | |
142 | unfi = repo.unfiltered() |
|
177 | unfi = repo.unfiltered() | |
143 | for filtername, cache in self._per_filter.items(): |
|
178 | for filtername in repoviewutil.get_ordered_subset(): | |
144 | if cache._delayed: |
|
179 | cache = self._per_filter.get(filtername) | |
|
180 | if cache is None: | |||
|
181 | continue | |||
|
182 | if filtername is None: | |||
|
183 | repo = unfi | |||
|
184 | else: | |||
145 | repo = unfi.filtered(filtername) |
|
185 | repo = unfi.filtered(filtername) | |
146 |

186 | cache.sync_disk(repo) | |
147 |
|
187 | |||
148 |
|
188 | |||
149 | def _unknownnode(node): |
|
189 | def _unknownnode(node): | |
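write_dirty() above deliberately walks filter names in a fixed subset order instead of dictionary order, which keeps the writes deterministic and handles caches that inherit from a subset view consistently with the subset they derive from. Sketched with illustrative names:

    def write_dirty(per_filter, ordered_names, unfi):
        """Sync dirty caches to disk, subsets before the views built on them."""
        for name in ordered_names:  # e.g. repoviewutil.get_ordered_subset()
            cache = per_filter.get(name)
            if cache is None:
                continue
            repo = unfi if name is None else unfi.filtered(name)
            cache.sync_disk(repo)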
@@ -158,26 +198,11 b' def _branchcachedesc(repo):' | |||||
158 | return b'branch cache' |
|
198 | return b'branch cache' | |
159 |
|
199 | |||
160 |
|
200 | |||
161 | class |

201 | class _BaseBranchCache: | |
162 | """A dict like object that hold branches heads cache. |
|
202 | """A dict like object that hold branches heads cache. | |
163 |
|
203 | |||
164 | This cache is used to avoid costly computations to determine all the |
|
204 | This cache is used to avoid costly computations to determine all the | |
165 | branch heads of a repo. |
|
205 | branch heads of a repo. | |
166 |
|
||||
167 | The cache is serialized on disk in the following format: |
|
|||
168 |
|
||||
169 | <tip hex node> <tip rev number> [optional filtered repo hex hash] |
|
|||
170 | <branch head hex node> <open/closed state> <branch name> |
|
|||
171 | <branch head hex node> <open/closed state> <branch name> |
|
|||
172 | ... |
|
|||
173 |
|
||||
174 | The first line is used to check if the cache is still valid. If the |
|
|||
175 | branch cache is for a filtered repo view, an optional third hash is |
|
|||
176 | included that hashes the hashes of all filtered and obsolete revisions. |
|
|||
177 |
|
||||
178 | The open/closed state is represented by a single letter 'o' or 'c'. |
|
|||
179 | This field can be used to avoid changelog reads when determining if a |
|
|||
180 | branch head closes a branch or not. |
|
|||
181 | """ |
|
206 | """ | |
182 |
|
207 | |||
183 | def __init__( |
|
208 | def __init__( | |
@@ -186,64 +211,18 b' class branchcache:' | |||||
186 | entries: Union[ |
|
211 | entries: Union[ | |
187 | Dict[bytes, List[bytes]], Iterable[Tuple[bytes, List[bytes]]] |
|
212 | Dict[bytes, List[bytes]], Iterable[Tuple[bytes, List[bytes]]] | |
188 | ] = (), |
|
213 | ] = (), | |
189 |
|
|
214 | closed_nodes: Optional[Set[bytes]] = None, | |
190 | tiprev: Optional[int] = nullrev, |
|
|||
191 | filteredhash: Optional[bytes] = None, |
|
|||
192 | closednodes: Optional[Set[bytes]] = None, |
|
|||
193 | hasnode: Optional[Callable[[bytes], bool]] = None, |
|
|||
194 | ) -> None: |
|
215 | ) -> None: | |
195 | """hasnode is a function which can be used to verify whether changelog |
|
216 | """hasnode is a function which can be used to verify whether changelog | |
196 | has a given node or not. If it's not provided, we assume that every node |
|
217 | has a given node or not. If it's not provided, we assume that every node | |
197 | we have exists in changelog""" |
|
218 | we have exists in changelog""" | |
198 | self._repo = repo |
|
|||
199 | self._delayed = False |
|
|||
200 | if tipnode is None: |
|
|||
201 | self.tipnode = repo.nullid |
|
|||
202 | else: |
|
|||
203 | self.tipnode = tipnode |
|
|||
204 | self.tiprev = tiprev |
|
|||
205 | self.filteredhash = filteredhash |
|
|||
206 | # closednodes is a set of nodes that close their branch. If the branch |
|
219 | # closednodes is a set of nodes that close their branch. If the branch | |
207 | # cache has been updated, it may contain nodes that are no longer |
|
220 | # cache has been updated, it may contain nodes that are no longer | |
208 | # heads. |
|
221 | # heads. | |
209 | if closednodes is None: |
|
222 | if closed_nodes is None: | |
210 |

223 | closed_nodes = set() | |
211 | else: |
|
224 | self._closednodes = set(closed_nodes) | |
212 | self._closednodes = closednodes |
|
|||
213 | self._entries = dict(entries) |
|
225 | self._entries = dict(entries) | |
214 | # whether closed nodes are verified or not |
|
|||
215 | self._closedverified = False |
|
|||
216 | # branches for which nodes are verified |
|
|||
217 | self._verifiedbranches = set() |
|
|||
218 | self._hasnode = hasnode |
|
|||
219 | if self._hasnode is None: |
|
|||
220 | self._hasnode = lambda x: True |
|
|||
221 |
|
||||
222 | def _verifyclosed(self): |
|
|||
223 | """verify the closed nodes we have""" |
|
|||
224 | if self._closedverified: |
|
|||
225 | return |
|
|||
226 | for node in self._closednodes: |
|
|||
227 | if not self._hasnode(node): |
|
|||
228 | _unknownnode(node) |
|
|||
229 |
|
||||
230 | self._closedverified = True |
|
|||
231 |
|
||||
232 | def _verifybranch(self, branch): |
|
|||
233 | """verify head nodes for the given branch.""" |
|
|||
234 | if branch not in self._entries or branch in self._verifiedbranches: |
|
|||
235 | return |
|
|||
236 | for n in self._entries[branch]: |
|
|||
237 | if not self._hasnode(n): |
|
|||
238 | _unknownnode(n) |
|
|||
239 |
|
||||
240 | self._verifiedbranches.add(branch) |
|
|||
241 |
|
||||
242 | def _verifyall(self): |
|
|||
243 | """verifies nodes of all the branches""" |
|
|||
244 | needverification = set(self._entries.keys()) - self._verifiedbranches |
|
|||
245 | for b in needverification: |
|
|||
246 | self._verifybranch(b) |
|
|||
247 |
|
226 | |||
248 | def __iter__(self): |
|
227 | def __iter__(self): | |
249 | return iter(self._entries) |
|
228 | return iter(self._entries) | |
@@ -252,115 +231,20 b' class branchcache:' | |||||
252 | self._entries[key] = value |
|
231 | self._entries[key] = value | |
253 |
|
232 | |||
254 | def __getitem__(self, key): |
|
233 | def __getitem__(self, key): | |
255 | self._verifybranch(key) |
|
|||
256 | return self._entries[key] |
|
234 | return self._entries[key] | |
257 |
|
235 | |||
258 | def __contains__(self, key): |
|
236 | def __contains__(self, key): | |
259 | self._verifybranch(key) |
|
|||
260 | return key in self._entries |
|
237 | return key in self._entries | |
261 |
|
238 | |||
262 | def iteritems(self): |
|
239 | def iteritems(self): | |
263 |
|
|
240 | return self._entries.items() | |
264 | self._verifybranch(k) |
|
|||
265 | yield k, v |
|
|||
266 |
|
241 | |||
267 | items = iteritems |
|
242 | items = iteritems | |
268 |
|
243 | |||
269 | def hasbranch(self, label): |
|
244 | def hasbranch(self, label): | |
270 | """checks whether a branch of this name exists or not""" |
|
245 | """checks whether a branch of this name exists or not""" | |
271 | self._verifybranch(label) |
|
|||
272 | return label in self._entries |
|
246 | return label in self._entries | |
273 |
|
247 | |||
274 | @classmethod |
|
|||
275 | def fromfile(cls, repo): |
|
|||
276 | f = None |
|
|||
277 | try: |
|
|||
278 | f = repo.cachevfs(cls._filename(repo)) |
|
|||
279 | lineiter = iter(f) |
|
|||
280 | cachekey = next(lineiter).rstrip(b'\n').split(b" ", 2) |
|
|||
281 | last, lrev = cachekey[:2] |
|
|||
282 | last, lrev = bin(last), int(lrev) |
|
|||
283 | filteredhash = None |
|
|||
284 | hasnode = repo.changelog.hasnode |
|
|||
285 | if len(cachekey) > 2: |
|
|||
286 | filteredhash = bin(cachekey[2]) |
|
|||
287 | bcache = cls( |
|
|||
288 | repo, |
|
|||
289 | tipnode=last, |
|
|||
290 | tiprev=lrev, |
|
|||
291 | filteredhash=filteredhash, |
|
|||
292 | hasnode=hasnode, |
|
|||
293 | ) |
|
|||
294 | if not bcache.validfor(repo): |
|
|||
295 | # invalidate the cache |
|
|||
296 | raise ValueError('tip differs') |
|
|||
297 | bcache.load(repo, lineiter) |
|
|||
298 | except (IOError, OSError): |
|
|||
299 | return None |
|
|||
300 |
|
||||
301 | except Exception as inst: |
|
|||
302 | if repo.ui.debugflag: |
|
|||
303 | msg = b'invalid %s: %s\n' |
|
|||
304 | repo.ui.debug( |
|
|||
305 | msg |
|
|||
306 | % ( |
|
|||
307 | _branchcachedesc(repo), |
|
|||
308 | stringutil.forcebytestr(inst), |
|
|||
309 | ) |
|
|||
310 | ) |
|
|||
311 | bcache = None |
|
|||
312 |
|
||||
313 | finally: |
|
|||
314 | if f: |
|
|||
315 | f.close() |
|
|||
316 |
|
||||
317 | return bcache |
|
|||
318 |
|
||||
319 | def load(self, repo, lineiter): |
|
|||
320 | """fully loads the branchcache by reading from the file using the line |
|
|||
321 | iterator passed""" |
|
|||
322 | for line in lineiter: |
|
|||
323 | line = line.rstrip(b'\n') |
|
|||
324 | if not line: |
|
|||
325 | continue |
|
|||
326 | node, state, label = line.split(b" ", 2) |
|
|||
327 | if state not in b'oc': |
|
|||
328 | raise ValueError('invalid branch state') |
|
|||
329 | label = encoding.tolocal(label.strip()) |
|
|||
330 | node = bin(node) |
|
|||
331 | self._entries.setdefault(label, []).append(node) |
|
|||
332 | if state == b'c': |
|
|||
333 | self._closednodes.add(node) |
|
|||
334 |
|
||||
335 | @staticmethod |
|
|||
336 | def _filename(repo): |
|
|||
337 | """name of a branchcache file for a given repo or repoview""" |
|
|||
338 | filename = b"branch2" |
|
|||
339 | if repo.filtername: |
|
|||
340 | filename = b'%s-%s' % (filename, repo.filtername) |
|
|||
341 | return filename |
|
|||
342 |
|
||||
343 | def validfor(self, repo): |
|
|||
344 | """check that cache contents are valid for (a subset of) this repo |
|
|||
345 |
|
||||
346 | - False when the order of changesets changed or if we detect a strip. |
|
|||
347 | - True when cache is up-to-date for the current repo or its subset.""" |
|
|||
348 | try: |
|
|||
349 | node = repo.changelog.node(self.tiprev) |
|
|||
350 | except IndexError: |
|
|||
351 | # changesets were stripped and now we don't even have enough to |
|
|||
352 | # find tiprev |
|
|||
353 | return False |
|
|||
354 | if self.tipnode != node: |
|
|||
355 | # tiprev doesn't correspond to tipnode: repo was stripped, or this |
|
|||
356 | # repo has a different order of changesets |
|
|||
357 | return False |
|
|||
358 | tiphash = scmutil.filteredhash(repo, self.tiprev, needobsolete=True) |
|
|||
359 | # hashes don't match if this repo view has a different set of filtered |
|
|||
360 | # revisions (e.g. due to phase changes) or obsolete revisions (e.g. |
|
|||
361 | # history was rewritten) |
|
|||
362 | return self.filteredhash == tiphash |
|
|||
363 |
|
||||
364 | def _branchtip(self, heads): |
|
248 | def _branchtip(self, heads): | |
365 | """Return tuple with last open head in heads and false, |
|
249 | """Return tuple with last open head in heads and false, | |
366 | otherwise return last closed head and true.""" |
|
250 | otherwise return last closed head and true.""" | |
@@ -383,7 +267,6 b' class branchcache:' | |||||
383 | return (n for n in nodes if n not in self._closednodes) |
|
267 | return (n for n in nodes if n not in self._closednodes) | |
384 |
|
268 | |||
385 | def branchheads(self, branch, closed=False): |
|
269 | def branchheads(self, branch, closed=False): | |
386 | self._verifybranch(branch) |
|
|||
387 | heads = self._entries[branch] |
|
270 | heads = self._entries[branch] | |
388 | if not closed: |
|
271 | if not closed: | |
389 | heads = list(self.iteropen(heads)) |
|
272 | heads = list(self.iteropen(heads)) | |
@@ -395,60 +278,8 b' class branchcache:' | |||||
395 |
|
278 | |||
396 | def iterheads(self): |
|
279 | def iterheads(self): | |
397 | """returns all the heads""" |
|
280 | """returns all the heads""" | |
398 | self._verifyall() |
|
|||
399 | return self._entries.values() |
|
281 | return self._entries.values() | |
400 |
|
282 | |||
401 | def copy(self): |
|
|||
402 | """return an deep copy of the branchcache object""" |
|
|||
403 | return type(self)( |
|
|||
404 | self._repo, |
|
|||
405 | self._entries, |
|
|||
406 | self.tipnode, |
|
|||
407 | self.tiprev, |
|
|||
408 | self.filteredhash, |
|
|||
409 | self._closednodes, |
|
|||
410 | ) |
|
|||
411 |
|
||||
412 | def write(self, repo): |
|
|||
413 | tr = repo.currenttransaction() |
|
|||
414 | if not getattr(tr, 'finalized', True): |
|
|||
415 | # Avoid premature writing. |
|
|||
416 | # |
|
|||
417 | # (The cache warming setup by localrepo will update the file later.) |
|
|||
418 | self._delayed = True |
|
|||
419 | return |
|
|||
420 | try: |
|
|||
421 | filename = self._filename(repo) |
|
|||
422 | with repo.cachevfs(filename, b"w", atomictemp=True) as f: |
|
|||
423 | cachekey = [hex(self.tipnode), b'%d' % self.tiprev] |
|
|||
424 | if self.filteredhash is not None: |
|
|||
425 | cachekey.append(hex(self.filteredhash)) |
|
|||
426 | f.write(b" ".join(cachekey) + b'\n') |
|
|||
427 | nodecount = 0 |
|
|||
428 | for label, nodes in sorted(self._entries.items()): |
|
|||
429 | label = encoding.fromlocal(label) |
|
|||
430 | for node in nodes: |
|
|||
431 | nodecount += 1 |
|
|||
432 | if node in self._closednodes: |
|
|||
433 | state = b'c' |
|
|||
434 | else: |
|
|||
435 | state = b'o' |
|
|||
436 | f.write(b"%s %s %s\n" % (hex(node), state, label)) |
|
|||
437 | repo.ui.log( |
|
|||
438 | b'branchcache', |
|
|||
439 | b'wrote %s with %d labels and %d nodes\n', |
|
|||
440 | _branchcachedesc(repo), |
|
|||
441 | len(self._entries), |
|
|||
442 | nodecount, |
|
|||
443 | ) |
|
|||
444 | self._delayed = False |
|
|||
445 | except (IOError, OSError, error.Abort) as inst: |
|
|||
446 | # Abort may be raised by read only opener, so log and continue |
|
|||
447 | repo.ui.debug( |
|
|||
448 | b"couldn't write branch cache: %s\n" |
|
|||
449 | % stringutil.forcebytestr(inst) |
|
|||
450 | ) |
|
|||
451 |
|
||||
452 | def update(self, repo, revgen): |
|
283 | def update(self, repo, revgen): | |
453 | """Given a branchhead cache, self, that may have extra nodes or be |
|
284 | """Given a branchhead cache, self, that may have extra nodes or be | |
454 | missing heads, and a generator of nodes that are strictly a superset of |
|
285 | missing heads, and a generator of nodes that are strictly a superset of | |
@@ -456,29 +287,69 b' class branchcache:' | |||||
456 | """ |
|
287 | """ | |
457 | starttime = util.timer() |
|
288 | starttime = util.timer() | |
458 | cl = repo.changelog |
|
289 | cl = repo.changelog | |
|
290 | # Faster than using ctx.obsolete() | |||
|
291 | obsrevs = obsolete.getrevs(repo, b'obsolete') | |||
459 | # collect new branch entries |
|
292 | # collect new branch entries | |
460 | newbranches = {} |
|
293 | newbranches = {} | |
|
294 | new_closed = set() | |||
|
295 | obs_ignored = set() | |||
461 | getbranchinfo = repo.revbranchcache().branchinfo |
|
296 | getbranchinfo = repo.revbranchcache().branchinfo | |
|
297 | max_rev = -1 | |||
462 | for r in revgen: |
|
298 | for r in revgen: | |
|
299 | max_rev = max(max_rev, r) | |||
|
300 | if r in obsrevs: | |||
|
301 | # We ignore obsolete changesets as they shouldn't be | |||
|
302 | # considered heads. | |||
|
303 | obs_ignored.add(r) | |||
|
304 | continue | |||
463 | branch, closesbranch = getbranchinfo(r) |
|
305 | branch, closesbranch = getbranchinfo(r) | |
464 | newbranches.setdefault(branch, []).append(r) |
|
306 | newbranches.setdefault(branch, []).append(r) | |
465 | if closesbranch: |
|
307 | if closesbranch: | |
466 |
|
|
308 | new_closed.add(r) | |
|
309 | if max_rev < 0: | |||
|
310 | msg = "running branchcache.update without revision to update" | |||
|
311 | raise error.ProgrammingError(msg) | |||
|
312 | ||||
|
313 | self._process_new( | |||
|
314 | repo, | |||
|
315 | newbranches, | |||
|
316 | new_closed, | |||
|
317 | obs_ignored, | |||
|
318 | max_rev, | |||
|
319 | ) | |||
|
320 | ||||
|
321 | self._closednodes.update(cl.node(rev) for rev in new_closed) | |||
467 |
|
322 | |||
468 | # new tip revision which we found after iterating items from new |
|
323 | duration = util.timer() - starttime | |
469 | # branches |
|
324 | repo.ui.log( | |
470 | ntiprev = self.tiprev |
|
325 | b'branchcache', | |
|
326 | b'updated %s in %.4f seconds\n', | |||
|
327 | _branchcachedesc(repo), | |||
|
328 | duration, | |||
|
329 | ) | |||
|
330 | return max_rev | |||
471 |
|
331 | |||
|
332 | def _process_new( | |||
|
333 | self, | |||
|
334 | repo, | |||
|
335 | newbranches, | |||
|
336 | new_closed, | |||
|
337 | obs_ignored, | |||
|
338 | max_rev, | |||
|
339 | ): | |||
|
340 | """update the branchmap from a set of new information""" | |||
472 | # Delay fetching the topological heads until they are needed. |
|
341 | # Delay fetching the topological heads until they are needed. | |
473 | # A repository without non-contiguous branches can skip this part. |

342 | # A repository without non-contiguous branches can skip this part. | |
474 | topoheads = None |
|
343 | topoheads = None | |
475 |
|
344 | |||
|
345 | cl = repo.changelog | |||
|
346 | getbranchinfo = repo.revbranchcache().branchinfo | |||
|
347 | # Faster than using ctx.obsolete() | |||
|
348 | obsrevs = obsolete.getrevs(repo, b'obsolete') | |||
|
349 | ||||
476 | # If a changeset is visible, its parents must be visible too, so |
|
350 | # If a changeset is visible, its parents must be visible too, so | |
477 | # use the faster unfiltered parent accessor. |
|
351 | # use the faster unfiltered parent accessor. | |
478 |
parentrevs = |
|
352 | parentrevs = cl._uncheckedparentrevs | |
479 |
|
||||
480 | # Faster than using ctx.obsolete() |
|
|||
481 | obsrevs = obsolete.getrevs(repo, b'obsolete') |
|
|||
482 |
|
353 | |||
483 | for branch, newheadrevs in newbranches.items(): |
|
354 | for branch, newheadrevs in newbranches.items(): | |
484 | # For every branch, compute the new branchheads. |
|
355 | # For every branch, compute the new branchheads. | |
@@ -520,11 +391,6 b' class branchcache:' | |||||
520 | bheadset = {cl.rev(node) for node in bheads} |
|
391 | bheadset = {cl.rev(node) for node in bheads} | |
521 | uncertain = set() |
|
392 | uncertain = set() | |
522 | for newrev in sorted(newheadrevs): |
|
393 | for newrev in sorted(newheadrevs): | |
523 | if newrev in obsrevs: |
|
|||
524 | # We ignore obsolete changesets as they shouldn't be |
|
|||
525 | # considered heads. |
|
|||
526 | continue |
|
|||
527 |
|
||||
528 | if not bheadset: |
|
394 | if not bheadset: | |
529 | bheadset.add(newrev) |
|
395 | bheadset.add(newrev) | |
530 | continue |
|
396 | continue | |
@@ -561,50 +427,665 b' class branchcache:' | |||||
561 | bheadset -= ancestors |
|
427 | bheadset -= ancestors | |
562 | if bheadset: |
|
428 | if bheadset: | |
563 | self[branch] = [cl.node(rev) for rev in sorted(bheadset)] |
|
429 | self[branch] = [cl.node(rev) for rev in sorted(bheadset)] | |
564 | tiprev = max(newheadrevs) |
|
430 | ||
565 | if tiprev > ntiprev: |
|
431 | ||
566 | ntiprev = tiprev |
|
432 | STATE_CLEAN = 1 | |
|
433 | STATE_INHERITED = 2 | |||
|
434 | STATE_DIRTY = 3 | |||
|
435 | ||||
|
436 | ||||
|
437 | class _LocalBranchCache(_BaseBranchCache): | |||
|
438 | """base class of branch-map info for a local repo or repoview""" | |||
|
439 | ||||
|
440 | _base_filename = None | |||
|
441 | _default_key_hashes: Tuple[bytes] = cast(Tuple[bytes], ()) | |||
|
442 | ||||
|
443 | def __init__( | |||
|
444 | self, | |||
|
445 | repo: "localrepo.localrepository", | |||
|
446 | entries: Union[ | |||
|
447 | Dict[bytes, List[bytes]], Iterable[Tuple[bytes, List[bytes]]] | |||
|
448 | ] = (), | |||
|
449 | tipnode: Optional[bytes] = None, | |||
|
450 | tiprev: Optional[int] = nullrev, | |||
|
451 | key_hashes: Optional[Tuple[bytes]] = None, | |||
|
452 | closednodes: Optional[Set[bytes]] = None, | |||
|
453 | hasnode: Optional[Callable[[bytes], bool]] = None, | |||
|
454 | verify_node: bool = False, | |||
|
455 | inherited: bool = False, | |||
|
456 | ) -> None: | |||
|
457 | """hasnode is a function which can be used to verify whether changelog | |||
|
458 | has a given node or not. If it's not provided, we assume that every node | |||
|
459 | we have exists in changelog""" | |||
|
460 | self._filtername = repo.filtername | |||
|
461 | if tipnode is None: | |||
|
462 | self.tipnode = repo.nullid | |||
|
463 | else: | |||
|
464 | self.tipnode = tipnode | |||
|
465 | self.tiprev = tiprev | |||
|
466 | if key_hashes is None: | |||
|
467 | self.key_hashes = self._default_key_hashes | |||
|
468 | else: | |||
|
469 | self.key_hashes = key_hashes | |||
|
470 | self._state = STATE_CLEAN | |||
|
471 | if inherited: | |||
|
472 | self._state = STATE_INHERITED | |||
|
473 | ||||
|
474 | super().__init__(repo=repo, entries=entries, closed_nodes=closednodes) | |||
|
475 | # closednodes is a set of nodes that close their branch. If the branch | |||
|
476 | # cache has been updated, it may contain nodes that are no longer | |||
|
477 | # heads. | |||
|
478 | ||||
|
479 | # Do we need to verify branch at all ? | |||
|
480 | self._verify_node = verify_node | |||
|
481 | # branches for which nodes are verified | |||
|
482 | self._verifiedbranches = set() | |||
|
483 | self._hasnode = None | |||
|
484 | if self._verify_node: | |||
|
485 | self._hasnode = repo.changelog.hasnode | |||
|
486 | ||||
|
487 | def _compute_key_hashes(self, repo) -> Tuple[bytes]: | |||
|
488 | raise NotImplementedError | |||
|
489 | ||||
|
490 | def _ensure_populated(self, repo): | |||
|
491 | """make sure any lazily loaded values are fully populated""" | |||
|
492 | ||||
|
493 | def _detect_pure_topo(self, repo) -> None: | |||
|
494 | pass | |||
|
495 | ||||
|
496 | def validfor(self, repo): | |||
|
497 | """check that cache contents are valid for (a subset of) this repo | |||
|
498 | ||||
|
499 | - False when the order of changesets changed or if we detect a strip. | |||
|
500 | - True when cache is up-to-date for the current repo or its subset.""" | |||
|
501 | try: | |||
|
502 | node = repo.changelog.node(self.tiprev) | |||
|
503 | except IndexError: | |||
|
504 | # changesets were stripped and now we don't even have enough to | |||
|
505 | # find tiprev | |||
|
506 | return False | |||
|
507 | if self.tipnode != node: | |||
|
508 | # tiprev doesn't correspond to tipnode: repo was stripped, or this | |||
|
509 | # repo has a different order of changesets | |||
|
510 | return False | |||
|
511 | repo_key_hashes = self._compute_key_hashes(repo) | |||
|
512 | # hashes don't match if this repo view has a different set of filtered | |||
|
513 | # revisions (e.g. due to phase changes) or obsolete revisions (e.g. | |||
|
514 | # history was rewritten) | |||
|
515 | return self.key_hashes == repo_key_hashes | |||
|
516 | ||||
|
517 | @classmethod | |||
|
518 | def fromfile(cls, repo): | |||
|
519 | f = None | |||
|
520 | try: | |||
|
521 | f = repo.cachevfs(cls._filename(repo)) | |||
|
522 | lineiter = iter(f) | |||
|
523 | init_kwargs = cls._load_header(repo, lineiter) | |||
|
524 | bcache = cls( | |||
|
525 | repo, | |||
|
526 | verify_node=True, | |||
|
527 | **init_kwargs, | |||
|
528 | ) | |||
|
529 | if not bcache.validfor(repo): | |||
|
530 | # invalidate the cache | |||
|
531 | raise ValueError('tip differs') | |||
|
532 | bcache._load_heads(repo, lineiter) | |||
|
533 | except (IOError, OSError): | |||
|
534 | return None | |||
|
535 | ||||
|
536 | except Exception as inst: | |||
|
537 | if repo.ui.debugflag: | |||
|
538 | msg = b'invalid %s: %s\n' | |||
|
539 | msg %= ( | |||
|
540 | _branchcachedesc(repo), | |||
|
541 | stringutil.forcebytestr(inst), | |||
|
542 | ) | |||
|
543 | repo.ui.debug(msg) | |||
|
544 | bcache = None | |||
|
545 | ||||
|
546 | finally: | |||
|
547 | if f: | |||
|
548 | f.close() | |||
|
549 | ||||
|
550 | return bcache | |||
|
551 | ||||
|
552 | @classmethod | |||
|
553 | def _load_header(cls, repo, lineiter) -> "dict[str, Any]": | |||
|
554 | raise NotImplementedError | |||
|
555 | ||||
|
556 | def _load_heads(self, repo, lineiter): | |||
|
557 | """fully loads the branchcache by reading from the file using the line | |||
|
558 | iterator passed""" | |||
|
559 | for line in lineiter: | |||
|
560 | line = line.rstrip(b'\n') | |||
|
561 | if not line: | |||
|
562 | continue | |||
|
563 | node, state, label = line.split(b" ", 2) | |||
|
564 | if state not in b'oc': | |||
|
565 | raise ValueError('invalid branch state') | |||
|
566 | label = encoding.tolocal(label.strip()) | |||
|
567 | node = bin(node) | |||
|
568 | self._entries.setdefault(label, []).append(node) | |||
|
569 | if state == b'c': | |||
|
570 | self._closednodes.add(node) | |||
567 |
|
571 | |||
568 | if ntiprev > self.tiprev: |
|
572 | @classmethod | |
569 | self.tiprev = ntiprev |
|
573 | def _filename(cls, repo): | |
570 | self.tipnode = cl.node(ntiprev) |
|
574 | """name of a branchcache file for a given repo or repoview""" | |
|
575 | filename = cls._base_filename | |||
|
576 | assert filename is not None | |||
|
577 | if repo.filtername: | |||
|
578 | filename = b'%s-%s' % (filename, repo.filtername) | |||
|
579 | return filename | |||
|
580 | ||||
|
581 | def inherit_for(self, repo): | |||
|
582 | """return a deep copy of the branchcache object""" | |||
|
583 | assert repo.filtername != self._filtername | |||
|
584 | other = type(self)( | |||
|
585 | repo=repo, | |||
|
586 | # we always do a shallow copy of self._entries, and the values are | |

587 | # always replaced, so no need to deepcopy as long as the above remains | |
|
588 | # true. | |||
|
589 | entries=self._entries, | |||
|
590 | tipnode=self.tipnode, | |||
|
591 | tiprev=self.tiprev, | |||
|
592 | key_hashes=self.key_hashes, | |||
|
593 | closednodes=set(self._closednodes), | |||
|
594 | verify_node=self._verify_node, | |||
|
595 | inherited=True, | |||
|
596 | ) | |||
|
597 | # also copy information about the current verification state | |||
|
598 | other._verifiedbranches = set(self._verifiedbranches) | |||
|
599 | return other | |||
|
600 | ||||
|
601 | def sync_disk(self, repo): | |||
|
602 | """synchronise the on disk file with the cache state | |||
|
603 | ||||
|
604 | If new values specific to this filter level need to be written, the file | |

605 | will be updated; if the state of the branchcache is inherited from a | |

606 | subset, any stale on-disk file will be deleted. | |
|
607 | ||||
|
608 | This method does nothing if there is nothing to do. | |
|
609 | """ | |||
|
610 | if self._state == STATE_DIRTY: | |||
|
611 | self.write(repo) | |||
|
612 | elif self._state == STATE_INHERITED: | |||
|
613 | filename = self._filename(repo) | |||
|
614 | repo.cachevfs.tryunlink(filename) | |||
|
615 | ||||
|
616 | def write(self, repo): | |||
|
617 | assert self._filtername == repo.filtername, ( | |||
|
618 | self._filtername, | |||
|
619 | repo.filtername, | |||
|
620 | ) | |||
|
621 | assert self._state == STATE_DIRTY, self._state | |||
|
622 | # This method should not be called during an open transaction | |||
|
623 | tr = repo.currenttransaction() | |||
|
624 | if not getattr(tr, 'finalized', True): | |||
|
625 | msg = "writing branchcache in the middle of a transaction" | |||
|
626 | raise error.ProgrammingError(msg) | |||
|
627 | try: | |||
|
628 | filename = self._filename(repo) | |||
|
629 | with repo.cachevfs(filename, b"w", atomictemp=True) as f: | |||
|
630 | self._write_header(f) | |||
|
631 | nodecount = self._write_heads(repo, f) | |||
|
632 | repo.ui.log( | |||
|
633 | b'branchcache', | |||
|
634 | b'wrote %s with %d labels and %d nodes\n', | |||
|
635 | _branchcachedesc(repo), | |||
|
636 | len(self._entries), | |||
|
637 | nodecount, | |||
|
638 | ) | |||
|
639 | self._state = STATE_CLEAN | |||
|
640 | except (IOError, OSError, error.Abort) as inst: | |||
|
641 | # Abort may be raised by read only opener, so log and continue | |||
|
642 | repo.ui.debug( | |||
|
643 | b"couldn't write branch cache: %s\n" | |||
|
644 | % stringutil.forcebytestr(inst) | |||
|
645 | ) | |||
|
646 | ||||
|
647 | def _write_header(self, fp) -> None: | |||
|
648 | raise NotImplementedError | |||
|
649 | ||||
|
650 | def _write_heads(self, repo, fp) -> int: | |||
|
651 | """write list of heads to a file | |||
|
652 | ||||
|
653 | Return the number of heads written.""" | |||
|
654 | nodecount = 0 | |||
|
655 | for label, nodes in sorted(self._entries.items()): | |||
|
656 | label = encoding.fromlocal(label) | |||
|
657 | for node in nodes: | |||
|
658 | nodecount += 1 | |||
|
659 | if node in self._closednodes: | |||
|
660 | state = b'c' | |||
|
661 | else: | |||
|
662 | state = b'o' | |||
|
663 | fp.write(b"%s %s %s\n" % (hex(node), state, label)) | |||
|
664 | return nodecount | |||
|
665 | ||||
|
666 | def _verifybranch(self, branch): | |||
|
667 | """verify head nodes for the given branch.""" | |||
|
668 | if not self._verify_node: | |||
|
669 | return | |||
|
670 | if branch not in self._entries or branch in self._verifiedbranches: | |||
|
671 | return | |||
|
672 | assert self._hasnode is not None | |||
|
673 | for n in self._entries[branch]: | |||
|
674 | if not self._hasnode(n): | |||
|
675 | _unknownnode(n) | |||
|
676 | ||||
|
677 | self._verifiedbranches.add(branch) | |||
|
678 | ||||
|
679 | def _verifyall(self): | |||
|
680 | """verifies nodes of all the branches""" | |||
|
681 | for b in self._entries.keys(): | |||
|
682 | if b not in self._verifiedbranches: | |||
|
683 | self._verifybranch(b) | |||
|
684 | ||||
|
685 | def __getitem__(self, key): | |||
|
686 | self._verifybranch(key) | |||
|
687 | return super().__getitem__(key) | |||
|
688 | ||||
|
689 | def __contains__(self, key): | |||
|
690 | self._verifybranch(key) | |||
|
691 | return super().__contains__(key) | |||
|
692 | ||||
|
693 | def iteritems(self): | |||
|
694 | self._verifyall() | |||
|
695 | return super().iteritems() | |||
|
696 | ||||
|
697 | items = iteritems | |||
|
698 | ||||
|
699 | def iterheads(self): | |||
|
700 | """returns all the heads""" | |||
|
701 | self._verifyall() | |||
|
702 | return super().iterheads() | |||
|
703 | ||||
|
704 | def hasbranch(self, label): | |||
|
705 | """checks whether a branch of this name exists or not""" | |||
|
706 | self._verifybranch(label) | |||
|
707 | return super().hasbranch(label) | |||
|
708 | ||||
|
709 | def branchheads(self, branch, closed=False): | |||
|
710 | self._verifybranch(branch) | |||
|
711 | return super().branchheads(branch, closed=closed) | |||
|
712 | ||||
|
713 | def update(self, repo, revgen): | |||
|
714 | assert self._filtername == repo.filtername, ( | |||
|
715 | self._filtername, | |||
|
716 | repo.filtername, | |||
|
717 | ) | |||
|
718 | cl = repo.changelog | |||
|
719 | max_rev = super().update(repo, revgen) | |||
|
720 | # new tip revision which we found after iterating items from new | |||
|
721 | # branches | |||
|
722 | if max_rev is not None and max_rev > self.tiprev: | |||
|
723 | self.tiprev = max_rev | |||
|
724 | self.tipnode = cl.node(max_rev) | |||
|
725 | else: | |||
|
726 | # We should not be here if this is false | |||
|
727 | assert cl.node(self.tiprev) == self.tipnode | |||
571 |
|
728 | |||
572 | if not self.validfor(repo): |
|
729 | if not self.validfor(repo): | |
573 | # old cache key is now invalid for the repo, but we've just updated |
|
730 | # the tiprev and tipnode should be aligned, so if the current repo | |
574 | # the cache and we assume it's valid, so let's make the cache key |
|
731 | # is not seen as valid, it is because the old cache key is now | |
575 | # valid as well by recomputing it from the cached data |
|
732 | # invalid for the repo. | |
576 | self.tipnode = repo.nullid |
|
733 | # | |
577 | self.tiprev = nullrev |
|
734 | # However, we've just updated the cache and we assume it's valid, | |
578 | for heads in self.iterheads(): |
|
735 | # so let's make the cache key valid as well by recomputing it from | |
579 | if not heads:
|
|
736 | # the cached data | |
580 | # all revisions on a branch are obsolete |
|
737 | self.key_hashes = self._compute_key_hashes(repo) | |
581 | continue |
|
738 | self.filteredhash = scmutil.combined_filtered_and_obsolete_hash( | |
582 | # note: tiprev is not necessarily the tip revision of repo, |
|
739 | repo, | |
583 | # because the tip could be obsolete (i.e. not a head) |
|
740 | self.tiprev, | |
584 | tiprev = max(cl.rev(node) for node in heads) |
|
741 | ) | |
585 | if tiprev > self.tiprev: |
|
742 | ||
586 | self.tipnode = cl.node(tiprev) |
|
743 | self._state = STATE_DIRTY | |
587 | self.tiprev = tiprev |
|
744 | tr = repo.currenttransaction() | |
588 | self.filteredhash = scmutil.filteredhash( |
|
745 | if getattr(tr, 'finalized', True): | |
589 | repo, self.tiprev, needobsolete=True |
|
746 | # Avoid premature writing. | |
|
747 | # | |||
|
748 | # (The cache warming setup by localrepo will update the file later.) | |||
|
749 | self.write(repo) | |||
|
750 | ||||
|
751 | ||||
|
752 | def branch_cache_from_file(repo) -> Optional[_LocalBranchCache]: | |||
|
753 | """Build a branch cache from on-disk data if possible | |||
|
754 | ||||
|
755 | Return a branch cache of the right format depending on the repository. | |||
|
756 | """ | |||
|
757 | if repo.ui.configbool(b"experimental", b"branch-cache-v3"): | |||
|
758 | return BranchCacheV3.fromfile(repo) | |||
|
759 | else: | |||
|
760 | return BranchCacheV2.fromfile(repo) | |||
|
761 | ||||
|
762 | ||||
|
763 | def new_branch_cache(repo, *args, **kwargs): | |||
|
764 | """Build a new branch cache from argument | |||
|
765 | ||||
|
766 | Return a branch cache of the right format depending of the repository. | |||
|
767 | """ | |||
|
768 | if repo.ui.configbool(b"experimental", b"branch-cache-v3"): | |||
|
769 | return BranchCacheV3(repo, *args, **kwargs) | |||
|
770 | else: | |||
|
771 | return BranchCacheV2(repo, *args, **kwargs) | |||
|
772 | ||||
|
773 | ||||
|
774 | class BranchCacheV2(_LocalBranchCache): | |||
|
775 | """a branch cache using version 2 of the format on disk | |||
|
776 | ||||
|
777 | The cache is serialized on disk in the following format: | |||
|
778 | ||||
|
779 | <tip hex node> <tip rev number> [optional filtered repo hex hash] | |||
|
780 | <branch head hex node> <open/closed state> <branch name> | |||
|
781 | <branch head hex node> <open/closed state> <branch name> | |||
|
782 | ... | |||
|
783 | ||||
|
784 | The first line is used to check if the cache is still valid. If the | |||
|
785 | branch cache is for a filtered repo view, an optional third hash is | |||
|
786 | included that hashes the hashes of all filtered and obsolete revisions. | |||
|
787 | ||||
|
788 | The open/closed state is represented by a single letter 'o' or 'c'. | |||
|
789 | This field can be used to avoid changelog reads when determining if a | |||
|
790 | branch head closes a branch or not. | |||
|
791 | """ | |||
|
792 | ||||
|
793 | _base_filename = b"branch2" | |||
|
794 | ||||
|
795 | @classmethod | |||
|
796 | def _load_header(cls, repo, lineiter) -> "dict[str, Any]": | |||
|
797 | """parse the head of a branchmap file | |||
|
798 | ||||
|
799 | return parameters to pass to a newly created class instance. | |||
|
800 | """ | |||
|
801 | cachekey = next(lineiter).rstrip(b'\n').split(b" ", 2) | |||
|
802 | last, lrev = cachekey[:2] | |||
|
803 | last, lrev = bin(last), int(lrev) | |||
|
804 | filteredhash = () | |||
|
805 | if len(cachekey) > 2: | |||
|
806 | filteredhash = (bin(cachekey[2]),) | |||
|
807 | return { | |||
|
808 | "tipnode": last, | |||
|
809 | "tiprev": lrev, | |||
|
810 | "key_hashes": filteredhash, | |||
|
811 | } | |||
|
812 | ||||
|
813 | def _write_header(self, fp) -> None: | |||
|
814 | """write the branch cache header to a file""" | |||
|
815 | cachekey = [hex(self.tipnode), b'%d' % self.tiprev] | |||
|
816 | if self.key_hashes: | |||
|
817 | cachekey.append(hex(self.key_hashes[0])) | |||
|
818 | fp.write(b" ".join(cachekey) + b'\n') | |||
|
819 | ||||
|
820 | def _compute_key_hashes(self, repo) -> Tuple[bytes]: | |||
|
821 | """return the cache key hashes that match this repoview state""" | |||
|
822 | filtered_hash = scmutil.combined_filtered_and_obsolete_hash( | |||
|
823 | repo, | |||
|
824 | self.tiprev, | |||
|
825 | needobsolete=True, | |||
|
826 | ) | |||
|
827 | keys: Tuple[bytes] = cast(Tuple[bytes], ()) | |||
|
828 | if filtered_hash is not None: | |||
|
829 | keys: Tuple[bytes] = (filtered_hash,) | |||
|
830 | return keys | |||
|
831 | ||||
|
832 | ||||
|
833 | class BranchCacheV3(_LocalBranchCache): | |||
|
834 | """a branch cache using version 3 of the format on disk | |||
|
835 | ||||
|
836 | This version is still EXPERIMENTAL and the format is subject to changes. | |||
|
837 | ||||
|
838 | The cache is serialized on disk in the following format: | |||
|
839 | ||||
|
840 | <cache-key-xxx>=<xxx-value> <cache-key-yyy>=<yyy-value> […] | |||
|
841 | <branch head hex node> <open/closed state> <branch name> | |||
|
842 | <branch head hex node> <open/closed state> <branch name> | |||
|
843 | ... | |||
|
844 | ||||
|
845 | The first line is used to check if the cache is still valid. It is a series | |||

846 | of key/value pairs. The following keys are recognized: | |||

847 | ||||

848 | - tip-rev: the rev-num of the tip-most revision seen by this cache | |||

849 | - tip-node: the node-id of the tip-most revision seen by this cache | |||
|
850 | - filtered-hash: the hash of all filtered revisions (before tip-rev) | |||
|
851 | ignored by this cache. | |||
|
852 | - obsolete-hash: the hash of all non-filtered obsolete revisions (before | |||
|
853 | tip-rev) ignored by this cache. | |||
|
854 | ||||
|
855 | The tip-rev is used to know how far behind the values in the file are | |||
|
856 | compared to the current repository state. | |||
|
857 | ||||
|
858 | The tip-node, filtered-hash and obsolete-hash are used to detect if this | |||
|
859 | cache can be used for this repository state at all. | |||
|
860 | ||||
|
861 | The open/closed state is represented by a single letter 'o' or 'c'. | |||
|
862 | This field can be used to avoid changelog reads when determining if a | |||
|
863 | branch head closes a branch or not. | |||
|
864 | ||||
|
865 | Topological heads are not included in the listing and should be dispatched | |||
|
866 | on the right branch at read time. Obsolete topological heads should be | |||
|
867 | ignored. | |||
|
868 | """ | |||
|
869 | ||||
|
870 | _base_filename = b"branch3-exp" | |||
|
871 | _default_key_hashes = (None, None) | |||
|
872 | ||||
|
873 | def __init__(self, *args, pure_topo_branch=None, **kwargs): | |||
|
874 | super().__init__(*args, **kwargs) | |||
|
875 | self._pure_topo_branch = pure_topo_branch | |||
|
876 | self._needs_populate = self._pure_topo_branch is not None | |||
|
877 | ||||
|
878 | def inherit_for(self, repo): | |||
|
879 | new = super().inherit_for(repo) | |||
|
880 | new._pure_topo_branch = self._pure_topo_branch | |||
|
881 | new._needs_populate = self._needs_populate | |||
|
882 | return new | |||
|
883 | ||||
|
884 | def _get_topo_heads(self, repo): | |||
|
885 | """returns the topological head of a repoview content up to self.tiprev""" | |||
|
886 | cl = repo.changelog | |||
|
887 | if self.tiprev == nullrev: | |||
|
888 | return [] | |||
|
889 | elif self.tiprev == cl.tiprev(): | |||
|
890 | return cl.headrevs() | |||
|
891 | else: | |||
|
892 | # XXX passing tiprev as ceiling of cl.headrevs could be faster | |||
|
893 | heads = cl.headrevs(cl.revs(stop=self.tiprev)) | |||
|
894 | return heads | |||
|
895 | ||||
|
896 | def _write_header(self, fp) -> None: | |||
|
897 | cache_keys = { | |||
|
898 | b"tip-node": hex(self.tipnode), | |||
|
899 | b"tip-rev": b'%d' % self.tiprev, | |||
|
900 | } | |||
|
901 | if self.key_hashes: | |||
|
902 | if self.key_hashes[0] is not None: | |||
|
903 | cache_keys[b"filtered-hash"] = hex(self.key_hashes[0]) | |||
|
904 | if self.key_hashes[1] is not None: | |||
|
905 | cache_keys[b"obsolete-hash"] = hex(self.key_hashes[1]) | |||
|
906 | if self._pure_topo_branch is not None: | |||
|
907 | cache_keys[b"topo-mode"] = b"pure" | |||
|
908 | pieces = (b"%s=%s" % i for i in sorted(cache_keys.items())) | |||
|
909 | fp.write(b" ".join(pieces) + b'\n') | |||
|
910 | if self._pure_topo_branch is not None: | |||
|
911 | label = encoding.fromlocal(self._pure_topo_branch) | |||
|
912 | fp.write(label + b'\n') | |||
|
913 | ||||
|
914 | def _write_heads(self, repo, fp) -> int: | |||
|
915 | """write list of heads to a file | |||
|
916 | ||||
|
917 | Return the number of heads written.""" | |||
|
918 | nodecount = 0 | |||
|
919 | topo_heads = None | |||
|
920 | if self._pure_topo_branch is None: | |||
|
921 | topo_heads = set(self._get_topo_heads(repo)) | |||
|
922 | to_rev = repo.changelog.index.rev | |||
|
923 | for label, nodes in sorted(self._entries.items()): | |||
|
924 | if label == self._pure_topo_branch: | |||
|
925 | # no need to write anything, the header took care of that | |||
|
926 | continue | |||
|
927 | label = encoding.fromlocal(label) | |||
|
928 | for node in nodes: | |||
|
929 | if topo_heads is not None: | |||
|
930 | rev = to_rev(node) | |||
|
931 | if rev in topo_heads: | |||
|
932 | continue | |||
|
933 | if node in self._closednodes: | |||
|
934 | state = b'c' | |||
|
935 | else: | |||
|
936 | state = b'o' | |||
|
937 | nodecount += 1 | |||
|
938 | fp.write(b"%s %s %s\n" % (hex(node), state, label)) | |||
|
939 | return nodecount | |||
|
940 | ||||
|
941 | @classmethod | |||
|
942 | def _load_header(cls, repo, lineiter): | |||
|
943 | header_line = next(lineiter) | |||
|
944 | pieces = header_line.rstrip(b'\n').split(b" ") | |||
|
945 | cache_keys = dict(p.split(b'=', 1) for p in pieces) | |||
|
946 | ||||
|
947 | args = {} | |||
|
948 | filtered_hash = None | |||
|
949 | obsolete_hash = None | |||
|
950 | has_pure_topo_heads = False | |||
|
951 | for k, v in cache_keys.items(): | |||
|
952 | if k == b"tip-rev": | |||
|
953 | args["tiprev"] = int(v) | |||
|
954 | elif k == b"tip-node": | |||
|
955 | args["tipnode"] = bin(v) | |||
|
956 | elif k == b"filtered-hash": | |||
|
957 | filtered_hash = bin(v) | |||
|
958 | elif k == b"obsolete-hash": | |||
|
959 | obsolete_hash = bin(v) | |||
|
960 | elif k == b"topo-mode": | |||
|
961 | if v == b"pure": | |||
|
962 | has_pure_topo_heads = True | |||
|
963 | else: | |||
|
964 | msg = b"unknown topo-mode: %r" % v | |||
|
965 | raise ValueError(msg) | |||
|
966 | else: | |||
|
967 | msg = b"unknown cache key: %r" % k | |||
|
968 | raise ValueError(msg) | |||
|
969 | args["key_hashes"] = (filtered_hash, obsolete_hash) | |||
|
970 | if has_pure_topo_heads: | |||
|
971 | pure_line = next(lineiter).rstrip(b'\n') | |||
|
972 | args["pure_topo_branch"] = encoding.tolocal(pure_line) | |||
|
973 | return args | |||
|
974 | ||||
|
975 | def _load_heads(self, repo, lineiter): | |||
|
976 | """fully loads the branchcache by reading from the file using the line | |||
|
977 | iterator passed""" | |||
|
978 | super()._load_heads(repo, lineiter) | |||
|
979 | if self._pure_topo_branch is not None: | |||
|
980 | # no need to read the repository heads, we know their value already. | |||
|
981 | return | |||
|
982 | cl = repo.changelog | |||
|
983 | getbranchinfo = repo.revbranchcache().branchinfo | |||
|
984 | obsrevs = obsolete.getrevs(repo, b'obsolete') | |||
|
985 | to_node = cl.node | |||
|
986 | touched_branch = set() | |||
|
987 | for head in self._get_topo_heads(repo): | |||
|
988 | if head in obsrevs: | |||
|
989 | continue | |||
|
990 | node = to_node(head) | |||
|
991 | branch, closed = getbranchinfo(head) | |||
|
992 | self._entries.setdefault(branch, []).append(node) | |||
|
993 | if closed: | |||
|
994 | self._closednodes.add(node) | |||
|
995 | touched_branch.add(branch) | |||
|
996 | to_rev = cl.index.rev | |||
|
997 | for branch in touched_branch: | |||
|
998 | self._entries[branch].sort(key=to_rev) | |||
|
999 | ||||
|
1000 | def _compute_key_hashes(self, repo) -> Tuple[bytes]: | |||
|
1001 | """return the cache key hashes that match this repoview state""" | |||
|
1002 | return scmutil.filtered_and_obsolete_hash( | |||
|
1003 | repo, | |||
|
1004 | self.tiprev, | |||
590 | ) |
|
1005 | ) | |
591 |
|
1006 | |||
592 | duration = util.timer() - starttime |
|
1007 | def _process_new( | |
593 | repo.ui.log(
|
|
1008 | self, | |
594 | b'branchcache', |
|
1009 | repo, | |
595 | b'updated %s in %.4f seconds\n', |
|
1010 | newbranches, | |
596 | _branchcachedesc(repo), |
|
1011 | new_closed, | |
597 | duration, |
|
1012 | obs_ignored, | |
|
1013 | max_rev, | |||
|
1014 | ) -> None: | |||
|
1015 | if ( | |||
|
1016 | # note: the check about `obs_ignored` is too strict, as the | |||

1017 | # obsolete revisions could be non-topological, but let's keep | |||

1018 | # things simple for now | |||

1019 | # | |||

1020 | # The same applies to `new_closed`: if the closed changesets are | |||

1021 | # not heads, we don't care that they are closed, but let's keep | |||

1022 | # things simple here too. | |||
|
1023 | not (obs_ignored or new_closed) | |||
|
1024 | and ( | |||
|
1025 | not newbranches | |||
|
1026 | or ( | |||
|
1027 | len(newbranches) == 1 | |||
|
1028 | and ( | |||
|
1029 | self.tiprev == nullrev | |||
|
1030 | or self._pure_topo_branch in newbranches | |||
|
1031 | ) | |||
|
1032 | ) | |||
|
1033 | ) | |||
|
1034 | ): | |||
|
1035 | if newbranches: | |||
|
1036 | assert len(newbranches) == 1 | |||
|
1037 | self._pure_topo_branch = list(newbranches.keys())[0] | |||
|
1038 | self._needs_populate = True | |||
|
1039 | self._entries.pop(self._pure_topo_branch, None) | |||
|
1040 | return | |||
|
1041 | ||||
|
1042 | self._ensure_populated(repo) | |||
|
1043 | self._pure_topo_branch = None | |||
|
1044 | super()._process_new( | |||
|
1045 | repo, | |||
|
1046 | newbranches, | |||
|
1047 | new_closed, | |||
|
1048 | obs_ignored, | |||
|
1049 | max_rev, | |||
598 | ) |
|
1050 | ) | |
599 |
|
1051 | |||
600 | self.write(repo) |
|
1052 | def _ensure_populated(self, repo): | |
|
1053 | """make sure any lazily loaded values are fully populated""" | |||
|
1054 | if self._needs_populate: | |||
|
1055 | assert self._pure_topo_branch is not None | |||
|
1056 | cl = repo.changelog | |||
|
1057 | to_node = cl.node | |||
|
1058 | topo_heads = self._get_topo_heads(repo) | |||
|
1059 | heads = [to_node(r) for r in topo_heads] | |||
|
1060 | self._entries[self._pure_topo_branch] = heads | |||
|
1061 | self._needs_populate = False | |||
|
1062 | ||||
|
1063 | def _detect_pure_topo(self, repo) -> None: | |||
|
1064 | if self._pure_topo_branch is not None: | |||
|
1065 | # we are pure topological already | |||
|
1066 | return | |||
|
1067 | to_node = repo.changelog.node | |||
|
1068 | topo_heads = [to_node(r) for r in self._get_topo_heads(repo)] | |||
|
1069 | if any(n in self._closednodes for n in topo_heads): | |||
|
1070 | return | |||
|
1071 | for branch, heads in self._entries.items(): | |||
|
1072 | if heads == topo_heads: | |||
|
1073 | self._pure_topo_branch = branch | |||
|
1074 | break | |||
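
`_detect_pure_topo` above boils down to a set comparison: no closed topological head, and one branch owning exactly the topological heads. A standalone editorial sketch of the same test on plain data follows (the names are illustrative, not Mercurial API).

```python
# Editorial sketch of the "pure topological" test above, on plain data.
def detect_pure_topo(entries, closed_nodes, topo_heads):
    """Return the branch whose heads are exactly the topological heads,
    or None when the pure-topo fast path does not apply."""
    if any(n in closed_nodes for n in topo_heads):
        return None  # a closed topological head rules the fast path out
    for branch, heads in entries.items():
        if heads == topo_heads:
            return branch
    return None

entries = {b"default": [b"n3", b"n7"], b"stable": [b"n5"]}
assert detect_pure_topo(entries, set(), [b"n3", b"n7"]) == b"default"
assert detect_pure_topo(entries, {b"n7"}, [b"n3", b"n7"]) is None
```
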
601 |
|
1075 | |||
602 |
|
1076 | |||
603 | class remotebranchcache(
|
1077 | class remotebranchcache(_BaseBranchCache): | |
604 | """Branchmap info for a remote connection, should not write locally""" |
|
1078 | """Branchmap info for a remote connection, should not write locally""" | |
605 |
|
1079 | |||
606 | def write(self, repo): |
|
1080 | def __init__( | |
607 | pass
|
|
1081 | self, | |
|
1082 | repo: "localrepo.localrepository", | |||
|
1083 | entries: Union[ | |||
|
1084 | Dict[bytes, List[bytes]], Iterable[Tuple[bytes, List[bytes]]] | |||
|
1085 | ] = (), | |||
|
1086 | closednodes: Optional[Set[bytes]] = None, | |||
|
1087 | ) -> None: | |||
|
1088 | super().__init__(repo=repo, entries=entries, closed_nodes=closednodes) | |||
608 |
|
1089 | |||
609 |
|
1090 | |||
610 | # Revision branch info cache |
|
1091 | # Revision branch info cache |
@@ -1728,9 +1728,10 b' def writenewbundle(' | |||||
1728 | caps = {} |
|
1728 | caps = {} | |
1729 | if opts.get(b'obsolescence', False): |
|
1729 | if opts.get(b'obsolescence', False): | |
1730 | caps[b'obsmarkers'] = (b'V1',) |
|
1730 | caps[b'obsmarkers'] = (b'V1',) | |
1731 | if opts.get(b'streamv2', False):
|
|
1731 | stream_version = opts.get(b'stream', b"") | |
|
1732 | if stream_version == b"v2": | |||
1732 | caps[b'stream'] = [b'v2'] |
|
1733 | caps[b'stream'] = [b'v2'] | |
1733 | elif opts.get(b'streamv3-exp', False):
|
1734 | elif stream_version == b"v3-exp": | |
1734 | caps[b'stream'] = [b'v3-exp'] |
|
1735 | caps[b'stream'] = [b'v3-exp'] | |
1735 | bundle = bundle20(ui, caps) |
|
1736 | bundle = bundle20(ui, caps) | |
1736 | bundle.setcompression(compression, compopts) |
|
1737 | bundle.setcompression(compression, compopts) | |
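
This hunk folds the old `streamv2`/`streamv3-exp` booleans into a single `stream` option. An editorial sketch of the resulting option-to-capability mapping follows (the helper and opts dicts are illustrative, not real call sites).

```python
# Editorial sketch of the new option handling shown above.
def stream_caps(opts):
    caps = {}
    stream_version = opts.get(b'stream', b"")
    if stream_version == b"v2":
        caps[b'stream'] = [b'v2']
    elif stream_version == b"v3-exp":
        caps[b'stream'] = [b'v3-exp']
    return caps

assert stream_caps({b'stream': b"v2"}) == {b'stream': [b'v2']}
assert stream_caps({b'stream': b"v3-exp"}) == {b'stream': [b'v3-exp']}
assert stream_caps({}) == {}  # no stream clone part requested
```
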
@@ -1774,10 +1775,10 b' def _addpartsfromopts(ui, repo, bundler,' | |||||
1774 | if repository.REPO_FEATURE_SIDE_DATA in repo.features: |
|
1775 | if repository.REPO_FEATURE_SIDE_DATA in repo.features: | |
1775 | part.addparam(b'exp-sidedata', b'1') |
|
1776 | part.addparam(b'exp-sidedata', b'1') | |
1776 |
|
1777 | |||
1777 | if opts.get(b'streamv2', False):
|
1778 | if opts.get(b'stream', b"") == b"v2": | |
1778 | addpartbundlestream2(bundler, repo, stream=True) |
|
1779 | addpartbundlestream2(bundler, repo, stream=True) | |
1779 |
|
1780 | |||
1780 | if opts.get(b'streamv3-exp', False):
|
1781 | if opts.get(b'stream', b"") == b"v3-exp": | |
1781 | addpartbundlestream2(bundler, repo, stream=True) |
|
1782 | addpartbundlestream2(bundler, repo, stream=True) | |
1782 |
|
1783 | |||
1783 | if opts.get(b'tagsfnodescache', True): |
|
1784 | if opts.get(b'tagsfnodescache', True): | |
@@ -1787,7 +1788,7 b' def _addpartsfromopts(ui, repo, bundler,' | |||||
1787 | addpartrevbranchcache(repo, bundler, outgoing) |
|
1788 | addpartrevbranchcache(repo, bundler, outgoing) | |
1788 |
|
1789 | |||
1789 | if opts.get(b'obsolescence', False): |
|
1790 | if opts.get(b'obsolescence', False): | |
1790 | obsmarkers = repo.obsstore.relevantmarkers(outgoing.missing) |
|
1791 | obsmarkers = repo.obsstore.relevantmarkers(nodes=outgoing.missing) | |
1791 | buildobsmarkerspart( |
|
1792 | buildobsmarkerspart( | |
1792 | bundler, |
|
1793 | bundler, | |
1793 | obsmarkers, |
|
1794 | obsmarkers, |
@@ -6,6 +6,8 b'' | |||||
6 | import collections |
|
6 | import collections | |
7 |
|
7 | |||
8 | from typing import ( |
|
8 | from typing import ( | |
|
9 | Dict, | |||
|
10 | Union, | |||
9 | cast, |
|
11 | cast, | |
10 | ) |
|
12 | ) | |
11 |
|
13 | |||
@@ -106,7 +108,7 b' class bundlespec:' | |||||
106 | } |
|
108 | } | |
107 |
|
109 | |||
108 | # Maps bundle version with content opts to choose which part to bundle |
|
110 | # Maps bundle version with content opts to choose which part to bundle | |
109 | _bundlespeccontentopts = { |
|
111 | _bundlespeccontentopts: Dict[bytes, Dict[bytes, Union[bool, bytes]]] = { | |
110 | b'v1': { |
|
112 | b'v1': { | |
111 | b'changegroup': True, |
|
113 | b'changegroup': True, | |
112 | b'cg.version': b'01', |
|
114 | b'cg.version': b'01', | |
@@ -136,7 +138,7 b' class bundlespec:' | |||||
136 | b'cg.version': b'02', |
|
138 | b'cg.version': b'02', | |
137 | b'obsolescence': False, |
|
139 | b'obsolescence': False, | |
138 | b'phases': False, |
|
140 | b'phases': False, | |
139 | b"streamv2": True,
|
141 | b"stream": b"v2", | |
140 | b'tagsfnodescache': False, |
|
142 | b'tagsfnodescache': False, | |
141 | b'revbranchcache': False, |
|
143 | b'revbranchcache': False, | |
142 | }, |
|
144 | }, | |
@@ -145,7 +147,7 b' class bundlespec:' | |||||
145 | b'cg.version': b'03', |
|
147 | b'cg.version': b'03', | |
146 | b'obsolescence': False, |
|
148 | b'obsolescence': False, | |
147 | b'phases': False, |
|
149 | b'phases': False, | |
148 | b"streamv3-exp": True,
|
150 | b"stream": b"v3-exp", | |
149 | b'tagsfnodescache': False, |
|
151 | b'tagsfnodescache': False, | |
150 | b'revbranchcache': False, |
|
152 | b'revbranchcache': False, | |
151 | }, |
|
153 | }, | |
@@ -158,8 +160,6 b' class bundlespec:' | |||||
158 | } |
|
160 | } | |
159 | _bundlespeccontentopts[b'bundle2'] = _bundlespeccontentopts[b'v2'] |
|
161 | _bundlespeccontentopts[b'bundle2'] = _bundlespeccontentopts[b'v2'] | |
160 |
|
162 | |||
161 | _bundlespecvariants = {b"streamv2": {}} |
|
|||
162 |
|
||||
163 | # Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE. |
|
163 | # Compression engines allowed in version 1. THIS SHOULD NEVER CHANGE. | |
164 | _bundlespecv1compengines = {b'gzip', b'bzip2', b'none'} |
|
164 | _bundlespecv1compengines = {b'gzip', b'bzip2', b'none'} | |
165 |
|
165 | |||
@@ -391,10 +391,7 b' def isstreamclonespec(bundlespec):' | |||||
391 | if ( |
|
391 | if ( | |
392 | bundlespec.wirecompression == b'UN' |
|
392 | bundlespec.wirecompression == b'UN' | |
393 | and bundlespec.wireversion == b'02' |
|
393 | and bundlespec.wireversion == b'02' | |
394 | and ( |
|
394 | and bundlespec.contentopts.get(b'stream', None) in (b"v2", b"v3-exp") | |
395 | bundlespec.contentopts.get(b'streamv2') |
|
|||
396 | or bundlespec.contentopts.get(b'streamv3-exp') |
|
|||
397 | ) |
|
|||
398 | ): |
|
395 | ): | |
399 | return True |
|
396 | return True | |
400 |
|
397 |
@@ -14,6 +14,8 b' def cachetocopy(srcrepo):' | |||||
14 | # ones. Therefore copy all branch caches over. |
|
14 | # ones. Therefore copy all branch caches over. | |
15 | cachefiles = [b'branch2'] |
|
15 | cachefiles = [b'branch2'] | |
16 | cachefiles += [b'branch2-%s' % f for f in repoview.filtertable] |
|
16 | cachefiles += [b'branch2-%s' % f for f in repoview.filtertable] | |
|
17 | cachefiles += [b'branch3'] | |||
|
18 | cachefiles += [b'branch3-%s' % f for f in repoview.filtertable] | |||
17 | cachefiles += [b'rbc-names-v1', b'rbc-revs-v1'] |
|
19 | cachefiles += [b'rbc-names-v1', b'rbc-revs-v1'] | |
18 | cachefiles += [b'tags2'] |
|
20 | cachefiles += [b'tags2'] | |
19 | cachefiles += [b'tags2-%s' % f for f in repoview.filtertable] |
|
21 | cachefiles += [b'tags2-%s' % f for f in repoview.filtertable] |
@@ -327,6 +327,9 b' class changelog(revlog.revlog):' | |||||
327 | self._filteredrevs_hashcache = {} |
|
327 | self._filteredrevs_hashcache = {} | |
328 | self._copiesstorage = opener.options.get(b'copies-storage') |
|
328 | self._copiesstorage = opener.options.get(b'copies-storage') | |
329 |
|
329 | |||
|
330 | def __contains__(self, rev): | |||
|
331 | return (0 <= rev < len(self)) and rev not in self._filteredrevs | |||
|
332 | ||||
330 | @property |
|
333 | @property | |
331 | def filteredrevs(self): |
|
334 | def filteredrevs(self): | |
332 | return self._filteredrevs |
|
335 | return self._filteredrevs |
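
The new `__contains__` makes `rev in repo.changelog` honour both range checks and filtering. A hedged usage sketch follows; `repo` is assumed to be an already-open repository handle.

```python
# Hypothetical usage of the new membership test shown above;
# `repo` is assumed to be an already-open repository object.
cl = repo.changelog
print(len(cl) in cl)  # False: one past the tip is out of range
print(-1 in cl)       # False: negative revisions are out of range
if 0 in cl:
    # rev 0 exists and is not filtered out of this repoview
    print(cl.node(0))
```
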
@@ -35,11 +35,13 b' from .thirdparty import attr' | |||||
35 |
|
35 | |||
36 | from . import ( |
|
36 | from . import ( | |
37 | bookmarks, |
|
37 | bookmarks, | |
|
38 | bundle2, | |||
38 | changelog, |
|
39 | changelog, | |
39 | copies, |
|
40 | copies, | |
40 | crecord as crecordmod, |
|
41 | crecord as crecordmod, | |
41 | encoding, |
|
42 | encoding, | |
42 | error, |
|
43 | error, | |
|
44 | exchange, | |||
43 | formatter, |
|
45 | formatter, | |
44 | logcmdutil, |
|
46 | logcmdutil, | |
45 | match as matchmod, |
|
47 | match as matchmod, | |
@@ -56,6 +58,7 b' from . import (' | |||||
56 | rewriteutil, |
|
58 | rewriteutil, | |
57 | scmutil, |
|
59 | scmutil, | |
58 | state as statemod, |
|
60 | state as statemod, | |
|
61 | streamclone, | |||
59 | subrepoutil, |
|
62 | subrepoutil, | |
60 | templatekw, |
|
63 | templatekw, | |
61 | templater, |
|
64 | templater, | |
@@ -66,6 +69,7 b' from . import (' | |||||
66 | from .utils import ( |
|
69 | from .utils import ( | |
67 | dateutil, |
|
70 | dateutil, | |
68 | stringutil, |
|
71 | stringutil, | |
|
72 | urlutil, | |||
69 | ) |
|
73 | ) | |
70 |
|
74 | |||
71 | from .revlogutils import ( |
|
75 | from .revlogutils import ( | |
@@ -4135,3 +4139,90 b' def hgabortgraft(ui, repo):' | |||||
4135 | with repo.wlock(): |
|
4139 | with repo.wlock(): | |
4136 | graftstate = statemod.cmdstate(repo, b'graftstate') |
|
4140 | graftstate = statemod.cmdstate(repo, b'graftstate') | |
4137 | return abortgraft(ui, repo, graftstate) |
|
4141 | return abortgraft(ui, repo, graftstate) | |
|
4142 | ||||
|
4143 | ||||
|
4144 | def postincoming(ui, repo, modheads, optupdate, checkout, brev): | |||
|
4145 | """Run after a changegroup has been added via pull/unbundle | |||
|
4146 | ||||
|
4147 | This takes the arguments below: | |||
|
4148 | ||||
|
4149 | :modheads: change of heads by pull/unbundle | |||
|
4150 | :optupdate: whether the working directory should be updated | |||
|
4151 | :checkout: update destination revision (or None to default destination) | |||
|
4152 | :brev: a name, which might be a bookmark to be activated after updating | |||
|
4153 | ||||
|
4154 | return True if the update raised any conflict, False otherwise. | |||
|
4155 | """ | |||
|
4156 | if modheads == 0: | |||
|
4157 | return False | |||
|
4158 | if optupdate: | |||
|
4159 | # avoid circular import | |||
|
4160 | from . import hg | |||
|
4161 | ||||
|
4162 | try: | |||
|
4163 | return hg.updatetotally(ui, repo, checkout, brev) | |||
|
4164 | except error.UpdateAbort as inst: | |||
|
4165 | msg = _(b"not updating: %s") % stringutil.forcebytestr(inst) | |||
|
4166 | hint = inst.hint | |||
|
4167 | raise error.UpdateAbort(msg, hint=hint) | |||
|
4168 | if ui.quiet: | |||
|
4169 | pass # we won't report anything, so the other clauses are useless. | |||
|
4170 | elif modheads is not None and modheads > 1: | |||
|
4171 | currentbranchheads = len(repo.branchheads()) | |||
|
4172 | if currentbranchheads == modheads: | |||
|
4173 | ui.status( | |||
|
4174 | _(b"(run 'hg heads' to see heads, 'hg merge' to merge)\n") | |||
|
4175 | ) | |||
|
4176 | elif currentbranchheads > 1: | |||
|
4177 | ui.status( | |||
|
4178 | _(b"(run 'hg heads .' to see heads, 'hg merge' to merge)\n") | |||
|
4179 | ) | |||
|
4180 | else: | |||
|
4181 | ui.status(_(b"(run 'hg heads' to see heads)\n")) | |||
|
4182 | elif not ui.configbool(b'commands', b'update.requiredest'): | |||
|
4183 | ui.status(_(b"(run 'hg update' to get a working copy)\n")) | |||
|
4184 | return False | |||
|
4185 | ||||
|
4186 | ||||
|
4187 | def unbundle_files(ui, repo, fnames, unbundle_source=b'unbundle'): | |||
|
4188 | """utility for `hg unbundle` and `hg debug::unbundle`""" | |||
|
4189 | assert fnames | |||
|
4190 | # avoid circular import | |||
|
4191 | from . import hg | |||
|
4192 | ||||
|
4193 | with repo.lock(): | |||
|
4194 | for fname in fnames: | |||
|
4195 | f = hg.openpath(ui, fname) | |||
|
4196 | gen = exchange.readbundle(ui, f, fname) | |||
|
4197 | if isinstance(gen, streamclone.streamcloneapplier): | |||
|
4198 | raise error.InputError( | |||
|
4199 | _( | |||
|
4200 | b'packed bundles cannot be applied with ' | |||
|
4201 | b'"hg unbundle"' | |||
|
4202 | ), | |||
|
4203 | hint=_(b'use "hg debugapplystreamclonebundle"'), | |||
|
4204 | ) | |||
|
4205 | url = b'bundle:' + fname | |||
|
4206 | try: | |||
|
4207 | txnname = b'unbundle' | |||
|
4208 | if not isinstance(gen, bundle2.unbundle20): | |||
|
4209 | txnname = b'unbundle\n%s' % urlutil.hidepassword(url) | |||
|
4210 | with repo.transaction(txnname) as tr: | |||
|
4211 | op = bundle2.applybundle( | |||
|
4212 | repo, | |||
|
4213 | gen, | |||
|
4214 | tr, | |||
|
4215 | source=unbundle_source, # used by debug::unbundle | |||
|
4216 | url=url, | |||
|
4217 | ) | |||
|
4218 | except error.BundleUnknownFeatureError as exc: | |||
|
4219 | raise error.Abort( | |||
|
4220 | _(b'%s: unknown bundle feature, %s') % (fname, exc), | |||
|
4221 | hint=_( | |||
|
4222 | b"see https://mercurial-scm.org/" | |||
|
4223 | b"wiki/BundleFeature for more " | |||
|
4224 | b"information" | |||
|
4225 | ), | |||
|
4226 | ) | |||
|
4227 | modheads = bundle2.combinechangegroupresults(op) | |||
|
4228 | return modheads |
@@ -60,7 +60,6 b' from . import (' | |||||
60 | server, |
|
60 | server, | |
61 | shelve as shelvemod, |
|
61 | shelve as shelvemod, | |
62 | state as statemod, |
|
62 | state as statemod, | |
63 | streamclone, |
|
|||
64 | tags as tagsmod, |
|
63 | tags as tagsmod, | |
65 | ui as uimod, |
|
64 | ui as uimod, | |
66 | util, |
|
65 | util, | |
@@ -1627,6 +1626,8 b' def bundle(ui, repo, fname, *dests, **op' | |||||
1627 | pycompat.bytestr(e), |
|
1626 | pycompat.bytestr(e), | |
1628 | hint=_(b"see 'hg help bundlespec' for supported values for --type"), |
|
1627 | hint=_(b"see 'hg help bundlespec' for supported values for --type"), | |
1629 | ) |
|
1628 | ) | |
|
1629 | ||||
|
1630 | has_changegroup = bundlespec.params.get(b"changegroup", False) | |||
1630 | cgversion = bundlespec.params[b"cg.version"] |
|
1631 | cgversion = bundlespec.params[b"cg.version"] | |
1631 |
|
1632 | |||
1632 | # Packed bundles are a pseudo bundle format for now. |
|
1633 | # Packed bundles are a pseudo bundle format for now. | |
@@ -1663,7 +1664,8 b' def bundle(ui, repo, fname, *dests, **op' | |||||
1663 | base = [nullrev] |
|
1664 | base = [nullrev] | |
1664 | else: |
|
1665 | else: | |
1665 | base = None |
|
1666 | base = None | |
1666 | if cgversion not in changegroup.supportedoutgoingversions(repo):
|
|
1667 | supported_cg_versions = changegroup.supportedoutgoingversions(repo) | |
|
1668 | if has_changegroup and cgversion not in supported_cg_versions: | |||
1667 | raise error.Abort( |
|
1669 | raise error.Abort( | |
1668 | _(b"repository does not support bundle version %s") % cgversion |
|
1670 | _(b"repository does not support bundle version %s") % cgversion | |
1669 | ) |
|
1671 | ) | |
@@ -5375,44 +5377,6 b' def phase(ui, repo, *revs, **opts):' | |||||
5375 | return ret |
|
5377 | return ret | |
5376 |
|
5378 | |||
5377 |
|
5379 | |||
5378 | def postincoming(ui, repo, modheads, optupdate, checkout, brev): |
|
|||
5379 | """Run after a changegroup has been added via pull/unbundle |
|
|||
5380 |
|
||||
5381 | This takes arguments below: |
|
|||
5382 |
|
||||
5383 | :modheads: change of heads by pull/unbundle |
|
|||
5384 | :optupdate: updating working directory is needed or not |
|
|||
5385 | :checkout: update destination revision (or None to default destination) |
|
|||
5386 | :brev: a name, which might be a bookmark to be activated after updating |
|
|||
5387 |
|
||||
5388 | return True if update raise any conflict, False otherwise. |
|
|||
5389 | """ |
|
|||
5390 | if modheads == 0: |
|
|||
5391 | return False |
|
|||
5392 | if optupdate: |
|
|||
5393 | try: |
|
|||
5394 | return hg.updatetotally(ui, repo, checkout, brev) |
|
|||
5395 | except error.UpdateAbort as inst: |
|
|||
5396 | msg = _(b"not updating: %s") % stringutil.forcebytestr(inst) |
|
|||
5397 | hint = inst.hint |
|
|||
5398 | raise error.UpdateAbort(msg, hint=hint) |
|
|||
5399 | if modheads is not None and modheads > 1: |
|
|||
5400 | currentbranchheads = len(repo.branchheads()) |
|
|||
5401 | if currentbranchheads == modheads: |
|
|||
5402 | ui.status( |
|
|||
5403 | _(b"(run 'hg heads' to see heads, 'hg merge' to merge)\n") |
|
|||
5404 | ) |
|
|||
5405 | elif currentbranchheads > 1: |
|
|||
5406 | ui.status( |
|
|||
5407 | _(b"(run 'hg heads .' to see heads, 'hg merge' to merge)\n") |
|
|||
5408 | ) |
|
|||
5409 | else: |
|
|||
5410 | ui.status(_(b"(run 'hg heads' to see heads)\n")) |
|
|||
5411 | elif not ui.configbool(b'commands', b'update.requiredest'): |
|
|||
5412 | ui.status(_(b"(run 'hg update' to get a working copy)\n")) |
|
|||
5413 | return False |
|
|||
5414 |
|
||||
5415 |
|
||||
5416 | @command( |
|
5380 | @command( | |
5417 | b'pull', |
|
5381 | b'pull', | |
5418 | [ |
|
5382 | [ | |
@@ -5608,7 +5572,7 b' def pull(ui, repo, *sources, **opts):' | |||||
5608 | # for pushes. |
|
5572 | # for pushes. | |
5609 | repo._subtoppath = path.loc |
|
5573 | repo._subtoppath = path.loc | |
5610 | try: |
|
5574 | try: | |
5611 | update_conflict = postincoming( |
|
5575 | update_conflict = cmdutil.postincoming( | |
5612 | ui, repo, modheads, opts.get('update'), checkout, brev |
|
5576 | ui, repo, modheads, opts.get('update'), checkout, brev | |
5613 | ) |
|
5577 | ) | |
5614 | except error.FilteredRepoLookupError as exc: |
|
5578 | except error.FilteredRepoLookupError as exc: | |
@@ -7730,7 +7694,7 b' def tip(ui, repo, **opts):' | |||||
7730 | _(b'[-u] FILE...'), |
|
7694 | _(b'[-u] FILE...'), | |
7731 | helpcategory=command.CATEGORY_IMPORT_EXPORT, |
|
7695 | helpcategory=command.CATEGORY_IMPORT_EXPORT, | |
7732 | ) |
|
7696 | ) | |
7733 | def unbundle(ui, repo, fname1, *fnames, _unbundle_source=b'unbundle', **opts):
|
7697 | def unbundle(ui, repo, fname1, *fnames, **opts): | |
7734 | """apply one or more bundle files |
|
7698 | """apply one or more bundle files | |
7735 |
|
7699 | |||
7736 | Apply one or more bundle files generated by :hg:`bundle`. |
|
7700 | Apply one or more bundle files generated by :hg:`bundle`. | |
@@ -7738,44 +7702,9 b' def unbundle(ui, repo, fname1, *fnames, ' | |||||
7738 | Returns 0 on success, 1 if an update has unresolved files. |
|
7702 | Returns 0 on success, 1 if an update has unresolved files. | |
7739 | """ |
|
7703 | """ | |
7740 | fnames = (fname1,) + fnames |
|
7704 | fnames = (fname1,) + fnames | |
7741 |
|
7705 | modheads = cmdutil.unbundle_files(ui, repo, fnames) | ||
7742 | with repo.lock(): |
|
7706 | ||
7743 | for fname in fnames: |
|
7707 | if cmdutil.postincoming(ui, repo, modheads, opts.get('update'), None, None): | |
7744 | f = hg.openpath(ui, fname) |
|
|||
7745 | gen = exchange.readbundle(ui, f, fname) |
|
|||
7746 | if isinstance(gen, streamclone.streamcloneapplier): |
|
|||
7747 | raise error.InputError( |
|
|||
7748 | _( |
|
|||
7749 | b'packed bundles cannot be applied with ' |
|
|||
7750 | b'"hg unbundle"' |
|
|||
7751 | ), |
|
|||
7752 | hint=_(b'use "hg debugapplystreamclonebundle"'), |
|
|||
7753 | ) |
|
|||
7754 | url = b'bundle:' + fname |
|
|||
7755 | try: |
|
|||
7756 | txnname = b'unbundle' |
|
|||
7757 | if not isinstance(gen, bundle2.unbundle20): |
|
|||
7758 | txnname = b'unbundle\n%s' % urlutil.hidepassword(url) |
|
|||
7759 | with repo.transaction(txnname) as tr: |
|
|||
7760 | op = bundle2.applybundle( |
|
|||
7761 | repo, |
|
|||
7762 | gen, |
|
|||
7763 | tr, |
|
|||
7764 | source=_unbundle_source, # used by debug::unbundle |
|
|||
7765 | url=url, |
|
|||
7766 | ) |
|
|||
7767 | except error.BundleUnknownFeatureError as exc: |
|
|||
7768 | raise error.Abort( |
|
|||
7769 | _(b'%s: unknown bundle feature, %s') % (fname, exc), |
|
|||
7770 | hint=_( |
|
|||
7771 | b"see https://mercurial-scm.org/" |
|
|||
7772 | b"wiki/BundleFeature for more " |
|
|||
7773 | b"information" |
|
|||
7774 | ), |
|
|||
7775 | ) |
|
|||
7776 | modheads = bundle2.combinechangegroupresults(op) |
|
|||
7777 |
|
||||
7778 | if postincoming(ui, repo, modheads, opts.get('update'), None, None): |
|
|||
7779 | return 1 |
|
7708 | return 1 | |
7780 | else: |
|
7709 | else: | |
7781 | return 0 |
|
7710 | return 0 |
@@ -719,6 +719,15 b' section = "experimental"' | |||||
719 | name = "auto-publish" |
|
719 | name = "auto-publish" | |
720 | default = "publish" |
|
720 | default = "publish" | |
721 |
|
721 | |||
|
722 | ||||
|
723 | # The current implementation of the filtering/injecting of topological heads is | |||
|
724 | # naive and needs proper benchmarking and optimisation before we can envision | |||

725 | # moving the v3 of the branch-cache format out of experimental | |||
|
726 | [[items]] | |||
|
727 | section = "experimental" | |||
|
728 | name = "branch-cache-v3" | |||
|
729 | default = false | |||
|
730 | ||||
722 | [[items]] |
|
731 | [[items]] | |
723 | section = "experimental" |
|
732 | section = "experimental" | |
724 | name = "bundle-phases" |
|
733 | name = "bundle-phases" |
@@ -4078,26 +4078,17 b' def debugupgraderepo(ui, repo, run=False' | |||||
4078 |
|
4078 | |||
4079 | @command( |
|
4079 | @command( | |
4080 | b'debug::unbundle', |
|
4080 | b'debug::unbundle', | |
4081 | [ |
|
4081 | [], | |
4082 | ( |
|
4082 | _(b'FILE...'), | |
4083 | b'u', |
|
|||
4084 | b'update', |
|
|||
4085 | None, |
|
|||
4086 | _(b'update to new branch head if changesets were unbundled'), |
|
|||
4087 | ) |
|
|||
4088 | ], |
|
|||
4089 | _(b'[-u] FILE...'), |
|
|||
4090 | helpcategory=command.CATEGORY_IMPORT_EXPORT, |
|
4083 | helpcategory=command.CATEGORY_IMPORT_EXPORT, | |
4091 | ) |
|
4084 | ) | |
4092 | def debugunbundle(ui, repo, *args, **kwargs):
|
4085 | def debugunbundle(ui, repo, fname1, *fnames): | |
4093 | """same as `hg unbundle`, but pretent to come from a push |
|
4086 | """same as `hg unbundle`, but pretent to come from a push | |
4094 |
|
4087 | |||
4095 | This is useful to debug behavior and performance change in this case. |
|
4088 | This is useful to debug behavior and performance change in this case. | |
4096 | """ |
|
4089 | """ | |
4097 | from . import commands # avoid cycle |
|
4090 | fnames = (fname1,) + fnames | |
4098 |
|
4091 | cmdutil.unbundle_files(ui, repo, fnames) | ||
4099 | unbundle = cmdutil.findcmd(b'unbundle', commands.table)[1][0] |
|
|||
4100 | return unbundle(ui, repo, *args, _unbundle_source=b'push', **kwargs) |
|
|||
4101 |
|
4092 | |||
4102 |
|
4093 | |||
4103 | @command( |
|
4094 | @command( |
@@ -1639,16 +1639,6 b' class dirstate:' | |||||
1639 |
|
1639 | |||
1640 | use_rust = True |
|
1640 | use_rust = True | |
1641 |
|
1641 | |||
1642 | allowed_matchers = ( |
|
|||
1643 | matchmod.alwaysmatcher, |
|
|||
1644 | matchmod.differencematcher, |
|
|||
1645 | matchmod.exactmatcher, |
|
|||
1646 | matchmod.includematcher, |
|
|||
1647 | matchmod.intersectionmatcher, |
|
|||
1648 | matchmod.nevermatcher, |
|
|||
1649 | matchmod.unionmatcher, |
|
|||
1650 | ) |
|
|||
1651 |
|
||||
1652 | if rustmod is None: |
|
1642 | if rustmod is None: | |
1653 | use_rust = False |
|
1643 | use_rust = False | |
1654 | elif self._checkcase: |
|
1644 | elif self._checkcase: | |
@@ -1656,9 +1646,6 b' class dirstate:' | |||||
1656 | use_rust = False |
|
1646 | use_rust = False | |
1657 | elif subrepos: |
|
1647 | elif subrepos: | |
1658 | use_rust = False |
|
1648 | use_rust = False | |
1659 | elif not isinstance(match, allowed_matchers): |
|
|||
1660 | # Some matchers have yet to be implemented |
|
|||
1661 | use_rust = False |
|
|||
1662 |
|
1649 | |||
1663 | # Get the time from the filesystem so we can disambiguate files that |
|
1650 | # Get the time from the filesystem so we can disambiguate files that | |
1664 | # appear modified in the present or future. |
|
1651 | # appear modified in the present or future. |
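
With the whitelist gone, the matcher type no longer disqualifies the Rust status path; only environmental checks (no Rust module available, case-insensitive filesystems, subrepos) still do. A hedged sketch of a status call with a pattern matcher, the case that previously forced the Python fallback (`repo` is assumed to be an open repository):

```python
# Hypothetical example: pattern matchers may now use the Rust fast path.
from mercurial import match as matchmod

m = matchmod.match(repo.root, repo.root, [b'glob:src/**.py'])
st = repo.status(match=m, unknown=True)
print(len(st.modified), len(st.added), len(st.unknown))
```
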
@@ -18,6 +18,7 b' from . import (' | |||||
18 | bookmarks, |
|
18 | bookmarks, | |
19 | branchmap, |
|
19 | branchmap, | |
20 | error, |
|
20 | error, | |
|
21 | node as nodemod, | |||
21 | obsolete, |
|
22 | obsolete, | |
22 | phases, |
|
23 | phases, | |
23 | pycompat, |
|
24 | pycompat, | |
@@ -98,29 +99,62 b' class outgoing:' | |||||
98 | def __init__( |
|
99 | def __init__( | |
99 | self, repo, commonheads=None, ancestorsof=None, missingroots=None |
|
100 | self, repo, commonheads=None, ancestorsof=None, missingroots=None | |
100 | ): |
|
101 | ): | |
101 | # at least one of them must not be set
|
102 | # at most one of them must not be set | |
102 | assert None in (commonheads, missingroots) |
|
103 | if commonheads is not None and missingroots is not None: | |
|
104 | m = 'commonheads and missingroots arguments are mutually exclusive' | |||
|
105 | raise error.ProgrammingError(m) | |||
103 | cl = repo.changelog |
|
106 | cl = repo.changelog | |
|
107 | unfi = repo.unfiltered() | |||
|
108 | ucl = unfi.changelog | |||
|
109 | to_node = ucl.node | |||
|
110 | missing = None | |||
|
111 | common = None | |||
|
112 | arg_anc = ancestorsof | |||
104 | if ancestorsof is None: |
|
113 | if ancestorsof is None: | |
105 | ancestorsof = cl.heads() |
|
114 | ancestorsof = cl.heads() | |
106 | if missingroots: |
|
115 | ||
|
116 | # XXX-perf: do we need all this to be node-list? They would be simpler | |||
|
117 | # as rev-num sets (and smartset) | |||
|
118 | if missingroots == [nodemod.nullrev] or missingroots == []: | |||
|
119 | commonheads = [repo.nullid] | |||
|
120 | common = set() | |||
|
121 | if arg_anc is None: | |||
|
122 | missing = [to_node(r) for r in cl] | |||
|
123 | else: | |||
|
124 | missing_rev = repo.revs('::%ln', missingroots, ancestorsof) | |||
|
125 | missing = [to_node(r) for r in missing_rev] | |||
|
126 | elif missingroots is not None: | |||
107 | # TODO remove call to nodesbetween. |
|
127 | # TODO remove call to nodesbetween. | |
108 | # TODO populate attributes on outgoing instance instead of setting |
|
128 | missing_rev = repo.revs('%ln::%ln', missingroots, ancestorsof) | |
109 | # discbases. |
|
129 | ancestorsof = [to_node(r) for r in ucl.headrevs(missing_rev)] | |
110 | csets, roots, heads = cl.nodesbetween(missingroots, ancestorsof) |
|
130 | parent_revs = ucl.parentrevs | |
111 |
|
|
131 | common_legs = set() | |
112 | discbases = [] |
|
132 | for r in missing_rev: | |
113 | for n in csets: |
|
133 | p1, p2 = parent_revs(r) | |
114 | discbases.extend([p for p in cl.parents(n) if p != repo.nullid]) |
|
134 | if p1 not in missing_rev: | |
115 | ancestorsof = heads |
|
135 | common_legs.add(p1) | |
116 | commonheads = [n for n in discbases if n not in included] |
|
136 | if p2 not in missing_rev: | |
|
137 | common_legs.add(p2) | |||
|
138 | common_legs.discard(nodemod.nullrev) | |||
|
139 | if not common_legs: | |||
|
140 | commonheads = [repo.nullid] | |||
|
141 | common = set() | |||
|
142 | else: | |||
|
143 | commonheads_revs = unfi.revs( | |||
|
144 | 'heads(%ld::%ld)', | |||
|
145 | common_legs, | |||
|
146 | common_legs, | |||
|
147 | ) | |||
|
148 | commonheads = [to_node(r) for r in commonheads_revs] | |||
|
149 | common = ucl.ancestors(commonheads_revs, inclusive=True) | |||
|
150 | missing = [to_node(r) for r in missing_rev] | |||
117 | elif not commonheads: |
|
151 | elif not commonheads: | |
118 | commonheads = [repo.nullid] |
|
152 | commonheads = [repo.nullid] | |
119 | self.commonheads = commonheads |
|
153 | self.commonheads = commonheads | |
120 | self.ancestorsof = ancestorsof |
|
154 | self.ancestorsof = ancestorsof | |
121 | self._revlog = cl |
|
155 | self._revlog = cl | |
122 | self._common = None
|
156 | self._common = common | |
123 | self._missing = None
|
157 | self._missing = missing | |
124 | self.excluded = [] |
|
158 | self.excluded = [] | |
125 |
|
159 | |||
126 | def _computecommonmissing(self): |
|
160 | def _computecommonmissing(self): | |
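
The rewritten `missingroots` branch above avoids `cl.nodesbetween` by collecting "common legs", i.e. parents of missing revisions that fall outside the missing set, and then taking their heads. An editorial toy rendition of that step on a fabricated parent table:

```python
# Toy rendition of the "common legs" step above; `parents` maps
# rev -> (p1, p2) on a fabricated DAG, with -1 standing in for nullrev.
parents = {0: (-1, -1), 1: (0, -1), 2: (1, -1), 3: (1, -1), 4: (2, 3)}
missing = {2, 3, 4}  # revisions to be pushed/bundled

common_legs = set()
for r in missing:
    for p in parents[r]:
        if p not in missing:
            common_legs.add(p)
common_legs.discard(-1)  # nullrev never counts as a common head
assert common_legs == {1}  # rev 1 is the boundary toward the common set
```
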
@@ -190,7 +224,12 b' def findcommonoutgoing(' | |||||
190 | if len(missing) == len(allmissing): |
|
224 | if len(missing) == len(allmissing): | |
191 | ancestorsof = onlyheads |
|
225 | ancestorsof = onlyheads | |
192 | else: # update missing heads |
|
226 | else: # update missing heads | |
193 | ancestorsof = phases.newheads(repo, onlyheads, excluded) |
|
227 | to_rev = repo.changelog.index.rev | |
|
228 | to_node = repo.changelog.node | |||
|
229 | excluded_revs = [to_rev(r) for r in excluded] | |||
|
230 | onlyheads_revs = [to_rev(r) for r in onlyheads] | |||
|
231 | new_heads = phases.new_heads(repo, onlyheads_revs, excluded_revs) | |||
|
232 | ancestorsof = [to_node(r) for r in new_heads] | |||
194 | og.ancestorsof = ancestorsof |
|
233 | og.ancestorsof = ancestorsof | |
195 | if portable: |
|
234 | if portable: | |
196 | # recompute common and ancestorsof as if -r<rev> had been given for |
|
235 | # recompute common and ancestorsof as if -r<rev> had been given for |
@@ -344,32 +344,56 b' class pushoperation:' | |||||
344 | # not target to push, all common are relevant |
|
344 | # not target to push, all common are relevant | |
345 | return self.outgoing.commonheads |
|
345 | return self.outgoing.commonheads | |
346 | unfi = self.repo.unfiltered() |
|
346 | unfi = self.repo.unfiltered() | |
347 | # I want cheads = heads(::ancestorsof and ::commonheads)
|
347 | # I want cheads = heads(::push_heads and ::commonheads) | |
348 | # (ancestorsof is revs with secret changeset filtered out) |
|
348 | # | |
|
349 | # To push, we already computed | |||
|
350 | # common = (::commonheads) | |||
|
351 | # missing = ((commonheads::push_heads) - commonheads) | |||
|
352 | # | |||
|
353 | # So we basically search | |||
349 | # |
|
354 | # | |
350 | # This can be expressed as: |
|
355 | # almost_heads = heads((parents(missing) + push_heads) & common) | |
351 | # cheads = ( (ancestorsof and ::commonheads) |
|
|||
352 | # + (commonheads and ::ancestorsof))" |
|
|||
353 | # ) |
|
|||
354 | # |
|
356 | # | |
355 | # while trying to push we already computed the following: |
|
357 | # We use "almost" here as this can return revision that are ancestors | |
356 | # common = (::commonheads) |
|
358 | # of others in the set, and we need to explicitly turn it into an | |
357 | # missing = ((commonheads::ancestorsof) - commonheads) |
|
359 | # antichain later. We can do so using: | |
|
360 | # | |||
|
361 | # cheads = heads(almost_heads::almost_heads) | |||
358 | # |
|
362 | # | |
359 | # We can pick: |
|
363 | # In practice the code is a bit more convoluted to avoid some extra | |
360 | # * ancestorsof part of common (::commonheads) |
|
364 | # computation. It aims at doing the same computation as highlighted | |
|
365 | # above however. | |||
361 | common = self.outgoing.common |
|
366 | common = self.outgoing.common | |
362 |
|
|
367 | unfi = self.repo.unfiltered() | |
363 | cheads = [node for node in self.revs if rev(node) in common] |
|
368 | cl = unfi.changelog | |
364 | # and |
|
369 | to_rev = cl.index.rev | |
365 | # * commonheads parents on missing |
|
370 | to_node = cl.node | |
366 | revset = unfi.set( |
|
371 | parent_revs = cl.parentrevs | |
367 | b'%ln and parents(roots(%ln))', |
|
372 | unselected = [] | |
368 | self.outgoing.commonheads, |
|
373 | cheads = set() | |
369 | self.outgoing.missing, |
|
374 | # XXX-perf: `self.revs` and `outgoing.missing` could hold revs directly | |
370 | ) |
|
375 | for n in self.revs: | |
371 | cheads.extend(c.node() for c in revset) |
|
376 | r = to_rev(n) | |
372 | return cheads |
|
377 | if r in common: | |
|
378 | cheads.add(r) | |||
|
379 | else: | |||
|
380 | unselected.append(r) | |||
|
381 | known_non_heads = cl.ancestors(cheads, inclusive=True) | |||
|
382 | if unselected: | |||
|
383 | missing_revs = {to_rev(n) for n in self.outgoing.missing} | |||
|
384 | missing_revs.add(nullrev) | |||
|
385 | root_points = set() | |||
|
386 | for r in missing_revs: | |||
|
387 | p1, p2 = parent_revs(r) | |||
|
388 | if p1 not in missing_revs and p1 not in known_non_heads: | |||
|
389 | root_points.add(p1) | |||
|
390 | if p2 not in missing_revs and p2 not in known_non_heads: | |||
|
391 | root_points.add(p2) | |||
|
392 | if root_points: | |||
|
393 | heads = unfi.revs('heads(%ld::%ld)', root_points, root_points) | |||
|
394 | cheads.update(heads) | |||
|
395 | # XXX-perf: could this be a set of revision? | |||
|
396 | return [to_node(r) for r in sorted(cheads)] | |||
373 |
|
397 | |||
374 | @property |
|
398 | @property | |
375 | def commonheads(self): |
|
399 | def commonheads(self): | |
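
For readers who prefer revsets, the hand-rolled loop above corresponds roughly to the two queries sketched in its comment. A hedged editorial equivalent, functionally close but slower than the explicit loop, assuming a live `pushop`:

```python
# Hedged revset rendition of the computation above (illustrative only).
unfi = pushop.repo.unfiltered()
almost_heads = unfi.revs(
    b'heads((parents(%ln) + %ln) and ::%ln)',
    pushop.outgoing.missing,      # missing changesets (nodes)
    pushop.revs,                  # push heads (nodes)
    pushop.outgoing.commonheads,  # ::commonheads is the common set
)
cheads = unfi.revs(b'heads(%ld::%ld)', almost_heads, almost_heads)
```
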
@@ -600,7 +624,10 b' def _pushdiscoveryphase(pushop):' | |||||
600 |
|
624 | |||
601 | (computed for both success and failure case for changesets push)""" |
|
625 | (computed for both success and failure case for changesets push)""" | |
602 | outgoing = pushop.outgoing |
|
626 | outgoing = pushop.outgoing | |
603 | unfi = pushop.repo.unfiltered()
|
|
627 | repo = pushop.repo | |
|
628 | unfi = repo.unfiltered() | |||
|
629 | cl = unfi.changelog | |||
|
630 | to_rev = cl.index.rev | |||
604 | remotephases = listkeys(pushop.remote, b'phases') |
|
631 | remotephases = listkeys(pushop.remote, b'phases') | |
605 |
|
632 | |||
606 | if ( |
|
633 | if ( | |
@@ -622,38 +649,43 b' def _pushdiscoveryphase(pushop):' | |||||
622 | pushop.fallbackoutdatedphases = [] |
|
649 | pushop.fallbackoutdatedphases = [] | |
623 | return |
|
650 | return | |
624 |
|
651 | |||
625 | pushop.remotephases = phases.remotephasessummary( |
|
652 | fallbackheads_rev = {to_rev(n) for n in pushop.fallbackheads} | |
626 | pushop.repo, pushop.fallbackheads, remotephases |
|
653 | pushop.remotephases = phases.RemotePhasesSummary( | |
|
654 | pushop.repo, | |||
|
655 | fallbackheads_rev, | |||
|
656 | remotephases, | |||
627 | ) |
|
657 | ) | |
628 | droots = pushop.remotephases.draftroots |
|
658 | droots = set(pushop.remotephases.draft_roots) | |
629 |
|
659 | |||
630 | extracond = b'' |
|
660 | fallback_publishing = pushop.remotephases.publishing | |
631 | if not pushop.remotephases.publishing:
|
|
661 | push_publishing = pushop.remotephases.publishing or pushop.publish | |
632 | extracond = b' and public()' |
|
662 | missing_revs = {to_rev(n) for n in outgoing.missing} | |
633 | revset = b'heads((%%ln::%%ln) %s)' % extracond |
|
663 | drafts = unfi._phasecache.get_raw_set(unfi, phases.draft) | |
634 | # Get the list of all revs draft on remote by public here. |
|
664 | ||
635 | # XXX Beware that revset break if droots is not strictly |
|
665 | if fallback_publishing: | |
636 | # XXX root we may want to ensure it is but it is costly |
|
666 | fallback_roots = droots - missing_revs | |
637 | fallback = list(unfi.set(revset, droots, pushop.fallbackheads)) |
|
667 | revset = b'heads(%ld::%ld)' | |
638 | if not pushop.remotephases.publishing and pushop.publish: |
|
|||
639 | future = list( |
|
|||
640 | unfi.set( |
|
|||
641 | b'%ln and (not public() or %ln::)', pushop.futureheads, droots |
|
|||
642 | ) |
|
|||
643 | ) |
|
|||
644 | elif not outgoing.missing: |
|
|||
645 | future = fallback |
|
|||
646 | else: |
|
668 | else: | |
647 | # adds changeset we are going to push as draft |
|
669 | fallback_roots = droots - drafts | |
648 | # |
|
670 | fallback_roots -= missing_revs | |
649 | # should not be necessary for publishing server, but because of an |
|
671 | # Get the list of all revs draft on remote but public here. | |
650 | # issue fixed in xxxxx we have to do it anyway. |
|
672 | revset = b'heads((%ld::%ld) and public())' | |
651 | fdroots = list( |
|
673 | if not fallback_roots: | |
652 | unfi.set(b'roots(%ln + %ln::)', outgoing.missing, droots) |
|
674 | fallback = fallback_rev = [] | |
653 | ) |
|
675 | else: | |
654 | fdroots = [f.node() for f in fdroots] |
|
676 | fallback_rev = unfi.revs(revset, fallback_roots, fallbackheads_rev) | |
655 | future = list(unfi.set(revset, fdroots, pushop.futureheads)) |
|
677 | fallback = [repo[r] for r in fallback_rev] | |
656 | pushop.outdatedphases = future |
|
678 | ||
|
679 | if push_publishing: | |||
|
680 | published = missing_revs.copy() | |||
|
681 | else: | |||
|
682 | published = missing_revs - drafts | |||
|
683 | if pushop.publish: | |||
|
684 | published.update(fallbackheads_rev & drafts) | |||
|
685 | elif fallback: | |||
|
686 | published.update(fallback_rev) | |||
|
687 | ||||
|
688 | pushop.outdatedphases = [repo[r] for r in cl.headrevs(published)] | |||
657 | pushop.fallbackoutdatedphases = fallback |
|
689 | pushop.fallbackoutdatedphases = fallback | |
658 |
|
690 | |||
659 |
|
691 | |||
@@ -671,8 +703,8 b' def _pushdiscoveryobsmarkers(pushop):' | |||||
671 | repo = pushop.repo |
|
703 | repo = pushop.repo | |
672 | # very naive computation, that can be quite expensive on big repo. |
|
704 | # very naive computation, that can be quite expensive on big repo. | |
673 | # However: evolution is currently slow on them anyway. |
|
705 | # However: evolution is currently slow on them anyway. | |
674 |
|
|
706 | revs = repo.revs(b'::%ln', pushop.futureheads) | |
675 |
pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers( |
|
707 | pushop.outobsmarkers = pushop.repo.obsstore.relevantmarkers(revs=revs) | |
676 |
|
708 | |||
677 |
|
709 | |||
678 | @pushdiscovery(b'bookmarks') |
|
710 | @pushdiscovery(b'bookmarks') | |
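
The same node-list-to-revision-set switch appears again in `_getbundleobsmarkerpart` further down. A hedged sketch of calling the rev-based API directly on an open repository:

```python
# Hypothetical direct use of the rev-based relevantmarkers API from the
# hunk above; `repo` is assumed to be an open repository object.
heads = repo.heads()
revs = repo.revs(b'::%ln', heads)  # every revision reachable from heads
markers = repo.obsstore.relevantmarkers(revs=revs)
print(len(list(markers)))
```
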
@@ -888,8 +920,13 b' def _pushb2checkphases(pushop, bundler):' | |||||
888 | if pushop.remotephases is not None and hasphaseheads: |
|
920 | if pushop.remotephases is not None and hasphaseheads: | |
889 | # check that the remote phase has not changed |
|
921 | # check that the remote phase has not changed | |
890 | checks = {p: [] for p in phases.allphases} |
|
922 | checks = {p: [] for p in phases.allphases} | |
891 | checks[phases.public].extend(pushop.remotephases.publicheads) |
|
923 | to_node = pushop.repo.unfiltered().changelog.node | |
892 | checks[phases.draft].extend(pushop.remotephases.draftroots) |
|
924 | checks[phases.public].extend( | |
|
925 | to_node(r) for r in pushop.remotephases.public_heads | |||
|
926 | ) | |||
|
927 | checks[phases.draft].extend( | |||
|
928 | to_node(r) for r in pushop.remotephases.draft_roots | |||
|
929 | ) | |||
893 | if any(checks.values()): |
|
930 | if any(checks.values()): | |
894 | for phase in checks: |
|
931 | for phase in checks: | |
895 | checks[phase].sort() |
|
932 | checks[phase].sort() | |
@@ -1293,8 +1330,16 b' def _pushsyncphase(pushop):'
             _localphasemove(pushop, cheads)
         # don't push any phase data as there is nothing to push
     else:
-        ana = phases.analyzeremotephases(pushop.repo, cheads, remotephases)
-        pheads, droots = ana
+        unfi = pushop.repo.unfiltered()
+        to_rev = unfi.changelog.index.rev
+        to_node = unfi.changelog.node
+        cheads_revs = [to_rev(n) for n in cheads]
+        pheads_revs, _dr = phases.analyze_remote_phases(
+            pushop.repo,
+            cheads_revs,
+            remotephases,
+        )
+        pheads = [to_node(r) for r in pheads_revs]
         ### Apply remote phase on local
         if remotephases.get(b'publishing', False):
             _localphasemove(pushop, cheads)
@@ -2048,10 +2093,17 b' def _pullapplyphases(pullop, remotephase'
     pullop.stepsdone.add(b'phases')
     publishing = bool(remotephases.get(b'publishing', False))
     if remotephases and not publishing:
+        unfi = pullop.repo.unfiltered()
+        to_rev = unfi.changelog.index.rev
+        to_node = unfi.changelog.node
+        pulledsubset_revs = [to_rev(n) for n in pullop.pulledsubset]
         # remote is new and non-publishing
-        pheads, _dr = phases.analyzeremotephases(
-            pullop.repo, pullop.pulledsubset, remotephases
+        pheads_revs, _dr = phases.analyze_remote_phases(
+            pullop.repo,
+            pulledsubset_revs,
+            remotephases,
         )
+        pheads = [to_node(r) for r in pheads_revs]
         dheads = pullop.pulledsubset
     else:
         # Remote is old or publishing all common changesets
@@ -2553,8 +2605,8 b' def _getbundleobsmarkerpart('
     if kwargs.get('obsmarkers', False):
         if heads is None:
             heads = repo.heads()
-        markers = repo.obsstore.relevantmarkers(
+        revs = repo.revs(b'::%ln', heads)
+        markers = repo.obsstore.relevantmarkers(revs=revs)
         markers = obsutil.sortedmarkers(markers)
         bundle2.buildobsmarkerspart(bundler, markers)
 
@@ -54,6 +54,8 b' CACHE_BRANCHMAP_ALL = b"branchmap-all"'
 CACHE_BRANCHMAP_SERVED = b"branchmap-served"
 # Warm internal changelog cache (eg: persistent nodemap)
 CACHE_CHANGELOG_CACHE = b"changelog-cache"
+# check if a branchmap can use the "pure topo" mode
+CACHE_BRANCHMAP_DETECT_PURE_TOPO = b"branchmap-detect-pure-topo"
 # Warm full manifest cache
 CACHE_FULL_MANIFEST = b"full-manifest"
 # Warm file-node-tags cache
@@ -78,6 +80,7 b' CACHES_DEFAULT = {'
 CACHES_ALL = {
     CACHE_BRANCHMAP_SERVED,
     CACHE_BRANCHMAP_ALL,
+    CACHE_BRANCHMAP_DETECT_PURE_TOPO,
     CACHE_CHANGELOG_CACHE,
     CACHE_FILE_NODE_TAGS,
     CACHE_FULL_MANIFEST,
@@ -2923,12 +2923,14 b' class localrepository:'
 
         if repository.CACHE_BRANCHMAP_SERVED in caches:
             if tr is None or tr.changes[b'origrepolen'] < len(self):
-                # accessing the 'served' branchmap should refresh all the others,
                 self.ui.debug(b'updating the branch cache\n')
-                self.filtered(b'served').branchmap()
-                self.filtered(b'served.hidden').branchmap()
-                # flush all possibly delayed write.
-                self._branchcaches.write_delayed(self)
+                dpt = repository.CACHE_BRANCHMAP_DETECT_PURE_TOPO in caches
+                served = self.filtered(b'served')
+                self._branchcaches.update_disk(served, detect_pure_topo=dpt)
+                served_hidden = self.filtered(b'served.hidden')
+                self._branchcaches.update_disk(
+                    served_hidden, detect_pure_topo=dpt
+                )
 
         if repository.CACHE_CHANGELOG_CACHE in caches:
             self.changelog.update_caches(transaction=tr)
@@ -2957,7 +2959,7 b' class localrepository:'
 
         if repository.CACHE_FILE_NODE_TAGS in caches:
             # accessing fnode cache warms the cache
-            tagsmod.fnoderevs(self.ui, unfi, unfi.changelog.revs())
+            tagsmod.warm_cache(self)
 
         if repository.CACHE_TAGS_DEFAULT in caches:
             # accessing tags warm the cache
@@ -2971,9 +2973,14 b' class localrepository:'
             # even if they haven't explicitly been requested yet (if they've
             # never been used by hg, they won't ever have been written, even if
             # they're a subset of another kind of cache that *has* been used).
+            dpt = repository.CACHE_BRANCHMAP_DETECT_PURE_TOPO in caches
+
             for filt in repoview.filtertable.keys():
                 filtered = self.filtered(filt)
-                filtered.branchmap().write(filtered)
+                self._branchcaches.update_disk(filtered, detect_pure_topo=dpt)
+
+            # flush all possibly delayed write.
+            self._branchcaches.write_dirty(self)
 
     def invalidatecaches(self):
         if '_tagscache' in vars(self):
@@ -395,9 +395,18 b' def _donormalize(patterns, default, root'
 
 class basematcher:
     def __init__(self, badfn=None):
+        self._was_tampered_with = False
         if badfn is not None:
             self.bad = badfn
 
+    def was_tampered_with_nonrec(self):
+        # [_was_tampered_with] is used to track whether extensions changed
+        # the matcher behavior (crazy stuff!), so we can disable the rust
+        # fast path.
+        return self._was_tampered_with
+
+    def was_tampered_with(self):
+        return self.was_tampered_with_nonrec()
+
     def __call__(self, fn):
         return self.matchfn(fn)
 
@@ -638,6 +647,11 b' class patternmatcher(basematcher):'
         super(patternmatcher, self).__init__(badfn)
         kindpats.sort()
 
+        if rustmod is not None:
+            # We need to pass the patterns to Rust because they can contain
+            # patterns from the user interface
+            self._kindpats = kindpats
+
         roots, dirs, parents = _rootsdirsandparents(kindpats)
         self._files = _explicitfiles(kindpats)
         self._dirs_explicit = set(dirs)
@@ -880,6 +894,13 b' class differencematcher(basematcher):'
         self.bad = m1.bad
         self.traversedir = m1.traversedir
 
+    def was_tampered_with(self):
+        return (
+            self.was_tampered_with_nonrec()
+            or self._m1.was_tampered_with()
+            or self._m2.was_tampered_with()
+        )
+
     def matchfn(self, f):
         return self._m1(f) and not self._m2(f)
 
@@ -963,6 +984,13 b' class intersectionmatcher(basematcher):'
         self.bad = m1.bad
         self.traversedir = m1.traversedir
 
+    def was_tampered_with(self):
+        return (
+            self.was_tampered_with_nonrec()
+            or self._m1.was_tampered_with()
+            or self._m2.was_tampered_with()
+        )
+
     @propertycache
     def _files(self):
         if self.isexact():
@@ -1060,6 +1088,11 b' class subdirmatcher(basematcher):'
         if matcher.prefix():
             self._always = any(f == path for f in matcher._files)
 
+    def was_tampered_with(self):
+        return (
+            self.was_tampered_with_nonrec() or self._matcher.was_tampered_with()
+        )
+
     def bad(self, f, msg):
         self._matcher.bad(self._path + b"/" + f, msg)
 
@@ -1194,6 +1227,11 b' class unionmatcher(basematcher):'
         self.traversedir = m1.traversedir
         self._matchers = matchers
 
+    def was_tampered_with(self):
+        return self.was_tampered_with_nonrec() or any(
+            map(lambda m: m.was_tampered_with(), self._matchers)
+        )
+
     def matchfn(self, f):
         for match in self._matchers:
             if match(f):
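All the `was_tampered_with` overrides above implement one protocol: a matcher counts as tampered with if it, or any matcher it wraps, was modified by an extension, in which case the Rust fast path is skipped. A minimal standalone sketch of the recursive check (toy classes, not the real `basematcher`):

    class Matcher:
        def __init__(self):
            self._was_tampered_with = False

        def was_tampered_with_nonrec(self):
            return self._was_tampered_with

        def was_tampered_with(self):
            return self.was_tampered_with_nonrec()

    class UnionMatcher(Matcher):
        def __init__(self, matchers):
            super().__init__()
            self._matchers = matchers

        def was_tampered_with(self):
            return self.was_tampered_with_nonrec() or any(
                m.was_tampered_with() for m in self._matchers
            )

    inner, outer = Matcher(), Matcher()
    union = UnionMatcher([inner, outer])
    assert not union.was_tampered_with()
    inner._was_tampered_with = True   # e.g. an extension patched `inner`
    assert union.was_tampered_with()  # the composition reports the tampering
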
@@ -771,10 +771,11 b' class obsstore:'
         _addchildren(self.children, markers)
         _checkinvalidmarkers(self.repo, markers)
 
-    def relevantmarkers(self, nodes):
-        """return a set of all obsolescence markers relevant to a set of
+    def relevantmarkers(self, nodes=None, revs=None):
+        """return a set of all obsolescence markers relevant to a set of
+        nodes or revisions.
 
-        "relevant" to a set of nodes mean:
+        "relevant" to a set of nodes or revisions means:
 
         - marker that use this changeset as successor
         - prune marker of direct children on this changeset
@@ -782,8 +783,21 b' class obsstore:'
         markers
 
         It is a set so you cannot rely on order."""
+        if nodes is None:
+            nodes = set()
+        if revs is None:
+            revs = set()
 
-        pendingnodes = set(nodes)
+        get_rev = self.repo.unfiltered().changelog.index.get_rev
+        pendingnodes = set()
+        for marker in self._all:
+            for node in (marker[0],) + marker[1] + (marker[5] or ()):
+                if node in nodes:
+                    pendingnodes.add(node)
+                elif revs:
+                    rev = get_rev(node)
+                    if rev is not None and rev in revs:
+                        pendingnodes.add(node)
         seenmarkers = set()
         seennodes = set(pendingnodes)
         precursorsmarkers = self.predecessors
@@ -818,7 +832,7 b' def makestore(ui, repo):'
     store = obsstore(repo, repo.svfs, readonly=readonly, **kwargs)
     if store and readonly:
         ui.warn(
-            _(b'obsolete feature not enabled but %i markers found!\n')
+            _(b'"obsolete" feature not enabled but %i markers found!\n')
             % len(list(store))
         )
     return store
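With the widened signature, callers may seed `relevantmarkers` with integer revisions instead of node ids; unknown nodes resolve to `None` and are skipped, and membership tests run against a cheap set of ints. A reduced, runnable model of the seeding loop above (plain dicts stand in for the obsstore and the changelog index):

    # Toy model of the node-or-rev seeding in relevantmarkers.
    markers = [
        # (precursor, successors, parents), reduced to the fields used here
        (b'aaa', (b'bbb',), (b'000',)),
        (b'ccc', (), None),
    ]
    node_to_rev = {b'aaa': 0, b'bbb': 1, b'ccc': 2, b'000': -1}

    def seed_pending(markers, nodes=None, revs=None):
        nodes = nodes or set()
        revs = revs or set()
        pending = set()
        for marker in markers:
            for node in (marker[0],) + marker[1] + (marker[2] or ()):
                if node in nodes:
                    pending.add(node)
                elif revs:
                    rev = node_to_rev.get(node)  # stand-in for index.get_rev
                    if rev is not None and rev in revs:
                        pending.add(node)
        return pending

    print(sorted(seed_pending(markers, revs={0, 1})))  # -> [b'aaa', b'bbb']
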
@@ -108,7 +108,7 b' def getmarkers(repo, nodes=None, exclusi'
     elif exclusive:
         rawmarkers = exclusivemarkers(repo, nodes)
     else:
-        rawmarkers = repo.obsstore.relevantmarkers(nodes)
+        rawmarkers = repo.obsstore.relevantmarkers(nodes=nodes)
 
     for markerdata in rawmarkers:
         yield marker(repo, markerdata)
@@ -180,6 +180,13 b' class pathauditor:'
         self.auditeddir.clear()
         self._cached = False
 
+    def clear_audit_cache(self):
+        """reset all audit caches
+
+        intended for debug and performance benchmark purposes"""
+        self.audited.clear()
+        self.auditeddir.clear()
+
 
 def canonpath(
     root: bytes,
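`clear_audit_cache` pairs with the perf changes in this release ("perf: clear vfs audit_cache before each run"): without it, the first benchmark run warms the path-audit memoization and later runs measure something different. A small invented model of why the reset matters:

    class Auditor:
        def __init__(self):
            self.audited = set()
            self.checks = 0

        def __call__(self, path):
            if path in self.audited:
                return            # cached: the expensive check is skipped
            self.checks += 1      # stand-in for the real filesystem checks
            self.audited.add(path)

        def clear_audit_cache(self):
            self.audited.clear()

    audit = Auditor()
    paths = [b'a/b', b'a/c', b'a/b']
    for run in range(2):
        audit.clear_audit_cache()  # each run starts cold, like the perf runs
        for p in paths:
            audit(p)
    print(audit.checks)  # -> 4 (2 unique paths x 2 cold runs)
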
@@ -109,6 +109,7 b' import weakref'
 from typing import (
     Any,
     Callable,
+    Collection,
     Dict,
     Iterable,
     List,
@@ -127,7 +128,6 b' from .node import ('
 )
 from . import (
     error,
-    pycompat,
     requirements,
     smartset,
     txnutil,
@@ -414,6 +414,27 b' class phasecache:'
         ]
     )
 
+    def get_raw_set(
+        self,
+        repo: "localrepo.localrepository",
+        phase: int,
+    ) -> Set[int]:
+        """return the set of revisions in that phase
+
+        The returned set is not filtered and might contain revisions filtered
+        out of the passed repoview.
+
+        The returned set might be the internal one and MUST NOT be mutated,
+        to avoid side effects.
+        """
+        if phase == public:
+            raise error.ProgrammingError("cannot get_set for public phase")
+        self._ensure_phase_sets(repo.unfiltered())
+        revs = self._phasesets.get(phase)
+        if revs is None:
+            return set()
+        return revs
+
     def getrevset(
         self,
         repo: "localrepo.localrepository",
@@ -1095,7 +1116,11 b' def updatephases(repo, trgetter, headsby'
     advanceboundary(repo, trgetter(), phase, heads)
 
 
-def analyzeremotephases(repo, subset, roots):
+def analyze_remote_phases(
+    repo,
+    subset: Collection[int],
+    roots: Dict[bytes, bytes],
+) -> Tuple[Collection[int], Collection[int]]:
     """Compute phases heads and root in a subset of node from root dict
 
     * subset is heads of the subset
@@ -1105,8 +1130,8 b' def analyzeremotephases(repo, subset, ro'
     """
     repo = repo.unfiltered()
     # build list from dictionary
-    draftroots = []
-    has_node = repo.changelog.index.has_node  # to filter unknown nodes
+    draft_roots = []
+    to_rev = repo.changelog.index.get_rev
     for nhex, phase in roots.items():
         if nhex == b'publishing':  # ignore data related to publish option
             continue
@@ -1114,49 +1139,53 b' def analyzeremotephases(repo, subset, ro'
         phase = int(phase)
         if phase == public:
             if node != repo.nullid:
-                repo.ui.warn(
-                    _(
-                        b'ignoring inconsistent public root'
-                        b' from remote: %s\n'
-                    )
-                    % nhex
-                )
+                msg = _(b'ignoring inconsistent public root from remote: %s\n')
+                repo.ui.warn(msg % nhex)
         elif phase == draft:
-            if has_node(node):
-                draftroots.append(node)
+            rev = to_rev(node)
+            if rev is not None:  # to filter unknown nodes
+                draft_roots.append(rev)
         else:
-            repo.ui.warn(
-                _(b'ignoring unexpected root from remote: %i %s\n')
-                % (phase, nhex)
-            )
+            msg = _(b'ignoring unexpected root from remote: %i %s\n')
+            repo.ui.warn(msg % (phase, nhex))
     # compute heads
-    publicheads = newheads(repo, subset, draftroots)
-    return publicheads, draftroots
+    public_heads = new_heads(repo, subset, draft_roots)
+    return public_heads, draft_roots
 
 
-class remotephasessummary:
+class RemotePhasesSummary:
     """summarize phase information on the remote side
 
     :publishing: True if the remote is publishing
-    :publicheads: list of remote public phase heads (nodes)
-    :draftheads: list of remote draft phase heads (nodes)
-    :draftroots: list of remote draft phase root (nodes)
+    :public_heads: list of remote public phase heads (revs)
+    :draft_heads: list of remote draft phase heads (revs)
+    :draft_roots: list of remote draft phase root (revs)
     """
 
-    def __init__(self, repo, remotesubset, remoteroots):
+    def __init__(
+        self,
+        repo,
+        remote_subset: Collection[int],
+        remote_roots: Dict[bytes, bytes],
+    ):
         unfi = repo.unfiltered()
-        self._allremoteroots = remoteroots
-        self.publishing = remoteroots.get(b'publishing', False)
+        self._allremoteroots: Dict[bytes, bytes] = remote_roots
 
-        ana = analyzeremotephases(repo, remotesubset, remoteroots)
-        self.publicheads, self.draftroots = ana
+        self.publishing: bool = bool(remote_roots.get(b'publishing', False))
+
+        heads, roots = analyze_remote_phases(repo, remote_subset, remote_roots)
+        self.public_heads: Collection[int] = heads
+        self.draft_roots: Collection[int] = roots
         # Get the list of all "heads" revs draft on remote
-        dheads = unfi.set(b'heads(%ln::%ln)', draftroots, remotesubset)
-        self.draftheads = [c.node() for c in dheads]
+        dheads = unfi.revs(b'heads(%ld::%ld)', roots, remote_subset)
+        self.draft_heads: Collection[int] = dheads
 
 
-def newheads(repo, heads, roots):
+def new_heads(
+    repo,
+    heads: Collection[int],
+    roots: Collection[int],
+) -> Collection[int]:
     """compute new head of a subset minus another
 
     * `heads`: define the first subset
@@ -1165,16 +1194,15 b' def newheads(repo, heads, roots):'
     # phases > dagop > patch > copies > scmutil > obsolete > obsutil > phases
     from . import dagop
 
-    repo = repo.unfiltered()
-    cl = repo.changelog
-    rev = cl.index.get_rev
     if not roots:
         return heads
-    if not heads or heads == [repo.nullid]:
+    if not heads or heads == [nullrev]:
         return []
     # The logic operates on revisions, convert arguments early for convenience
-    new_heads = {rev(n) for n in heads if n != repo.nullid}
-    roots = [rev(n) for n in roots]
+    # PERF-XXX: maybe heads could directly come as a set without impacting
+    # other users of that value
+    new_heads = set(heads)
+    new_heads.discard(nullrev)
     # compute the area we need to remove
     affected_zone = repo.revs(b"(%ld::%ld)", roots, new_heads)
     # heads in the area are no longer heads
@@ -1192,7 +1220,9 b' def newheads(repo, heads, roots):'
     pruned = dagop.reachableroots(repo, candidates, prunestart)
     new_heads.difference_update(pruned)
 
-    return pycompat.maplist(cl.node, sorted(new_heads))
+    # PERF-XXX: do we actually need a sorted list here? Could we simply
+    # return a set?
+    return sorted(new_heads)
 
 
 def newcommitphase(ui: "uimod.ui") -> int:
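A pattern repeats through these phases hunks: convert nodes to revision numbers once at the boundary, do all the set arithmetic on ints, and convert back only for output. A deliberately simplified, self-contained sketch (the `min(root_revs)` cut-off is a toy stand-in for the real `new_heads` computation):

    # Boundary-conversion pattern: node -> rev on the way in,
    # rev -> node on the way out, ints everywhere in between.
    nodes = [b'n0', b'n1', b'n2', b'n3']
    rev_of = {n: i for i, n in enumerate(nodes)}  # stand-in for index.get_rev
    node_of = dict(enumerate(nodes))              # stand-in for changelog.node

    remote_heads = [b'n3', b'n1']
    draft_roots = [b'n2']

    head_revs = {rev_of[n] for n in remote_heads}   # ints from here on
    root_revs = {rev_of[n] for n in draft_roots}
    public_head_revs = {r for r in head_revs if r < min(root_revs)}

    print([node_of[r] for r in sorted(public_head_revs)])  # -> [b'n1']
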
@@ -70,6 +70,7 b' def lsprofile(ui, fp):'
     stats = lsprof.Stats(p.getstats())
     stats.sort(pycompat.sysstr(field))
     stats.pprint(limit=limit, file=fp, climit=climit)
+    fp.flush()
 
 
 @contextlib.contextmanager
@@ -97,14 +98,15 b' def flameprofile(ui, fp):'
     finally:
         thread.stop()
         thread.join()
-        print(
-            b'Collected %d stack frames (%d unique) in %2.2f seconds.'
-            % (
-                util.timer() - start_time,
-                thread.num_frames(),
-                thread.num_frames(unique=True),
-            )
-        )
+        m = b'Collected %d stack frames (%d unique) in %2.2f seconds.'
+        m %= (
+            util.timer() - start_time,
+            thread.num_frames(),
+            thread.num_frames(unique=True),
+        )
+        print(m, flush=True)
 
 
 @contextlib.contextmanager
@@ -170,6 +172,7 b' def statprofile(ui, fp):'
         kwargs['showtime'] = showtime
 
     statprof.display(fp, data=data, format=displayformat, **kwargs)
+    fp.flush()
 
 
 class profile:
@@ -397,6 +397,9 b' class repoview:'
     """
 
     def __init__(self, repo, filtername, visibilityexceptions=None):
+        if filtername is None:
+            msg = "repoview should have a non-None filtername"
+            raise error.ProgrammingError(msg)
         object.__setattr__(self, '_unfilteredrepo', repo)
         object.__setattr__(self, 'filtername', filtername)
         object.__setattr__(self, '_clcachekey', None)
@@ -149,6 +149,22 b' def rawsmartset(repo, subset, x, order):'
     return x & subset
 
 
+def raw_node_set(repo, subset, x, order):
+    """argument is a list of nodeid, resolve and use them"""
+    nodes = _ordered_node_set(repo, x)
+    if order == followorder:
+        return subset & nodes
+    else:
+        return nodes & subset
+
+
+def _ordered_node_set(repo, nodes):
+    if not nodes:
+        return baseset()
+    to_rev = repo.changelog.index.rev
+    return baseset([to_rev(r) for r in nodes])
+
+
 def rangeset(repo, subset, x, y, order):
     m = getset(repo, fullreposet(repo), x)
     n = getset(repo, fullreposet(repo), y)
@@ -2772,6 +2788,7 b' methods = {'
     b"parent": parentspec,
     b"parentpost": parentpost,
     b"smartset": rawsmartset,
+    b"nodeset": raw_node_set,
 }
 
 relations = {
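`raw_node_set` also has to respect the ordering contract of revset evaluation: with `followorder` the surrounding subset's order wins, otherwise the node list's order is kept. A guess at the intent with plain lists standing in for smartsets (real smartset classes track order lazily):

    # Toy intersection that keeps the left operand's order, mirroring how
    # raw_node_set picks the operand order from `order`.
    def intersect(left, right):
        right_set = set(right)
        return [r for r in left if r in right_set]

    subset = [4, 2, 0]       # e.g. a descending subset
    node_revs = [0, 2, 3]    # revs resolved from the b'%ln' node list

    FOLLOWORDER = object()
    order = FOLLOWORDER
    if order is FOLLOWORDER:
        print(intersect(subset, node_revs))  # -> [2, 0] (subset order wins)
    else:
        print(intersect(node_revs, subset))  # -> [0, 2] (node order wins)
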
@@ -392,7 +392,7 b' def _analyze(x):'
     elif op == b'negate':
         s = getstring(x[1], _(b"can't negate that"))
         return _analyze((b'string', b'-' + s))
-    elif op in (b'string', b'symbol', b'smartset'):
+    elif op in (b'string', b'symbol', b'smartset', b'nodeset'):
         return x
     elif op == b'rangeall':
         return (op, None)
@@ -441,8 +441,9 b' def _optimize(x):'
         return 0, x
 
     op = x[0]
-    if op in (b'string', b'symbol', b'smartset'):
-        return 0.5, x  # single revisions are small
+    if op in (b'string', b'symbol', b'smartset', b'nodeset'):
+        # single revisions are small, and sets of already computed revisions
+        # are assumed to be cheap.
+        return 0.5, x
     elif op == b'and':
         wa, ta = _optimize(x[1])
         wb, tb = _optimize(x[2])
@@ -784,6 +785,8 b' def formatspec(expr, *args):'
         if isinstance(arg, set):
             arg = sorted(arg)
         ret.append(_formatintlist(list(arg)))
+    elif t == b'nodeset':
+        ret.append(_formatlistexp(list(arg), b"n"))
     else:
         raise error.ProgrammingError(b"unknown revspec item type: %r" % t)
     return b''.join(ret)
@@ -801,6 +804,10 b' def spectree(expr, *args):'
             newtree = (b'smartset', smartset.baseset(arg))
             inputs.append(newtree)
             ret.append(b"$")
+        elif t == b'nodeset':
+            newtree = (b'nodeset', arg)
+            inputs.append(newtree)
+            ret.append(b"$")
         else:
             raise error.ProgrammingError(b"unknown revspec item type: %r" % t)
     expr = b''.join(ret)
@@ -863,6 +870,12 b' def _parseargs(expr, args):'
             ret.append((b'baseset', arg))
             pos += 1
             continue
+        elif islist and d == b'n' and arg:
+            # we cannot turn the nodes into revisions yet, but not
+            # serializing them will save a lot of time for large sets.
+            ret.append((b'nodeset', arg))
+            pos += 1
+            continue
         try:
             ret.append((None, f(list(arg), d)))
         except (TypeError, ValueError):
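The `nodeset` plumbing above exists so `formatspec(b'%ln', nodes)` no longer hexlifies every node into the expression text, which is what the release note "revset: stop serializing node when using %ln" refers to. A rough before/after illustration of the cost being avoided (fake 20-byte ids, sizes for illustration only):

    import binascii

    nodes = [bytes([i % 256]) * 20 for i in range(3)]  # fake 20-byte node ids

    # old path (sketch): serialize every node into the expression string
    old_expr = b'::(%s)' % b'+'.join(binascii.hexlify(n) for n in nodes)
    print(len(old_expr))  # the expression grows by ~41 bytes per node

    # new path (sketch): keep the raw list attached to the parse tree instead
    tree = (b'func', (b'symbol', b'ancestors'), (b'nodeset', nodes))
    print(tree[2][0], len(tree[2][1]))  # -> nodeset 3
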
@@ -60,8 +60,6 b' def systemrcpath() -> List[bytes]:'
 def userrcpath() -> List[bytes]:
     if pycompat.sysplatform == b'plan9':
         return [encoding.environ[b'home'] + b'/lib/hgrc']
-    elif pycompat.isdarwin:
-        return [os.path.expanduser(b'~/.hgrc')]
     else:
         confighome = encoding.environ.get(b'XDG_CONFIG_HOME')
         if confighome is None or not os.path.isabs(confighome):
@@ -349,7 +349,7 b' class casecollisionauditor:'
         self._newfiles.add(f)
 
 
-def filteredhash(repo, maxrev, needobsolete=False):
+def combined_filtered_and_obsolete_hash(repo, maxrev, needobsolete=False):
     """build hash of filtered revisions in the current repoview.
 
     Multiple caches perform up-to-date validation by checking that the
@@ -375,16 +375,69 b' def filteredhash(repo, maxrev, needobsol'
 
     result = cl._filteredrevs_hashcache.get(key)
     if not result:
-        revs = sorted(r for r in cl.filteredrevs | obsrevs if r <= maxrev)
+        revs, obs_revs = _filtered_and_obs_revs(repo, maxrev)
+        if needobsolete:
+            revs = revs | obs_revs
+        revs = sorted(revs)
         if revs:
-            s = hashutil.sha1()
-            for rev in revs:
-                s.update(b'%d;' % rev)
-            result = s.digest()
+            result = _hash_revs(revs)
         cl._filteredrevs_hashcache[key] = result
     return result
 
 
+def filtered_and_obsolete_hash(repo, maxrev):
+    """build hashes of filtered and obsolete revisions in the current repoview.
+
+    Multiple caches perform up-to-date validation by checking that the
+    tiprev and tipnode stored in the cache file match the current repository.
+    However, this is not sufficient for validating repoviews because the set
+    of revisions in the view may change without the repository tiprev and
+    tipnode changing.
+
+    This function hashes all the revs filtered from the view up to maxrev and
+    returns that SHA-1 digest. The obsolete revisions hashed are only the
+    non-filtered ones.
+    """
+    cl = repo.changelog
+    obs_set = obsolete.getrevs(repo, b'obsolete')
+    key = (maxrev, hash(cl.filteredrevs), hash(obs_set))
+
+    result = cl._filteredrevs_hashcache.get(key)
+    if result is None:
+        filtered_hash = None
+        obs_hash = None
+        filtered_revs, obs_revs = _filtered_and_obs_revs(repo, maxrev)
+        if filtered_revs:
+            filtered_hash = _hash_revs(filtered_revs)
+        if obs_revs:
+            obs_hash = _hash_revs(obs_revs)
+        result = (filtered_hash, obs_hash)
+        cl._filteredrevs_hashcache[key] = result
+    return result
+
+
+def _filtered_and_obs_revs(repo, max_rev):
+    """return the sets of filtered and non-filtered obsolete revisions"""
+    cl = repo.changelog
+    obs_set = obsolete.getrevs(repo, b'obsolete')
+    filtered_set = cl.filteredrevs
+    if cl.filteredrevs:
+        obs_set = obs_set - cl.filteredrevs
+    if max_rev < (len(cl) - 1):
+        # there might be revisions to filter out
+        filtered_set = set(r for r in filtered_set if r <= max_rev)
+        obs_set = set(r for r in obs_set if r <= max_rev)
+    return (filtered_set, obs_set)
+
+
+def _hash_revs(revs):
+    """return a hash from a list of revision numbers"""
+    s = hashutil.sha1()
+    for rev in revs:
+        s.update(b'%d;' % rev)
+    return s.digest()
+
+
 def walkrepos(path, followsym=False, seen_dirs=None, recurse=False):
     """yield every hg repository under path, always recursively.
     The recurse flag will only control recursion into repo working dirs"""
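Both helpers reduce a revision set to a SHA-1 over the decimal rev numbers joined by `;`, which is enough for caches to notice any change in the filtered or obsolete sets. The digest scheme itself, runnable in isolation:

    import hashlib

    def hash_revs(revs):
        """digest a sorted list of revision numbers, as _hash_revs does"""
        s = hashlib.sha1()
        for rev in revs:
            s.update(b'%d;' % rev)
        return s.digest()

    a = hash_revs(sorted({2, 5, 9}))
    b = hash_revs(sorted({2, 5}))
    assert a != b  # hiding/unhiding rev 9 invalidates dependent caches
    print(a.hex()[:12])
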
@@ -453,6 +453,10 b' class StoreFile:'
             self._file_size = 0
         return self._file_size
 
+    @property
+    def has_size(self):
+        return self._file_size is not None
+
     def get_stream(self, vfs, copies):
         """return data "stream" information for this file
 
@@ -480,6 +484,8 b' class BaseStoreEntry:'
 
     This is returned by `store.walk` and represents some data in the store."""
 
+    maybe_volatile = True
+
     def files(self) -> List[StoreFile]:
         raise NotImplementedError
 
@@ -505,6 +511,7 b' class SimpleStoreEntry(BaseStoreEntry):'
 
     is_revlog = False
 
+    maybe_volatile = attr.ib()
     _entry_path = attr.ib()
     _is_volatile = attr.ib(default=False)
     _file_size = attr.ib(default=None)
@@ -521,6 +528,7 b' class SimpleStoreEntry(BaseStoreEntry):'
         self._is_volatile = is_volatile
         self._file_size = file_size
         self._files = None
+        self.maybe_volatile = is_volatile
 
     def files(self) -> List[StoreFile]:
         if self._files is None:
@@ -542,6 +550,7 b' class RevlogStoreEntry(BaseStoreEntry):'
 
     revlog_type = attr.ib(default=None)
     target_id = attr.ib(default=None)
+    maybe_volatile = attr.ib(default=True)
     _path_prefix = attr.ib(default=None)
     _details = attr.ib(default=None)
     _files = attr.ib(default=None)
@@ -558,6 +567,12 b' class RevlogStoreEntry(BaseStoreEntry):'
         self.target_id = target_id
         self._path_prefix = path_prefix
         assert b'.i' in details, (path_prefix, details)
+        for ext in details:
+            if ext.endswith(REVLOG_FILES_VOLATILE_EXT):
+                self.maybe_volatile = True
+                break
+        else:
+            self.maybe_volatile = False
         self._details = details
         self._files = None
 
@@ -601,7 +616,8 b' class RevlogStoreEntry(BaseStoreEntry):'
         max_changeset=None,
         preserve_file_count=False,
     ):
-        if (
+        pre_sized = all(f.has_size for f in self.files())
+        if pre_sized and (
             repo is None
             or max_changeset is None
             # This uses revlog-v2, ignore for now
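`maybe_volatile` is precomputed from the revlog's tracked extensions so stream-clone can skip `files()` entirely for entries that cannot contain volatile files. A toy version of that precomputation (the suffix list here is invented; the real one is `REVLOG_FILES_VOLATILE_EXT`):

    # Toy maybe_volatile precomputation (illustrative only).
    VOLATILE_EXTS = ('.n', '.nd')  # hypothetical "volatile" suffixes

    def maybe_volatile(details):
        """True if any tracked file extension can be volatile"""
        for ext in details:
            if ext.endswith(VOLATILE_EXTS):
                return True
        return False

    print(maybe_volatile({'.i': 0, '.d': 1}))     # -> False, can be skipped
    print(maybe_volatile({'.i': 0, '.i.nd': 1}))  # -> True, must be copied
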
@@ -646,11 +646,12 b' def _emit2(repo, entries):'
 
     max_linkrev = len(repo)
     file_count = totalfilesize = 0
-    # record the expected size of every file
-    for k, vfs, e in entries:
-        for f in e.files():
-            file_count += 1
-            totalfilesize += f.file_size(vfs)
+    with util.nogc():
+        # record the expected size of every file
+        for k, vfs, e in entries:
+            for f in e.files():
+                file_count += 1
+                totalfilesize += f.file_size(vfs)
 
     progress = repo.ui.makeprogress(
         _(b'bundle'), total=totalfilesize, unit=_(b'bytes')
@@ -722,10 +723,12 b' def _emit3(repo, entries):'
     with TempCopyManager() as copy, progress:
         # create a copy of volatile files
         for k, vfs, e in entries:
-            for f in e.files():
-                f.file_size(vfs)  # record the expected size under lock
-                if f.is_volatile:
-                    copy(vfs.join(f.unencoded_path))
+            if e.maybe_volatile:
+                for f in e.files():
+                    if f.is_volatile:
+                        # record the expected size under lock
+                        f.file_size(vfs)
+                        copy(vfs.join(f.unencoded_path))
         # the first yield releases the lock on the repository
         yield None
 
@@ -770,23 +773,26 b' def _entries_walk(repo, includes, exclud'
     matcher = narrowspec.match(repo.root, includes, excludes)
 
     phase = not repo.publishing()
-    entries = _walkstreamfiles(
-        repo,
-        matcher,
-        phase=phase,
-        obsolescence=includeobsmarkers,
-    )
-    for entry in entries:
-        yield (_srcstore, entry)
+    # Python's gc is getting crazy at all the small containers we create,
+    # so disabling it for the duration of the walk helps performance a lot.
+    with util.nogc():
+        entries = _walkstreamfiles(
+            repo,
+            matcher,
+            phase=phase,
+            obsolescence=includeobsmarkers,
+        )
+        for entry in entries:
+            yield (_srcstore, entry)
 
     for name in cacheutil.cachetocopy(repo):
         if repo.cachevfs.exists(name):
             # not really a StoreEntry, but close enough
             entry = store.SimpleStoreEntry(
                 entry_path=name,
                 is_volatile=True,
             )
             yield (_srccache, entry)
 
 
 def generatev2(repo, includes, excludes, includeobsmarkers):
@@ -847,7 +853,10 b' def generatev3(repo, includes, excludes,'
     - ways to adjust the number of expected entries/files ?
     """
 
-    with repo.lock():
+    # Python's gc is getting crazy at all the small containers we create
+    # while considering the files to preserve; disabling it while we do so
+    # helps performance a lot.
+    with repo.lock(), util.nogc():
 
         repo.ui.debug(b'scanning\n')
 
@@ -21,6 +21,7 b' from .node import ('
     short,
 )
 from .i18n import _
+from .revlogutils.constants import ENTRY_NODE_ID
 from . import (
     encoding,
     error,
@@ -30,6 +31,7 b' from . import ('
 )
 from .utils import stringutil
 
+
 # Tags computation can be expensive and caches exist to make it fast in
 # the common case.
 #
@@ -80,6 +82,34 b' from .utils import stringutil'
 # setting it) for each tag is last.
 
 
+def warm_cache(repo):
+    """ensure the cache is properly filled"""
+    unfi = repo.unfiltered()
+    fnodescache = hgtagsfnodescache(unfi)
+    validated_fnodes = set()
+    unknown_entries = set()
+    flog = None
+
+    entries = enumerate(repo.changelog.index)
+    node_revs = ((e[ENTRY_NODE_ID], rev) for (rev, e) in entries)
+
+    for node, rev in node_revs:
+        fnode = fnodescache.getfnode(node=node, rev=rev)
+        if fnode != repo.nullid:
+            if fnode not in validated_fnodes:
+                if flog is None:
+                    flog = repo.file(b'.hgtags')
+                if flog.hasnode(fnode):
+                    validated_fnodes.add(fnode)
+                else:
+                    unknown_entries.add(node)
+
+    if unknown_entries:
+        fnodescache.refresh_invalid_nodes(unknown_entries)
+
+    fnodescache.write()
+
+
 def fnoderevs(ui, repo, revs):
     """return the list of '.hgtags' fnodes used in a set of revisions
 
@@ -433,7 +463,11 b' def _readtagcache(ui, repo):'
     if (
         cacherev == tiprev
         and cachenode == tipnode
-        and cachehash == scmutil.filteredhash(repo, tiprev)
+        and cachehash
+        == scmutil.combined_filtered_and_obsolete_hash(
+            repo,
+            tiprev,
+        )
     ):
         tags = _readtags(ui, repo, cachelines, cachefile.name)
         cachefile.close()
@@ -441,7 +475,14 b' def _readtagcache(ui, repo):'
     if cachefile:
         cachefile.close()  # ignore rest of file
 
-    valid = (tiprev, tipnode, scmutil.filteredhash(repo, tiprev))
+    valid = (
+        tiprev,
+        tipnode,
+        scmutil.combined_filtered_and_obsolete_hash(
+            repo,
+            tiprev,
+        ),
+    )
 
     repoheads = repo.heads()
     # Case 2 (uncommon): empty repo; get out quickly and don't bother
@@ -479,7 +520,7 b' def _readtagcache(ui, repo):'
         return (repoheads, cachefnode, valid, None, True)
 
 
-def _getfnodes(ui, repo, nodes):
+def _getfnodes(ui, repo, nodes=None, revs=None):
     """return .hgtags fnodes for a list of changeset nodes
 
     Return value is a {node: fnode} mapping. There will be no entry for nodes
@@ -491,9 +532,21 b' def _getfnodes(ui, repo, nodes):'
     validated_fnodes = set()
     unknown_entries = set()
 
+    if nodes is None and revs is None:
+        raise error.ProgrammingError("need to specify either nodes or revs")
+    elif nodes is not None and revs is None:
+        to_rev = repo.changelog.index.rev
+        nodes_revs = ((n, to_rev(n)) for n in nodes)
+    elif nodes is None and revs is not None:
+        to_node = repo.changelog.node
+        nodes_revs = ((to_node(r), r) for r in revs)
+    else:
+        msg = "need to specify only one of nodes or revs"
+        raise error.ProgrammingError(msg)
+
     flog = None
-    for node in nodes:
-        fnode = fnodescache.getfnode(node)
+    for node, rev in nodes_revs:
+        fnode = fnodescache.getfnode(node=node, rev=rev)
         if fnode != repo.nullid:
             if fnode not in validated_fnodes:
                 if flog is None:
@@ -746,7 +799,7 b' class hgtagsfnodescache:'
         # TODO: zero fill entire record, because it's invalid not missing?
         self._raw.extend(b'\xff' * (wantedlen - rawlen))
 
-    def getfnode(self, node, computemissing=True):
+    def getfnode(self, node, computemissing=True, rev=None):
         """Obtain the filenode of the .hgtags file at a specified revision.
 
         If the value is in the cache, the entry will be validated and returned.
@@ -761,7 +814,8 b' class hgtagsfnodescache:'
         if node == self._repo.nullid:
             return node
 
-        rev = self._repo.changelog.rev(node)
+        if rev is None:
+            rev = self._repo.changelog.rev(node)
 
         self.lookupcount += 1
@@ -35,6 +35,7 b' import traceback'
 import warnings
 
 from typing import (
+    Any,
     Iterable,
     Iterator,
     List,
@@ -1812,7 +1813,7 b' def never(fn):'
     return False
 
 
-def nogc(func):
+def nogc(func=None) -> Any:
     """disable garbage collector
 
     Python's garbage collector triggers a GC each time a certain number of
@@ -1825,15 +1826,27 b' def nogc(func):'
     This garbage collector issue has been fixed in 2.7. But it still affects
     CPython's performance.
     """
-
-    def wrapper(*args, **kwargs):
-        gcenabled = gc.isenabled()
-        gc.disable()
-        try:
-            return func(*args, **kwargs)
-        finally:
-            if gcenabled:
-                gc.enable()
+    if func is None:
+        return _nogc_context()
+    else:
+        return _nogc_decorator(func)
+
+
+@contextlib.contextmanager
+def _nogc_context():
+    gcenabled = gc.isenabled()
+    gc.disable()
+    try:
+        yield
+    finally:
+        if gcenabled:
+            gc.enable()
+
+
+def _nogc_decorator(func):
+    def wrapper(*args, **kwargs):
+        with _nogc_context():
+            return func(*args, **kwargs)
 
     return wrapper
 
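After this change `util.nogc` is usable both as a decorator and as a context manager, which is what the stream-clone hunks earlier rely on. The dual-use pattern, reproduced as a standalone script:

    import contextlib
    import gc

    def nogc(func=None):
        """disable the GC for a function (decorator) or a block (context)"""
        if func is None:
            return _nogc_context()
        return _nogc_decorator(func)

    @contextlib.contextmanager
    def _nogc_context():
        enabled = gc.isenabled()
        gc.disable()
        try:
            yield
        finally:
            if enabled:
                gc.enable()

    def _nogc_decorator(func):
        def wrapper(*args, **kwargs):
            with _nogc_context():
                return func(*args, **kwargs)
        return wrapper

    @nogc
    def build_many_objects():
        return [[i] for i in range(1000)]

    build_many_objects()   # decorator form
    with nogc():           # context-manager form
        tmp = [[i] for i in range(1000)]
    print(gc.isenabled())  # -> True, the previous state is restored
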
@@ -6,6 +6,7 b''
 # This software may be used and distributed according to the terms of the
 # GNU General Public License version 2 or any later version.
 
+from .. import error
 
 ### Nearest subset relation
 # Nearest subset of filter X is a filter Y so that:
@@ -21,3 +22,30 b' subsettable = {'
     b'served': b'immutable',
     b'immutable': b'base',
 }
+
+
+def get_ordered_subset():
+    """return a list of subset names from dependencies to dependents"""
+    _unfinalized = set(subsettable.values())
+    ordered = []
+
+    # the subset table is expected to be small so we do the stupid N² version
+    # of the algorithm
+    while _unfinalized:
+        this_level = []
+        for candidate in _unfinalized:
+            dependency = subsettable.get(candidate)
+            if dependency not in _unfinalized:
+                this_level.append(candidate)
+
+        if not this_level:
+            msg = "cyclic dependencies in repoview subset %r"
+            msg %= subsettable
+            raise error.ProgrammingError(msg)
+
+        this_level.sort(key=lambda x: x if x is not None else '')
+
+        ordered.extend(this_level)
+        _unfinalized.difference_update(this_level)
+
+    return ordered
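`get_ordered_subset` is a small N-squared topological sort over `subsettable`, where each value names the subset a filter depends on. The same level-by-level idea on a toy table (this sketch seeds from keys and values together, unlike the original, purely to keep the example short):

    subsettable = {
        b'visible': b'served',
        b'served': b'immutable',
        b'immutable': b'base',
    }

    unfinalized = set(subsettable.values()) | set(subsettable.keys())
    ordered = []
    while unfinalized:
        level = sorted(
            c for c in unfinalized if subsettable.get(c) not in unfinalized
        )
        assert level, "cyclic dependencies"
        ordered.extend(level)
        unfinalized.difference_update(level)

    print(ordered)  # -> [b'base', b'immutable', b'served', b'visible']
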
@@ -616,6 +616,10 b' class proxyvfs(abstractvfs):'
     def options(self, value):
         self.vfs.options = value
 
+    @property
+    def audit(self):
+        return self.vfs.audit
+
 
 class filtervfs(proxyvfs, abstractvfs):
     '''Wrapper vfs for filtering filenames with a function.'''
@@ -312,6 +312,7 b' def clonebundles(repo, proto):'
         if line.startswith(bundlecaches.CLONEBUNDLESCHEME):
             continue
         modified_manifest.append(line)
+    modified_manifest.append(b'')
     return wireprototypes.bytesresponse(b'\n'.join(modified_manifest))
 
 
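The one-line fix above appends an empty element so the joined manifest ends with a trailing newline, since `b'\n'.join` alone leaves the last line unterminated:

    lines = [b'https://example.com/bundle.hg BUNDLESPEC=gzip-v2']

    without = b'\n'.join(lines)
    with_trailing = b'\n'.join(lines + [b''])

    print(without.endswith(b'\n'))        # -> False: last entry is cut short
    print(with_trailing.endswith(b'\n'))  # -> True: manifest ends cleanly
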
@@ -150,21 +150,21 b' fn escape_pattern(pattern: &[u8]) -> Vec'
         .collect()
 }
 
-pub fn parse_pattern_syntax(
+pub fn parse_pattern_syntax_kind(
     kind: &[u8],
 ) -> Result<PatternSyntax, PatternError> {
     match kind {
-        b"re:" => Ok(PatternSyntax::Regexp),
-        b"path:" => Ok(PatternSyntax::Path),
-        b"filepath:" => Ok(PatternSyntax::FilePath),
-        b"relpath:" => Ok(PatternSyntax::RelPath),
-        b"rootfilesin:" => Ok(PatternSyntax::RootFilesIn),
-        b"relglob:" => Ok(PatternSyntax::RelGlob),
-        b"relre:" => Ok(PatternSyntax::RelRegexp),
-        b"glob:" => Ok(PatternSyntax::Glob),
-        b"rootglob:" => Ok(PatternSyntax::RootGlob),
-        b"include:" => Ok(PatternSyntax::Include),
-        b"subinclude:" => Ok(PatternSyntax::SubInclude),
+        b"re" => Ok(PatternSyntax::Regexp),
+        b"path" => Ok(PatternSyntax::Path),
+        b"filepath" => Ok(PatternSyntax::FilePath),
+        b"relpath" => Ok(PatternSyntax::RelPath),
+        b"rootfilesin" => Ok(PatternSyntax::RootFilesIn),
+        b"relglob" => Ok(PatternSyntax::RelGlob),
+        b"relre" => Ok(PatternSyntax::RelRegexp),
+        b"glob" => Ok(PatternSyntax::Glob),
+        b"rootglob" => Ok(PatternSyntax::RootGlob),
+        b"include" => Ok(PatternSyntax::Include),
+        b"subinclude" => Ok(PatternSyntax::SubInclude),
         _ => Err(PatternError::UnsupportedSyntax(
             String::from_utf8_lossy(kind).to_string(),
         )),
@@ -41,7 +41,7 b' pub mod vfs;'
 
 use crate::utils::hg_path::{HgPathBuf, HgPathError};
 pub use filepatterns::{
-    parse_pattern_syntax, read_pattern_file, IgnorePattern,
+    parse_pattern_syntax_kind, read_pattern_file, IgnorePattern,
     PatternFileWarning, PatternSyntax,
 };
 use std::collections::HashMap;
@@ -18,11 +18,12 @@ use crate::{
 };
 
 pub const INDEX_ENTRY_SIZE: usize = 64;
+pub const INDEX_HEADER_SIZE: usize = 4;
 pub const COMPRESSION_MODE_INLINE: u8 = 2;
 
 #[derive(Debug)]
 pub struct IndexHeader {
-    pub(super) header_bytes: [u8; 4],
+    pub(super) header_bytes: [u8; INDEX_HEADER_SIZE],
 }
 
 #[derive(Copy, Clone)]
@@ -92,14 +93,21 @@ struct IndexData {
     truncation: Option<usize>,
     /// Bytes that were added after reading the index
     added: Vec<u8>,
+    first_entry: [u8; INDEX_ENTRY_SIZE],
 }
 
 impl IndexData {
     pub fn new(bytes: Box<dyn Deref<Target = [u8]> + Send + Sync>) -> Self {
+        let mut first_entry = [0; INDEX_ENTRY_SIZE];
+        if bytes.len() >= INDEX_ENTRY_SIZE {
+            first_entry[INDEX_HEADER_SIZE..]
+                .copy_from_slice(&bytes[INDEX_HEADER_SIZE..INDEX_ENTRY_SIZE])
+        }
         Self {
             bytes,
             truncation: None,
             added: vec![],
+            first_entry,
         }
     }
 
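Some background on this cache, inferred from the surrounding code: in revlog index format v1, the first four bytes of the file double as the version header and as the start of revision 0's entry, i.e. they clobber its offset field. Keeping a 64-byte copy of entry 0 with those bytes zeroed (a zero offset is what revision 0 really has) is what lets the lookup code below drop the old `offset_override` machinery. A byte-level sketch in Python:

{{{#!python
INDEX_HEADER_SIZE = 4
INDEX_ENTRY_SIZE = 64

def patched_first_entry(raw: bytes) -> bytes:
    """Rebuild revision 0's entry with the header bytes zeroed out."""
    assert len(raw) >= INDEX_ENTRY_SIZE
    # The first 4 bytes hold the revlog version/flags, not the offset,
    # so replace them with zeros: revision 0 always starts at offset 0.
    return bytes(INDEX_HEADER_SIZE) + raw[INDEX_HEADER_SIZE:INDEX_ENTRY_SIZE]
}}}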
@@ -356,7 +364,6 @@ impl Index {
         let end = offset + INDEX_ENTRY_SIZE;
         let entry = IndexEntry {
             bytes: &bytes[offset..end],
-            offset_override: None,
         };
 
         offset += INDEX_ENTRY_SIZE + entry.compressed_len() as usize;
@@ -449,11 +456,17 @@ impl Index {
         if rev == NULL_REVISION {
             return None;
         }
-        Some(if self.is_inline() {
-            self.get_entry_inline(rev)
+        if rev.0 == 0 {
+            Some(IndexEntry {
+                bytes: &self.bytes.first_entry[..],
+            })
         } else {
-            self.get_entry_separated(rev)
-        })
+            Some(if self.is_inline() {
+                self.get_entry_inline(rev)
+            } else {
+                self.get_entry_separated(rev)
+            })
+        }
     }
 
     /// Return the binary content of the index entry for the given revision
@@ -512,13 +525,7 @@ impl Index {
         let end = start + INDEX_ENTRY_SIZE;
         let bytes = &self.bytes[start..end];
 
-        // See IndexEntry for an explanation of this override.
-        let offset_override = Some(end);
-
-        IndexEntry {
-            bytes,
-            offset_override,
-        }
+        IndexEntry { bytes }
     }
 
     fn get_entry_separated(&self, rev: Revision) -> IndexEntry {
@@ -526,20 +533,12 @@ impl Index {
         let end = start + INDEX_ENTRY_SIZE;
         let bytes = &self.bytes[start..end];
 
-        // Override the offset of the first revision as its bytes are used
-        // for the index's metadata (saving space because it is always 0)
-        let offset_override = if rev == Revision(0) { Some(0) } else { None };
-
-        IndexEntry {
-            bytes,
-            offset_override,
-        }
+        IndexEntry { bytes }
     }
 
     fn null_entry(&self) -> IndexEntry {
         IndexEntry {
             bytes: &[0; INDEX_ENTRY_SIZE],
-            offset_override: Some(0),
         }
     }
 
@@ -755,13 +754,20 @@ impl Index {
         revision_data: RevisionDataParams,
     ) -> Result<(), RevlogError> {
         revision_data.validate()?;
+        let entry_v1 = revision_data.into_v1();
+        let entry_bytes = entry_v1.as_bytes();
+        if self.bytes.len() == 0 {
+            self.bytes.first_entry[INDEX_HEADER_SIZE..].copy_from_slice(
+                &entry_bytes[INDEX_HEADER_SIZE..INDEX_ENTRY_SIZE],
+            )
+        }
         if self.is_inline() {
             let new_offset = self.bytes.len();
             if let Some(offsets) = &mut *self.get_offsets_mut() {
                 offsets.push(new_offset)
             }
         }
-        self.bytes.added.extend(revision_data.into_v1().as_bytes());
+        self.bytes.added.extend(entry_bytes);
         self.clear_head_revs();
         Ok(())
     }
@@ -1654,7 +1660,6 @@ fn inline_scan(bytes: &[u8]) -> (usize,
         let end = offset + INDEX_ENTRY_SIZE;
         let entry = IndexEntry {
             bytes: &bytes[offset..end],
-            offset_override: None,
         };
 
         offset += INDEX_ENTRY_SIZE + entry.compressed_len() as usize;
@@ -1678,29 +1683,14 @@ impl super::RevlogIndex for Index {
 #[derive(Debug)]
 pub struct IndexEntry<'a> {
     bytes: &'a [u8],
-    /// Allows to override the offset value of the entry.
-    ///
-    /// For interleaved index and data, the offset stored in the index
-    /// corresponds to the separated data offset.
-    /// It has to be overridden with the actual offset in the interleaved
-    /// index which is just after the index block.
-    ///
-    /// For separated index and data, the offset stored in the first index
-    /// entry is mixed with the index headers.
-    /// It has to be overridden with 0.
-    offset_override: Option<usize>,
 }
 
 impl<'a> IndexEntry<'a> {
     /// Return the offset of the data.
     pub fn offset(&self) -> usize {
-        if let Some(offset_override) = self.offset_override {
-            offset_override
-        } else {
-            let mut bytes = [0; 8];
-            bytes[2..8].copy_from_slice(&self.bytes[0..=5]);
-            BigEndian::read_u64(&bytes[..]) as usize
-        }
+        let mut bytes = [0; 8];
+        bytes[2..8].copy_from_slice(&self.bytes[0..=5]);
+        BigEndian::read_u64(&bytes[..]) as usize
     }
     pub fn raw_offset(&self) -> u64 {
         BigEndian::read_u64(&self.bytes[0..8])
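With the override gone, `offset()` is a plain field decode: the first 8 bytes of a v1 entry pack a 48-bit offset above 16 bits of flags, so the offset is the big-endian value of bytes 0..6, which is exactly what the `copy_from_slice`/`read_u64` pair computes. The same decode in Python:

{{{#!python
def entry_offset(entry: bytes) -> int:
    """Decode the 48-bit data offset from a 64-byte index entry."""
    # Bytes 0..6 are the offset; bytes 6..8 are per-revision flags.
    return int.from_bytes(entry[0:6], 'big')

# The same field read as a full u64 (raw_offset) is (offset << 16) | flags:
entry = (1234).to_bytes(6, 'big') + b'\x00\x00' + bytes(56)
assert entry_offset(entry) == 1234
assert int.from_bytes(entry[0:8], 'big') == (1234 << 16)
}}}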
@@ -1956,32 +1946,15 @@ mod tests {
     #[test]
     fn test_offset() {
         let bytes = IndexEntryBuilder::new().with_offset(1).build();
-        let entry = IndexEntry {
-            bytes: &bytes,
-            offset_override: None,
-        };
+        let entry = IndexEntry { bytes: &bytes };
 
         assert_eq!(entry.offset(), 1)
     }
 
     #[test]
-    fn test_with_overridden_offset() {
-        let bytes = IndexEntryBuilder::new().with_offset(1).build();
-        let entry = IndexEntry {
-            bytes: &bytes,
-            offset_override: Some(2),
-        };
-
-        assert_eq!(entry.offset(), 2)
-    }
-
-    #[test]
     fn test_compressed_len() {
         let bytes = IndexEntryBuilder::new().with_compressed_len(1).build();
-        let entry = IndexEntry {
-            bytes: &bytes,
-            offset_override: None,
-        };
+        let entry = IndexEntry { bytes: &bytes };
 
         assert_eq!(entry.compressed_len(), 1)
     }
1989 | #[test] |
|
1962 | #[test] | |
1990 | fn test_uncompressed_len() { |
|
1963 | fn test_uncompressed_len() { | |
1991 | let bytes = IndexEntryBuilder::new().with_uncompressed_len(1).build(); |
|
1964 | let bytes = IndexEntryBuilder::new().with_uncompressed_len(1).build(); | |
1992 | let entry = IndexEntry { |
|
1965 | let entry = IndexEntry { bytes: &bytes }; | |
1993 | bytes: &bytes, |
|
|||
1994 | offset_override: None, |
|
|||
1995 | }; |
|
|||
1996 |
|
1966 | |||
1997 | assert_eq!(entry.uncompressed_len(), 1) |
|
1967 | assert_eq!(entry.uncompressed_len(), 1) | |
1998 | } |
|
1968 | } | |
@@ -2002,10 +1972,7 @@ mod tests {
         let bytes = IndexEntryBuilder::new()
             .with_base_revision_or_base_of_delta_chain(Revision(1))
             .build();
-        let entry = IndexEntry {
-            bytes: &bytes,
-            offset_override: None,
-        };
+        let entry = IndexEntry { bytes: &bytes };
 
         assert_eq!(entry.base_revision_or_base_of_delta_chain(), 1.into())
     }
@@ -2016,10 +1983,7 @@ mod tests {
             .with_link_revision(Revision(123))
             .build();
 
-        let entry = IndexEntry {
-            bytes: &bytes,
-            offset_override: None,
-        };
+        let entry = IndexEntry { bytes: &bytes };
 
         assert_eq!(entry.link_revision(), 123.into());
     }
@@ -2028,10 +1992,7 @@ mod tests {
     fn p1_test() {
         let bytes = IndexEntryBuilder::new().with_p1(Revision(123)).build();
 
-        let entry = IndexEntry {
-            bytes: &bytes,
-            offset_override: None,
-        };
+        let entry = IndexEntry { bytes: &bytes };
 
         assert_eq!(entry.p1(), 123.into());
     }
@@ -2040,10 +2001,7 @@ mod tests {
     fn p2_test() {
         let bytes = IndexEntryBuilder::new().with_p2(Revision(123)).build();
 
-        let entry = IndexEntry {
-            bytes: &bytes,
-            offset_override: None,
-        };
+        let entry = IndexEntry { bytes: &bytes };
 
         assert_eq!(entry.p2(), 123.into());
     }
@@ -2054,10 +2012,7 @@ mod tests {
             .unwrap();
         let bytes = IndexEntryBuilder::new().with_node(node).build();
 
-        let entry = IndexEntry {
-            bytes: &bytes,
-            offset_override: None,
-        };
+        let entry = IndexEntry { bytes: &bytes };
 
         assert_eq!(*entry.hash(), node);
     }
@@ -29,6 +29,7 @@ use zstd;
 use self::node::{NODE_BYTES_LENGTH, NULL_NODE};
 use self::nodemap_docket::NodeMapDocket;
 use super::index::Index;
+use super::index::INDEX_ENTRY_SIZE;
 use super::nodemap::{NodeMap, NodeMapError};
 use crate::errors::HgError;
 use crate::vfs::Vfs;
@@ -537,7 +538,12 @@ impl Revlog {
             .index
             .get_entry(rev)
             .ok_or(RevlogError::InvalidRevision)?;
-        let start = index_entry.offset();
+        let offset = index_entry.offset();
+        let start = if self.index.is_inline() {
+            offset + ((rev.0 as usize + 1) * INDEX_ENTRY_SIZE)
+        } else {
+            offset
+        };
         let end = start + index_entry.compressed_len() as usize;
         let data = if self.index.is_inline() {
             self.index.data(start, end)
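This moves the inline-offset correction out of `IndexEntry` and into the one caller that needs it. In an inline revlog the index entries and data chunks are interleaved, so the data of revision `r` sits `(r + 1) * INDEX_ENTRY_SIZE` bytes past its logical offset: one 64-byte entry for each of revisions `0..=r`. A small sketch of that address computation:

{{{#!python
INDEX_ENTRY_SIZE = 64

def physical_start(logical_offset: int, rev: int, inline: bool) -> int:
    """Map a revlog data offset to its position in the file.

    In an inline revlog, revision rev's data chunk lives after the
    rev + 1 index entries written before it, so skip past them.
    """
    if inline:
        return logical_offset + (rev + 1) * INDEX_ENTRY_SIZE
    return logical_offset

assert physical_start(0, 0, inline=True) == 64    # rev 0 data after entry 0
assert physical_start(30, 1, inline=True) == 158  # 30 data bytes + 2 entries
}}}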
@@ -865,7 +871,7 @@ fn hash(
 #[cfg(test)]
 mod tests {
     use super::*;
-    use crate::index::{IndexEntryBuilder, INDEX_ENTRY_SIZE};
+    use crate::index::IndexEntryBuilder;
     use itertools::Itertools;
 
     #[test]
@@ -903,15 +909,10 @@ mod tests {
             .is_first(true)
             .with_version(1)
             .with_inline(true)
-            .with_offset(INDEX_ENTRY_SIZE)
             .with_node(node0)
             .build();
-        let entry1_bytes = IndexEntryBuilder::new()
-            .with_offset(INDEX_ENTRY_SIZE)
-            .with_node(node1)
-            .build();
+        let entry1_bytes = IndexEntryBuilder::new().with_node(node1).build();
         let entry2_bytes = IndexEntryBuilder::new()
-            .with_offset(INDEX_ENTRY_SIZE)
             .with_p1(Revision(0))
             .with_p2(Revision(1))
             .with_node(node2)
@@ -977,13 +978,9 @@ mod tests {
             .is_first(true)
             .with_version(1)
             .with_inline(true)
-            .with_offset(INDEX_ENTRY_SIZE)
             .with_node(node0)
             .build();
-        let entry1_bytes = IndexEntryBuilder::new()
-            .with_offset(INDEX_ENTRY_SIZE)
-            .with_node(node1)
-            .build();
+        let entry1_bytes = IndexEntryBuilder::new().with_node(node1).build();
         let contents = vec![entry0_bytes, entry1_bytes]
             .into_iter()
             .flatten()
@@ -11,23 +11,23 @@
 
 use crate::{dirstate::DirstateMap, exceptions::FallbackError};
 use cpython::{
-    exc::ValueError, ObjectProtocol, PyBytes, PyErr, PyList, PyObject,
+    exc::ValueError, ObjectProtocol, PyBool, PyBytes, PyErr, PyList, PyObject,
     PyResult, PyTuple, Python, PythonObject, ToPyObject,
 };
 use hg::dirstate::status::StatusPath;
 use hg::matchers::{
     DifferenceMatcher, IntersectionMatcher, Matcher, NeverMatcher,
-    UnionMatcher,
+    PatternMatcher, UnionMatcher,
 };
 use hg::{
     matchers::{AlwaysMatcher, FileMatcher, IncludeMatcher},
-    parse_pattern_syntax,
+    parse_pattern_syntax_kind,
     utils::{
         files::{get_bytes_from_path, get_path_from_bytes},
         hg_path::{HgPath, HgPathBuf},
     },
-    BadMatch, DirstateStatus, IgnorePattern, PatternFileWarning,
-    StatusOptions,
+    BadMatch, DirstateStatus, IgnorePattern, PatternError, PatternFileWarning,
+    StatusError, StatusOptions,
 };
 use std::borrow::Borrow;
 
@@ -153,11 +153,46 @@ pub fn status_wrapper(
     )
 }
 
+fn collect_kindpats(
+    py: Python,
+    matcher: PyObject,
+) -> PyResult<Vec<IgnorePattern>> {
+    matcher
+        .getattr(py, "_kindpats")?
+        .iter(py)?
+        .map(|k| {
+            let k = k?;
+            let syntax = parse_pattern_syntax_kind(
+                k.get_item(py, 0)?.extract::<PyBytes>(py)?.data(py),
+            )
+            .map_err(|e| handle_fallback(py, StatusError::Pattern(e)))?;
+            let pattern = k.get_item(py, 1)?.extract::<PyBytes>(py)?;
+            let pattern = pattern.data(py);
+            let source = k.get_item(py, 2)?.extract::<PyBytes>(py)?;
+            let source = get_path_from_bytes(source.data(py));
+            let new = IgnorePattern::new(syntax, pattern, source);
+            Ok(new)
+        })
+        .collect()
+}
+
 /// Transform a Python matcher into a Rust matcher.
 fn extract_matcher(
     py: Python,
     matcher: PyObject,
 ) -> PyResult<Box<dyn Matcher + Sync>> {
+    let tampered = matcher
+        .call_method(py, "was_tampered_with_nonrec", PyTuple::empty(py), None)?
+        .extract::<PyBool>(py)?
+        .is_true();
+    if tampered {
+        return Err(handle_fallback(
+            py,
+            StatusError::Pattern(PatternError::UnsupportedSyntax(
+                "Pattern matcher was tampered with!".to_string(),
+            )),
+        ));
+    };
     match matcher.get_type(py).name(py).borrow() {
         "alwaysmatcher" => Ok(Box::new(AlwaysMatcher)),
         "nevermatcher" => Ok(Box::new(NeverMatcher)),
@@ -187,33 +222,7 @@ fn extract_matcher(
             // Get the patterns from Python even though most of them are
             // redundant with those we will parse later on, as they include
             // those passed from the command line.
-            let ignore_patterns: PyResult<Vec<_>> = matcher
-                .getattr(py, "_kindpats")?
-                .iter(py)?
-                .map(|k| {
-                    let k = k?;
-                    let syntax = parse_pattern_syntax(
-                        &[
-                            k.get_item(py, 0)?
-                                .extract::<PyBytes>(py)?
-                                .data(py),
-                            &b":"[..],
-                        ]
-                        .concat(),
-                    )
-                    .map_err(|e| {
-                        handle_fallback(py, StatusError::Pattern(e))
-                    })?;
-                    let pattern = k.get_item(py, 1)?.extract::<PyBytes>(py)?;
-                    let pattern = pattern.data(py);
-                    let source = k.get_item(py, 2)?.extract::<PyBytes>(py)?;
-                    let source = get_path_from_bytes(source.data(py));
-                    let new = IgnorePattern::new(syntax, pattern, source);
-                    Ok(new)
-                })
-                .collect();
-
-            let ignore_patterns = ignore_patterns?;
+            let ignore_patterns = collect_kindpats(py, matcher)?;
 
             let matcher = IncludeMatcher::new(ignore_patterns)
                 .map_err(|e| handle_fallback(py, e.into()))?;
@@ -241,6 +250,14 @@ fn extract_matcher(
 
             Ok(Box::new(DifferenceMatcher::new(m1, m2)))
         }
+        "patternmatcher" => {
+            let patterns = collect_kindpats(py, matcher)?;
+
+            let matcher = PatternMatcher::new(patterns)
+                .map_err(|e| handle_fallback(py, e.into()))?;
+
+            Ok(Box::new(matcher))
+        }
         e => Err(PyErr::new::<FallbackError, _>(
             py,
             format!("Unsupported matcher {}", e),
@@ -252,7 +252,10 @@ def filterhgerr(err):
         if (
             not e.startswith(b'not trusting file')
             and not e.startswith(b'warning: Not importing')
-            and not e.startswith(b'obsolete feature not enabled')
+            and not (
+                e.startswith(b'obsolete feature not enabled')
+                or e.startswith(b'"obsolete" feature not enabled')
+            )
             and not e.startswith(b'*** failed to import extension')
             and not e.startswith(b'devel-warn:')
             and not (
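The widened filter simply accepts both spellings of the warning, since some Mercurial versions quote the feature name. A quick check of the intended behaviour (the sample messages are made up):

{{{#!python
warnings = [
    b'obsolete feature not enabled but 2 markers found!',
    b'"obsolete" feature not enabled but 2 markers found!',
]
for e in warnings:
    # Both variants should now be filtered out of stderr.
    ignorable = e.startswith(b'obsolete feature not enabled') or e.startswith(
        b'"obsolete" feature not enabled'
    )
    assert ignorable
}}}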
@@ -114,14 +114,6 @@ substitutions = [
         br'(.*file:/)/?(/\$TESTTMP.*)',
         lambda m: m.group(1) + b'*' + m.group(2) + b' (glob)',
     ),
-    # `hg clone --stream` output
-    (
-        br'transferred (\S+?) KB in \S+? seconds \(.+?/sec\)(?: \(glob\))?(.*)',
-        lambda m: (
-            br'transferred %s KB in * seconds (* */sec) (glob)%s'
-            % (m.group(1), m.group(2))
-        ),
-    ),
     # `discovery debug output
     (
         br'\b(\d+) total queries in \d.\d\d\d\ds\b',
@@ -167,7 +167,6 @@ Extension disabled for lack of acl.sourc
   listing keys for "phases"
   checking for updated bookmarks
   listing keys for "bookmarks"
-  invalid branch cache (served): tip differs
   listing keys for "bookmarks"
   3 changesets found
   list of changesets:
@@ -187,7 +186,6 @@ Extension disabled for lack of acl.sourc
   bundle2-input-part: total payload size * (glob)
   bundle2-input-part: "check:updated-heads" supported
   bundle2-input-part: total payload size * (glob)
-  invalid branch cache (served): tip differs
   bundle2-input-part: "changegroup" (params: 1 mandatory) supported
   adding changesets
   add changeset ef1ea85a6374
@@ -237,7 +235,6 @@ No [acl.allow]/[acl.deny]
   listing keys for "phases"
   checking for updated bookmarks
   listing keys for "bookmarks"
-  invalid branch cache (served): tip differs
   listing keys for "bookmarks"
   3 changesets found
   list of changesets:
@@ -257,7 +254,6 @@ No [acl.allow]/[acl.deny]
   bundle2-input-part: total payload size * (glob)
   bundle2-input-part: "check:updated-heads" supported
   bundle2-input-part: total payload size * (glob)
-  invalid branch cache (served): tip differs
   bundle2-input-part: "changegroup" (params: 1 mandatory) supported
   adding changesets
   add changeset ef1ea85a6374
@@ -317,7 +313,6 @@ Empty [acl.allow]
   listing keys for "phases"
   checking for updated bookmarks
   listing keys for "bookmarks"
-  invalid branch cache (served): tip differs
   listing keys for "bookmarks"
   3 changesets found
   list of changesets:
@@ -337,7 +332,6 @@ Empty [acl.allow]
   bundle2-input-part: total payload size * (glob)
   bundle2-input-part: "check:updated-heads" supported
   bundle2-input-part: total payload size * (glob)
-  invalid branch cache (served): tip differs
   bundle2-input-part: "changegroup" (params: 1 mandatory) supported
   adding changesets
   add changeset ef1ea85a6374
@@ -388,7 +382,6 @@ fred is allowed inside foo/
   listing keys for "phases"
   checking for updated bookmarks
   listing keys for "bookmarks"
-  invalid branch cache (served): tip differs
   listing keys for "bookmarks"
   3 changesets found
   list of changesets:
@@ -408,7 +401,6 @@ fred is allowed inside foo/
   bundle2-input-part: total payload size * (glob)
   bundle2-input-part: "check:updated-heads" supported
   bundle2-input-part: total payload size * (glob)
-  invalid branch cache (served): tip differs
   bundle2-input-part: "changegroup" (params: 1 mandatory) supported
   adding changesets
   add changeset ef1ea85a6374
@@ -463,7 +455,6 @@ Empty [acl.deny]
   listing keys for "phases"
   checking for updated bookmarks
   listing keys for "bookmarks"
-  invalid branch cache (served): tip differs
   listing keys for "bookmarks"
   3 changesets found
   list of changesets:
@@ -483,7 +474,6 @@ Empty [acl.deny]
   bundle2-input-part: total payload size * (glob)
   bundle2-input-part: "check:updated-heads" supported
   bundle2-input-part: total payload size * (glob)
-  invalid branch cache (served): tip differs
   bundle2-input-part: "changegroup" (params: 1 mandatory) supported
   adding changesets
   add changeset ef1ea85a6374
@@ -535,7 +525,6 @@ fred is allowed inside foo/, but not foo
   listing keys for "phases"
   checking for updated bookmarks
   listing keys for "bookmarks"
-  invalid branch cache (served): tip differs
   listing keys for "bookmarks"
   3 changesets found
   list of changesets:
@@ -555,7 +544,6 @@ fred is allowed inside foo/, but not foo
   bundle2-input-part: total payload size * (glob)
   bundle2-input-part: "check:updated-heads" supported
   bundle2-input-part: total payload size * (glob)
-  invalid branch cache (served): tip differs
   bundle2-input-part: "changegroup" (params: 1 mandatory) supported
   adding changesets
   add changeset ef1ea85a6374
@@ -612,7 +600,6 @@ fred is allowed inside foo/, but not foo
   listing keys for "phases"
   checking for updated bookmarks
   listing keys for "bookmarks"
-  invalid branch cache (served): tip differs
   listing keys for "bookmarks"
   3 changesets found
   list of changesets:
@@ -632,7 +619,6 @@ fred is allowed inside foo/, but not foo
   bundle2-input-part: total payload size * (glob)
   bundle2-input-part: "check:updated-heads" supported
   bundle2-input-part: total payload size * (glob)
-  invalid branch cache (served): tip differs
   bundle2-input-part: "changegroup" (params: 1 mandatory) supported
   adding changesets
   add changeset ef1ea85a6374
@@ -686,7 +672,6 @@ fred is allowed inside foo/, but not foo
   listing keys for "phases"
   checking for updated bookmarks
   listing keys for "bookmarks"
-  invalid branch cache (served): tip differs
   listing keys for "bookmarks"
   3 changesets found
   list of changesets:
@@ -706,7 +691,6 @@ fred is allowed inside foo/, but not foo
   bundle2-input-part: total payload size * (glob)
   bundle2-input-part: "check:updated-heads" supported
   bundle2-input-part: total payload size * (glob)
-  invalid branch cache (served): tip differs
   bundle2-input-part: "changegroup" (params: 1 mandatory) supported
   adding changesets
   add changeset ef1ea85a6374
@@ -761,7 +745,6 @@ fred is not blocked from moving bookmark
   listing keys for "phases"
   checking for updated bookmarks
   listing keys for "bookmarks"
-  invalid branch cache (served): tip differs
   listing keys for "bookmarks"
   1 changesets found
   list of changesets:
@@ -783,7 +766,6 @@ fred is not blocked from moving bookmark
   bundle2-input-part: total payload size * (glob)
   bundle2-input-part: "check:updated-heads" supported
   bundle2-input-part: total payload size * (glob)
-  invalid branch cache (served): tip differs
   bundle2-input-part: "changegroup" (params: 1 mandatory) supported
   adding changesets
   add changeset ef1ea85a6374
@@ -810,7 +792,6 @@ fred is not blocked from moving bookmark
   acl: bookmark access granted: "ef1ea85a6374b77d6da9dcda9541f498f2d17df7" on bookmark "moving-bookmark"
   bundle2-input-bundle: 7 parts total
   updating the branch cache
-  invalid branch cache (served.hidden): tip differs
   added 1 changesets with 1 changes to 1 files
   bundle2-output-bundle: "HG20", 1 parts total
   bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
@@ -850,7 +831,6 @@ fred is not allowed to move bookmarks
   listing keys for "phases"
   checking for updated bookmarks
   listing keys for "bookmarks"
-  invalid branch cache (served): tip differs
   listing keys for "bookmarks"
   1 changesets found
   list of changesets:
@@ -872,7 +852,6 @@ fred is not allowed to move bookmarks
   bundle2-input-part: total payload size * (glob)
   bundle2-input-part: "check:updated-heads" supported
   bundle2-input-part: total payload size * (glob)
-  invalid branch cache (served): tip differs
   bundle2-input-part: "changegroup" (params: 1 mandatory) supported
   adding changesets
   add changeset ef1ea85a6374
@@ -939,7 +918,6 @@ barney is allowed everywhere
   listing keys for "phases"
   checking for updated bookmarks
   listing keys for "bookmarks"
-  invalid branch cache (served): tip differs
   listing keys for "bookmarks"
   3 changesets found
   list of changesets:
@@ -959,7 +937,6 @@ barney is allowed everywhere
   bundle2-input-part: total payload size * (glob)
   bundle2-input-part: "check:updated-heads" supported
   bundle2-input-part: total payload size * (glob)
-  invalid branch cache (served): tip differs
   bundle2-input-part: "changegroup" (params: 1 mandatory) supported
   adding changesets
   add changeset ef1ea85a6374
@@ -1025,7 +1002,6 @@ wilma can change files with a .txt exten
   listing keys for "phases"
   checking for updated bookmarks
   listing keys for "bookmarks"
-  invalid branch cache (served): tip differs
   listing keys for "bookmarks"
   3 changesets found
   list of changesets:
@@ -1045,7 +1021,6 @@ wilma can change files with a .txt exten
   bundle2-input-part: total payload size * (glob)
   bundle2-input-part: "check:updated-heads" supported
   bundle2-input-part: total payload size * (glob)
-  invalid branch cache (served): tip differs
   bundle2-input-part: "changegroup" (params: 1 mandatory) supported
   adding changesets
   add changeset ef1ea85a6374
@@ -1109,7 +1084,6 @@ file specified by acl.config does not ex
   listing keys for "phases"
   checking for updated bookmarks
   listing keys for "bookmarks"
-  invalid branch cache (served): tip differs
   listing keys for "bookmarks"
   3 changesets found
   list of changesets:
@@ -1129,7 +1103,6 @@ file specified by acl.config does not ex
   bundle2-input-part: total payload size * (glob)
   bundle2-input-part: "check:updated-heads" supported
   bundle2-input-part: total payload size * (glob)
-  invalid branch cache (served): tip differs
   bundle2-input-part: "changegroup" (params: 1 mandatory) supported
   adding changesets
   add changeset ef1ea85a6374
@@ -1187,7 +1160,6 @@ betty is allowed inside foo/ by a acl.co
   listing keys for "phases"
   checking for updated bookmarks
   listing keys for "bookmarks"
-  invalid branch cache (served): tip differs
   listing keys for "bookmarks"
   3 changesets found
   list of changesets:
@@ -1207,7 +1179,6 @@ betty is allowed inside foo/ by a acl.co
   bundle2-input-part: total payload size * (glob)
   bundle2-input-part: "check:updated-heads" supported
   bundle2-input-part: total payload size * (glob)
-  invalid branch cache (served): tip differs
   bundle2-input-part: "changegroup" (params: 1 mandatory) supported
   adding changesets
   add changeset ef1ea85a6374
@@ -1276,7 +1247,6 @@ acl.config can set only [acl.allow]/[acl
   listing keys for "phases"
   checking for updated bookmarks
   listing keys for "bookmarks"
-  invalid branch cache (served): tip differs
   listing keys for "bookmarks"
   3 changesets found
   list of changesets:
@@ -1296,7 +1266,6 @@ acl.config can set only [acl.allow]/[acl
   bundle2-input-part: total payload size * (glob)
   bundle2-input-part: "check:updated-heads" supported
   bundle2-input-part: total payload size * (glob)
-  invalid branch cache (served): tip differs
   bundle2-input-part: "changegroup" (params: 1 mandatory) supported
   adding changesets
   add changeset ef1ea85a6374
@@ -1366,7 +1335,6 @@ fred is always allowed
   listing keys for "phases"
   checking for updated bookmarks
   listing keys for "bookmarks"
-  invalid branch cache (served): tip differs
   listing keys for "bookmarks"
   3 changesets found
   list of changesets:
@@ -1386,7 +1354,6 @@ fred is always allowed
   bundle2-input-part: total payload size * (glob)
   bundle2-input-part: "check:updated-heads" supported
   bundle2-input-part: total payload size * (glob)
-  invalid branch cache (served): tip differs
   bundle2-input-part: "changegroup" (params: 1 mandatory) supported
   adding changesets
   add changeset ef1ea85a6374
@@ -1453,7 +1420,6 @@ no one is allowed inside foo/Bar/
   listing keys for "phases"
   checking for updated bookmarks
   listing keys for "bookmarks"
-  invalid branch cache (served): tip differs
   listing keys for "bookmarks"
   3 changesets found
   list of changesets:
@@ -1473,7 +1439,6 @@ no one is allowed inside foo/Bar/
   bundle2-input-part: total payload size * (glob)
   bundle2-input-part: "check:updated-heads" supported
   bundle2-input-part: total payload size * (glob)
-  invalid branch cache (served): tip differs
   bundle2-input-part: "changegroup" (params: 1 mandatory) supported
   adding changesets
   add changeset ef1ea85a6374
@@ -1536,7 +1501,6 @@ OS-level groups
   listing keys for "phases"
   checking for updated bookmarks
   listing keys for "bookmarks"
-  invalid branch cache (served): tip differs
   listing keys for "bookmarks"
   3 changesets found
   list of changesets:
@@ -1556,7 +1520,6 @@ OS-level groups
   bundle2-input-part: total payload size * (glob)
   bundle2-input-part: "check:updated-heads" supported
   bundle2-input-part: total payload size * (glob)
-  invalid branch cache (served): tip differs
   bundle2-input-part: "changegroup" (params: 1 mandatory) supported
   adding changesets
   add changeset ef1ea85a6374
@@ -1623,7 +1586,6 @@ OS-level groups
   listing keys for "phases"
   checking for updated bookmarks
   listing keys for "bookmarks"
-  invalid branch cache (served): tip differs
   listing keys for "bookmarks"
   3 changesets found
   list of changesets:
@@ -1643,7 +1605,6 @@ OS-level groups
   bundle2-input-part: total payload size * (glob)
   bundle2-input-part: "check:updated-heads" supported
   bundle2-input-part: total payload size * (glob)
-  invalid branch cache (served): tip differs
   bundle2-input-part: "changegroup" (params: 1 mandatory) supported
   adding changesets
   add changeset ef1ea85a6374
@@ -1797,7 +1758,6 @@ No branch acls specified
   bundle2-input-part: total payload size * (glob)
   bundle2-input-bundle: 5 parts total
   updating the branch cache
-  invalid branch cache (served.hidden): tip differs
   added 4 changesets with 4 changes to 4 files (+1 heads)
   bundle2-output-bundle: "HG20", 1 parts total
   bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
@@ -2104,7 +2064,6 @@ Branch acl allow other
   bundle2-input-part: total payload size * (glob)
   bundle2-input-bundle: 5 parts total
   updating the branch cache
-  invalid branch cache (served.hidden): tip differs
   added 4 changesets with 4 changes to 4 files (+1 heads)
   bundle2-output-bundle: "HG20", 1 parts total
   bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
@@ -2196,7 +2155,6 @@ push foobar into the remote
   bundle2-input-part: total payload size * (glob)
   bundle2-input-bundle: 5 parts total
   updating the branch cache
-  invalid branch cache (served.hidden): tip differs
   added 4 changesets with 4 changes to 4 files (+1 heads)
   bundle2-output-bundle: "HG20", 1 parts total
   bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
@@ -2360,7 +2318,6 @@ User 'astro' must not be denied
   bundle2-input-part: total payload size * (glob)
   bundle2-input-bundle: 5 parts total
   updating the branch cache
-  invalid branch cache (served.hidden): tip differs
   added 4 changesets with 4 changes to 4 files (+1 heads)
   bundle2-output-bundle: "HG20", 1 parts total
   bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
@@ -127,13 +127,11 @@ clone, commit, pull
   added 1 changesets with 1 changes to 1 files
   new changesets d02f48003e62
   (run 'hg update' to get a working copy)
-  $ hg blackbox -l 6
+  $ hg blackbox -l 4
   1970-01-01 00:00:00.000 bob @6563da9dcf87b1949716e38ff3e3dfaa3198eb06 (5000)> wrote branch cache (served) with 1 labels and 2 nodes
-  1970-01-01 00:00:00.000 bob @6563da9dcf87b1949716e38ff3e3dfaa3198eb06 (5000)> updated branch cache (served.hidden) in * seconds (glob)
-  1970-01-01 00:00:00.000 bob @6563da9dcf87b1949716e38ff3e3dfaa3198eb06 (5000)> wrote branch cache (served.hidden) with 1 labels and 2 nodes
   1970-01-01 00:00:00.000 bob @6563da9dcf87b1949716e38ff3e3dfaa3198eb06 (5000)> 1 incoming changes - new heads: d02f48003e62
   1970-01-01 00:00:00.000 bob @6563da9dcf87b1949716e38ff3e3dfaa3198eb06 (5000)> pull exited 0 after * seconds (glob)
-  1970-01-01 00:00:00.000 bob @6563da9dcf87b1949716e38ff3e3dfaa3198eb06 (5000)> blackbox -l 6
+  1970-01-01 00:00:00.000 bob @6563da9dcf87b1949716e38ff3e3dfaa3198eb06 (5000)> blackbox -l 4
 
 we must not cause a failure if we cannot write to the log
 
@@ -190,13 +188,11 @@ backup bundles get logged
 $ hg strip tip
 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
 saved backup bundle to $TESTTMP/blackboxtest2/.hg/strip-backup/*-backup.hg (glob)
-$ hg blackbox -l 6
+$ hg blackbox -l 4
 1970-01-01 00:00:00.000 bob @73f6ee326b27d820b0472f1a825e3a50f3dc489b (5000)> strip tip
 1970-01-01 00:00:00.000 bob @6563da9dcf87b1949716e38ff3e3dfaa3198eb06 (5000)> saved backup bundle to $TESTTMP/blackboxtest2/.hg/strip-backup/73f6ee326b27-7612e004-backup.hg
-1970-01-01 00:00:00.000 bob @6563da9dcf87b1949716e38ff3e3dfaa3198eb06 (5000)> updated branch cache (immutable) in * seconds (glob)
-1970-01-01 00:00:00.000 bob @6563da9dcf87b1949716e38ff3e3dfaa3198eb06 (5000)> wrote branch cache (immutable) with 1 labels and 2 nodes
 1970-01-01 00:00:00.000 bob @6563da9dcf87b1949716e38ff3e3dfaa3198eb06 (5000)> strip tip exited 0 after * seconds (glob)
-1970-01-01 00:00:00.000 bob @6563da9dcf87b1949716e38ff3e3dfaa3198eb06 (5000)> blackbox -l 6
+1970-01-01 00:00:00.000 bob @6563da9dcf87b1949716e38ff3e3dfaa3198eb06 (5000)> blackbox -l 4

 extension and python hooks - use the eol extension for a pythonhook

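The two blackbox hunks above shrink because the branch-cache rework logs fewer events per pull and strip. `hg blackbox -l N` (blackbox is a bundled extension that records commands and events in .hg/blackbox.log) prints only the last N events, so the limit drops from 6 to 4 together with the removed cache messages. A minimal sketch, assuming the extension is enabled:

  $ hg --config extensions.blackbox= pull -q
  $ hg blackbox -l 4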
@@ -1,4 +1,5 @@
 #testcases mmap nommap
+#testcases v2 v3

 #if mmap
 $ cat <<EOF >> $HGRCPATH
@@ -7,6 +8,18 @@
 > EOF
 #endif

+#if v3
+$ cat <<EOF >> $HGRCPATH
+> [experimental]
+> branch-cache-v3=yes
+> EOF
+#else
+$ cat <<EOF >> $HGRCPATH
+> [experimental]
+> branch-cache-v3=no
+> EOF
+#endif
+
 $ hg init a
 $ cd a

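The same switch works outside the test harness; enabling the experimental v3 branch cache on a repository is a one-line configuration (the option is experimental and may change):

  $ cat <<EOF >> .hg/hgrc
  > [experimental]
  > branch-cache-v3=yes
  > EOF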
@@ -825,6 +838,7 @@ recovery from invalid cache revs file wi
 truncating cache/rbc-revs-v1 to 160
 $ f --size .hg/cache/rbc-revs*
 .hg/cache/rbc-revs-v1: size=160
+
 recovery from invalid cache file with partial last record
 $ mv .hg/cache/rbc-revs-v1 .
 $ f -qDB 119 rbc-revs-v1 > .hg/cache/rbc-revs-v1
@@ -835,6 +849,7 @@ recovery from invalid cache file with partial last record
 truncating cache/rbc-revs-v1 to 112
 $ f --size .hg/cache/rbc-revs*
 .hg/cache/rbc-revs-v1: size=160
+
 recovery from invalid cache file with missing record - no truncation
 $ mv .hg/cache/rbc-revs-v1 .
 $ f -qDB 112 rbc-revs-v1 > .hg/cache/rbc-revs-v1
@@ -842,6 +857,7 @@ recovery from invalid cache file with missing record - no truncation
 5
 $ f --size .hg/cache/rbc-revs*
 .hg/cache/rbc-revs-v1: size=160
+
 recovery from invalid cache file with some bad records
 $ mv .hg/cache/rbc-revs-v1 .
 $ f -qDB 8 rbc-revs-v1 > .hg/cache/rbc-revs-v1
@@ -851,7 +867,7 @@ recovery from invalid cache file with some bad records
 $ f --size .hg/cache/rbc-revs*
 .hg/cache/rbc-revs-v1: size=120
 $ hg log -r 'branch(.)' -T '{rev} ' --debug
-history modification detected - truncating revision branch cache to revision
+history modification detected - truncating revision branch cache to revision * (glob)
 history modification detected - truncating revision branch cache to revision 1
 3 4 8 9 10 11 12 13 truncating cache/rbc-revs-v1 to 8
 $ rm -f .hg/cache/branch* && hg head a -T '{rev}\n' --debug
@@ -860,6 +876,7 @@ recovery from invalid cache file with some bad records
 $ f --size --hexdump --bytes=16 .hg/cache/rbc-revs*
 .hg/cache/rbc-revs-v1: size=160
 0000: 19 70 9c 5a 00 00 00 00 dd 6b 44 0d 00 00 00 01 |.p.Z.....kD.....|
+
 cache is updated when committing
 $ hg branch i-will-regret-this
 marked working directory as branch i-will-regret-this
@@ -867,30 +884,17 @@ cache is updated when committing
 $ f --size .hg/cache/rbc-*
 .hg/cache/rbc-names-v1: size=111
 .hg/cache/rbc-revs-v1: size=168
+
 update after rollback - the cache will be correct but rbc-names will still
 contain the branch name even though it no longer is used
 $ hg up -qr '.^'
 $ hg rollback -qf
-$ f --size --hexdump .hg/cache/rbc-*
+$ f --size .hg/cache/rbc-names-*
 .hg/cache/rbc-names-v1: size=111
-0000: 64 65 66 61 75 6c 74 00 61 00 62 00 63 00 61 20 |default.a.b.c.a |
-0010: 62 72 61 6e 63 68 20 6e 61 6d 65 20 6d 75 63 68 |branch name much|
-0020: 20 6c 6f 6e 67 65 72 20 74 68 61 6e 20 74 68 65 | longer than the|
-0030: 20 64 65 66 61 75 6c 74 20 6a 75 73 74 69 66 69 | default justifi|
-0040: 63 61 74 69 6f 6e 20 75 73 65 64 20 62 79 20 62 |cation used by b|
-0050: 72 61 6e 63 68 65 73 00 6d 00 6d 64 00 69 2d 77 |ranches.m.md.i-w|
-0060: 69 6c 6c 2d 72 65 67 72 65 74 2d 74 68 69 73 |ill-regret-this|
+$ grep "i-will-regret-this" .hg/cache/rbc-names-* > /dev/null
+$ f --size .hg/cache/rbc-revs-*
 .hg/cache/rbc-revs-v1: size=160
-0000: 19 70 9c 5a 00 00 00 00 dd 6b 44 0d 00 00 00 01 |.p.Z.....kD.....|
-0010: 88 1f e2 b9 00 00 00 01 ac 22 03 33 00 00 00 02 |.........".3....|
-0020: ae e3 9c d1 00 00 00 02 d8 cb c6 1d 00 00 00 01 |................|
-0030: 58 97 36 a2 00 00 00 03 10 ff 58 95 00 00 00 04 |X.6.......X.....|
-0040: ee bb 94 44 00 00 00 02 5f 40 61 bb 00 00 00 02 |...D...._@a.....|
-0050: bf be 84 1b 00 00 00 02 d3 f1 63 45 80 00 00 02 |..........cE....|
-0060: e3 d4 9c 05 80 00 00 02 e2 3b 55 05 00 00 00 02 |.........;U.....|
-0070: f8 94 c2 56 80 00 00 03 f3 44 76 37 00 00 00 05 |...V.....Dv7....|
-0080: a5 8c a5 d3 00 00 00 05 df 34 3b 0d 00 00 00 05 |.........4;.....|
-0090: c9 14 c9 9f 00 00 00 06 cd 21 a8 0b 80 00 00 05 |.........!......|
+
 cache is updated/truncated when stripping - it is thus very hard to get in a
 situation where the cache is out of sync and the hash check detects it
 $ hg --config extensions.strip= strip -r tip --nob
@@ -902,38 +906,30 @@ cache is rebuilt when corruption is detected
 $ hg log -r '5:&branch(.)' -T '{rev} ' --debug
 referenced branch names not found - rebuilding revision branch cache from scratch
 8 9 10 11 12 13 truncating cache/rbc-revs-v1 to 40
-$ f --size --hexdump .hg/cache/rbc-*
+$ f --size .hg/cache/rbc-names-*
 .hg/cache/rbc-names-v1: size=84
-0000: 62 00 61 00 63 00 61 20 62 72 61 6e 63 68 20 6e |b.a.c.a branch n|
-0010: 61 6d 65 20 6d 75 63 68 20 6c 6f 6e 67 65 72 20 |ame much longer |
-0020: 74 68 61 6e 20 74 68 65 20 64 65 66 61 75 6c 74 |than the default|
-0030: 20 6a 75 73 74 69 66 69 63 61 74 69 6f 6e 20 75 | justification u|
-0040: 73 65 64 20 62 79 20 62 72 61 6e 63 68 65 73 00 |sed by branches.|
-0050: 6d 00 6d 64 |m.md|
+$ grep "i-will-regret-this" .hg/cache/rbc-names-* > /dev/null
+[1]
+$ f --size .hg/cache/rbc-revs-*
 .hg/cache/rbc-revs-v1: size=152
-0000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
-0010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
-0020: 00 00 00 00 00 00 00 00 d8 cb c6 1d 00 00 00 01 |................|
-0030: 58 97 36 a2 00 00 00 02 10 ff 58 95 00 00 00 03 |X.6.......X.....|
-0040: ee bb 94 44 00 00 00 00 5f 40 61 bb 00 00 00 00 |...D...._@a.....|
-0050: bf be 84 1b 00 00 00 00 d3 f1 63 45 80 00 00 00 |..........cE....|
-0060: e3 d4 9c 05 80 00 00 00 e2 3b 55 05 00 00 00 00 |.........;U.....|
-0070: f8 94 c2 56 80 00 00 02 f3 44 76 37 00 00 00 04 |...V.....Dv7....|
-0080: a5 8c a5 d3 00 00 00 04 df 34 3b 0d 00 00 00 04 |.........4;.....|
-0090: c9 14 c9 9f 00 00 00 05 |........|

 Test that cache files are created and grow correctly:

 $ rm .hg/cache/rbc*
 $ hg log -r "5 & branch(5)" -T "{rev}\n"
 5
-$ f --size --hexdump .hg/cache/rbc-*
+
+(here v3 is querying branch info for heads so it warms much more of the cache)
+
+#if v2
+$ f --size .hg/cache/rbc-*
 .hg/cache/rbc-names-v1: size=1
-0000: 61 |a|
 .hg/cache/rbc-revs-v1: size=48
-0000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
-0010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
-0020: 00 00 00 00 00 00 00 00 d8 cb c6 1d 00 00 00 00 |................|
+#else
+$ f --size .hg/cache/rbc-*
+.hg/cache/rbc-names-v1: size=84
+.hg/cache/rbc-revs-v1: size=152
+#endif

 $ cd ..

@@ -948,22 +944,20 @@ Test for multiple incorrect branch cache
 $ hg branch -q branch
 $ hg ci -Amf

-$ f --size --hexdump .hg/cache/rbc-*
-.hg/cache/rbc-names-v1: size=14
-0000: 64 65 66 61 75 6c 74 00 62 72 61 6e 63 68 |default.branch|
-.hg/cache/rbc-revs-v1: size=24
-0000: 66 e5 f5 aa 00 00 00 00 fa 4c 04 e5 00 00 00 00 |f........L......|
-0010: 56 46 78 69 00 00 00 01 |VFxi....|
+#if v2
+
+$ f --size --sha256 .hg/cache/rbc-*
+.hg/cache/rbc-names-v1: size=14, sha256=d376f7eea9a7e28fac6470e78dae753c81a5543c9ad436e96999590e004a281c
+.hg/cache/rbc-revs-v1: size=24, sha256=ec89032fd4e66e7282cb6e403848c681a855a9c36c6b44d19179218553b78779
+
 $ : > .hg/cache/rbc-revs-v1

 No superfluous rebuilding of cache:
 $ hg log -r "branch(null)&branch(branch)" --debug
-$ f --size --hexdump .hg/cache/rbc-*
-.hg/cache/rbc-names-v1: size=14
-0000: 64 65 66 61 75 6c 74 00 62 72 61 6e 63 68 |default.branch|
-.hg/cache/rbc-revs-v1: size=24
-0000: 66 e5 f5 aa 00 00 00 00 fa 4c 04 e5 00 00 00 00 |f........L......|
-0010: 56 46 78 69 00 00 00 01 |VFxi....|
+$ f --size --sha256 .hg/cache/rbc-*
+.hg/cache/rbc-names-v1: size=14, sha256=d376f7eea9a7e28fac6470e78dae753c81a5543c9ad436e96999590e004a281c
+.hg/cache/rbc-revs-v1: size=24, sha256=ec89032fd4e66e7282cb6e403848c681a855a9c36c6b44d19179218553b78779
+#endif

 $ cd ..

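Replacing raw hexdumps with sizes plus SHA-256 digests keeps the expectations byte-exact while remaining valid across the v2/v3 test cases. `f` is the helper script shipped in Mercurial's tests/ directory; outside the harness a roughly equivalent check with standard tools would be:

  $ wc -c .hg/cache/rbc-names-v1 .hg/cache/rbc-revs-v1
  $ sha256sum .hg/cache/rbc-*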
@@ -1316,9 +1310,15 @@ Unbundling revision should warm the served cache
 new changesets 2ab8003a1750:99ba08759bc7
 updating to branch A
 0 files updated, 0 files merged, 0 files removed, 0 files unresolved
-$ cat branchmap-update-01/.hg/cache/branch2-served
+#if v3
+$ cat branchmap-update-01/.hg/cache/branch3-exp-base
+tip-node=99ba08759bc7f6fdbe5304e83d0387f35c082479 tip-rev=1 topo-mode=pure
+A
+#else
+$ cat branchmap-update-01/.hg/cache/branch2-base
 99ba08759bc7f6fdbe5304e83d0387f35c082479 1
 99ba08759bc7f6fdbe5304e83d0387f35c082479 o A
+#endif
 $ hg -R branchmap-update-01 unbundle bundle.hg
 adding changesets
 adding manifests
@@ -1326,9 +1326,15 @@ Unbundling revision should warm the served cache
 added 2 changesets with 0 changes to 0 files
 new changesets a3b807b3ff0b:71ca9a6d524e (2 drafts)
 (run 'hg update' to get a working copy)
+#if v3
+$ cat branchmap-update-01/.hg/cache/branch3-exp-served
+tip-node=71ca9a6d524ed3c2a215119b2086ac3b8c4c8286 tip-rev=3 topo-mode=pure
+A
+#else
 $ cat branchmap-update-01/.hg/cache/branch2-served
 71ca9a6d524ed3c2a215119b2086ac3b8c4c8286 3
 71ca9a6d524ed3c2a215119b2086ac3b8c4c8286 o A
+#endif

 aborted Unbundle should not update the on disk cache

@@ -1350,9 +1356,15 @@ aborted Unbundle should not update the on disk cache
 updating to branch A
 0 files updated, 0 files merged, 0 files removed, 0 files unresolved

-$ cat branchmap-update-02/.hg/cache/branch2-served
+#if v3
+$ cat branchmap-update-02/.hg/cache/branch3-exp-base
+tip-node=99ba08759bc7f6fdbe5304e83d0387f35c082479 tip-rev=1 topo-mode=pure
+A
+#else
+$ cat branchmap-update-02/.hg/cache/branch2-base
 99ba08759bc7f6fdbe5304e83d0387f35c082479 1
 99ba08759bc7f6fdbe5304e83d0387f35c082479 o A
+#endif
 $ hg -R branchmap-update-02 unbundle bundle.hg --config "hooks.pretxnclose=python:$TESTTMP/simplehook.py:hook"
 adding changesets
 adding manifests
@@ -1361,6 +1373,12 @@ aborted Unbundle should not update the on disk cache
 rollback completed
 abort: pretxnclose hook failed
 [40]
-$ cat branchmap-update-02/.hg/cache/branch2-served
+#if v3
+$ cat branchmap-update-02/.hg/cache/branch3-exp-base
+tip-node=99ba08759bc7f6fdbe5304e83d0387f35c082479 tip-rev=1 topo-mode=pure
+A
+#else
+$ cat branchmap-update-02/.hg/cache/branch2-base
 99ba08759bc7f6fdbe5304e83d0387f35c082479 1
 99ba08759bc7f6fdbe5304e83d0387f35c082479 o A
+#endif
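These conditional blocks also document the on-disk difference between the two formats: v2 files (branch2-*) start with a `<tip-node> <tip-rev>` line followed by one `<node> o <branch>` entry per branch head, while the experimental v3 files (branch3-exp-*) use a key=value header and, in the pure-topological case, plain branch names. Side by side, with the values from the hunks above:

  $ cat .hg/cache/branch2-base
  99ba08759bc7f6fdbe5304e83d0387f35c082479 1
  99ba08759bc7f6fdbe5304e83d0387f35c082479 o A
  $ cat .hg/cache/branch3-exp-base
  tip-node=99ba08759bc7f6fdbe5304e83d0387f35c082479 tip-rev=1 topo-mode=pure
  A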
@@ -109,150 +109,18 @@ Check that the clone went well
 Check uncompressed
 ==================

-Cannot stream clone when server.uncompressed is set
+Cannot stream clone when server.uncompressed is set to false
+------------------------------------------------------------
+
+When `server.uncompressed` is disabled, the client should fall back to a
+bundle-based clone with a warning.
+

 $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=stream_out'
 200 Script output follows

 1

-#if stream-legacy
-$ hg debugcapabilities http://localhost:$HGPORT
-Main capabilities:
-batch
-branchmap
-$USUAL_BUNDLE2_CAPS_SERVER$
-changegroupsubset
-compression=$BUNDLE2_COMPRESSIONS$
-getbundle
-httpheader=1024
-httpmediatype=0.1rx,0.1tx,0.2tx
-known
-lookup
-pushkey
-unbundle=HG10GZ,HG10BZ,HG10UN
-unbundlehash
-Bundle2 capabilities:
-HG20
-bookmarks
-changegroup
-01
-02
-03
-checkheads
-related
-digests
-md5
-sha1
-sha512
-error
-abort
-unsupportedcontent
-pushraced
-pushkey
-hgtagsfnodes
-listkeys
-phases
-heads
-pushkey
-remote-changegroup
-http
-https
-
-$ hg clone --stream -U http://localhost:$HGPORT server-disabled
-warning: stream clone requested but server has them disabled
-requesting all changes
-adding changesets
-adding manifests
-adding file changes
-added 3 changesets with 1088 changes to 1088 files
-new changesets 96ee1d7354c4:5223b5e3265f
-
-$ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
-200 Script output follows
-content-type: application/mercurial-0.2
-
-
-$ f --size body --hexdump --bytes 100
-body: size=140
-0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
-0010: 73 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |s.ERROR:ABORT...|
-0020: 00 01 01 07 3c 04 16 6d 65 73 73 61 67 65 73 74 |....<..messagest|
-0030: 72 65 61 6d 20 64 61 74 61 20 72 65 71 75 65 73 |ream data reques|
-0040: 74 65 64 20 62 75 74 20 73 65 72 76 65 72 20 64 |ted but server d|
-0050: 6f 65 73 20 6e 6f 74 20 61 6c 6c 6f 77 20 74 68 |oes not allow th|
-0060: 69 73 20 66 |is f|
-
-#endif
-#if stream-bundle2-v2
-$ hg debugcapabilities http://localhost:$HGPORT
-Main capabilities:
-batch
-branchmap
-$USUAL_BUNDLE2_CAPS_SERVER$
-changegroupsubset
-compression=$BUNDLE2_COMPRESSIONS$
-getbundle
-httpheader=1024
-httpmediatype=0.1rx,0.1tx,0.2tx
-known
-lookup
-pushkey
-unbundle=HG10GZ,HG10BZ,HG10UN
-unbundlehash
-Bundle2 capabilities:
-HG20
-bookmarks
-changegroup
-01
-02
-03
-checkheads
-related
-digests
-md5
-sha1
-sha512
-error
-abort
-unsupportedcontent
-pushraced
-pushkey
-hgtagsfnodes
-listkeys
-phases
-heads
-pushkey
-remote-changegroup
-http
-https
-
-$ hg clone --stream -U http://localhost:$HGPORT server-disabled
-warning: stream clone requested but server has them disabled
-requesting all changes
-adding changesets
-adding manifests
-adding file changes
-added 3 changesets with 1088 changes to 1088 files
-new changesets 96ee1d7354c4:5223b5e3265f
-
-$ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
-200 Script output follows
-content-type: application/mercurial-0.2
-
-
-$ f --size body --hexdump --bytes 100
-body: size=140
-0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
-0010: 73 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |s.ERROR:ABORT...|
-0020: 00 01 01 07 3c 04 16 6d 65 73 73 61 67 65 73 74 |....<..messagest|
-0030: 72 65 61 6d 20 64 61 74 61 20 72 65 71 75 65 73 |ream data reques|
-0040: 74 65 64 20 62 75 74 20 73 65 72 76 65 72 20 64 |ted but server d|
-0050: 6f 65 73 20 6e 6f 74 20 61 6c 6c 6f 77 20 74 68 |oes not allow th|
-0060: 69 73 20 66 |is f|
-
-#endif
-#if stream-bundle2-v3
 $ hg debugcapabilities http://localhost:$HGPORT
 Main capabilities:
 batch
@@ -304,23 +172,6 @@ Cannot stream clone when server.uncompressed is set
 added 3 changesets with 1088 changes to 1088 files
 new changesets 96ee1d7354c4:5223b5e3265f

-$ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto 0.2 --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
-200 Script output follows
-content-type: application/mercurial-0.2
-
-
-$ f --size body --hexdump --bytes 100
-body: size=140
-0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
-0010: 73 0b 45 52 52 4f 52 3a 41 42 4f 52 54 00 00 00 |s.ERROR:ABORT...|
-0020: 00 01 01 07 3c 04 16 6d 65 73 73 61 67 65 73 74 |....<..messagest|
-0030: 72 65 61 6d 20 64 61 74 61 20 72 65 71 75 65 73 |ream data reques|
-0040: 74 65 64 20 62 75 74 20 73 65 72 76 65 72 20 64 |ted but server d|
-0050: 6f 65 73 20 6e 6f 74 20 61 6c 6c 6f 77 20 74 68 |oes not allow th|
-0060: 69 73 20 66 |is f|
-
-#endif
-
 $ killdaemons.py
 $ cd server
 $ hg serve -p $HGPORT -d --pid-file=hg.pid --error errors.txt
328 | $ cd .. |
|
179 | $ cd .. | |
329 |
|
180 | |||
330 | Basic clone |
|
181 | Basic clone | |
|
182 | ----------- | |||
|
183 | ||||
|
184 | Check that --stream trigger a stream clone and result in a valid repositoty | |||
|
185 | ||||
|
186 | We check the associated output for exact bytes on file number as changes in | |||
|
187 | these value implies changes in the data transfered and can detect unintended | |||
|
188 | changes in the process. | |||
331 |
|
189 | |||
332 | #if stream-legacy |
|
190 | #if stream-legacy | |
333 | $ hg clone --stream -U http://localhost:$HGPORT clone1 |
|
191 | $ hg clone --stream -U http://localhost:$HGPORT clone1 | |
@@ -338,7 +196,6 @@ Basic clone
 transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
 searching for changes
 no changes found
-$ cat server/errors.txt
 #endif
 #if stream-bundle2-v2
 $ hg clone --stream -U http://localhost:$HGPORT clone1
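In the hunks that follow, the separate `#if stream-legacy` / `#if stream-bundle2-v2` / `#if stream-bundle2-v3` blocks collapse into single ones, because run-tests.py feature annotations let one expected-output block cover every variant: a trailing `(feature !)` keeps a line only for runs where that test case or capability is active. For instance:

  * files to transfer* (glob) (no-stream-bundle2-v3 !)
  * entries to transfer (glob) (stream-bundle2-v3 !)
  transferred * KB in * seconds (* */sec) (glob)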
@@ -349,20 +206,8 @@ Basic clone
 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd no-rust !)
 1096 files to transfer, 99.0 KB of data (zstd rust !)
 transferred 99.0 KB in * seconds (* */sec) (glob) (zstd rust !)
+#endif

-$ ls -1 clone1/.hg/cache
-branch2-base
-branch2-immutable
-branch2-served
-branch2-served.hidden
-branch2-visible
-branch2-visible-hidden
-rbc-names-v1
-rbc-revs-v1
-tags2
-tags2-served
-$ cat server/errors.txt
-#endif
 #if stream-bundle2-v3
 $ hg clone --stream -U http://localhost:$HGPORT clone1
 streaming all changes
@@ -370,244 +215,68 @@ Basic clone
 transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
 transferred 98.9 KB in * seconds (* */sec) (glob) (zstd no-rust !)
 transferred 99.0 KB in * seconds (* */sec) (glob) (zstd rust !)
+#endif

+#if no-stream-legacy
 $ ls -1 clone1/.hg/cache
 branch2-base
-branch2-immutable
 branch2-served
-branch2-served.hidden
-branch2-visible
-branch2-visible-hidden
 rbc-names-v1
 rbc-revs-v1
 tags2
 tags2-served
-$ cat server/errors.txt
 #endif

+$ hg -R clone1 verify --quiet
+$ cat server/errors.txt
+
 getbundle requests with stream=1 are uncompressed
+-------------------------------------------------
+
+We check that `getbundle` will return a stream bundle when requested.
+
+XXX manually building the --requestheader is fragile and will drift away from actual usage

 $ get-with-headers.py $LOCALIP:$HGPORT '?cmd=getbundle' content-type --bodyfile body --hgproto '0.1 0.2 comp=zlib,none' --requestheader "x-hgarg-1=bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Astream%253Dv2&cg=0&common=0000000000000000000000000000000000000000&heads=c17445101a72edac06facd130d14808dfbd5c7c2&stream=1"
 200 Script output follows
 content-type: application/mercurial-0.2


-#if no-zstd no-rust
-$ f --size --hex --bytes 256 body
-body: size=119140
-0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
-0010: 62 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 |b.STREAM2.......|
-0020: 06 09 04 0c 26 62 79 74 65 63 6f 75 6e 74 31 30 |....&bytecount10|
-0030: 34 31 31 35 66 69 6c 65 63 6f 75 6e 74 31 30 39 |4115filecount109|
-0040: 34 72 65 71 75 69 72 65 6d 65 6e 74 73 67 65 6e |4requirementsgen|
-0050: 65 72 61 6c 64 65 6c 74 61 25 32 43 72 65 76 6c |eraldelta%2Crevl|
-0060: 6f 67 76 31 25 32 43 73 70 61 72 73 65 72 65 76 |ogv1%2Csparserev|
-0070: 6c 6f 67 00 00 80 00 73 08 42 64 61 74 61 2f 30 |log....s.Bdata/0|
-0080: 2e 69 00 03 00 01 00 00 00 00 00 00 00 02 00 00 |.i..............|
-0090: 00 01 00 00 00 00 00 00 00 01 ff ff ff ff ff ff |................|
-00a0: ff ff 80 29 63 a0 49 d3 23 87 bf ce fe 56 67 92 |...)c.I.#....Vg.|
-00b0: 67 2c 69 d1 ec 39 00 00 00 00 00 00 00 00 00 00 |g,i..9..........|
-00c0: 00 00 75 30 73 26 45 64 61 74 61 2f 30 30 63 68 |..u0s&Edata/00ch|
-00d0: 61 6e 67 65 6c 6f 67 2d 61 62 33 34 39 31 38 30 |angelog-ab349180|
-00e0: 61 30 34 30 35 30 31 30 2e 6e 64 2e 69 00 03 00 |a0405010.nd.i...|
-00f0: 01 00 00 00 00 00 00 00 05 00 00 00 04 00 00 00 |................|
-#endif
-#if zstd no-rust
-$ f --size --hex --bytes 256 body
-body: size=116327 (no-bigendian !)
-body: size=116322 (bigendian !)
+$ f --size --hex --bytes 48 body
+body: size=* (glob)
 0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
-0010:
-0020: 06 09 04 0c
+0010: ?? 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 |?.STREAM2.......| (glob)
+0020: 06 09 04 0c ?? 62 79 74 65 63 6f 75 6e 74 31 30 |....?bytecount10| (glob)
-0030: 31 32 37 36 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1276filecount109| (no-bigendian !)
-0030: 31 32 37 31 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1271filecount109| (bigendian !)
-0040: 34 72 65 71 75 69 72 65 6d 65 6e 74 73 67 65 6e |4requirementsgen|
-0050: 65 72 61 6c 64 65 6c 74 61 25 32 43 72 65 76 6c |eraldelta%2Crevl|
-0060: 6f 67 2d 63 6f 6d 70 72 65 73 73 69 6f 6e 2d 7a |og-compression-z|
-0070: 73 74 64 25 32 43 72 65 76 6c 6f 67 76 31 25 32 |std%2Crevlogv1%2|
-0080: 43 73 70 61 72 73 65 72 65 76 6c 6f 67 00 00 80 |Csparserevlog...|
-0090: 00 73 08 42 64 61 74 61 2f 30 2e 69 00 03 00 01 |.s.Bdata/0.i....|
-00a0: 00 00 00 00 00 00 00 02 00 00 00 01 00 00 00 00 |................|
-00b0: 00 00 00 01 ff ff ff ff ff ff ff ff 80 29 63 a0 |.............)c.|
-00c0: 49 d3 23 87 bf ce fe 56 67 92 67 2c 69 d1 ec 39 |I.#....Vg.g,i..9|
-00d0: 00 00 00 00 00 00 00 00 00 00 00 00 75 30 73 26 |............u0s&|
-00e0: 45 64 61 74 61 2f 30 30 63 68 61 6e 67 65 6c 6f |Edata/00changelo|
-00f0: 67 2d 61 62 33 34 39 31 38 30 61 30 34 30 35 30 |g-ab349180a04050|
-#endif
-#if zstd rust no-dirstate-v2
-$ f --size --hex --bytes 256 body
-body: size=116310 (no-rust !)
-body: size=116495 (rust no-stream-legacy no-bigendian !)
-body: size=116490 (rust no-stream-legacy bigendian !)
-body: size=116327 (rust stream-legacy no-bigendian !)
-body: size=116322 (rust stream-legacy bigendian !)
-0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
-0010: 7c 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 ||.STREAM2.......|
-0020: 06 09 04 0c 40 62 79 74 65 63 6f 75 6e 74 31 30 |....@bytecount10|
-0030: 31 32 37 36 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1276filecount109| (no-rust !)
-0040: 33 72 65 71 75 69 72 65 6d 65 6e 74 73 67 65 6e |3requirementsgen| (no-rust !)
-0030: 31 34 30 32 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1402filecount109| (rust no-stream-legacy no-bigendian !)
-0030: 31 33 39 37 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1397filecount109| (rust no-stream-legacy bigendian !)
-0040: 36 72 65 71 75 69 72 65 6d 65 6e 74 73 67 65 6e |6requirementsgen| (rust no-stream-legacy !)
-0030: 31 32 37 36 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1276filecount109| (rust stream-legacy no-bigendian !)
-0030: 31 32 37 31 66 69 6c 65 63 6f 75 6e 74 31 30 39 |1271filecount109| (rust stream-legacy bigendian !)
-0040: 34 72 65 71 75 69 72 65 6d 65 6e 74 73 67 65 6e |4requirementsgen| (rust stream-legacy !)
-0050: 65 72 61 6c 64 65 6c 74 61 25 32 43 72 65 76 6c |eraldelta%2Crevl|
-0060: 6f 67 2d 63 6f 6d 70 72 65 73 73 69 6f 6e 2d 7a |og-compression-z|
-0070: 73 74 64 25 32 43 72 65 76 6c 6f 67 76 31 25 32 |std%2Crevlogv1%2|
-0080: 43 73 70 61 72 73 65 72 65 76 6c 6f 67 00 00 80 |Csparserevlog...|
-0090: 00 73 08 42 64 61 74 61 2f 30 2e 69 00 03 00 01 |.s.Bdata/0.i....|
-00a0: 00 00 00 00 00 00 00 02 00 00 00 01 00 00 00 00 |................|
-00b0: 00 00 00 01 ff ff ff ff ff ff ff ff 80 29 63 a0 |.............)c.|
-00c0: 49 d3 23 87 bf ce fe 56 67 92 67 2c 69 d1 ec 39 |I.#....Vg.g,i..9|
-00d0: 00 00 00 00 00 00 00 00 00 00 00 00 75 30 73 26 |............u0s&|
-00e0: 45 64 61 74 61 2f 30 30 63 68 61 6e 67 65 6c 6f |Edata/00changelo|
-00f0: 67 2d 61 62 33 34 39 31 38 30 61 30 34 30 35 30 |g-ab349180a04050|
-#endif
-#if zstd dirstate-v2
-$ f --size --hex --bytes 256 body
-body: size=109549
-0000: 04 6e 6f 6e 65 48 47 32 30 00 00 00 00 00 00 00 |.noneHG20.......|
-0010: c0 07 53 54 52 45 41 4d 32 00 00 00 00 03 00 09 |..STREAM2.......|
-0020: 05 09 04 0c 85 62 79 74 65 63 6f 75 6e 74 39 35 |.....bytecount95|
-0030: 38 39 37 66 69 6c 65 63 6f 75 6e 74 31 30 33 30 |897filecount1030|
-0040: 72 65 71 75 69 72 65 6d 65 6e 74 73 64 6f 74 65 |requirementsdote|
-0050: 6e 63 6f 64 65 25 32 43 65 78 70 2d 64 69 72 73 |ncode%2Cexp-dirs|
-0060: 74 61 74 65 2d 76 32 25 32 43 66 6e 63 61 63 68 |tate-v2%2Cfncach|
-0070: 65 25 32 43 67 65 6e 65 72 61 6c 64 65 6c 74 61 |e%2Cgeneraldelta|
-0080: 25 32 43 70 65 72 73 69 73 74 65 6e 74 2d 6e 6f |%2Cpersistent-no|
-0090: 64 65 6d 61 70 25 32 43 72 65 76 6c 6f 67 2d 63 |demap%2Crevlog-c|
-00a0: 6f 6d 70 72 65 73 73 69 6f 6e 2d 7a 73 74 64 25 |ompression-zstd%|
-00b0: 32 43 72 65 76 6c 6f 67 76 31 25 32 43 73 70 61 |2Crevlogv1%2Cspa|
-00c0: 72 73 65 72 65 76 6c 6f 67 25 32 43 73 74 6f 72 |rserevlog%2Cstor|
-00d0: 65 00 00 80 00 73 08 42 64 61 74 61 2f 30 2e 69 |e....s.Bdata/0.i|
-00e0: 00 03 00 01 00 00 00 00 00 00 00 02 00 00 00 01 |................|
-00f0: 00 00 00 00 00 00 00 01 ff ff ff ff ff ff ff ff |................|
-#endif

 --uncompressed is an alias to --stream
+---------------------------------------

-#if stream-legacy
-$ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed
-streaming all changes
-1091 files to transfer, 102 KB of data (no-zstd !)
-transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
-1091 files to transfer, 98.8 KB of data (zstd !)
-transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
-searching for changes
-no changes found
-#endif
-#if stream-bundle2-v2
+The alias flag should trigger a stream clone too.
+
 $ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed
 streaming all changes
-1094 files to transfer, 102 KB of data (no-zstd !)
-transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
-1094 files to transfer, 98.9 KB of data (zstd no-rust !)
-transferred 98.9 KB in * seconds (* */sec) (glob) (zstd no-rust !)
-1096 files to transfer, 99.0 KB of data (zstd rust !)
-transferred 99.0 KB in * seconds (* */sec) (glob) (zstd rust !)
-#endif
-#if stream-bundle2-v3
-$ hg clone --uncompressed -U http://localhost:$HGPORT clone1-uncompressed
-streaming all changes
-1093 entries to transfer
-transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
-transferred 98.9 KB in * seconds (* */sec) (glob) (zstd no-rust !)
-transferred 99.0 KB in * seconds (* */sec) (glob) (zstd rust !)
-#endif
+* files to transfer* (glob) (no-stream-bundle2-v3 !)
+* entries to transfer (glob) (stream-bundle2-v3 !)
+transferred * KB in * seconds (* */sec) (glob)
+searching for changes (stream-legacy !)
+no changes found (stream-legacy !)

 Clone with background file closing enabled
+-------------------------------------------

-#if stream-legacy
-$ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding
-using http://localhost:$HGPORT/
-sending capabilities command
-sending branchmap command
-streaming all changes
-sending stream_out command
-1091 files to transfer, 102 KB of data (no-zstd !)
-1091 files to transfer, 98.8 KB of data (zstd !)
-starting 4 threads for background file closing
-updating the branch cache
-transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
-transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
-query 1; heads
-sending batch command
-searching for changes
-all remote heads known locally
-no changes found
-sending getbundle command
-bundle2-input-bundle: with-transaction
-bundle2-input-part: "listkeys" (params: 1 mandatory) supported
-bundle2-input-part: "phase-heads" supported
-bundle2-input-part: total payload size 24
-bundle2-input-bundle: 2 parts total
-checking for updated bookmarks
-updating the branch cache
-(sent 5 HTTP requests and * bytes; received * bytes in responses) (glob)
-#endif
-#if stream-bundle2-v2
-$ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding
-using http://localhost:$HGPORT/
-sending capabilities command
-query 1; heads
-sending batch command
-streaming all changes
-sending getbundle command
-bundle2-input-bundle: with-transaction
-bundle2-input-part: "stream2" (params: 3 mandatory) supported
-applying stream bundle
-1094 files to transfer, 102 KB of data (no-zstd !)
-1094 files to transfer, 98.9 KB of data (zstd no-rust !)
-1096 files to transfer, 99.0 KB of data (zstd rust !)
-starting 4 threads for background file closing
+The background file closing logic should trigger when configured to do so, and
+the result should be a valid repository.
+
+$ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep "background file closing"
 starting 4 threads for background file closing
-updating the branch cache
-transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
-bundle2-input-part: total payload size 119001 (no-zstd !)
-transferred 98.9 KB in * seconds (* */sec) (glob) (zstd no-rust !)
-transferred 99.0 KB in * seconds (* */sec) (glob) (zstd rust !)
-bundle2-input-part: total payload size 116162 (zstd no-bigendian no-rust !)
-bundle2-input-part: total payload size 116330 (zstd no-bigendian rust !)
-bundle2-input-part: total payload size 116157 (zstd bigendian no-rust !)
-bundle2-input-part: total payload size 116325 (zstd bigendian rust !)
-bundle2-input-part: "listkeys" (params: 1 mandatory) supported
-bundle2-input-bundle: 2 parts total
-checking for updated bookmarks
-updating the branch cache
-(sent 3 HTTP requests and * bytes; received * bytes in responses) (glob)
-#endif
-#if stream-bundle2-v3
-$ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --stream -U http://localhost:$HGPORT clone-background | grep -v adding
-using http://localhost:$HGPORT/
-sending capabilities command
-query 1; heads
-sending batch command
-streaming all changes
-sending getbundle command
-bundle2-input-bundle: with-transaction
-bundle2-input-part: "stream3-exp" (params: 1 mandatory) supported
-applying stream bundle
-1093 entries to transfer
-starting 4 threads for background file closing
-starting 4 threads for background file closing
-updating the branch cache
-transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
-bundle2-input-part: total payload size 120096 (no-zstd !)
-transferred 98.9 KB in * seconds (* */sec) (glob) (zstd no-rust !)
-transferred 99.0 KB in * seconds (* */sec) (glob) (zstd rust !)
-bundle2-input-part: total payload size 117257 (zstd no-rust no-bigendian !)
-bundle2-input-part: total payload size 117425 (zstd rust no-bigendian !)
-bundle2-input-part: total payload size 117252 (zstd bigendian no-rust !)
-bundle2-input-part: total payload size 117420 (zstd bigendian rust !)
-bundle2-input-part: "listkeys" (params: 1 mandatory) supported
-bundle2-input-bundle: 2 parts total
-checking for updated bookmarks
-updating the branch cache
-(sent 3 HTTP requests and * bytes; received * bytes in responses) (glob)
-#endif
+starting 4 threads for background file closing (no-stream-legacy !)
+$ hg verify -R clone-background --quiet

 Cannot stream clone when there are secret changesets
+----------------------------------------------------
+
+If secret changesets are present, they should not be cloned by default, and the
+clone falls back to a bundle clone.

 $ hg -R server phase --force --secret -r tip
 $ hg clone --stream -U http://localhost:$HGPORT secret-denied
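Background file closing is plain configuration and can be exercised outside the test suite as well; a minimal sketch against the same server used above:

  $ hg --config worker.backgroundclose=true \
  >    --config worker.backgroundcloseminfilecount=1 \
  >    clone --stream -U http://localhost:$HGPORT clone-background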
@@ -622,44 +291,30 @@ Cannot stream clone when there are secret changesets
 $ killdaemons.py

 Streaming of secrets can be overridden by server config
+-------------------------------------------------------
+
+Secret changesets can still be streamed if the server is configured to do so.

 $ cd server
 $ hg serve --config server.uncompressedallowsecret=true -p $HGPORT -d --pid-file=hg.pid
 $ cat hg.pid > $DAEMON_PIDS
 $ cd ..

-#if stream-legacy
-$ hg clone --stream -U http://localhost:$HGPORT secret-allowed
-streaming all changes
-1091 files to transfer, 102 KB of data (no-zstd !)
-transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
-1091 files to transfer, 98.8 KB of data (zstd !)
-transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
-searching for changes
-no changes found
-#endif
-#if stream-bundle2-v2
 $ hg clone --stream -U http://localhost:$HGPORT secret-allowed
 streaming all changes
-1094 files to transfer, 102 KB of data (no-zstd !)
-transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
-1094 files to transfer, 98.9 KB of data (zstd no-rust !)
-transferred 98.9 KB in * seconds (* */sec) (glob) (zstd no-rust !)
-1096 files to transfer, 99.0 KB of data (zstd rust !)
-transferred 99.0 KB in * seconds (* */sec) (glob) (zstd rust !)
-#endif
-#if stream-bundle2-v3
-$ hg clone --stream -U http://localhost:$HGPORT secret-allowed
-streaming all changes
-1093 entries to transfer
-transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
-transferred 98.9 KB in * seconds (* */sec) (glob) (zstd no-rust !)
-transferred 99.0 KB in * seconds (* */sec) (glob) (zstd rust !)
-#endif
+* files to transfer* (glob) (no-stream-bundle2-v3 !)
+* entries to transfer (glob) (stream-bundle2-v3 !)
+transferred * KB in * seconds (* */sec) (glob)
+searching for changes (stream-legacy !)
+no changes found (stream-legacy !)

 $ killdaemons.py

 Verify interaction between preferuncompressed and secret presence
+------------------------------------------------------------------
+
+Secret presence will still make the clone fall back to a normal bundle even if
+the server prefers stream clone.

 $ cd server
 $ hg serve --config server.preferuncompressed=true -p $HGPORT -d --pid-file=hg.pid
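The override shown here is a one-line server setting; it can equally live in the served repository's hgrc (use with care, since it exposes secret changesets to stream clones):

  $ cat <<EOF >> server/.hg/hgrc
  > [server]
  > uncompressedallowsecret = true
  > EOF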
@@ -677,6 +332,9 @@ Verify interaction between preferuncompressed and secret presence
 $ killdaemons.py

 Clone not allowed when full bundles disabled and can't serve secrets
+--------------------------------------------------------------------
+
+The clone should fail, since no valid transfer option is available.

 $ cd server
 $ hg serve --config server.disablefullbundle=true -p $HGPORT -d --pid-file=hg.pid
@@ -692,6 +350,8 @@ Clone not allowed when full bundles disabled and can't serve secrets
 [100]

 Local stream clone with secrets involved
+----------------------------------------
+
 (This is just a test over behavior: if you have access to the repo's files,
 there is no security so it isn't important to prevent a clone here.)

704 | added 2 changesets with 1025 changes to 1025 files |
|
364 | added 2 changesets with 1025 changes to 1025 files | |
705 | new changesets 96ee1d7354c4:c17445101a72 |
|
365 | new changesets 96ee1d7354c4:c17445101a72 | |
706 |
|
366 | |||
|
367 | (revert introduction of secret changeset) | |||
|
368 | ||||
|
369 | $ hg -R server phase --draft 'secret()' | |||
|
370 | ||||
707 | Stream clone while repo is changing: |
|
371 | Stream clone while repo is changing: | |
|
372 | ------------------------------------ | |||
|
373 | ||||
|
374 | We should send a repository in a valid state, ignoring the ongoing transaction. | |||
708 |
|
375 | |||
709 | $ mkdir changing |
|
376 | $ mkdir changing | |
710 | $ cd changing |
|
377 | $ cd changing | |
711 |
|
378 | |||
712 | prepare repo with small and big file to cover both code paths in emitrevlogdata |
|
379 | prepare repo with small and big file to cover both code paths in emitrevlogdata | |
|
380 | (inlined revlog and non-inlined revlogs). | |||
713 |
|
381 | |||
714 | $ hg init repo |
|
382 | $ hg init repo | |
715 | $ touch repo/f1 |
|
383 | $ touch repo/f1 | |
@@ -740,15 +408,14 b' actually serving file content' | |||||
740 | $ $RUNTESTDIR/testlib/wait-on-file 10 $HG_TEST_STREAM_WALKED_FILE_3 |
|
408 | $ $RUNTESTDIR/testlib/wait-on-file 10 $HG_TEST_STREAM_WALKED_FILE_3 | |
741 | $ hg -R clone id |
|
409 | $ hg -R clone id | |
742 | 000000000000 |
|
410 | 000000000000 | |
|
411 | $ hg -R clone verify --quiet | |||
743 | $ cat errors.log |
|
412 | $ cat errors.log | |
744 | $ cd .. |
|
413 | $ cd .. | |
745 |
|
414 | |||
746 | Stream repository with bookmarks |
|
415 | Stream repository with bookmarks | |
747 | -------------------------------- |
|
416 | -------------------------------- | |
748 |
|
417 | |||
749 | (revert introduction of secret changeset) |
|
418 | The bookmark file should be send over in the stream bundle. | |
750 |
|
||||
751 | $ hg -R server phase --draft 'secret()' |
|
|||
752 |
|
419 | |||
753 | add a bookmark |
|
420 | add a bookmark | |
754 |
|
421 | |||
@@ -756,40 +423,17 b' add a bookmark' | |||||
756 |
|
423 | |||
757 | clone it |
|
424 | clone it | |
758 |
|
425 | |||
759 | #if stream-legacy |
|
|||
760 | $ hg clone --stream http://localhost:$HGPORT with-bookmarks |
|
|||
761 | streaming all changes |
|
|||
762 | 1091 files to transfer, 102 KB of data (no-zstd !) |
|
|||
763 | transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !) |
|
|||
764 | 1091 files to transfer, 98.8 KB of data (zstd !) |
|
|||
765 | transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !) |
|
|||
766 | searching for changes |
|
|||
767 | no changes found |
|
|||
768 | updating to branch default |
|
|||
769 | 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
|||
770 | #endif |
|
|||
771 | #if stream-bundle2-v2 |
|
|||
772 | $ hg clone --stream http://localhost:$HGPORT with-bookmarks |
|
426 | $ hg clone --stream http://localhost:$HGPORT with-bookmarks | |
773 | streaming all changes |
|
427 | streaming all changes | |
774 |
109 |
|
428 | 1091 files to transfer, * KB of data (glob) (stream-legacy !) | |
775 | transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !) |
|
429 | 1097 files to transfer, * KB of data (glob) (stream-bundle2-v2 no-rust !) | |
776 |
109 |
|
430 | 1099 files to transfer, * KB of data (glob) (stream-bundle2-v2 rust !) | |
777 | transferred 99.1 KB in * seconds (* */sec) (glob) (zstd no-rust !) |
|
431 | 1096 entries to transfer (stream-bundle2-v3 !) | |
778 | 1099 files to transfer, 99.2 KB of data (zstd rust !) |
|
432 | transferred * KB in * seconds (* */sec) (glob) | |
779 | transferred 99.2 KB in * seconds (* */sec) (glob) (zstd rust !) |
|
433 | searching for changes (stream-legacy !) | |
|
434 | no changes found (stream-legacy !) | |||
780 | updating to branch default |
|
435 | updating to branch default | |
781 | 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
436 | 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
782 | #endif |
|
|||
783 | #if stream-bundle2-v3 |
|
|||
784 | $ hg clone --stream http://localhost:$HGPORT with-bookmarks |
|
|||
785 | streaming all changes |
|
|||
786 | 1096 entries to transfer |
|
|||
787 | transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !) |
|
|||
788 | transferred 99.1 KB in * seconds (* */sec) (glob) (zstd no-rust !) |
|
|||
789 | transferred 99.2 KB in * seconds (* */sec) (glob) (zstd rust !) |
|
|||
790 | updating to branch default |
|
|||
791 | 1088 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
|||
792 | #endif |
|
|||
793 | $ hg verify -R with-bookmarks -q |
|
437 | $ hg verify -R with-bookmarks -q | |
794 | $ hg -R with-bookmarks bookmarks |
|
438 | $ hg -R with-bookmarks bookmarks | |
795 | some-bookmark 2:5223b5e3265f |
|
439 | some-bookmark 2:5223b5e3265f | |
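A note on the annotations above, for readers skimming these test diffs: in Mercurial's `.t` tests a trailing `(glob)` lets `*` in an expected line match arbitrary text, and a trailing `(feature !)` keeps the line only when the test runner was started with that feature active. That is how the patch can collapse three per-format `#if` blocks into one: each format-specific output line now carries its own condition. A contrived sketch (the echoed text is purely illustrative):

  $ echo data
  data (zstd !)
  data (no-zstd !)

Exactly one of the two expected lines survives filtering, depending on whether the `zstd` feature holds for the run, so the same block is valid either way.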
@@ -797,6 +441,9 @@ clone it
 Stream repository with phases
 -----------------------------
 
+The file storing phases information (e.g. phaseroots) should be sent as part of
+the stream bundle.
+
 Clone as publishing
 
   $ hg -R server phase -r 'all()'
@@ -804,40 +451,17 @@ Clone as publishing
   1: draft
   2: draft
 
-#if stream-legacy
-  $ hg clone --stream http://localhost:$HGPORT phase-publish
-  streaming all changes
-  1091 files to transfer, 102 KB of data (no-zstd !)
-  transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
-  1091 files to transfer, 98.8 KB of data (zstd !)
-  transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
-  searching for changes
-  no changes found
-  updating to branch default
-  1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
-#endif
-#if stream-bundle2-v2
   $ hg clone --stream http://localhost:$HGPORT phase-publish
   streaming all changes
-  1097 files to transfer, 102 KB of data (no-zstd !)
-  transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
-  1097 files to transfer, 99.1 KB of data (zstd no-rust !)
-  transferred 99.1 KB in * seconds (* */sec) (glob) (zstd no-rust !)
-  1099 files to transfer, 99.2 KB of data (zstd rust !)
-  transferred 99.2 KB in * seconds (* */sec) (glob) (zstd rust !)
+  1091 files to transfer, * KB of data (glob) (stream-legacy !)
+  1097 files to transfer, * KB of data (glob) (stream-bundle2-v2 no-rust !)
+  1099 files to transfer, * KB of data (glob) (stream-bundle2-v2 rust !)
+  1096 entries to transfer (stream-bundle2-v3 !)
+  transferred * KB in * seconds (* */sec) (glob)
+  searching for changes (stream-legacy !)
+  no changes found (stream-legacy !)
   updating to branch default
   1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
-#endif
-#if stream-bundle2-v3
-  $ hg clone --stream http://localhost:$HGPORT phase-publish
-  streaming all changes
-  1096 entries to transfer
-  transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
-  transferred 99.1 KB in * seconds (* */sec) (glob) (zstd no-rust !)
-  transferred 99.2 KB in * seconds (* */sec) (glob) (zstd rust !)
-  updating to branch default
-  1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
-#endif
   $ hg verify -R phase-publish -q
   $ hg -R phase-publish phase -r 'all()'
   0: public
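The next section needs a non-publishing server. By default a served repository is publishing: everything pulled from it becomes public on the client, which is exactly what the `0: public` lines above verify. Turning that off is a one-line configuration change; a sketch, assuming the server repository lives in `server/`:

  $ cat >> server/.hg/hgrc << EOF
  > [phases]
  > publish = no
  > EOF

With `phases.publish` disabled, draft changesets stay draft across exchanges, and the v2/v3 stream formats preserve that, as the following test demonstrates.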
@@ -854,73 +478,47 @@ Clone as non publishing
   $ hg -R server serve -p $HGPORT -d --pid-file=hg.pid
   $ cat hg.pid > $DAEMON_PIDS
 
-#if stream-legacy
-
-With v1 of the stream protocol, changeset are always cloned as public. It make
-stream v1 unsuitable for non-publishing repository.
-
-  $ hg clone --stream http://localhost:$HGPORT phase-no-publish
-  streaming all changes
-  1091 files to transfer, 102 KB of data (no-zstd !)
-  transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
-  1091 files to transfer, 98.8 KB of data (zstd !)
-  transferred 98.8 KB in * seconds (* */sec) (glob) (zstd !)
-  searching for changes
-  no changes found
-  updating to branch default
-  1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
-  $ hg -R phase-no-publish phase -r 'all()'
-  0: public
-  1: public
-  2: public
-#endif
-#if stream-bundle2-v2
   $ hg clone --stream http://localhost:$HGPORT phase-no-publish
   streaming all changes
-  1098 files to transfer, 102 KB of data (no-zstd !)
-  transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
-  1098 files to transfer, 99.1 KB of data (zstd no-rust !)
-  transferred 99.1 KB in * seconds (* */sec) (glob) (zstd no-rust !)
-  1100 files to transfer, 99.2 KB of data (zstd rust !)
-  transferred 99.2 KB in * seconds (* */sec) (glob) (zstd rust !)
+  1091 files to transfer, * KB of data (glob) (stream-legacy !)
+  1098 files to transfer, * KB of data (glob) (stream-bundle2-v2 no-rust !)
+  1100 files to transfer, * KB of data (glob) (stream-bundle2-v2 rust !)
+  1097 entries to transfer (stream-bundle2-v3 !)
+  transferred * KB in * seconds (* */sec) (glob)
+  searching for changes (stream-legacy !)
+  no changes found (stream-legacy !)
   updating to branch default
   1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
+
+Note: With v1 of the stream protocol, changeset are always cloned as public. It
+make stream v1 unsuitable for non-publishing repository.
+
   $ hg -R phase-no-publish phase -r 'all()'
-  0: draft
-  1: draft
-  2: draft
-#endif
-#if stream-bundle2-v3
-  $ hg clone --stream http://localhost:$HGPORT phase-no-publish
-  streaming all changes
-  1097 entries to transfer
-  transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
-  transferred 99.1 KB in * seconds (* */sec) (glob) (zstd no-rust !)
-  transferred 99.2 KB in * seconds (* */sec) (glob) (zstd rust !)
-  updating to branch default
-  1088 files updated, 0 files merged, 0 files removed, 0 files unresolved
-  $ hg -R phase-no-publish phase -r 'all()'
-  0: draft
-  1: draft
-  2: draft
-#endif
+  0: public (stream-legacy !)
+  1: public (stream-legacy !)
+  2: public (stream-legacy !)
+  0: draft (no-stream-legacy !)
+  1: draft (no-stream-legacy !)
+  2: draft (no-stream-legacy !)
   $ hg verify -R phase-no-publish -q
 
   $ killdaemons.py
 
+
+Stream repository with obsolescence
+-----------------------------------
+
 #if stream-legacy
 
 With v1 of the stream protocol, changeset are always cloned as public. There's
 no obsolescence markers exchange in stream v1.
 
-#endif
-#if stream-bundle2-v2
-
-Stream repository with obsolescence
------------------------------------
-
+#else
 
 Clone non-publishing with obsolescence
 
+The obsstore file should be send as part of the stream bundle
+
   $ cat >> $HGRCPATH << EOF
   > [experimental]
   > evolution=all
@@ -943,62 +541,10 @@ Clone non-publishing with obsolescence
 
   $ hg clone -U --stream http://localhost:$HGPORT with-obsolescence
   streaming all changes
-  1099 files to transfer, 102 KB of data (no-zstd !)
-  transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
-  1099 files to transfer, 99.5 KB of data (zstd no-rust !)
-  transferred 99.5 KB in * seconds (* */sec) (glob) (zstd no-rust !)
-  1101 files to transfer, 99.6 KB of data (zstd rust !)
-  transferred 99.6 KB in * seconds (* */sec) (glob) (zstd rust !)
-  $ hg -R with-obsolescence log -T '{rev}: {phase}\n'
-  2: draft
-  1: draft
-  0: draft
-  $ hg debugobsolete -R with-obsolescence
-  8c206a663911c1f97f2f9d7382e417ae55872cfa 0 {5223b5e3265f0df40bb743da62249413d74ac70f} (Thu Jan 01 00:00:00 1970 +0000) {'user': 'test'}
-  $ hg verify -R with-obsolescence -q
-
-  $ hg clone -U --stream --config experimental.evolution=0 http://localhost:$HGPORT with-obsolescence-no-evolution
-  streaming all changes
-  remote: abort: server has obsolescence markers, but client cannot receive them via stream clone
-  abort: pull failed on remote
-  [100]
-
-  $ killdaemons.py
-
-#endif
-#if stream-bundle2-v3
-
-Stream repository with obsolescence
------------------------------------
-
-Clone non-publishing with obsolescence
-
-  $ cat >> $HGRCPATH << EOF
-  > [experimental]
-  > evolution=all
-  > EOF
-
-  $ cd server
-  $ echo foo > foo
-  $ hg -q commit -m 'about to be pruned'
-  $ hg debugobsolete `hg log -r . -T '{node}'` -d '0 0' -u test --record-parents
-  1 new obsolescence markers
-  obsoleted 1 changesets
-  $ hg up null -q
-  $ hg log -T '{rev}: {phase}\n'
-  2: draft
-  1: draft
-  0: draft
-  $ hg serve -p $HGPORT -d --pid-file=hg.pid
-  $ cat hg.pid > $DAEMON_PIDS
-  $ cd ..
-
-  $ hg clone -U --stream http://localhost:$HGPORT with-obsolescence
-  streaming all changes
-  1098 entries to transfer
-  transferred 102 KB in * seconds (* */sec) (glob) (no-zstd !)
-  transferred 99.5 KB in * seconds (* */sec) (glob) (zstd no-rust !)
-  transferred 99.6 KB in * seconds (* */sec) (glob) (zstd rust !)
+  1099 files to transfer, * KB of data (glob) (stream-bundle2-v2 no-rust !)
+  1101 files to transfer, * KB of data (glob) (stream-bundle2-v2 rust !)
+  1098 entries to transfer (no-stream-bundle2-v2 !)
+  transferred * KB in * seconds (* */sec) (glob)
   $ hg -R with-obsolescence log -T '{rev}: {phase}\n'
   2: draft
   1: draft
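The `#if stream-legacy` / `#else` / `#endif` lines threading through this section are directives for Mercurial's test runner, not shell: they keep a block of a `.t` file only when the named feature matches the current run, and this patch uses `#else` to fold the old per-format copies into a single block. A minimal illustration of the construct:

#if zstd
  $ echo zstd run
  zstd run
#else
  $ echo plain run
  plain run
#endif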
@@ -1018,19 +564,16 @@ Clone non-publishing with obsolescence
 #endif
 
 Cloning a repo with no requirements doesn't give some obscure error
+-------------------------------------------------------------------
 
   $ mkdir -p empty-repo/.hg
   $ hg clone -q --stream ssh://user@dummy/empty-repo empty-repo2
   $ hg --cwd empty-repo2 verify -q
 
 Cloning a repo with an empty manifestlog doesn't give some weird error
+----------------------------------------------------------------------
 
   $ rm -r empty-repo; hg init empty-repo
   $ (cd empty-repo; touch x; hg commit -Am empty; hg debugstrip -r 0) > /dev/null
   $ hg clone -q --stream ssh://user@dummy/empty-repo empty-repo3
   $ hg --cwd empty-repo3 verify -q
-  [1]
-
-The warnings filtered out here are talking about zero-length 'orphan' data files.
-Those are harmless, so that's fine.
-
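For context on the obsolescence test above: an obsolescence marker records that a changeset was rewritten or pruned, and it lives in the store's obsstore file, which is why every stream format except v1 can simply ship it along with the other store files. Pruning the working-copy parent, as the test does, reduces to one command, shown here in isolation with the output the test expects:

  $ hg debugobsolete `hg log -r . -T '{node}'` --record-parents
  1 new obsolescence markers
  obsoleted 1 changesets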
@@ -47,11 +47,7 @@ Ensure branchcache got copied over:
 
   $ ls .hg/cache
   branch2-base
-  branch2-immutable
   branch2-served
-  branch2-served.hidden
-  branch2-visible
-  branch2-visible-hidden
   rbc-names-v1
   rbc-revs-v1
   tags2
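Each `branch2-<filter>` file above caches the branch heads for one repository view ("base", "served", and so on); 6.8 stops persisting the rarely read views, which is why `branch2-immutable`, `branch2-served.hidden`, `branch2-visible` and `branch2-visible-hidden` disappear from these listings. The surviving caches can be rebuilt on demand; a sketch (the exact listing depends on repository state):

  $ hg debugupdatecache
  $ ls .hg/cache/branch2*
  .hg/cache/branch2-base
  .hg/cache/branch2-served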
@@ -71,42 +67,34 @@ No update, with debug option:
 
 #if hardlink
   $ hg --debug clone -U . ../c --config progress.debug=true
-  linking: 1/16 files (6.25%) (no-rust !)
-  linking: 2/16 files (12.50%) (no-rust !)
-  linking: 3/16 files (18.75%) (no-rust !)
-  linking: 4/16 files (25.00%) (no-rust !)
-  linking: 5/16 files (31.25%) (no-rust !)
-  linking: 6/16 files (37.50%) (no-rust !)
-  linking: 7/16 files (43.75%) (no-rust !)
-  linking: 8/16 files (50.00%) (no-rust !)
-  linking: 9/16 files (56.25%) (no-rust !)
-  linking: 10/16 files (62.50%) (no-rust !)
-  linking: 11/16 files (68.75%) (no-rust !)
-  linking: 12/16 files (75.00%) (no-rust !)
-  linking: 13/16 files (81.25%) (no-rust !)
-  linking: 14/16 files (87.50%) (no-rust !)
-  linking: 15/16 files (93.75%) (no-rust !)
-  linking: 16/16 files (100.00%) (no-rust !)
-  linked 16 files (no-rust !)
-  linking: 1/18 files (5.56%) (rust !)
-  linking: 2/18 files (11.11%) (rust !)
-  linking: 3/18 files (16.67%) (rust !)
-  linking: 4/18 files (22.22%) (rust !)
-  linking: 5/18 files (27.78%) (rust !)
-  linking: 6/18 files (33.33%) (rust !)
-  linking: 7/18 files (38.89%) (rust !)
-  linking: 8/18 files (44.44%) (rust !)
-  linking: 9/18 files (50.00%) (rust !)
-  linking: 10/18 files (55.56%) (rust !)
-  linking: 11/18 files (61.11%) (rust !)
-  linking: 12/18 files (66.67%) (rust !)
-  linking: 13/18 files (72.22%) (rust !)
-  linking: 14/18 files (77.78%) (rust !)
-  linking: 15/18 files (83.33%) (rust !)
-  linking: 16/18 files (88.89%) (rust !)
-  linking: 17/18 files (94.44%) (rust !)
-  linking: 18/18 files (100.00%) (rust !)
-  linked 18 files (rust !)
+  linking: 1/12 files (8.33%) (no-rust !)
+  linking: 2/12 files (16.67%) (no-rust !)
+  linking: 3/12 files (25.00%) (no-rust !)
+  linking: 4/12 files (33.33%) (no-rust !)
+  linking: 5/12 files (41.67%) (no-rust !)
+  linking: 6/12 files (50.00%) (no-rust !)
+  linking: 7/12 files (58.33%) (no-rust !)
+  linking: 8/12 files (66.67%) (no-rust !)
+  linking: 9/12 files (75.00%) (no-rust !)
+  linking: 10/12 files (83.33%) (no-rust !)
+  linking: 11/12 files (91.67%) (no-rust !)
+  linking: 12/12 files (100.00%) (no-rust !)
+  linked 12 files (no-rust !)
+  linking: 1/14 files (7.14%) (rust !)
+  linking: 2/14 files (14.29%) (rust !)
+  linking: 3/14 files (21.43%) (rust !)
+  linking: 4/14 files (28.57%) (rust !)
+  linking: 5/14 files (35.71%) (rust !)
+  linking: 6/14 files (42.86%) (rust !)
+  linking: 7/14 files (50.00%) (rust !)
+  linking: 8/14 files (57.14%) (rust !)
+  linking: 9/14 files (64.29%) (rust !)
+  linking: 10/14 files (71.43%) (rust !)
+  linking: 11/14 files (78.57%) (rust !)
+  linking: 12/14 files (85.71%) (rust !)
+  linking: 13/14 files (92.86%) (rust !)
+  linking: 14/14 files (100.00%) (rust !)
+  linked 14 files (rust !)
   updating the branch cache
 #else
   $ hg --debug clone -U . ../c --config progress.debug=true
@@ -125,11 +113,7 @@ Ensure branchcache got copied over:
 
   $ ls .hg/cache
   branch2-base
-  branch2-immutable
   branch2-served
-  branch2-served.hidden
-  branch2-visible
-  branch2-visible-hidden
   rbc-names-v1
   rbc-revs-v1
   tags2
@@ -394,9 +394,9 @@ No bundle spec should work
   $ hg clone -U http://localhost:$HGPORT stream-clone-no-spec
   applying clone bundle from http://localhost:$HGPORT1/packed.hg
   5 files to transfer, 613 bytes of data (no-rust !)
-  transferred 613 bytes in *.* seconds (*) (glob) (no-rust !)
+  transferred 613 bytes in * seconds (* */sec) (glob) (no-rust !)
   7 files to transfer, 739 bytes of data (rust !)
-  transferred 739 bytes in *.* seconds (*) (glob) (rust !)
+  transferred 739 bytes in * seconds (* */sec) (glob) (rust !)
   finished applying clone bundle
   searching for changes
   no changes found
@@ -409,10 +409,8 @@ Bundle spec without parameters should wo
 
   $ hg clone -U http://localhost:$HGPORT stream-clone-vanilla-spec
   applying clone bundle from http://localhost:$HGPORT1/packed.hg
-  5 files to transfer, 613 bytes of data (no-rust !)
-  transferred 613 bytes in *.* seconds (*) (glob) (no-rust !)
-  7 files to transfer, 739 bytes of data (rust !)
-  transferred 739 bytes in *.* seconds (*) (glob) (rust !)
+  * files to transfer, * bytes of data (glob)
+  transferred * bytes in * seconds (* */sec) (glob)
   finished applying clone bundle
   searching for changes
   no changes found
@@ -425,10 +423,8 @@ Bundle spec with format requirements sho
 
   $ hg clone -U http://localhost:$HGPORT stream-clone-supported-requirements
   applying clone bundle from http://localhost:$HGPORT1/packed.hg
-  5 files to transfer, 613 bytes of data (no-rust !)
-  transferred 613 bytes in *.* seconds (*) (glob) (no-rust !)
-  7 files to transfer, 739 bytes of data (rust !)
-  transferred 739 bytes in *.* seconds (*) (glob) (rust !)
+  * files to transfer, * bytes of data (glob)
+  transferred * bytes in * seconds (* */sec) (glob)
   finished applying clone bundle
   searching for changes
   no changes found
@@ -574,10 +570,8 @@ A manifest with just a gzip bundle
   no compatible clone bundles available on server; falling back to regular clone
   (you may want to report this to the server operator)
   streaming all changes
-  10 files to transfer, * bytes of data (glob) (no-rust !)
-  transferred * bytes in *.* seconds (*) (glob) (no-rust !)
-  12 files to transfer, 942 bytes of data (rust !)
-  transferred 942 bytes in *.* seconds (*) (glob) (rust !)
+  * files to transfer, * bytes of data (glob)
+  transferred * bytes in * seconds (* */sec) (glob)
 
 A manifest with a stream clone but no BUNDLESPEC
 
@@ -589,10 +583,8 @@ A manifest with a stream clone but no BU
   no compatible clone bundles available on server; falling back to regular clone
   (you may want to report this to the server operator)
   streaming all changes
-  10 files to transfer, * bytes of data (glob) (no-rust !)
-  transferred * bytes in *.* seconds (*) (glob) (no-rust !)
-  12 files to transfer, 942 bytes of data (rust !)
-  transferred 942 bytes in *.* seconds (*) (glob) (rust !)
+  * files to transfer, * bytes of data (glob)
+  transferred * bytes in * seconds (* */sec) (glob)
 
 A manifest with a gzip bundle and a stream clone
 
@@ -603,10 +595,8 @@ A manifest with a gzip bundle and a stre
 
   $ hg clone -U --stream http://localhost:$HGPORT uncompressed-gzip-packed
   applying clone bundle from http://localhost:$HGPORT1/packed.hg
-  5 files to transfer, 613 bytes of data (no-rust !)
-  transferred 613 bytes in *.* seconds (*) (glob) (no-rust !)
-  7 files to transfer, 739 bytes of data (rust !)
-  transferred 739 bytes in *.* seconds (*) (glob) (rust !)
+  * files to transfer, * bytes of data (glob)
+  transferred * bytes in * seconds (* */sec) (glob)
   finished applying clone bundle
   searching for changes
   no changes found
@@ -620,10 +610,8 @@ A manifest with a gzip bundle and stream
 
   $ hg clone -U --stream http://localhost:$HGPORT uncompressed-gzip-packed-requirements
   applying clone bundle from http://localhost:$HGPORT1/packed.hg
-  5 files to transfer, 613 bytes of data (no-rust !)
-  transferred 613 bytes in *.* seconds (*) (glob) (no-rust !)
-  7 files to transfer, 739 bytes of data (rust !)
-  transferred 739 bytes in *.* seconds (*) (glob) (rust !)
+  * files to transfer, * bytes of data (glob)
+  transferred * bytes in * seconds (* */sec) (glob)
   finished applying clone bundle
   searching for changes
   no changes found
@@ -639,10 +627,8 @@ A manifest with a gzip bundle and a stre
   no compatible clone bundles available on server; falling back to regular clone
   (you may want to report this to the server operator)
   streaming all changes
-  10 files to transfer, * bytes of data (glob) (no-rust !)
-  transferred * bytes in *.* seconds (*) (glob) (no-rust !)
-  12 files to transfer, 942 bytes of data (rust !)
-  transferred 942 bytes in *.* seconds (*) (glob) (rust !)
+  * files to transfer, * bytes of data (glob)
+  transferred * bytes in * seconds (* */sec) (glob)
 
 Test clone bundle retrieved through bundle2
 
@@ -284,7 +284,7 @@ Show all commands + options
   debug-revlog-stats: changelog, manifest, filelogs, template
   debug::stable-tail-sort: template
   debug::stable-tail-sort-leaps: template, specific
   debug::unbundle:
   debugancestor:
   debugantivirusrunning:
   debugapplystreamclonebundle:
@@ -59,8 +59,11 @@ perfstatus
   number of run to perform before starting measurement.
 
   "profile-benchmark"
-    Enable profiling for the benchmarked section. (The first iteration is
-    benchmarked)
+    Enable profiling for the benchmarked section. (by default, the first
+    iteration is benchmarked)
+
+  "profiled-runs"
+    list of iteration to profile (starting from 0)
 
   "run-limits"
     Control the number of runs each benchmark will perform. The option value
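These knobs live in the `[perf]` configuration section read by the benchmark extension, so they can be supplied per invocation with `--config`. A hedged sketch of combining the two options documented above; the command name and the space-separated list syntax are assumptions based on the help text, not verified against the extension:

  $ hg perfstatus --config perf.profile-benchmark=yes --config perf.profiled-runs='0 2'

This would profile the first and third benchmark iterations instead of only the default first one.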
@@ -652,12 +652,7 @@ Test cache warming command
   .hg/cache/rbc-revs-v1
   .hg/cache/rbc-names-v1
   .hg/cache/hgtagsfnodes1
-  .hg/cache/branch2-visible-hidden
-  .hg/cache/branch2-visible
-  .hg/cache/branch2-served.hidden
   .hg/cache/branch2-served
-  .hg/cache/branch2-immutable
-  .hg/cache/branch2-base
 
 Test debug::unbundle
 
@@ -668,9 +663,6 @@ Test debug::unbundle
   adding manifests
   adding file changes
   added 0 changesets with 0 changes to 1 files (no-pure !)
-  9 local changesets published (no-pure !)
-  3 local changesets published (pure !)
-  (run 'hg update' to get a working copy)
 
 Test debugcolor
 
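The dropped lines reflect that `debug::unbundle` no longer reports phase publishing after applying a bundle. A minimal round trip, with the bundle file name illustrative and the output abbreviated to the lines shown above:

  $ hg bundle --all --quiet all.hg
  $ hg debug::unbundle all.hg
  adding changesets
  adding manifests
  adding file changes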
@@ -263,11 +263,7 @@ r4 has hardlinks in the working dir (not
   2 r4/.hg/00changelog.i
   [24] r4/.hg/branch (re)
   2 r4/.hg/cache/branch2-base
-  2 r4/.hg/cache/branch2-immutable
   2 r4/.hg/cache/branch2-served
-  2 r4/.hg/cache/branch2-served.hidden
-  2 r4/.hg/cache/branch2-visible
-  2 r4/.hg/cache/branch2-visible-hidden
   2 r4/.hg/cache/rbc-names-v1
   2 r4/.hg/cache/rbc-revs-v1
   2 r4/.hg/cache/tags2
@@ -320,11 +316,7 @@ Update back to revision 12 in r4 should
   2 r4/.hg/00changelog.i
   1 r4/.hg/branch
   2 r4/.hg/cache/branch2-base
-  2 r4/.hg/cache/branch2-immutable
   2 r4/.hg/cache/branch2-served
-  2 r4/.hg/cache/branch2-served.hidden
-  2 r4/.hg/cache/branch2-visible
-  2 r4/.hg/cache/branch2-visible-hidden
   2 r4/.hg/cache/rbc-names-v1
   2 r4/.hg/cache/rbc-revs-v1
   2 r4/.hg/cache/tags2
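In these listings the leading number is the inode's hard-link count: `2` means the file is still shared between the original repository and its hardlinked clone, while `1` (as for `r4/.hg/branch` after the update) means a write broke the link and the file was copied. On a GNU/Linux system the same figure can be read directly; a sketch using GNU coreutils:

  $ stat -c %h r4/.hg/cache/branch2-base
  2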
@@ -730,7 +730,7 @@ getoldnodedrevmap() in later phabsends.
   $ hg amend --config experimental.evolution=all --config extensions.amend=
   1 new orphan changesets
   $ hg up 3
-  obsolete feature not enabled but 1 markers found!
+  "obsolete" feature not enabled but 1 markers found!
   1 files updated, 0 files merged, 1 files removed, 0 files unresolved
   $ hg rebase --config experimental.evolution=all --config extensions.rebase=
   note: not rebasing 2:832553266fe8 "two: second commit to review", already in destination as 4:0124e5474c88 tip "two: second commit to review"
@@ -741,7 +741,7 @@ updated.
 
   $ echo y | hg phabsend --fold --confirm -r 1:: \
   > --test-vcr "$VCR/phabsend-fold-updated.json"
-  obsolete feature not enabled but 2 markers found!
+  "obsolete" feature not enabled but 2 markers found!
   602c4e738243 mapped to old nodes ['602c4e738243']
   0124e5474c88 mapped to old nodes ['832553266fe8']
   e4edb1fe3565 mapped to old nodes ['921f8265efbd']
@@ -752,11 +752,11 @@ updated.
   D8387 - updated - 1:602c4e738243 "one: first commit to review"
   D8387 - updated - 4:0124e5474c88 "two: second commit to review"
   D8387 - updated - 5:e4edb1fe3565 tip "3: a commit with no detailed message"
-  obsolete feature not enabled but 2 markers found! (?)
+  "obsolete" feature not enabled but 2 markers found! (?)
   updating local commit list for D8387
   new commits: ['602c4e738243', '0124e5474c88', 'e4edb1fe3565']
   $ hg log -Tcompact
-  obsolete feature not enabled but 2 markers found!
+  "obsolete" feature not enabled but 2 markers found!
   5[tip] e4edb1fe3565 1970-01-01 00:00 +0000 test
   3: a commit with no detailed message
 
@@ -773,17 +773,17 @@ When nothing has changed locally since t
 updated, and nothing is changed locally afterward.
 
   $ hg phabsend --fold -r 1:: --test-vcr "$VCR/phabsend-fold-no-changes.json"
-  obsolete feature not enabled but 2 markers found!
+  "obsolete" feature not enabled but 2 markers found!
   602c4e738243 mapped to old nodes ['602c4e738243']
   0124e5474c88 mapped to old nodes ['0124e5474c88']
   e4edb1fe3565 mapped to old nodes ['e4edb1fe3565']
   D8387 - updated - 1:602c4e738243 "one: first commit to review"
   D8387 - updated - 4:0124e5474c88 "two: second commit to review"
   D8387 - updated - 5:e4edb1fe3565 tip "3: a commit with no detailed message"
-  obsolete feature not enabled but 2 markers found! (?)
+  "obsolete" feature not enabled but 2 markers found! (?)
   local commit list for D8387 is already up-to-date
   $ hg log -Tcompact
-  obsolete feature not enabled but 2 markers found!
+  "obsolete" feature not enabled but 2 markers found!
   5[tip] e4edb1fe3565 1970-01-01 00:00 +0000 test
   3: a commit with no detailed message
 
@@ -800,7 +800,7 @@ Fold will accept new revisions at the en
 
   $ echo 'another mod' > file2.txt
   $ hg ci -m 'four: extend the fold range'
-  obsolete feature not enabled but 2 markers found!
+  "obsolete" feature not enabled but 2 markers found!
   $ hg phabsend --fold -r 1:: --test-vcr "$VCR/phabsend-fold-extend-end.json" \
   > --config experimental.evolution=all
   602c4e738243 mapped to old nodes ['602c4e738243']
@@ -817,7 +817,7 @@ Fold will accept new revisions at the en
 
   Differential Revision: https://phab.mercurial-scm.org/D8387
   $ hg log -T'{rev} {if(phabreview, "{phabreview.url} {phabreview.id}")}\n' -r 1::
-  obsolete feature not enabled but 3 markers found!
+  "obsolete" feature not enabled but 3 markers found!
   1 https://phab.mercurial-scm.org/D8387 D8387
   4 https://phab.mercurial-scm.org/D8387 D8387
   5 https://phab.mercurial-scm.org/D8387 D8387
@@ -846,7 +846,7 @@ TODO: See if it can reuse the existing D
   new commits: ['15e9b14b4b4c', '6320b7d714cf', '3ee132d41dbc', '30682b960804', 'ac7db67f0991']
 
   $ hg log -T '{rev}:{node|short}\n{indent(desc, " ")}\n'
-  obsolete feature not enabled but 8 markers found!
+  "obsolete" feature not enabled but 8 markers found!
   12:ac7db67f0991
   four: extend the fold range
 
@@ -962,7 +962,7 @@ Test phabsend --fold with an `hg fold` a
   new commits: ['15e9b14b4b4c', '6320b7d714cf', '3ee132d41dbc', '30682b960804', 'e919cdf3d4fe']
 
   $ hg log -r tip -v
-  obsolete feature not enabled but 12 markers found!
+  "obsolete" feature not enabled but 12 markers found!
   changeset: 16:e919cdf3d4fe
   tag: tip
   parent: 11:30682b960804
@@ -36,12 +36,7 @@ Check same result using `experimental.ex
   $ hg -R test --config experimental.extra-filter-revs='not public()' debugupdatecache
   $ ls -1 test/.hg/cache/
   branch2-base%89c45d2fa07e
-  branch2-immutable%89c45d2fa07e
   branch2-served
-  branch2-served%89c45d2fa07e
-  branch2-served.hidden%89c45d2fa07e
-  branch2-visible%89c45d2fa07e
-  branch2-visible-hidden%89c45d2fa07e
   hgtagsfnodes1
   rbc-names-v1
   rbc-revs-v1
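The `%89c45d2fa07e` suffix is how caches for views carrying an extra filter stay separate from the regular ones: `experimental.extra-filter-revs` adds a revset to every repository view, and a hash derived from that revset is appended to the affected cache file names. Re-running the warming command from the test shows both flavors side by side:

  $ hg -R test --config experimental.extra-filter-revs='not public()' debugupdatecache
  $ ls -1 test/.hg/cache/ | grep branch2
  branch2-base%89c45d2fa07e
  branch2-served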
@@ -63,11 +63,7 @@ Cloning a shared repo should pick up the
   1 files updated, 0 files merged, 0 files removed, 0 files unresolved
   $ ls -1 ../repo2-clone/.hg/cache
   branch2-base
-  branch2-immutable
   branch2-served
-  branch2-served.hidden
-  branch2-visible
-  branch2-visible-hidden
   rbc-names-v1
   rbc-revs-v1
   tags2
@@ -72,8 +72,8 @@ clone bookmarks via stream
   $ hg -R local-stream book mybook
   $ hg clone --stream ssh://user@dummy/local-stream stream2
   streaming all changes
-  16 files to transfer, * of data (glob) (no-rust !)
-  18 files to transfer, * of data (glob) (rust !)
+  12 files to transfer, * of data (glob) (no-rust !)
+  14 files to transfer, * of data (glob) (rust !)
   transferred * in * seconds (*) (glob)
   updating to branch default
   2 files updated, 0 files merged, 0 files removed, 0 files unresolved
@@ -74,6 +74,23 @@ The extension requires a repo (currently
   none-v2;stream=v3-exp;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v3 zstd no-rust !)
   none-v2;stream=v3-exp;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v3 rust !)
 
+  $ hg bundle -a --type="none-$bundle_format" bundle.hg
+  $ hg debugbundle bundle.hg
+  Stream params: {}
+  stream2 -- {bytecount: 1693, filecount: 12, requirements: generaldelta%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v2 no-zstd !)
+  stream2 -- {bytecount: 1693, filecount: 12, requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v2 zstd no-rust !)
+  stream2 -- {bytecount: 1819, filecount: 14, requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v2 rust !)
+  stream3-exp -- {requirements: generaldelta%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v3 no-zstd !)
+  stream3-exp -- {requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v3 zstd no-rust !)
+  stream3-exp -- {requirements: generaldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog} (mandatory: True) (stream-v3 rust !)
+  $ hg debugbundle --spec bundle.hg
+  none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlogv1%2Csparserevlog (stream-v2 no-zstd !)
+  none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v2 zstd no-rust !)
+  none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v2 rust !)
+  none-v2;stream=v3-exp;requirements%3Dgeneraldelta%2Crevlogv1%2Csparserevlog (stream-v3 no-zstd !)
+  none-v2;stream=v3-exp;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v3 zstd no-rust !)
+  none-v2;stream=v3-exp;requirements%3Dgeneraldelta%2Crevlog-compression-zstd%2Crevlogv1%2Csparserevlog (stream-v3 rust !)
+
 Test that we can apply the bundle as a stream clone bundle
 
   $ cat > .hg/clonebundles.manifest << EOF
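The manifest being written above (its heredoc content is truncated in this excerpt) follows a simple line-oriented format: a server advertises pregenerated bundles through `.hg/clonebundles.manifest`, where each line is a URL followed by attributes, and the `BUNDLESPEC` attribute carries exactly the spec string that `hg debugbundle --spec` prints, letting clients skip bundles in formats they cannot read. A generic sketch with an illustrative URL:

  $ cat > .hg/clonebundles.manifest << EOF
  > http://example.com/bundle.hg BUNDLESPEC=none-v2;stream=v2;requirements%3Dgeneraldelta%2Crevlogv1%2Csparserevlog
  > EOF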
@@ -1,3 +1,5 @@
+This test cover a bug that no longer exist.
+
 Define helpers.
 
   $ hg_log () { hg log -G -T "{rev}:{node|short}"; }
@@ -18,7 +20,10 @@ Setup hg repo.
 
   $ hg pull -q ../repo
 
-  $ cat .hg/cache/branch2-served
+  $ ls -1 .hg/cache/branch?*
+  .hg/cache/branch2-base
+  .hg/cache/branch2-served
+  $ cat .hg/cache/branch?-served
   222ae9789a75703f9836e44de7db179cbfd420ee 2
   a3498d6e39376d2456425dd8c692367bdbf00fa2 o default
   222ae9789a75703f9836e44de7db179cbfd420ee o default
@@ -33,24 +38,36 @@ Setup hg repo.
 
   $ strip '1:'
 
-The branchmap cache is not adjusted on strip.
-Now mentions a changelog entry that has been stripped.
+After the strip the "served" cache is now identical to the "base" one, and the
+older one have been actively deleted.
 
-  $ cat .hg/cache/branch2-served
-  222ae9789a75703f9836e44de7db179cbfd420ee 2
-  a3498d6e39376d2456425dd8c692367bdbf00fa2 o default
-  222ae9789a75703f9836e44de7db179cbfd420ee o default
+  $ ls -1 .hg/cache/branch?*
+  .hg/cache/branch2-base
+  $ cat .hg/cache/branch?-base
+  7ab0a3bd758a58b9f79557ce708533e627776cce 0
+  7ab0a3bd758a58b9f79557ce708533e627776cce o default
+
+We do a new commit and we get a new valid branchmap for the served version
 
   $ commit c
-
-Not adjusted on commit, either.
+  $ ls -1 .hg/cache/branch?*
+  .hg/cache/branch2-base
+  .hg/cache/branch2-served
+  $ cat .hg/cache/branch?-served
+  a1602b357cfca067600406eb19060c7128804d72 1
+  a1602b357cfca067600406eb19060c7128804d72 o default
 
-  $ cat .hg/cache/branch2-visible
-  222ae9789a75703f9836e44de7db179cbfd420ee 2
-  a3498d6e39376d2456425dd8c692367bdbf00fa2 o default
-  222ae9789a75703f9836e44de7db179cbfd420ee o default
 
 On pull we end up with the same tip, and so wrongly reuse the invalid cache and crash.
 
-  $ hg pull ../repo 2>&1 | grep 'ValueError:'
-  ValueError: node a3498d6e39376d2456425dd8c692367bdbf00fa2 does not exist (known-bad-output !)
+  $ hg pull ../repo --quiet
+  $ hg heads -T '{rev} {node} {branch}\n'
+  2 222ae9789a75703f9836e44de7db179cbfd420ee default
+  1 a1602b357cfca067600406eb19060c7128804d72 default
+  $ ls -1 .hg/cache/branch?*
+  .hg/cache/branch2-base
+  .hg/cache/branch2-served
+  $ cat .hg/cache/branch?-served
+  222ae9789a75703f9836e44de7db179cbfd420ee 2
+  a1602b357cfca067600406eb19060c7128804d72 o default
+  222ae9789a75703f9836e44de7db179cbfd420ee o default
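The files being read with `cat` above have a small, stable layout: the first line holds the node and revision number of the tip the cache was computed against (an optional extra token on that line identifies filtered views), and every following line is `<head node> <state> <branch>`, where the state is `o` for an open branch head and `c` for a closed one. Reading the served cache from the end of the test:

  $ cat .hg/cache/branch2-served
  222ae9789a75703f9836e44de7db179cbfd420ee 2
  a1602b357cfca067600406eb19060c7128804d72 o default
  222ae9789a75703f9836e44de7db179cbfd420ee o default

So the cache is valid for revision 2, and the default branch currently has two open heads.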
@@ -792,11 +792,6 @@ Missing tags2* files means the cache was
 
   $ ls tagsclient/.hg/cache
   branch2-base
-  branch2-immutable
-  branch2-served
-  branch2-served.hidden
-  branch2-visible
-  branch2-visible-hidden
   hgtagsfnodes1
   rbc-names-v1
   rbc-revs-v1
@@ -823,11 +818,6 @@ Running hg tags should produce tags2* fi
 
   $ ls tagsclient/.hg/cache
   branch2-base
-  branch2-immutable
-  branch2-served
-  branch2-served.hidden
-  branch2-visible
-  branch2-visible-hidden
   hgtagsfnodes1
   rbc-names-v1
   rbc-revs-v1
@@ -761,8 +761,8 @@ Stream clone with basicstore
   $ hg clone --config experimental.changegroup3=True --stream -U \
   > http://localhost:$HGPORT1 stream-clone-basicstore
   streaming all changes
-  28 files to transfer, * of data (glob) (no-rust !)
-  30 files to transfer, * of data (glob) (rust !)
+  24 files to transfer, * of data (glob) (no-rust !)
+  26 files to transfer, * of data (glob) (rust !)
   transferred * in * seconds (*) (glob)
   $ hg -R stream-clone-basicstore verify -q
   $ cat port-1-errors.log
@@ -771,8 +771,8 @@ Stream clone with encodedstore
   $ hg clone --config experimental.changegroup3=True --stream -U \
   > http://localhost:$HGPORT2 stream-clone-encodedstore
   streaming all changes
-  28 files to transfer, * of data (glob) (no-rust !)
-  30 files to transfer, * of data (glob) (rust !)
+  24 files to transfer, * of data (glob) (no-rust !)
+  26 files to transfer, * of data (glob) (rust !)
   transferred * in * seconds (*) (glob)
   $ hg -R stream-clone-encodedstore verify -q
   $ cat port-2-errors.log