branching: merge default into stable for 6.8rc0
Raphaël Gomès
r52541:6454c117 (merge, 6.8rc0, stable)

Note: the full diff is too large to display and has been truncated; three new files (mode 100644) were added with their contents omitted.
all-revsets.txt (all revsets ever used with the revsetbenchmarks.py script):
@@ -1,159 +1,159 @@
1 1 # All revsets ever used with revsetbenchmarks.py script
2 2 #
3 3 # The goal of this file is to gather all revsets ever used for benchmarking
4 4 # revsets' performance. It should be used to gather revsets that test a
5 5 # specific use case or a specific implementation of revset predicates.
6 6 # If you are working on the smartset implementation itself, check
7 7 # 'base-revsets.txt'.
8 8 #
9 9 # Please update this file with any revsets you use for benchmarking a change so
10 10 # that future contributors can easily find and retest it when doing further
11 11 # modification. Feel free to highlight interesting variants if needed.
12 12
13 13
14 14 ## Revsets in this section were all extracted from the changelog when this file was
15 15 # created. Feel free to dig and improve documentation.
16 16
17 17 # Used in revision da05fe01170b
18 18 (20000::) - (20000)
19 19 # Used in revision 95af98616aa7
20 20 parents(20000)
21 21 # Used in revision 186fd06283b4
22 22 (_intlist('20000\x0020001')) and merge()
23 23 # Used in revision 911f5a6579d1
24 24 p1(20000)
25 25 p2(10000)
26 26 # Used in revision b6dc3b79bb25
27 27 0::
28 28 # Used in revision faf4f63533ff
29 29 bookmark()
30 30 # Used in revision 22ba2c0825da
31 31 tip~25
32 32 # Used in revision 0cf46b8298fe
33 33 bisect(range)
34 34 # Used in revision 5b65429721d5
35 35 divergent()
36 36 # Used in revision 6261b9c549a2
37 37 file(COPYING)
38 38 # Used in revision 44f471102f3a
39 39 follow(COPYING)
40 40 # Used in revision 8040a44aab1c
41 41 origin(tip)
42 42 # Used in revision bbf4f3dfd700
43 43 rev(25)
44 44 # Used in revision a428db9ab61d
45 45 p1()
46 46 # Used in revision c1546d7400ef
47 47 min(0::)
48 48 # Used in revision 546fa6576815
49 author(lmoscovicz) or author(olivia)
50 author(olivia) or author(lmoscovicz)
49 author(lmoscovicz) or author("pierre-yves")
50 author("pierre-yves") or author(lmoscovicz)
51 51 # Used in revision 9bfe68357c01
52 52 public() and id("d82e2223f132")
53 53 # Used in revision ba89f7b542c9
54 54 rev(25)
55 55 # Used in revision eb763217152a
56 56 rev(210000)
57 57 # Used in revision 69524a05a7fa
58 58 10:100
59 59 parents(10):parents(100)
60 60 # Used in revision 6f1b8b3f12fd
61 61 100~5
62 62 parents(100)~5
63 63 (100~5)~5
64 64 # Used in revision 7a42e5d4c418
65 65 children(tip~100)
66 66 # Used in revision 7e8737e6ab08
67 67 100^1
68 68 parents(100)^1
69 69 (100^1)^1
70 70 # Used in revision 30e0dcd7c5ff
71 71 matching(100)
72 72 matching(parents(100))
73 73 # Used in revision aafeaba22826
74 74 0|1|2|3|4|5|6|7|8|9
75 75 # Used in revision 33c7a94d4dd0
76 76 tip:0
77 77 # Used in revision 7d369fae098e
78 78 (0:100000)
79 79 # Used in revision b333ca94403d
80 80 0 + 1 + 2 + ... + 200
81 81 0 + 1 + 2 + ... + 1000
82 82 sort(0 + 1 + 2 + ... + 200)
83 83 sort(0 + 1 + 2 + ... + 1000)
84 84 # Used in revision 7fbef7932af9
85 85 first(0 + 1 + 2 + ... + 1000)
86 86 # Used in revision ceaf04bb14ff
87 87 0:1000
88 88 # Used in revision 262e6ad93885
89 89 not public()
90 90 (tip~1000::) - public()
91 91 not public() and branch("default")
92 92 # Used in revision 15412bba5a68
93 93 0::tip
94 94
95 95 ## all the revsets from this section have been taken from the former central file
96 96 # for revset benchmarking; they are undocumented for this reason.
97 97 all()
98 98 draft()
99 99 ::tip
100 100 draft() and ::tip
101 101 ::tip and draft()
102 102 author(lmoscovicz)
103 author(olivia)
103 author("pierre-yves")
104 104 ::p1(p1(tip))::
105 105 public()
106 106 :10000 and public()
107 107 :10000 and draft()
108 108 (not public() - obsolete())
109 109
110 110 # The one below is used by rebase
111 111 (children(ancestor(tip~5, tip)) and ::(tip~5))::
112 112
113 113 # those two `roots(...)` inputs are close to what phase movement use.
114 114 roots((tip~100::) - (tip~100::tip))
115 115 roots((0::) - (0::tip))
116 116
117 117 # more roots testing
118 118 roots(tip~100:)
119 119 roots(:42)
120 120 roots(not public())
121 121 roots((0:tip)::)
122 122 roots(0::tip)
123 123 42:68 and roots(42:tip)
124 124 # Used in revision f140d6207cca
125 125 roots(0:tip)
126 126 # test disjoint set with multiple roots
127 127 roots((:42) + (tip~42:))
128 128
129 129 # Testing the behavior of "head()" in various situations
130 130 head()
131 131 head() - public()
132 132 draft() and head()
133 head() and author("olivia")
133 head() and author("pierre-yves")
134 134
135 135 # testing the mutable phases set
136 136 draft()
137 137 secret()
138 138
139 139 # test finding common ancestors
140 140 heads(commonancestors(last(head(), 2)))
141 141 heads(commonancestors(head()))
142 142
143 143 # more heads testing
144 144 heads(all())
145 145 heads(-10000:-1)
146 146 (-5000:-1000) and heads(-10000:-1)
147 147 heads(matching(tip, "author"))
148 148 heads(matching(tip, "author")) and -10000:-1
149 149 (-10000:-1) and heads(matching(tip, "author"))
150 150 # more roots testing
151 151 roots(all())
152 152 roots(-10000:-1)
153 153 (-5000:-1000) and roots(-10000:-1)
154 154 roots(matching(tip, "author"))
155 155 roots(matching(tip, "author")) and -10000:-1
156 156 (-10000:-1) and roots(matching(tip, "author"))
157 157 only(max(head()))
158 158 only(max(head()), min(head()))
159 159 only(max(head()), limit(head(), 1, 1))
base-revsets.txt (base revsets to be used with the revsetbenchmarks.py script):
@@ -1,52 +1,52 @@
1 1 # Base Revsets to be used with revsetbenchmarks.py script
2 2 #
3 3 # The goal of this file is to gather a limited amount of revsets that allow a
4 4 # good coverage of the internal revsets mechanisms. Revsets included should not
5 5 # be selected for their individual implementation, but for what they reveal of
6 6 # the internal implementation of smartsets classes (and their interactions).
7 7 #
8 8 # Use and update this file when you change internal implementation of these
9 9 # smartsets classes. Please include a comment explaining what each of your
10 10 # additions is testing. Also check if your changes to the smartset class make
11 11 # some of the tests inadequate and replace them with new ones testing the same
12 12 # behavior.
13 13 #
14 14 # If you want to benchmark revsets predicate itself, check 'all-revsets.txt'.
15 15 #
16 16 # The current content of this file likely does not reach this goal
17 17 # entirely; feel free to audit its content and comment on each revset to
18 18 # highlight what internal mechanisms they test.
19 19
20 20 all()
21 21 draft()
22 22 ::tip
23 23 draft() and ::tip
24 24 ::tip and draft()
25 25 0::tip
26 26 roots(0::tip)
27 27 author(lmoscovicz)
28 author(olivia)
29 author(lmoscovicz) or author(olivia)
30 author(olivia) or author(lmoscovicz)
28 author("pierre-yves")
29 author(lmoscovicz) or author("pierre-yves")
30 author("pierre-yves") or author(lmoscovicz)
31 31 tip:0
32 32 0::
33 33 # those two `roots(...)` inputs are close to what phase movement use.
34 34 roots((tip~100::) - (tip~100::tip))
35 35 roots((0::) - (0::tip))
36 36 42:68 and roots(42:tip)
37 37 ::p1(p1(tip))::
38 38 public()
39 39 :10000 and public()
40 40 draft()
41 41 :10000 and draft()
42 42 roots((0:tip)::)
43 43 (not public() - obsolete())
44 44 (_intlist('20000\x0020001')) and merge()
45 45 parents(20000)
46 46 (20000::) - (20000)
47 47 # The one below is used by rebase
48 48 (children(ancestor(tip~5, tip)) and ::(tip~5))::
49 49 heads(commonancestors(last(head(), 2)))
50 50 heads(-10000:-1)
51 51 roots(-10000:-1)
52 52 only(max(head()), min(head()))
perf.py (performance test routines):
@@ -1,4651 +1,4725 @@
1 1 # perf.py - performance test routines
2 2 '''helper extension to measure performance
3 3
4 4 Configurations
5 5 ==============
6 6
7 7 ``perf``
8 8 --------
9 9
10 10 ``all-timing``
11 11 When set, additional statistics will be reported for each benchmark: best,
12 12 worst, median average. If not set only the best timing is reported
13 13 (default: off).
14 14
15 15 ``presleep``
16 16 number of seconds to wait before any group of runs (default: 1)
17 17
18 18 ``pre-run``
19 19 number of runs to perform before starting measurement.
20 20
21 21 ``profile-benchmark``
22 22 Enable profiling for the benchmarked section.
23 (The first iteration is benchmarked)
23 (by default, the first iteration is benchmarked)
24
25 ``profiled-runs``
26 list of iterations to profile (starting from 0)
24 27
25 28 ``run-limits``
26 29 Control the number of runs each benchmark will perform. The option value
27 30 should be a list of `<time>-<numberofrun>` pairs. After each run the
28 31 conditions are considered in order with the following logic:
29 32
30 33 If the benchmark has been running for <time> seconds, and we have performed
31 34 <numberofrun> iterations, stop the benchmark.
32 35
33 36 The default value is: `3.0-100, 10.0-3`
34 37
35 38 ``stub``
36 39 When set, benchmarks will only be run once, useful for testing
37 40 (default: off)
38 41 '''
39 42
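For illustration, the options documented in the docstring above all live in the ``[perf]`` section of an hgrc file. A minimal sketch (values are arbitrary examples, not recommendations; ``profiled-runs`` is the option introduced by this change):

    [perf]
    all-timing = yes
    presleep = 1
    pre-run = 2
    profile-benchmark = yes
    # profile the second and third timed iterations (iteration numbers start at 0)
    profiled-runs = 1 2
    run-limits = 3.0-100, 10.0-3
    stub = no

The same settings can also be passed ad hoc on the command line, e.g. ``--config perf.profiled-runs=1``.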
40 43 # "historical portability" policy of perf.py:
41 44 #
42 45 # We have to do:
43 46 # - make perf.py "loadable" with as wide Mercurial version as possible
44 47 # This doesn't mean that perf commands work correctly with that Mercurial.
45 48 # BTW, perf.py itself has been available since 1.1 (or eb240755386d).
46 49 # - make historical perf command work correctly with as wide Mercurial
47 50 # version as possible
48 51 #
49 52 # We have to do, if possible with reasonable cost:
50 53 # - make recent perf command for historical feature work correctly
51 54 # with early Mercurial
52 55 #
53 56 # We don't have to do:
54 57 # - make perf command for recent feature work correctly with early
55 58 # Mercurial
56 59
57 60 import contextlib
58 61 import functools
59 62 import gc
60 63 import os
61 64 import random
62 65 import shutil
63 66 import struct
64 67 import sys
65 68 import tempfile
66 69 import threading
67 70 import time
68 71
69 72 import mercurial.revlog
70 73 from mercurial import (
71 74 changegroup,
72 75 cmdutil,
73 76 commands,
74 77 copies,
75 78 error,
76 79 extensions,
77 80 hg,
78 81 mdiff,
79 82 merge,
80 83 util,
81 84 )
82 85
83 86 # for "historical portability":
84 87 # try to import modules separately (in dict order), and ignore
85 88 # failure, because these aren't available with early Mercurial
86 89 try:
87 90 from mercurial import branchmap # since 2.5 (or bcee63733aad)
88 91 except ImportError:
89 92 pass
90 93 try:
91 94 from mercurial import obsolete # since 2.3 (or ad0d6c2b3279)
92 95 except ImportError:
93 96 pass
94 97 try:
95 98 from mercurial import registrar # since 3.7 (or 37d50250b696)
96 99
97 100 dir(registrar) # forcibly load it
98 101 except ImportError:
99 102 registrar = None
100 103 try:
101 104 from mercurial import repoview # since 2.5 (or 3a6ddacb7198)
102 105 except ImportError:
103 106 pass
104 107 try:
105 108 from mercurial.utils import repoviewutil # since 5.0
106 109 except ImportError:
107 110 repoviewutil = None
108 111 try:
109 112 from mercurial import scmutil # since 1.9 (or 8b252e826c68)
110 113 except ImportError:
111 114 pass
112 115 try:
113 116 from mercurial import setdiscovery # since 1.9 (or cb98fed52495)
114 117 except ImportError:
115 118 pass
116 119
117 120 try:
118 121 from mercurial import profiling
119 122 except ImportError:
120 123 profiling = None
121 124
122 125 try:
123 126 from mercurial.revlogutils import constants as revlog_constants
124 127
125 128 perf_rl_kind = (revlog_constants.KIND_OTHER, b'created-by-perf')
126 129
127 130 def revlog(opener, *args, **kwargs):
128 131 return mercurial.revlog.revlog(opener, perf_rl_kind, *args, **kwargs)
129 132
130 133
131 134 except (ImportError, AttributeError):
132 135 perf_rl_kind = None
133 136
134 137 def revlog(opener, *args, **kwargs):
135 138 return mercurial.revlog.revlog(opener, *args, **kwargs)
136 139
137 140
138 141 def identity(a):
139 142 return a
140 143
141 144
142 145 try:
143 146 from mercurial import pycompat
144 147
145 148 getargspec = pycompat.getargspec # added to module after 4.5
146 149 _byteskwargs = pycompat.byteskwargs # since 4.1 (or fbc3f73dc802)
147 150 _sysstr = pycompat.sysstr # since 4.0 (or 2219f4f82ede)
148 151 _bytestr = pycompat.bytestr # since 4.2 (or b70407bd84d5)
149 152 _xrange = pycompat.xrange # since 4.8 (or 7eba8f83129b)
150 153 fsencode = pycompat.fsencode # since 3.9 (or f4a5e0e86a7e)
151 154 if pycompat.ispy3:
152 155 _maxint = sys.maxsize # per py3 docs for replacing maxint
153 156 else:
154 157 _maxint = sys.maxint
155 158 except (NameError, ImportError, AttributeError):
156 159 import inspect
157 160
158 161 getargspec = inspect.getargspec
159 162 _byteskwargs = identity
160 163 _bytestr = str
161 164 fsencode = identity # no py3 support
162 165 _maxint = sys.maxint # no py3 support
163 166 _sysstr = lambda x: x # no py3 support
164 167 _xrange = xrange
165 168
166 169 try:
167 170 # 4.7+
168 171 queue = pycompat.queue.Queue
169 172 except (NameError, AttributeError, ImportError):
170 173 # <4.7.
171 174 try:
172 175 queue = pycompat.queue
173 176 except (NameError, AttributeError, ImportError):
174 177 import Queue as queue
175 178
176 179 try:
177 180 from mercurial import logcmdutil
178 181
179 182 makelogtemplater = logcmdutil.maketemplater
180 183 except (AttributeError, ImportError):
181 184 try:
182 185 makelogtemplater = cmdutil.makelogtemplater
183 186 except (AttributeError, ImportError):
184 187 makelogtemplater = None
185 188
186 189 # for "historical portability":
187 190 # define util.safehasattr forcibly, because util.safehasattr has been
188 191 # available since 1.9.3 (or 94b200a11cf7)
189 192 _undefined = object()
190 193
191 194
192 195 def safehasattr(thing, attr):
193 196 return getattr(thing, _sysstr(attr), _undefined) is not _undefined
194 197
195 198
196 199 setattr(util, 'safehasattr', safehasattr)
197 200
198 201 # for "historical portability":
199 202 # define util.timer forcibly, because util.timer has been available
200 203 # since ae5d60bb70c9
201 204 if safehasattr(time, 'perf_counter'):
202 205 util.timer = time.perf_counter
203 206 elif os.name == b'nt':
204 207 util.timer = time.clock
205 208 else:
206 209 util.timer = time.time
207 210
208 211 # for "historical portability":
209 212 # use locally defined empty option list, if formatteropts isn't
210 213 # available, because commands.formatteropts has been available since
211 214 # 3.2 (or 7a7eed5176a4), even though formatting itself has been
212 215 # available since 2.2 (or ae5f92e154d3)
213 216 formatteropts = getattr(
214 217 cmdutil, "formatteropts", getattr(commands, "formatteropts", [])
215 218 )
216 219
217 220 # for "historical portability":
218 221 # use locally defined option list, if debugrevlogopts isn't available,
219 222 # because commands.debugrevlogopts has been available since 3.7 (or
220 223 # 5606f7d0d063), even though cmdutil.openrevlog() has been available
221 224 # since 1.9 (or a79fea6b3e77).
222 225 revlogopts = getattr(
223 226 cmdutil,
224 227 "debugrevlogopts",
225 228 getattr(
226 229 commands,
227 230 "debugrevlogopts",
228 231 [
229 232 (b'c', b'changelog', False, b'open changelog'),
230 233 (b'm', b'manifest', False, b'open manifest'),
231 234 (b'', b'dir', False, b'open directory manifest'),
232 235 ],
233 236 ),
234 237 )
235 238
236 239 cmdtable = {}
237 240
238 241
239 242 # for "historical portability":
240 243 # define parsealiases locally, because cmdutil.parsealiases has been
241 244 # available since 1.5 (or 6252852b4332)
242 245 def parsealiases(cmd):
243 246 return cmd.split(b"|")
244 247
245 248
246 249 if safehasattr(registrar, 'command'):
247 250 command = registrar.command(cmdtable)
248 251 elif safehasattr(cmdutil, 'command'):
249 252 command = cmdutil.command(cmdtable)
250 253 if 'norepo' not in getargspec(command).args:
251 254 # for "historical portability":
252 255 # wrap original cmdutil.command, because "norepo" option has
253 256 # been available since 3.1 (or 75a96326cecb)
254 257 _command = command
255 258
256 259 def command(name, options=(), synopsis=None, norepo=False):
257 260 if norepo:
258 261 commands.norepo += b' %s' % b' '.join(parsealiases(name))
259 262 return _command(name, list(options), synopsis)
260 263
261 264
262 265 else:
263 266 # for "historical portability":
264 267 # define "@command" annotation locally, because cmdutil.command
265 268 # has been available since 1.9 (or 2daa5179e73f)
266 269 def command(name, options=(), synopsis=None, norepo=False):
267 270 def decorator(func):
268 271 if synopsis:
269 272 cmdtable[name] = func, list(options), synopsis
270 273 else:
271 274 cmdtable[name] = func, list(options)
272 275 if norepo:
273 276 commands.norepo += b' %s' % b' '.join(parsealiases(name))
274 277 return func
275 278
276 279 return decorator
277 280
278 281
279 282 try:
280 283 import mercurial.registrar
281 284 import mercurial.configitems
282 285
283 286 configtable = {}
284 287 configitem = mercurial.registrar.configitem(configtable)
285 288 configitem(
286 289 b'perf',
287 290 b'presleep',
288 291 default=mercurial.configitems.dynamicdefault,
289 292 experimental=True,
290 293 )
291 294 configitem(
292 295 b'perf',
293 296 b'stub',
294 297 default=mercurial.configitems.dynamicdefault,
295 298 experimental=True,
296 299 )
297 300 configitem(
298 301 b'perf',
299 302 b'parentscount',
300 303 default=mercurial.configitems.dynamicdefault,
301 304 experimental=True,
302 305 )
303 306 configitem(
304 307 b'perf',
305 308 b'all-timing',
306 309 default=mercurial.configitems.dynamicdefault,
307 310 experimental=True,
308 311 )
309 312 configitem(
310 313 b'perf',
311 314 b'pre-run',
312 315 default=mercurial.configitems.dynamicdefault,
313 316 )
314 317 configitem(
315 318 b'perf',
316 319 b'profile-benchmark',
317 320 default=mercurial.configitems.dynamicdefault,
318 321 )
319 322 configitem(
320 323 b'perf',
324 b'profiled-runs',
325 default=mercurial.configitems.dynamicdefault,
326 )
327 configitem(
328 b'perf',
321 329 b'run-limits',
322 330 default=mercurial.configitems.dynamicdefault,
323 331 experimental=True,
324 332 )
325 333 except (ImportError, AttributeError):
326 334 pass
327 335 except TypeError:
328 336 # compatibility fix for a11fd395e83f
329 337 # hg version: 5.2
330 338 configitem(
331 339 b'perf',
332 340 b'presleep',
333 341 default=mercurial.configitems.dynamicdefault,
334 342 )
335 343 configitem(
336 344 b'perf',
337 345 b'stub',
338 346 default=mercurial.configitems.dynamicdefault,
339 347 )
340 348 configitem(
341 349 b'perf',
342 350 b'parentscount',
343 351 default=mercurial.configitems.dynamicdefault,
344 352 )
345 353 configitem(
346 354 b'perf',
347 355 b'all-timing',
348 356 default=mercurial.configitems.dynamicdefault,
349 357 )
350 358 configitem(
351 359 b'perf',
352 360 b'pre-run',
353 361 default=mercurial.configitems.dynamicdefault,
354 362 )
355 363 configitem(
356 364 b'perf',
357 b'profile-benchmark',
365 b'profiled-runs',
358 366 default=mercurial.configitems.dynamicdefault,
359 367 )
360 368 configitem(
361 369 b'perf',
362 370 b'run-limits',
363 371 default=mercurial.configitems.dynamicdefault,
364 372 )
365 373
366 374
367 375 def getlen(ui):
368 376 if ui.configbool(b"perf", b"stub", False):
369 377 return lambda x: 1
370 378 return len
371 379
372 380
373 381 class noop:
374 382 """dummy context manager"""
375 383
376 384 def __enter__(self):
377 385 pass
378 386
379 387 def __exit__(self, *args):
380 388 pass
381 389
382 390
383 391 NOOPCTX = noop()
384 392
385 393
386 394 def gettimer(ui, opts=None):
387 395 """return a timer function and formatter: (timer, formatter)
388 396
389 397 This function exists to gather the creation of formatter in a single
390 398 place instead of duplicating it in all performance commands."""
391 399
392 400 # enforce an idle period before execution to counteract power management
393 401 # experimental config: perf.presleep
394 402 time.sleep(getint(ui, b"perf", b"presleep", 1))
395 403
396 404 if opts is None:
397 405 opts = {}
398 406 # redirect all to stderr unless buffer api is in use
399 407 if not ui._buffers:
400 408 ui = ui.copy()
401 409 uifout = safeattrsetter(ui, b'fout', ignoremissing=True)
402 410 if uifout:
403 411 # for "historical portability":
404 412 # ui.fout/ferr have been available since 1.9 (or 4e1ccd4c2b6d)
405 413 uifout.set(ui.ferr)
406 414
407 415 # get a formatter
408 416 uiformatter = getattr(ui, 'formatter', None)
409 417 if uiformatter:
410 418 fm = uiformatter(b'perf', opts)
411 419 else:
412 420 # for "historical portability":
413 421 # define formatter locally, because ui.formatter has been
414 422 # available since 2.2 (or ae5f92e154d3)
415 423 from mercurial import node
416 424
417 425 class defaultformatter:
418 426 """Minimized composition of baseformatter and plainformatter"""
419 427
420 428 def __init__(self, ui, topic, opts):
421 429 self._ui = ui
422 430 if ui.debugflag:
423 431 self.hexfunc = node.hex
424 432 else:
425 433 self.hexfunc = node.short
426 434
427 435 def __nonzero__(self):
428 436 return False
429 437
430 438 __bool__ = __nonzero__
431 439
432 440 def startitem(self):
433 441 pass
434 442
435 443 def data(self, **data):
436 444 pass
437 445
438 446 def write(self, fields, deftext, *fielddata, **opts):
439 447 self._ui.write(deftext % fielddata, **opts)
440 448
441 449 def condwrite(self, cond, fields, deftext, *fielddata, **opts):
442 450 if cond:
443 451 self._ui.write(deftext % fielddata, **opts)
444 452
445 453 def plain(self, text, **opts):
446 454 self._ui.write(text, **opts)
447 455
448 456 def end(self):
449 457 pass
450 458
451 459 fm = defaultformatter(ui, b'perf', opts)
452 460
453 461 # stub function, runs code only once instead of in a loop
454 462 # experimental config: perf.stub
455 463 if ui.configbool(b"perf", b"stub", False):
456 464 return functools.partial(stub_timer, fm), fm
457 465
458 466 # experimental config: perf.all-timing
459 467 displayall = ui.configbool(b"perf", b"all-timing", True)
460 468
461 469 # experimental config: perf.run-limits
462 470 limitspec = ui.configlist(b"perf", b"run-limits", [])
463 471 limits = []
464 472 for item in limitspec:
465 473 parts = item.split(b'-', 1)
466 474 if len(parts) < 2:
467 475 ui.warn((b'malformatted run limit entry, missing "-": %s\n' % item))
468 476 continue
469 477 try:
470 478 time_limit = float(_sysstr(parts[0]))
471 479 except ValueError as e:
472 480 ui.warn(
473 481 (
474 482 b'malformatted run limit entry, %s: %s\n'
475 483 % (_bytestr(e), item)
476 484 )
477 485 )
478 486 continue
479 487 try:
480 488 run_limit = int(_sysstr(parts[1]))
481 489 except ValueError as e:
482 490 ui.warn(
483 491 (
484 492 b'malformatted run limit entry, %s: %s\n'
485 493 % (_bytestr(e), item)
486 494 )
487 495 )
488 496 continue
489 497 limits.append((time_limit, run_limit))
490 498 if not limits:
491 499 limits = DEFAULTLIMITS
492 500
493 501 profiler = None
502 profiled_runs = set()
494 503 if profiling is not None:
495 504 if ui.configbool(b"perf", b"profile-benchmark", False):
496 profiler = profiling.profile(ui)
505 profiler = lambda: profiling.profile(ui)
506 for run in ui.configlist(b"perf", b"profiled-runs", [0]):
507 profiled_runs.add(int(run))
497 508
498 509 prerun = getint(ui, b"perf", b"pre-run", 0)
499 510 t = functools.partial(
500 511 _timer,
501 512 fm,
502 513 displayall=displayall,
503 514 limits=limits,
504 515 prerun=prerun,
505 516 profiler=profiler,
517 profiled_runs=profiled_runs,
506 518 )
507 519 return t, fm
508 520
509 521
510 522 def stub_timer(fm, func, setup=None, title=None):
511 523 if setup is not None:
512 524 setup()
513 525 func()
514 526
515 527
516 528 @contextlib.contextmanager
517 529 def timeone():
518 530 r = []
519 531 ostart = os.times()
520 532 cstart = util.timer()
521 533 yield r
522 534 cstop = util.timer()
523 535 ostop = os.times()
524 536 a, b = ostart, ostop
525 537 r.append((cstop - cstart, b[0] - a[0], b[1] - a[1]))
526 538
527 539
528 540 # list of stop condition (elapsed time, minimal run count)
529 541 DEFAULTLIMITS = (
530 542 (3.0, 100),
531 543 (10.0, 3),
532 544 )
533 545
534 546
535 547 @contextlib.contextmanager
536 548 def noop_context():
537 549 yield
538 550
539 551
540 552 def _timer(
541 553 fm,
542 554 func,
543 555 setup=None,
544 556 context=noop_context,
545 557 title=None,
546 558 displayall=False,
547 559 limits=DEFAULTLIMITS,
548 560 prerun=0,
549 561 profiler=None,
562 profiled_runs=(0,),
550 563 ):
551 564 gc.collect()
552 565 results = []
553 begin = util.timer()
554 566 count = 0
555 567 if profiler is None:
556 profiler = NOOPCTX
568 profiler = lambda: NOOPCTX
557 569 for i in range(prerun):
558 570 if setup is not None:
559 571 setup()
560 572 with context():
561 573 func()
574 begin = util.timer()
562 575 keepgoing = True
563 576 while keepgoing:
577 if count in profiled_runs:
578 prof = profiler()
579 else:
580 prof = NOOPCTX
564 581 if setup is not None:
565 582 setup()
566 583 with context():
567 with profiler:
584 gc.collect()
585 with prof:
568 586 with timeone() as item:
569 587 r = func()
570 profiler = NOOPCTX
571 588 count += 1
572 589 results.append(item[0])
573 590 cstop = util.timer()
574 591 # Look for a stop condition.
575 592 elapsed = cstop - begin
576 593 for t, mincount in limits:
577 594 if elapsed >= t and count >= mincount:
578 595 keepgoing = False
579 596 break
580 597
581 598 formatone(fm, results, title=title, result=r, displayall=displayall)
582 599
583 600
584 601 def formatone(fm, timings, title=None, result=None, displayall=False):
585 602 count = len(timings)
586 603
587 604 fm.startitem()
588 605
589 606 if title:
590 607 fm.write(b'title', b'! %s\n', title)
591 608 if result:
592 609 fm.write(b'result', b'! result: %s\n', result)
593 610
594 611 def display(role, entry):
595 612 prefix = b''
596 613 if role != b'best':
597 614 prefix = b'%s.' % role
598 615 fm.plain(b'!')
599 616 fm.write(prefix + b'wall', b' wall %f', entry[0])
600 617 fm.write(prefix + b'comb', b' comb %f', entry[1] + entry[2])
601 618 fm.write(prefix + b'user', b' user %f', entry[1])
602 619 fm.write(prefix + b'sys', b' sys %f', entry[2])
603 620 fm.write(prefix + b'count', b' (%s of %%d)' % role, count)
604 621 fm.plain(b'\n')
605 622
606 623 timings.sort()
607 624 min_val = timings[0]
608 625 display(b'best', min_val)
609 626 if displayall:
610 627 max_val = timings[-1]
611 628 display(b'max', max_val)
612 629 avg = tuple([sum(x) / count for x in zip(*timings)])
613 630 display(b'avg', avg)
614 631 median = timings[len(timings) // 2]
615 632 display(b'median', median)
616 633
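Given the ``display()`` helper above (the plain formatter simply writes ``deftext % fielddata``), a benchmark run with ``all-timing`` enabled prints one line per statistic, shaped roughly like this (numbers invented for illustration):

    ! wall 0.001234 comb 0.010000 user 0.010000 sys 0.000000 (best of 100)
    ! wall 0.002345 comb 0.020000 user 0.010000 sys 0.010000 (max of 100)
    ! wall 0.001500 comb 0.012000 user 0.010000 sys 0.002000 (avg of 100)
    ! wall 0.001450 comb 0.010000 user 0.010000 sys 0.000000 (median of 100)

The ``max.``/``avg.``/``median.`` prefixes are attached to the structured field names only, so the plain-text lines all start with ``! wall``.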
617 634
618 635 # utilities for historical portability
619 636
620 637
621 638 def getint(ui, section, name, default):
622 639 # for "historical portability":
623 640 # ui.configint has been available since 1.9 (or fa2b596db182)
624 641 v = ui.config(section, name, None)
625 642 if v is None:
626 643 return default
627 644 try:
628 645 return int(v)
629 646 except ValueError:
630 647 raise error.ConfigError(
631 648 b"%s.%s is not an integer ('%s')" % (section, name, v)
632 649 )
633 650
634 651
635 652 def safeattrsetter(obj, name, ignoremissing=False):
636 653 """Ensure that 'obj' has 'name' attribute before subsequent setattr
637 654
638 655 This function aborts if 'obj' doesn't have the 'name' attribute
639 656 at runtime. This avoids overlooking the removal of an attribute, which
640 657 would break an assumption of the performance measurement in the future.
641 658
642 659 This function returns the object to (1) assign a new value, and
643 660 (2) restore an original value to the attribute.
644 661
645 662 If 'ignoremissing' is true, a missing 'name' attribute doesn't cause
646 663 an abort, and this function returns None. This is useful to
647 664 examine an attribute, which isn't ensured in all Mercurial
648 665 versions.
649 666 """
650 667 if not util.safehasattr(obj, name):
651 668 if ignoremissing:
652 669 return None
653 670 raise error.Abort(
654 671 (
655 672 b"missing attribute %s of %s might break assumption"
656 673 b" of performance measurement"
657 674 )
658 675 % (name, obj)
659 676 )
660 677
661 678 origvalue = getattr(obj, _sysstr(name))
662 679
663 680 class attrutil:
664 681 def set(self, newvalue):
665 682 setattr(obj, _sysstr(name), newvalue)
666 683
667 684 def restore(self):
668 685 setattr(obj, _sysstr(name), origvalue)
669 686
670 687 return attrutil()
671 688
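As a usage sketch of ``safeattrsetter`` (mirroring the ``gettimer()`` call above; ``ui`` and ``run_benchmark`` are assumed to exist, and the ``restore()`` call is illustrative rather than taken from this file):

    # temporarily redirect ui.fout to ui.ferr, then put the original back
    uifout = safeattrsetter(ui, b'fout', ignoremissing=True)
    if uifout:
        uifout.set(ui.ferr)      # (1) assign a new value
        try:
            run_benchmark()      # hypothetical benchmarked code
        finally:
            uifout.restore()     # (2) restore the original value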
672 689
673 690 # utilities to examine each internal API change
674 691
675 692
676 693 def getbranchmapsubsettable():
677 694 # for "historical portability":
678 695 # subsettable is defined in:
679 696 # - branchmap since 2.9 (or 175c6fd8cacc)
680 697 # - repoview since 2.5 (or 59a9f18d4587)
681 698 # - repoviewutil since 5.0
682 699 for mod in (branchmap, repoview, repoviewutil):
683 700 subsettable = getattr(mod, 'subsettable', None)
684 701 if subsettable:
685 702 return subsettable
686 703
687 704 # bisecting in bcee63733aad::59a9f18d4587 can reach here (both
688 705 # branchmap and repoview modules exist, but subsettable attribute
689 706 # doesn't)
690 707 raise error.Abort(
691 708 b"perfbranchmap not available with this Mercurial",
692 709 hint=b"use 2.5 or later",
693 710 )
694 711
695 712
696 713 def getsvfs(repo):
697 714 """Return appropriate object to access files under .hg/store"""
698 715 # for "historical portability":
699 716 # repo.svfs has been available since 2.3 (or 7034365089bf)
700 717 svfs = getattr(repo, 'svfs', None)
701 718 if svfs:
702 719 return svfs
703 720 else:
704 721 return getattr(repo, 'sopener')
705 722
706 723
707 724 def getvfs(repo):
708 725 """Return appropriate object to access files under .hg"""
709 726 # for "historical portability":
710 727 # repo.vfs has been available since 2.3 (or 7034365089bf)
711 728 vfs = getattr(repo, 'vfs', None)
712 729 if vfs:
713 730 return vfs
714 731 else:
715 732 return getattr(repo, 'opener')
716 733
717 734
718 735 def repocleartagscachefunc(repo):
719 736 """Return the function to clear tags cache according to repo internal API"""
720 737 if util.safehasattr(repo, b'_tagscache'): # since 2.0 (or 9dca7653b525)
721 738 # in this case, setattr(repo, '_tagscache', None) or so isn't
722 739 # correct way to clear tags cache, because existing code paths
723 740 # expect _tagscache to be a structured object.
724 741 def clearcache():
725 742 # _tagscache has been filteredpropertycache since 2.5 (or
726 743 # 98c867ac1330), and delattr() can't work in such case
727 744 if '_tagscache' in vars(repo):
728 745 del repo.__dict__['_tagscache']
729 746
730 747 return clearcache
731 748
732 749 repotags = safeattrsetter(repo, b'_tags', ignoremissing=True)
733 750 if repotags: # since 1.4 (or 5614a628d173)
734 751 return lambda: repotags.set(None)
735 752
736 753 repotagscache = safeattrsetter(repo, b'tagscache', ignoremissing=True)
737 754 if repotagscache: # since 0.6 (or d7df759d0e97)
738 755 return lambda: repotagscache.set(None)
739 756
740 757 # Mercurial earlier than 0.6 (or d7df759d0e97) logically reaches
741 758 # this point, but it isn't so problematic, because:
742 759 # - repo.tags of such Mercurial isn't "callable", and repo.tags()
743 760 # in perftags() causes failure soon
744 761 # - perf.py itself has been available since 1.1 (or eb240755386d)
745 762 raise error.Abort(b"tags API of this hg command is unknown")
746 763
747 764
748 765 # utilities to clear cache
749 766
750 767
751 768 def clearfilecache(obj, attrname):
752 769 unfiltered = getattr(obj, 'unfiltered', None)
753 770 if unfiltered is not None:
754 771 obj = obj.unfiltered()
755 772 if attrname in vars(obj):
756 773 delattr(obj, attrname)
757 774 obj._filecache.pop(attrname, None)
758 775
759 776
760 777 def clearchangelog(repo):
761 778 if repo is not repo.unfiltered():
762 779 object.__setattr__(repo, '_clcachekey', None)
763 780 object.__setattr__(repo, '_clcache', None)
764 781 clearfilecache(repo.unfiltered(), 'changelog')
765 782
766 783
767 784 # perf commands
768 785
769 786
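Every command below is registered through the ``@command`` decorator, so it becomes available once this file is loaded as an extension. A typical invocation looks like the following sketch (the path is a placeholder for wherever this perf.py lives):

    $ hg --config extensions.perf=/path/to/perf.py perf::heads
    $ hg --config extensions.perf=/path/to/perf.py perf::status --dirstate
    $ hg --config extensions.perf=/path/to/perf.py --config perf.all-timing=yes perf::tags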
770 787 @command(b'perf::walk|perfwalk', formatteropts)
771 788 def perfwalk(ui, repo, *pats, **opts):
772 789 opts = _byteskwargs(opts)
773 790 timer, fm = gettimer(ui, opts)
774 791 m = scmutil.match(repo[None], pats, {})
775 792 timer(
776 793 lambda: len(
777 794 list(
778 795 repo.dirstate.walk(m, subrepos=[], unknown=True, ignored=False)
779 796 )
780 797 )
781 798 )
782 799 fm.end()
783 800
784 801
785 802 @command(b'perf::annotate|perfannotate', formatteropts)
786 803 def perfannotate(ui, repo, f, **opts):
787 804 opts = _byteskwargs(opts)
788 805 timer, fm = gettimer(ui, opts)
789 806 fc = repo[b'.'][f]
790 807 timer(lambda: len(fc.annotate(True)))
791 808 fm.end()
792 809
793 810
794 811 @command(
795 812 b'perf::status|perfstatus',
796 813 [
797 814 (b'u', b'unknown', False, b'ask status to look for unknown files'),
798 815 (b'', b'dirstate', False, b'benchmark the internal dirstate call'),
799 816 ]
800 817 + formatteropts,
801 818 )
802 819 def perfstatus(ui, repo, **opts):
803 820 """benchmark the performance of a single status call
804 821
805 822 The repository data are preserved between each call.
806 823
807 824 By default, only the status of the tracked files is requested. If
808 825 `--unknown` is passed, the "unknown" files are also tracked.
809 826 """
810 827 opts = _byteskwargs(opts)
811 828 # m = match.always(repo.root, repo.getcwd())
812 829 # timer(lambda: sum(map(len, repo.dirstate.status(m, [], False, False,
813 830 # False))))
814 831 timer, fm = gettimer(ui, opts)
815 832 if opts[b'dirstate']:
816 833 dirstate = repo.dirstate
817 834 m = scmutil.matchall(repo)
818 835 unknown = opts[b'unknown']
819 836
820 837 def status_dirstate():
821 838 s = dirstate.status(
822 839 m, subrepos=[], ignored=False, clean=False, unknown=unknown
823 840 )
824 841 sum(map(bool, s))
825 842
826 843 if util.safehasattr(dirstate, 'running_status'):
827 844 with dirstate.running_status(repo):
828 845 timer(status_dirstate)
829 846 dirstate.invalidate()
830 847 else:
831 848 timer(status_dirstate)
832 849 else:
833 850 timer(lambda: sum(map(len, repo.status(unknown=opts[b'unknown']))))
834 851 fm.end()
835 852
836 853
837 854 @command(b'perf::addremove|perfaddremove', formatteropts)
838 855 def perfaddremove(ui, repo, **opts):
839 856 opts = _byteskwargs(opts)
840 857 timer, fm = gettimer(ui, opts)
841 858 try:
842 859 oldquiet = repo.ui.quiet
843 860 repo.ui.quiet = True
844 861 matcher = scmutil.match(repo[None])
845 862 opts[b'dry_run'] = True
846 863 if 'uipathfn' in getargspec(scmutil.addremove).args:
847 864 uipathfn = scmutil.getuipathfn(repo)
848 865 timer(lambda: scmutil.addremove(repo, matcher, b"", uipathfn, opts))
849 866 else:
850 867 timer(lambda: scmutil.addremove(repo, matcher, b"", opts))
851 868 finally:
852 869 repo.ui.quiet = oldquiet
853 870 fm.end()
854 871
855 872
856 873 def clearcaches(cl):
857 874 # behave somewhat consistently across internal API changes
858 875 if util.safehasattr(cl, b'clearcaches'):
859 876 cl.clearcaches()
860 877 elif util.safehasattr(cl, b'_nodecache'):
861 878 # <= hg-5.2
862 879 from mercurial.node import nullid, nullrev
863 880
864 881 cl._nodecache = {nullid: nullrev}
865 882 cl._nodepos = None
866 883
867 884
868 885 @command(b'perf::heads|perfheads', formatteropts)
869 886 def perfheads(ui, repo, **opts):
870 887 """benchmark the computation of a changelog heads"""
871 888 opts = _byteskwargs(opts)
872 889 timer, fm = gettimer(ui, opts)
873 890 cl = repo.changelog
874 891
875 892 def s():
876 893 clearcaches(cl)
877 894
878 895 def d():
879 896 len(cl.headrevs())
880 897
881 898 timer(d, setup=s)
882 899 fm.end()
883 900
884 901
885 902 def _default_clear_on_disk_tags_cache(repo):
886 903 from mercurial import tags
887 904
888 905 repo.cachevfs.tryunlink(tags._filename(repo))
889 906
890 907
891 908 def _default_clear_on_disk_tags_fnodes_cache(repo):
892 909 from mercurial import tags
893 910
894 911 repo.cachevfs.tryunlink(tags._fnodescachefile)
895 912
896 913
897 914 def _default_forget_fnodes(repo, revs):
898 915 """function used by the perf extension to prune some entries from the
899 916 fnodes cache"""
900 917 from mercurial import tags
901 918
902 919 missing_1 = b'\xff' * 4
903 920 missing_2 = b'\xff' * 20
904 921 cache = tags.hgtagsfnodescache(repo.unfiltered())
905 922 for r in revs:
906 923 cache._writeentry(r * tags._fnodesrecsize, missing_1, missing_2)
907 924 cache.write()
908 925
909 926
910 927 @command(
911 928 b'perf::tags|perftags',
912 929 formatteropts
913 930 + [
914 931 (b'', b'clear-revlogs', False, b'refresh changelog and manifest'),
915 932 (
916 933 b'',
917 934 b'clear-on-disk-cache',
918 935 False,
919 936 b'clear on disk tags cache (DESTRUCTIVE)',
920 937 ),
921 938 (
922 939 b'',
923 940 b'clear-fnode-cache-all',
924 941 False,
925 942 b'clear on disk file node cache (DESTRUCTIVE),',
926 943 ),
927 944 (
928 945 b'',
929 946 b'clear-fnode-cache-rev',
930 947 [],
931 948 b'clear on disk file node cache (DESTRUCTIVE),',
932 949 b'REVS',
933 950 ),
934 951 (
935 952 b'',
936 953 b'update-last',
937 954 b'',
938 955 b'simulate an update over the last N revisions (DESTRUCTIVE),',
939 956 b'N',
940 957 ),
941 958 ],
942 959 )
943 960 def perftags(ui, repo, **opts):
944 961 """Benchmark tags retrieval in various situation
945 962
946 963 The option marked as (DESTRUCTIVE) will alter the on-disk cache, possibly
947 964 altering performance after the command was run. However, it does not
948 965 destroy any stored data.
949 966 """
950 967 from mercurial import tags
951 968
952 969 opts = _byteskwargs(opts)
953 970 timer, fm = gettimer(ui, opts)
954 971 repocleartagscache = repocleartagscachefunc(repo)
955 972 clearrevlogs = opts[b'clear_revlogs']
956 973 clear_disk = opts[b'clear_on_disk_cache']
957 974 clear_fnode = opts[b'clear_fnode_cache_all']
958 975
959 976 clear_fnode_revs = opts[b'clear_fnode_cache_rev']
960 977 update_last_str = opts[b'update_last']
961 978 update_last = None
962 979 if update_last_str:
963 980 try:
964 981 update_last = int(update_last_str)
965 982 except ValueError:
966 983 msg = b'could not parse value for update-last: "%s"'
967 984 msg %= update_last_str
968 985 hint = b'value should be an integer'
969 986 raise error.Abort(msg, hint=hint)
970 987
971 988 clear_disk_fn = getattr(
972 989 tags,
973 990 "clear_cache_on_disk",
974 991 _default_clear_on_disk_tags_cache,
975 992 )
976 993 if getattr(tags, 'clear_cache_fnodes_is_working', False):
977 994 clear_fnodes_fn = tags.clear_cache_fnodes
978 995 else:
979 996 clear_fnodes_fn = _default_clear_on_disk_tags_fnodes_cache
980 997 clear_fnodes_rev_fn = getattr(
981 998 tags,
982 999 "forget_fnodes",
983 1000 _default_forget_fnodes,
984 1001 )
985 1002
986 1003 clear_revs = []
987 1004 if clear_fnode_revs:
988 1005 clear_revs.extend(scmutil.revrange(repo, clear_fnode_revs))
989 1006
990 1007 if update_last:
991 1008 revset = b'last(all(), %d)' % update_last
992 1009 last_revs = repo.unfiltered().revs(revset)
993 1010 clear_revs.extend(last_revs)
994 1011
995 1012 from mercurial import repoview
996 1013
997 1014 rev_filter = {(b'experimental', b'extra-filter-revs'): revset}
998 1015 with repo.ui.configoverride(rev_filter, source=b"perf"):
999 1016 filter_id = repoview.extrafilter(repo.ui)
1000 1017
1001 1018 filter_name = b'%s%%%s' % (repo.filtername, filter_id)
1002 1019 pre_repo = repo.filtered(filter_name)
1003 1020 pre_repo.tags() # warm the cache
1004 1021 old_tags_path = repo.cachevfs.join(tags._filename(pre_repo))
1005 1022 new_tags_path = repo.cachevfs.join(tags._filename(repo))
1006 1023
1007 1024 clear_revs = sorted(set(clear_revs))
1008 1025
1009 1026 def s():
1010 1027 if update_last:
1011 1028 util.copyfile(old_tags_path, new_tags_path)
1012 1029 if clearrevlogs:
1013 1030 clearchangelog(repo)
1014 1031 clearfilecache(repo.unfiltered(), 'manifest')
1015 1032 if clear_disk:
1016 1033 clear_disk_fn(repo)
1017 1034 if clear_fnode:
1018 1035 clear_fnodes_fn(repo)
1019 1036 elif clear_revs:
1020 1037 clear_fnodes_rev_fn(repo, clear_revs)
1021 1038 repocleartagscache()
1022 1039
1023 1040 def t():
1024 1041 len(repo.tags())
1025 1042
1026 1043 timer(t, setup=s)
1027 1044 fm.end()
1028 1045
1029 1046
1030 1047 @command(b'perf::ancestors|perfancestors', formatteropts)
1031 1048 def perfancestors(ui, repo, **opts):
1032 1049 opts = _byteskwargs(opts)
1033 1050 timer, fm = gettimer(ui, opts)
1034 1051 heads = repo.changelog.headrevs()
1035 1052
1036 1053 def d():
1037 1054 for a in repo.changelog.ancestors(heads):
1038 1055 pass
1039 1056
1040 1057 timer(d)
1041 1058 fm.end()
1042 1059
1043 1060
1044 1061 @command(b'perf::ancestorset|perfancestorset', formatteropts)
1045 1062 def perfancestorset(ui, repo, revset, **opts):
1046 1063 opts = _byteskwargs(opts)
1047 1064 timer, fm = gettimer(ui, opts)
1048 1065 revs = repo.revs(revset)
1049 1066 heads = repo.changelog.headrevs()
1050 1067
1051 1068 def d():
1052 1069 s = repo.changelog.ancestors(heads)
1053 1070 for rev in revs:
1054 1071 rev in s
1055 1072
1056 1073 timer(d)
1057 1074 fm.end()
1058 1075
1059 1076
1060 1077 @command(
1061 1078 b'perf::delta-find',
1062 1079 revlogopts + formatteropts,
1063 1080 b'-c|-m|FILE REV',
1064 1081 )
1065 1082 def perf_delta_find(ui, repo, arg_1, arg_2=None, **opts):
1066 1083 """benchmark the process of finding a valid delta for a revlog revision
1067 1084
1068 1085 When a revlog receives a new revision (e.g. from a commit, or from an
1069 1086 incoming bundle), it searches for a suitable delta-base to produce a delta.
1070 1087 This perf command measures how much time we spend in this process. It
1071 1088 operates on an already stored revision.
1072 1089
1073 1090 See `hg help debug-delta-find` for another related command.
1074 1091 """
1075 1092 from mercurial import revlogutils
1076 1093 import mercurial.revlogutils.deltas as deltautil
1077 1094
1078 1095 opts = _byteskwargs(opts)
1079 1096 if arg_2 is None:
1080 1097 file_ = None
1081 1098 rev = arg_1
1082 1099 else:
1083 1100 file_ = arg_1
1084 1101 rev = arg_2
1085 1102
1086 1103 repo = repo.unfiltered()
1087 1104
1088 1105 timer, fm = gettimer(ui, opts)
1089 1106
1090 1107 rev = int(rev)
1091 1108
1092 1109 revlog = cmdutil.openrevlog(repo, b'perf::delta-find', file_, opts)
1093 1110
1094 1111 deltacomputer = deltautil.deltacomputer(revlog)
1095 1112
1096 1113 node = revlog.node(rev)
1097 1114 p1r, p2r = revlog.parentrevs(rev)
1098 1115 p1 = revlog.node(p1r)
1099 1116 p2 = revlog.node(p2r)
1100 1117 full_text = revlog.revision(rev)
1101 1118 textlen = len(full_text)
1102 1119 cachedelta = None
1103 1120 flags = revlog.flags(rev)
1104 1121
1105 1122 revinfo = revlogutils.revisioninfo(
1106 1123 node,
1107 1124 p1,
1108 1125 p2,
1109 1126 [full_text], # btext
1110 1127 textlen,
1111 1128 cachedelta,
1112 1129 flags,
1113 1130 )
1114 1131
1115 1132 # Note: we should probably purge the potential caches (like the full
1116 1133 # manifest cache) between runs.
1117 1134 def find_one():
1118 1135 with revlog._datafp() as fh:
1119 1136 deltacomputer.finddeltainfo(revinfo, fh, target_rev=rev)
1120 1137
1121 1138 timer(find_one)
1122 1139 fm.end()
1123 1140
1124 1141
1125 1142 @command(b'perf::discovery|perfdiscovery', formatteropts, b'PATH')
1126 1143 def perfdiscovery(ui, repo, path, **opts):
1127 1144 """benchmark discovery between local repo and the peer at given path"""
1128 1145 repos = [repo, None]
1129 1146 timer, fm = gettimer(ui, opts)
1130 1147
1131 1148 try:
1132 1149 from mercurial.utils.urlutil import get_unique_pull_path_obj
1133 1150
1134 1151 path = get_unique_pull_path_obj(b'perfdiscovery', ui, path)
1135 1152 except ImportError:
1136 1153 try:
1137 1154 from mercurial.utils.urlutil import get_unique_pull_path
1138 1155
1139 1156 path = get_unique_pull_path(b'perfdiscovery', repo, ui, path)[0]
1140 1157 except ImportError:
1141 1158 path = ui.expandpath(path)
1142 1159
1143 1160 def s():
1144 1161 repos[1] = hg.peer(ui, opts, path)
1145 1162
1146 1163 def d():
1147 1164 setdiscovery.findcommonheads(ui, *repos)
1148 1165
1149 1166 timer(d, setup=s)
1150 1167 fm.end()
1151 1168
1152 1169
1153 1170 @command(
1154 1171 b'perf::bookmarks|perfbookmarks',
1155 1172 formatteropts
1156 1173 + [
1157 1174 (b'', b'clear-revlogs', False, b'refresh changelog and manifest'),
1158 1175 ],
1159 1176 )
1160 1177 def perfbookmarks(ui, repo, **opts):
1161 1178 """benchmark parsing bookmarks from disk to memory"""
1162 1179 opts = _byteskwargs(opts)
1163 1180 timer, fm = gettimer(ui, opts)
1164 1181
1165 1182 clearrevlogs = opts[b'clear_revlogs']
1166 1183
1167 1184 def s():
1168 1185 if clearrevlogs:
1169 1186 clearchangelog(repo)
1170 1187 clearfilecache(repo, b'_bookmarks')
1171 1188
1172 1189 def d():
1173 1190 repo._bookmarks
1174 1191
1175 1192 timer(d, setup=s)
1176 1193 fm.end()
1177 1194
1178 1195
1179 1196 @command(
1180 1197 b'perf::bundle',
1181 1198 [
1182 1199 (
1183 1200 b'r',
1184 1201 b'rev',
1185 1202 [],
1186 1203 b'changesets to bundle',
1187 1204 b'REV',
1188 1205 ),
1189 1206 (
1190 1207 b't',
1191 1208 b'type',
1192 1209 b'none',
1193 1210 b'bundlespec to use (see `hg help bundlespec`)',
1194 1211 b'TYPE',
1195 1212 ),
1196 1213 ]
1197 1214 + formatteropts,
1198 1215 b'REVS',
1199 1216 )
1200 1217 def perfbundle(ui, repo, *revs, **opts):
1201 1218 """benchmark the creation of a bundle from a repository
1202 1219
1203 1220 For now, this only supports "none" compression.
1204 1221 """
1205 1222 try:
1206 1223 from mercurial import bundlecaches
1207 1224
1208 1225 parsebundlespec = bundlecaches.parsebundlespec
1209 1226 except ImportError:
1210 1227 from mercurial import exchange
1211 1228
1212 1229 parsebundlespec = exchange.parsebundlespec
1213 1230
1214 1231 from mercurial import discovery
1215 1232 from mercurial import bundle2
1216 1233
1217 1234 opts = _byteskwargs(opts)
1218 1235 timer, fm = gettimer(ui, opts)
1219 1236
1220 1237 cl = repo.changelog
1221 1238 revs = list(revs)
1222 1239 revs.extend(opts.get(b'rev', ()))
1223 1240 revs = scmutil.revrange(repo, revs)
1224 1241 if not revs:
1225 1242 raise error.Abort(b"not revision specified")
1226 1243 # make it a consistent set (ie: without topological gaps)
1227 1244 old_len = len(revs)
1228 1245 revs = list(repo.revs(b"%ld::%ld", revs, revs))
1229 1246 if old_len != len(revs):
1230 1247 new_count = len(revs) - old_len
1231 1248 msg = b"add %d new revisions to make it a consistent set\n"
1232 1249 ui.write_err(msg % new_count)
1233 1250
1234 1251 targets = [cl.node(r) for r in repo.revs(b"heads(::%ld)", revs)]
1235 1252 bases = [cl.node(r) for r in repo.revs(b"heads(::%ld - %ld)", revs, revs)]
1236 1253 outgoing = discovery.outgoing(repo, bases, targets)
1237 1254
1238 1255 bundle_spec = opts.get(b'type')
1239 1256
1240 1257 bundle_spec = parsebundlespec(repo, bundle_spec, strict=False)
1241 1258
1242 1259 cgversion = bundle_spec.params.get(b"cg.version")
1243 1260 if cgversion is None:
1244 1261 if bundle_spec.version == b'v1':
1245 1262 cgversion = b'01'
1246 1263 if bundle_spec.version == b'v2':
1247 1264 cgversion = b'02'
1248 1265 if cgversion not in changegroup.supportedoutgoingversions(repo):
1249 1266 err = b"repository does not support bundle version %s"
1250 1267 raise error.Abort(err % cgversion)
1251 1268
1252 1269 if cgversion == b'01': # bundle1
1253 1270 bversion = b'HG10' + bundle_spec.wirecompression
1254 1271 bcompression = None
1255 1272 elif cgversion in (b'02', b'03'):
1256 1273 bversion = b'HG20'
1257 1274 bcompression = bundle_spec.wirecompression
1258 1275 else:
1259 1276 err = b'perf::bundle: unexpected changegroup version %s'
1260 1277 raise error.ProgrammingError(err % cgversion)
1261 1278
1262 1279 if bcompression is None:
1263 1280 bcompression = b'UN'
1264 1281
1265 1282 if bcompression != b'UN':
1266 1283 err = b'perf::bundle: compression currently unsupported: %s'
1267 1284 raise error.ProgrammingError(err % bcompression)
1268 1285
1269 1286 def do_bundle():
1270 1287 bundle2.writenewbundle(
1271 1288 ui,
1272 1289 repo,
1273 1290 b'perf::bundle',
1274 1291 os.devnull,
1275 1292 bversion,
1276 1293 outgoing,
1277 1294 bundle_spec.params,
1278 1295 )
1279 1296
1280 1297 timer(do_bundle)
1281 1298 fm.end()
1282 1299
1283 1300
1284 1301 @command(b'perf::bundleread|perfbundleread', formatteropts, b'BUNDLE')
1285 1302 def perfbundleread(ui, repo, bundlepath, **opts):
1286 1303 """Benchmark reading of bundle files.
1287 1304
1288 1305 This command is meant to isolate the I/O part of bundle reading as
1289 1306 much as possible.
1290 1307 """
1291 1308 from mercurial import (
1292 1309 bundle2,
1293 1310 exchange,
1294 1311 streamclone,
1295 1312 )
1296 1313
1297 1314 opts = _byteskwargs(opts)
1298 1315
1299 1316 def makebench(fn):
1300 1317 def run():
1301 1318 with open(bundlepath, b'rb') as fh:
1302 1319 bundle = exchange.readbundle(ui, fh, bundlepath)
1303 1320 fn(bundle)
1304 1321
1305 1322 return run
1306 1323
1307 1324 def makereadnbytes(size):
1308 1325 def run():
1309 1326 with open(bundlepath, b'rb') as fh:
1310 1327 bundle = exchange.readbundle(ui, fh, bundlepath)
1311 1328 while bundle.read(size):
1312 1329 pass
1313 1330
1314 1331 return run
1315 1332
1316 1333 def makestdioread(size):
1317 1334 def run():
1318 1335 with open(bundlepath, b'rb') as fh:
1319 1336 while fh.read(size):
1320 1337 pass
1321 1338
1322 1339 return run
1323 1340
1324 1341 # bundle1
1325 1342
1326 1343 def deltaiter(bundle):
1327 1344 for delta in bundle.deltaiter():
1328 1345 pass
1329 1346
1330 1347 def iterchunks(bundle):
1331 1348 for chunk in bundle.getchunks():
1332 1349 pass
1333 1350
1334 1351 # bundle2
1335 1352
1336 1353 def forwardchunks(bundle):
1337 1354 for chunk in bundle._forwardchunks():
1338 1355 pass
1339 1356
1340 1357 def iterparts(bundle):
1341 1358 for part in bundle.iterparts():
1342 1359 pass
1343 1360
1344 1361 def iterpartsseekable(bundle):
1345 1362 for part in bundle.iterparts(seekable=True):
1346 1363 pass
1347 1364
1348 1365 def seek(bundle):
1349 1366 for part in bundle.iterparts(seekable=True):
1350 1367 part.seek(0, os.SEEK_END)
1351 1368
1352 1369 def makepartreadnbytes(size):
1353 1370 def run():
1354 1371 with open(bundlepath, b'rb') as fh:
1355 1372 bundle = exchange.readbundle(ui, fh, bundlepath)
1356 1373 for part in bundle.iterparts():
1357 1374 while part.read(size):
1358 1375 pass
1359 1376
1360 1377 return run
1361 1378
1362 1379 benches = [
1363 1380 (makestdioread(8192), b'read(8k)'),
1364 1381 (makestdioread(16384), b'read(16k)'),
1365 1382 (makestdioread(32768), b'read(32k)'),
1366 1383 (makestdioread(131072), b'read(128k)'),
1367 1384 ]
1368 1385
1369 1386 with open(bundlepath, b'rb') as fh:
1370 1387 bundle = exchange.readbundle(ui, fh, bundlepath)
1371 1388
1372 1389 if isinstance(bundle, changegroup.cg1unpacker):
1373 1390 benches.extend(
1374 1391 [
1375 1392 (makebench(deltaiter), b'cg1 deltaiter()'),
1376 1393 (makebench(iterchunks), b'cg1 getchunks()'),
1377 1394 (makereadnbytes(8192), b'cg1 read(8k)'),
1378 1395 (makereadnbytes(16384), b'cg1 read(16k)'),
1379 1396 (makereadnbytes(32768), b'cg1 read(32k)'),
1380 1397 (makereadnbytes(131072), b'cg1 read(128k)'),
1381 1398 ]
1382 1399 )
1383 1400 elif isinstance(bundle, bundle2.unbundle20):
1384 1401 benches.extend(
1385 1402 [
1386 1403 (makebench(forwardchunks), b'bundle2 forwardchunks()'),
1387 1404 (makebench(iterparts), b'bundle2 iterparts()'),
1388 1405 (
1389 1406 makebench(iterpartsseekable),
1390 1407 b'bundle2 iterparts() seekable',
1391 1408 ),
1392 1409 (makebench(seek), b'bundle2 part seek()'),
1393 1410 (makepartreadnbytes(8192), b'bundle2 part read(8k)'),
1394 1411 (makepartreadnbytes(16384), b'bundle2 part read(16k)'),
1395 1412 (makepartreadnbytes(32768), b'bundle2 part read(32k)'),
1396 1413 (makepartreadnbytes(131072), b'bundle2 part read(128k)'),
1397 1414 ]
1398 1415 )
1399 1416 elif isinstance(bundle, streamclone.streamcloneapplier):
1400 1417 raise error.Abort(b'stream clone bundles not supported')
1401 1418 else:
1402 1419 raise error.Abort(b'unhandled bundle type: %s' % type(bundle))
1403 1420
1404 1421 for fn, title in benches:
1405 1422 timer, fm = gettimer(ui, opts)
1406 1423 timer(fn, title=title)
1407 1424 fm.end()
1408 1425
1409 1426
1410 1427 @command(
1411 1428 b'perf::changegroupchangelog|perfchangegroupchangelog',
1412 1429 formatteropts
1413 1430 + [
1414 1431 (b'', b'cgversion', b'02', b'changegroup version'),
1415 1432 (b'r', b'rev', b'', b'revisions to add to changegroup'),
1416 1433 ],
1417 1434 )
1418 1435 def perfchangegroupchangelog(ui, repo, cgversion=b'02', rev=None, **opts):
1419 1436 """Benchmark producing a changelog group for a changegroup.
1420 1437
1421 1438 This measures the time spent processing the changelog during a
1422 1439 bundle operation. This occurs during `hg bundle` and on a server
1423 1440 processing a `getbundle` wire protocol request (handles clones
1424 1441 and pull requests).
1425 1442
1426 1443 By default, all revisions are added to the changegroup.
1427 1444 """
1428 1445 opts = _byteskwargs(opts)
1429 1446 cl = repo.changelog
1430 1447 nodes = [cl.lookup(r) for r in repo.revs(rev or b'all()')]
1431 1448 bundler = changegroup.getbundler(cgversion, repo)
1432 1449
1433 1450 def d():
1434 1451 state, chunks = bundler._generatechangelog(cl, nodes)
1435 1452 for chunk in chunks:
1436 1453 pass
1437 1454
1438 1455 timer, fm = gettimer(ui, opts)
1439 1456
1440 1457 # Terminal printing can interfere with timing. So disable it.
1441 1458 with ui.configoverride({(b'progress', b'disable'): True}):
1442 1459 timer(d)
1443 1460
1444 1461 fm.end()
1445 1462
1446 1463
1447 1464 @command(b'perf::dirs|perfdirs', formatteropts)
1448 1465 def perfdirs(ui, repo, **opts):
1449 1466 opts = _byteskwargs(opts)
1450 1467 timer, fm = gettimer(ui, opts)
1451 1468 dirstate = repo.dirstate
1452 1469 b'a' in dirstate
1453 1470
1454 1471 def d():
1455 1472 dirstate.hasdir(b'a')
1456 1473 try:
1457 1474 del dirstate._map._dirs
1458 1475 except AttributeError:
1459 1476 pass
1460 1477
1461 1478 timer(d)
1462 1479 fm.end()
1463 1480
1464 1481
1465 1482 @command(
1466 1483 b'perf::dirstate|perfdirstate',
1467 1484 [
1468 1485 (
1469 1486 b'',
1470 1487 b'iteration',
1471 1488 None,
1472 1489 b'benchmark a full iteration for the dirstate',
1473 1490 ),
1474 1491 (
1475 1492 b'',
1476 1493 b'contains',
1477 1494 None,
1478 1495 b'benchmark a large amount of `nf in dirstate` calls',
1479 1496 ),
1480 1497 ]
1481 1498 + formatteropts,
1482 1499 )
1483 1500 def perfdirstate(ui, repo, **opts):
1484 1501 """benchmap the time of various distate operations
1485 1502
1486 1503 By default benchmark the time necessary to load a dirstate from scratch.
1487 1504 The dirstate is loaded to the point where a "contains" request can be
1488 1505 answered.
1489 1506 """
1490 1507 opts = _byteskwargs(opts)
1491 1508 timer, fm = gettimer(ui, opts)
1492 1509 b"a" in repo.dirstate
1493 1510
1494 1511 if opts[b'iteration'] and opts[b'contains']:
1495 1512 msg = b'only specify one of --iteration or --contains'
1496 1513 raise error.Abort(msg)
1497 1514
1498 1515 if opts[b'iteration']:
1499 1516 setup = None
1500 1517 dirstate = repo.dirstate
1501 1518
1502 1519 def d():
1503 1520 for f in dirstate:
1504 1521 pass
1505 1522
1506 1523 elif opts[b'contains']:
1507 1524 setup = None
1508 1525 dirstate = repo.dirstate
1509 1526 allfiles = list(dirstate)
1510 1527 # also add file path that will be "missing" from the dirstate
1511 1528 allfiles.extend([f[::-1] for f in allfiles])
1512 1529
1513 1530 def d():
1514 1531 for f in allfiles:
1515 1532 f in dirstate
1516 1533
1517 1534 else:
1518 1535
1519 1536 def setup():
1520 1537 repo.dirstate.invalidate()
1521 1538
1522 1539 def d():
1523 1540 b"a" in repo.dirstate
1524 1541
1525 1542 timer(d, setup=setup)
1526 1543 fm.end()
1527 1544
1528 1545
1529 1546 @command(b'perf::dirstatedirs|perfdirstatedirs', formatteropts)
1530 1547 def perfdirstatedirs(ui, repo, **opts):
1531 1548 """benchmap a 'dirstate.hasdir' call from an empty `dirs` cache"""
1532 1549 opts = _byteskwargs(opts)
1533 1550 timer, fm = gettimer(ui, opts)
1534 1551 repo.dirstate.hasdir(b"a")
1535 1552
1536 1553 def setup():
1537 1554 try:
1538 1555 del repo.dirstate._map._dirs
1539 1556 except AttributeError:
1540 1557 pass
1541 1558
1542 1559 def d():
1543 1560 repo.dirstate.hasdir(b"a")
1544 1561
1545 1562 timer(d, setup=setup)
1546 1563 fm.end()
1547 1564
1548 1565
1549 1566 @command(b'perf::dirstatefoldmap|perfdirstatefoldmap', formatteropts)
1550 1567 def perfdirstatefoldmap(ui, repo, **opts):
1551 1568 """benchmap a `dirstate._map.filefoldmap.get()` request
1552 1569
1553 1570 The dirstate filefoldmap cache is dropped between every request.
1554 1571 """
1555 1572 opts = _byteskwargs(opts)
1556 1573 timer, fm = gettimer(ui, opts)
1557 1574 dirstate = repo.dirstate
1558 1575 dirstate._map.filefoldmap.get(b'a')
1559 1576
1560 1577 def setup():
1561 1578 del dirstate._map.filefoldmap
1562 1579
1563 1580 def d():
1564 1581 dirstate._map.filefoldmap.get(b'a')
1565 1582
1566 1583 timer(d, setup=setup)
1567 1584 fm.end()
1568 1585
1569 1586
1570 1587 @command(b'perf::dirfoldmap|perfdirfoldmap', formatteropts)
1571 1588 def perfdirfoldmap(ui, repo, **opts):
1572 1589 """benchmap a `dirstate._map.dirfoldmap.get()` request
1573 1590
1574 1591 The dirstate dirfoldmap cache is dropped between every request.
1575 1592 """
1576 1593 opts = _byteskwargs(opts)
1577 1594 timer, fm = gettimer(ui, opts)
1578 1595 dirstate = repo.dirstate
1579 1596 dirstate._map.dirfoldmap.get(b'a')
1580 1597
1581 1598 def setup():
1582 1599 del dirstate._map.dirfoldmap
1583 1600 try:
1584 1601 del dirstate._map._dirs
1585 1602 except AttributeError:
1586 1603 pass
1587 1604
1588 1605 def d():
1589 1606 dirstate._map.dirfoldmap.get(b'a')
1590 1607
1591 1608 timer(d, setup=setup)
1592 1609 fm.end()
1593 1610
1594 1611
1595 1612 @command(b'perf::dirstatewrite|perfdirstatewrite', formatteropts)
1596 1613 def perfdirstatewrite(ui, repo, **opts):
1597 1614 """benchmap the time it take to write a dirstate on disk"""
1598 1615 opts = _byteskwargs(opts)
1599 1616 timer, fm = gettimer(ui, opts)
1600 1617 ds = repo.dirstate
1601 1618 b"a" in ds
1602 1619
1603 1620 def setup():
1604 1621 ds._dirty = True
1605 1622
1606 1623 def d():
1607 1624 ds.write(repo.currenttransaction())
1608 1625
1609 1626 with repo.wlock():
1610 1627 timer(d, setup=setup)
1611 1628 fm.end()
1612 1629
1613 1630
1614 1631 def _getmergerevs(repo, opts):
1615 1632 """parse command argument to return rev involved in merge
1616 1633
1617 1634 input: options dictionary with `rev`, `from` and `base`
1618 1635 output: (localctx, otherctx, basectx)
1619 1636 """
1620 1637 if opts[b'from']:
1621 1638 fromrev = scmutil.revsingle(repo, opts[b'from'])
1622 1639 wctx = repo[fromrev]
1623 1640 else:
1624 1641 wctx = repo[None]
1625 1642 # we don't want working dir files to be stat'd in the benchmark, so
1626 1643 # prime that cache
1627 1644 wctx.dirty()
1628 1645 rctx = scmutil.revsingle(repo, opts[b'rev'], opts[b'rev'])
1629 1646 if opts[b'base']:
1630 1647 fromrev = scmutil.revsingle(repo, opts[b'base'])
1631 1648 ancestor = repo[fromrev]
1632 1649 else:
1633 1650 ancestor = wctx.ancestor(rctx)
1634 1651 return (wctx, rctx, ancestor)
1635 1652
1636 1653
1637 1654 @command(
1638 1655 b'perf::mergecalculate|perfmergecalculate',
1639 1656 [
1640 1657 (b'r', b'rev', b'.', b'rev to merge against'),
1641 1658 (b'', b'from', b'', b'rev to merge from'),
1642 1659 (b'', b'base', b'', b'the revision to use as base'),
1643 1660 ]
1644 1661 + formatteropts,
1645 1662 )
1646 1663 def perfmergecalculate(ui, repo, **opts):
1647 1664 opts = _byteskwargs(opts)
1648 1665 timer, fm = gettimer(ui, opts)
1649 1666
1650 1667 wctx, rctx, ancestor = _getmergerevs(repo, opts)
1651 1668
1652 1669 def d():
1653 1670 # acceptremote is True because we don't want prompts in the middle of
1654 1671 # our benchmark
1655 1672 merge.calculateupdates(
1656 1673 repo,
1657 1674 wctx,
1658 1675 rctx,
1659 1676 [ancestor],
1660 1677 branchmerge=False,
1661 1678 force=False,
1662 1679 acceptremote=True,
1663 1680 followcopies=True,
1664 1681 )
1665 1682
1666 1683 timer(d)
1667 1684 fm.end()
1668 1685
1669 1686
1670 1687 @command(
1671 1688 b'perf::mergecopies|perfmergecopies',
1672 1689 [
1673 1690 (b'r', b'rev', b'.', b'rev to merge against'),
1674 1691 (b'', b'from', b'', b'rev to merge from'),
1675 1692 (b'', b'base', b'', b'the revision to use as base'),
1676 1693 ]
1677 1694 + formatteropts,
1678 1695 )
1679 1696 def perfmergecopies(ui, repo, **opts):
1680 1697 """measure runtime of `copies.mergecopies`"""
1681 1698 opts = _byteskwargs(opts)
1682 1699 timer, fm = gettimer(ui, opts)
1683 1700 wctx, rctx, ancestor = _getmergerevs(repo, opts)
1684 1701
1685 1702 def d():
1686 1703 # acceptremote is True because we don't want prompts in the middle of
1687 1704 # our benchmark
1688 1705 copies.mergecopies(repo, wctx, rctx, ancestor)
1689 1706
1690 1707 timer(d)
1691 1708 fm.end()
1692 1709
1693 1710
1694 1711 @command(b'perf::pathcopies|perfpathcopies', [], b"REV REV")
1695 1712 def perfpathcopies(ui, repo, rev1, rev2, **opts):
1696 1713 """benchmark the copy tracing logic"""
1697 1714 opts = _byteskwargs(opts)
1698 1715 timer, fm = gettimer(ui, opts)
1699 1716 ctx1 = scmutil.revsingle(repo, rev1, rev1)
1700 1717 ctx2 = scmutil.revsingle(repo, rev2, rev2)
1701 1718
1702 1719 def d():
1703 1720 copies.pathcopies(ctx1, ctx2)
1704 1721
1705 1722 timer(d)
1706 1723 fm.end()
1707 1724
1708 1725
1709 1726 @command(
1710 1727 b'perf::phases|perfphases',
1711 1728 [
1712 1729 (b'', b'full', False, b'include file reading time too'),
1713 1730 ]
1714 1731 + formatteropts,
1715 1732 b"",
1716 1733 )
1717 1734 def perfphases(ui, repo, **opts):
1718 1735 """benchmark phasesets computation"""
1719 1736 opts = _byteskwargs(opts)
1720 1737 timer, fm = gettimer(ui, opts)
1721 1738 _phases = repo._phasecache
1722 1739 full = opts.get(b'full')
1723 1740 tip_rev = repo.changelog.tiprev()
1724 1741
1725 1742 def d():
1726 1743 phases = _phases
1727 1744 if full:
1728 1745 clearfilecache(repo, b'_phasecache')
1729 1746 phases = repo._phasecache
1730 1747 phases.invalidate()
1731 1748 phases.phase(repo, tip_rev)
1732 1749
1733 1750 timer(d)
1734 1751 fm.end()
1735 1752
1736 1753
1737 1754 @command(b'perf::phasesremote|perfphasesremote', [], b"[DEST]")
1738 1755 def perfphasesremote(ui, repo, dest=None, **opts):
1739 1756 """benchmark time needed to analyse phases of the remote server"""
1740 1757 from mercurial.node import bin
1741 1758 from mercurial import (
1742 1759 exchange,
1743 1760 hg,
1744 1761 phases,
1745 1762 )
1746 1763
1747 1764 opts = _byteskwargs(opts)
1748 1765 timer, fm = gettimer(ui, opts)
1749 1766
1750 1767 path = ui.getpath(dest, default=(b'default-push', b'default'))
1751 1768 if not path:
1752 1769 raise error.Abort(
1753 1770 b'default repository not configured!',
1754 1771 hint=b"see 'hg help config.paths'",
1755 1772 )
1756 1773 if util.safehasattr(path, 'main_path'):
1757 1774 path = path.get_push_variant()
1758 1775 dest = path.loc
1759 1776 else:
1760 1777 dest = path.pushloc or path.loc
1761 1778 ui.statusnoi18n(b'analysing phase of %s\n' % util.hidepassword(dest))
1762 1779 other = hg.peer(repo, opts, dest)
1763 1780
1764 1781 # easier to perform discovery through the operation
1765 1782 op = exchange.pushoperation(repo, other)
1766 1783 exchange._pushdiscoverychangeset(op)
1767 1784
1768 1785 remotesubset = op.fallbackheads
1769 1786
1770 1787 with other.commandexecutor() as e:
1771 1788 remotephases = e.callcommand(
1772 1789 b'listkeys', {b'namespace': b'phases'}
1773 1790 ).result()
1774 1791 del other
1775 1792 publishing = remotephases.get(b'publishing', False)
1776 1793 if publishing:
1777 1794 ui.statusnoi18n(b'publishing: yes\n')
1778 1795 else:
1779 1796 ui.statusnoi18n(b'publishing: no\n')
1780 1797
1781 1798 has_node = getattr(repo.changelog.index, 'has_node', None)
1782 1799 if has_node is None:
1783 1800 has_node = repo.changelog.nodemap.__contains__
1784 1801 nonpublishroots = 0
1785 1802 for nhex, phase in remotephases.iteritems():
1786 1803 if nhex == b'publishing': # ignore data related to publish option
1787 1804 continue
1788 1805 node = bin(nhex)
1789 1806 if has_node(node) and int(phase):
1790 1807 nonpublishroots += 1
1791 1808 ui.statusnoi18n(b'number of roots: %d\n' % len(remotephases))
1792 1809 ui.statusnoi18n(b'number of known non public roots: %d\n' % nonpublishroots)
1793 1810
1794 1811 def d():
1795 1812 phases.remotephasessummary(repo, remotesubset, remotephases)
1796 1813
1797 1814 timer(d)
1798 1815 fm.end()
1799 1816
1800 1817
1801 1818 @command(
1802 1819 b'perf::manifest|perfmanifest',
1803 1820 [
1804 1821 (b'm', b'manifest-rev', False, b'Look up a manifest node revision'),
1805 1822 (b'', b'clear-disk', False, b'clear on-disk caches too'),
1806 1823 ]
1807 1824 + formatteropts,
1808 1825 b'REV|NODE',
1809 1826 )
1810 1827 def perfmanifest(ui, repo, rev, manifest_rev=False, clear_disk=False, **opts):
1811 1828 """benchmark the time to read a manifest from disk and return a usable
1812 1829 dict-like object
1813 1830
1814 1831 Manifest caches are cleared before retrieval."""
1815 1832 opts = _byteskwargs(opts)
1816 1833 timer, fm = gettimer(ui, opts)
1817 1834 if not manifest_rev:
1818 1835 ctx = scmutil.revsingle(repo, rev, rev)
1819 1836 t = ctx.manifestnode()
1820 1837 else:
1821 1838 from mercurial.node import bin
1822 1839
1823 1840 if len(rev) == 40:
1824 1841 t = bin(rev)
1825 1842 else:
1826 1843 try:
1827 1844 rev = int(rev)
1828 1845
1829 1846 if util.safehasattr(repo.manifestlog, b'getstorage'):
1830 1847 t = repo.manifestlog.getstorage(b'').node(rev)
1831 1848 else:
1832 1849 t = repo.manifestlog._revlog.lookup(rev)
1833 1850 except ValueError:
1834 1851 raise error.Abort(
1835 1852 b'manifest revision must be integer or full node'
1836 1853 )
1837 1854
1838 1855 def d():
1839 1856 repo.manifestlog.clearcaches(clear_persisted_data=clear_disk)
1840 1857 repo.manifestlog[t].read()
1841 1858
1842 1859 timer(d)
1843 1860 fm.end()
1844 1861
1845 1862
1846 1863 @command(b'perf::changeset|perfchangeset', formatteropts)
1847 1864 def perfchangeset(ui, repo, rev, **opts):
1848 1865 opts = _byteskwargs(opts)
1849 1866 timer, fm = gettimer(ui, opts)
1850 1867 n = scmutil.revsingle(repo, rev).node()
1851 1868
1852 1869 def d():
1853 1870 repo.changelog.read(n)
1854 1871 # repo.changelog._cache = None
1855 1872
1856 1873 timer(d)
1857 1874 fm.end()
1858 1875
1859 1876
1860 1877 @command(b'perf::ignore|perfignore', formatteropts)
1861 1878 def perfignore(ui, repo, **opts):
1862 1879 """benchmark operation related to computing ignore"""
1863 1880 opts = _byteskwargs(opts)
1864 1881 timer, fm = gettimer(ui, opts)
1865 1882 dirstate = repo.dirstate
1866 1883
1867 1884 def setupone():
1868 1885 dirstate.invalidate()
1869 1886 clearfilecache(dirstate, b'_ignore')
1870 1887
1871 1888 def runone():
1872 1889 dirstate._ignore
1873 1890
1874 1891 timer(runone, setup=setupone, title=b"load")
1875 1892 fm.end()
1876 1893
1877 1894
1878 1895 @command(
1879 1896 b'perf::index|perfindex',
1880 1897 [
1881 1898 (b'', b'rev', [], b'revision to be looked up (default tip)'),
1882 1899 (b'', b'no-lookup', None, b'do not revision lookup post creation'),
1883 1900 ]
1884 1901 + formatteropts,
1885 1902 )
1886 1903 def perfindex(ui, repo, **opts):
1887 1904 """benchmark index creation time followed by a lookup
1888 1905
1889 1906 The default is to look `tip` up. Depending on the index implementation,
1890 1907 the revision looked up can matter. For example, an implementation
1891 1908 scanning the index will have a faster lookup time for `--rev tip` than for
1892 1909 `--rev 0`. The number of looked up revisions and their order can also
1893 1910 matter.
1894 1911
1895 1912 Examples of useful sets to test:
1896 1913
1897 1914 * tip
1898 1915 * 0
1899 1916 * -10:
1900 1917 * :10
1901 1918 * -10: + :10
1902 1919 * :10: + -10:
1903 1920 * -10000:
1904 1921 * -10000: + 0
1905 1922
1906 1923 It is not currently possible to check for lookup of a missing node. For
1907 1924 deeper lookup benchmarking, check out the `perfnodemap` command."""
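# Illustrative invocations, assuming the perf extension is enabled:
#   hg perf::index                        # index creation plus a `tip` lookup
#   hg perf::index --rev tip --rev 0      # look up several revisions
#   hg perf::index --no-lookup            # index creation only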
1908 1925 import mercurial.revlog
1909 1926
1910 1927 opts = _byteskwargs(opts)
1911 1928 timer, fm = gettimer(ui, opts)
1912 1929 mercurial.revlog._prereadsize = 2 ** 24 # disable lazy parser in old hg
1913 1930 if opts[b'no_lookup']:
1914 1931 if opts['rev']:
1915 1932 raise error.Abort('--no-lookup and --rev are mutually exclusive')
1916 1933 nodes = []
1917 1934 elif not opts[b'rev']:
1918 1935 nodes = [repo[b"tip"].node()]
1919 1936 else:
1920 1937 revs = scmutil.revrange(repo, opts[b'rev'])
1921 1938 cl = repo.changelog
1922 1939 nodes = [cl.node(r) for r in revs]
1923 1940
1924 1941 unfi = repo.unfiltered()
1925 1942 # find the filecache func directly
1926 1943 # This avoid polluting the benchmark with the filecache logic
1927 1944 makecl = unfi.__class__.changelog.func
1928 1945
1929 1946 def setup():
1930 1947 # probably not necessary, but for good measure
1931 1948 clearchangelog(unfi)
1932 1949
1933 1950 def d():
1934 1951 cl = makecl(unfi)
1935 1952 for n in nodes:
1936 1953 cl.rev(n)
1937 1954
1938 1955 timer(d, setup=setup)
1939 1956 fm.end()
1940 1957
1941 1958
1942 1959 @command(
1943 1960 b'perf::nodemap|perfnodemap',
1944 1961 [
1945 1962 (b'', b'rev', [], b'revision to be looked up (default tip)'),
1946 1963 (b'', b'clear-caches', True, b'clear revlog cache between calls'),
1947 1964 ]
1948 1965 + formatteropts,
1949 1966 )
1950 1967 def perfnodemap(ui, repo, **opts):
1951 1968 """benchmark the time necessary to look up revision from a cold nodemap
1952 1969
1953 1970 Depending on the implementation, the amount and order of revisions we look
1954 1971 up can vary. Examples of useful sets to test:
1955 1972 * tip
1956 1973 * 0
1957 1974 * -10:
1958 1975 * :10
1959 1976 * -10: + :10
1960 1977 * :10: + -10:
1961 1978 * -10000:
1962 1979 * -10000: + 0
1963 1980
1964 1981 The command currently focuses on valid binary lookup. Benchmarking for
1965 1982 hexlookup, prefix lookup and missing lookup would also be valuable.
1966 1983 """
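# Illustrative invocation (--rev is mandatory for this command):
#   hg perf::nodemap --rev tip --rev 0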
1967 1984 import mercurial.revlog
1968 1985
1969 1986 opts = _byteskwargs(opts)
1970 1987 timer, fm = gettimer(ui, opts)
1971 1988 mercurial.revlog._prereadsize = 2 ** 24 # disable lazy parser in old hg
1972 1989
1973 1990 unfi = repo.unfiltered()
1974 1991 clearcaches = opts[b'clear_caches']
1975 1992 # find the filecache func directly
1976 1993 # This avoid polluting the benchmark with the filecache logic
1977 1994 makecl = unfi.__class__.changelog.func
1978 1995 if not opts[b'rev']:
1979 1996 raise error.Abort(b'use --rev to specify revisions to look up')
1980 1997 revs = scmutil.revrange(repo, opts[b'rev'])
1981 1998 cl = repo.changelog
1982 1999 nodes = [cl.node(r) for r in revs]
1983 2000
1984 2001 # use a list to pass a reference to a nodemap from one closure to the next
1985 2002 nodeget = [None]
1986 2003
1987 2004 def setnodeget():
1988 2005 # probably not necessary, but for good measure
1989 2006 clearchangelog(unfi)
1990 2007 cl = makecl(unfi)
1991 2008 if util.safehasattr(cl.index, 'get_rev'):
1992 2009 nodeget[0] = cl.index.get_rev
1993 2010 else:
1994 2011 nodeget[0] = cl.nodemap.get
1995 2012
1996 2013 def d():
1997 2014 get = nodeget[0]
1998 2015 for n in nodes:
1999 2016 get(n)
2000 2017
2001 2018 setup = None
2002 2019 if clearcaches:
2003 2020
2004 2021 def setup():
2005 2022 setnodeget()
2006 2023
2007 2024 else:
2008 2025 setnodeget()
2009 2026 d() # prewarm the data structure
2010 2027 timer(d, setup=setup)
2011 2028 fm.end()
2012 2029
2013 2030
2014 2031 @command(b'perf::startup|perfstartup', formatteropts)
2015 2032 def perfstartup(ui, repo, **opts):
2016 2033 opts = _byteskwargs(opts)
2017 2034 timer, fm = gettimer(ui, opts)
2018 2035
2019 2036 def d():
2020 2037 if os.name != 'nt':
2021 2038 os.system(
2022 2039 b"HGRCPATH= %s version -q > /dev/null" % fsencode(sys.argv[0])
2023 2040 )
2024 2041 else:
2025 2042 os.environ['HGRCPATH'] = r' '
2026 2043 os.system("%s version -q > NUL" % sys.argv[0])
2027 2044
2028 2045 timer(d)
2029 2046 fm.end()
2030 2047
2031 2048
2049 def _clear_store_audit_cache(repo):
2050 vfs = getsvfs(repo)
2051 # unwrap the fncache proxy
2052 if not hasattr(vfs, "audit"):
2053 vfs = getattr(vfs, "vfs", vfs)
2054 auditor = vfs.audit
2055 if hasattr(auditor, "clear_audit_cache"):
2056 auditor.clear_audit_cache()
2057 elif hasattr(auditor, "audited"):
2058 auditor.audited.clear()
2059 auditor.auditeddir.clear()
2060
2061
2032 2062 def _find_stream_generator(version):
2033 2063 """find the proper generator function for this stream version"""
2034 2064 import mercurial.streamclone
2035 2065
2036 2066 available = {}
2037 2067
2038 2068 # try to fetch a v1 generator
2039 2069 generatev1 = getattr(mercurial.streamclone, "generatev1", None)
2040 2070 if generatev1 is not None:
2041 2071
2042 2072 def generate(repo):
2043 entries, bytes, data = generatev2(repo, None, None, True)
2073 entries, bytes, data = generatev1(repo, None, None, True)
2044 2074 return data
2045 2075
2046 2076 available[b'v1'] = generatev1
2047 2077 # try to fetch a v2 generator
2048 2078 generatev2 = getattr(mercurial.streamclone, "generatev2", None)
2049 2079 if generatev2 is not None:
2050 2080
2051 2081 def generate(repo):
2052 2082 entries, bytes, data = generatev2(repo, None, None, True)
2053 2083 return data
2054 2084
2055 2085 available[b'v2'] = generate
2056 2086 # try to fetch a v3 generator
2057 2087 generatev3 = getattr(mercurial.streamclone, "generatev3", None)
2058 2088 if generatev3 is not None:
2059 2089
2060 2090 def generate(repo):
2061 entries, bytes, data = generatev3(repo, None, None, True)
2062 return data
2091 return generatev3(repo, None, None, True)
2063 2092
2064 2093 available[b'v3-exp'] = generate
2065 2094
2066 2095 # resolve the request
2067 2096 if version == b"latest":
2068 2097 # latest is the highest non-experimental version
2069 2098 latest_key = max(v for v in available if b'-exp' not in v)
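# e.g. with b'v1', b'v2' and b'v3-exp' available, b'latest' resolves
# to the b'v2' generator because experimental versions are skipped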
2070 2099 return available[latest_key]
2071 2100 elif version in available:
2072 2101 return available[version]
2073 2102 else:
2074 2103 msg = b"unkown or unavailable version: %s"
2075 2104 msg %= version
2076 2105 hint = b"available versions: %s"
2077 2106 hint %= b', '.join(sorted(available))
2078 2107 raise error.Abort(msg, hint=hint)
2079 2108
2080 2109
2081 2110 @command(
2082 2111 b'perf::stream-locked-section',
2083 2112 [
2084 2113 (
2085 2114 b'',
2086 2115 b'stream-version',
2087 2116 b'latest',
2088 b'stream version to use ("v1", "v2", "v3" or "latest", (the default))',
2117 b'stream version to use ("v1", "v2", "v3-exp" '
2118 b'or "latest", (the default))',
2089 2119 ),
2090 2120 ]
2091 2121 + formatteropts,
2092 2122 )
2093 2123 def perf_stream_clone_scan(ui, repo, stream_version, **opts):
2094 2124 """benchmark the initial, repo-locked, section of a stream-clone"""
2095 2125
2096 2126 opts = _byteskwargs(opts)
2097 2127 timer, fm = gettimer(ui, opts)
2098 2128
2099 2129 # deletion of the generator may trigger some cleanup that we do not want to
2100 2130 # measure
2101 2131 result_holder = [None]
2102 2132
2103 2133 def setupone():
2104 2134 result_holder[0] = None
2135 # This is important for the full generation, even if it does not
2136 # currently matter; it seems safer to also clear it here.
2137 _clear_store_audit_cache(repo)
2105 2138
2106 2139 generate = _find_stream_generator(stream_version)
2107 2140
2108 2141 def runone():
2109 2142 # the lock is held for the duration of the initialisation
2110 2143 result_holder[0] = generate(repo)
2111 2144
2112 2145 timer(runone, setup=setupone, title=b"load")
2113 2146 fm.end()
2114 2147
2115 2148
2116 2149 @command(
2117 2150 b'perf::stream-generate',
2118 2151 [
2119 2152 (
2120 2153 b'',
2121 2154 b'stream-version',
2122 2155 b'latest',
2123 b'stream version to us ("v1", "v2" or "latest", (the default))',
2156 b'stream version to use ("v1", "v2", "v3-exp" '
2157 b'or "latest", (the default))',
2124 2158 ),
2125 2159 ]
2126 2160 + formatteropts,
2127 2161 )
2128 2162 def perf_stream_clone_generate(ui, repo, stream_version, **opts):
2129 2163 """benchmark the full generation of a stream clone"""
2130 2164
2131 2165 opts = _byteskwargs(opts)
2132 2166 timer, fm = gettimer(ui, opts)
2133 2167
2134 2168 # deletion of the generator may trigger some cleanup that we do not want to
2135 2169 # measure
2136 2170
2137 2171 generate = _find_stream_generator(stream_version)
2138 2172
2173 def setup():
2174 _clear_store_audit_cache(repo)
2175
2139 2176 def runone():
2140 2177 # the lock is held for the duration the initialisation
2141 2178 for chunk in generate(repo):
2142 2179 pass
2143 2180
2144 timer(runone, title=b"generate")
2181 timer(runone, setup=setup, title=b"generate")
2145 2182 fm.end()
2146 2183
2147 2184
2148 2185 @command(
2149 2186 b'perf::stream-consume',
2150 2187 formatteropts,
2151 2188 )
2152 2189 def perf_stream_clone_consume(ui, repo, filename, **opts):
2153 2190 """benchmark the full application of a stream clone
2154 2191
2155 2192 This includes the creation of the repository.
2156 2193 """
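# Illustrative invocation; the positional argument must point at an existing
# stream bundle, for example one produced by a streaming `hg bundle` variant:
#   hg perf::stream-consume stream-bundle.hg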
2157 2194 # try except to appease check code
2158 2195 msg = b"mercurial too old, missing necessary module: %s"
2159 2196 try:
2160 2197 from mercurial import bundle2
2161 2198 except ImportError as exc:
2162 2199 msg %= _bytestr(exc)
2163 2200 raise error.Abort(msg)
2164 2201 try:
2165 2202 from mercurial import exchange
2166 2203 except ImportError as exc:
2167 2204 msg %= _bytestr(exc)
2168 2205 raise error.Abort(msg)
2169 2206 try:
2170 2207 from mercurial import hg
2171 2208 except ImportError as exc:
2172 2209 msg %= _bytestr(exc)
2173 2210 raise error.Abort(msg)
2174 2211 try:
2175 2212 from mercurial import localrepo
2176 2213 except ImportError as exc:
2177 2214 msg %= _bytestr(exc)
2178 2215 raise error.Abort(msg)
2179 2216
2180 2217 opts = _byteskwargs(opts)
2181 2218 timer, fm = gettimer(ui, opts)
2182 2219
2183 2220 # deletion of the generator may trigger some cleanup that we do not want to
2184 2221 # measure
2185 2222 if not (os.path.isfile(filename) and os.access(filename, os.R_OK)):
2186 2223 raise error.Abort("not a readable file: %s" % filename)
2187 2224
2188 2225 run_variables = [None, None]
2189 2226
2227 # we create the new repository next to the other one for two reasons:
2228 # - this way we use the same file system, which is relevant for the benchmark
2229 # - if /tmp/ is small, the operation could overfill it.
2230 source_repo_dir = os.path.dirname(repo.root)
2231
2190 2232 @contextlib.contextmanager
2191 2233 def context():
2192 2234 with open(filename, mode='rb') as bundle:
2193 with tempfile.TemporaryDirectory() as tmp_dir:
2235 with tempfile.TemporaryDirectory(
2236 prefix=b'hg-perf-stream-consume-',
2237 dir=source_repo_dir,
2238 ) as tmp_dir:
2194 2239 tmp_dir = fsencode(tmp_dir)
2195 2240 run_variables[0] = bundle
2196 2241 run_variables[1] = tmp_dir
2197 2242 yield
2198 2243 run_variables[0] = None
2199 2244 run_variables[1] = None
2200 2245
2201 2246 def runone():
2202 2247 bundle = run_variables[0]
2203 2248 tmp_dir = run_variables[1]
2249
2250 # we actually want to copy all config to ensure the repo config is
2251 # taken into account during the benchmark
2252 new_ui = repo.ui.__class__(repo.ui)
2204 2253 # only pass ui when no srcrepo
2205 2254 localrepo.createrepository(
2206 repo.ui, tmp_dir, requirements=repo.requirements
2255 new_ui, tmp_dir, requirements=repo.requirements
2207 2256 )
2208 target = hg.repository(repo.ui, tmp_dir)
2257 target = hg.repository(new_ui, tmp_dir)
2209 2258 gen = exchange.readbundle(target.ui, bundle, bundle.name)
2210 2259 # stream v1
2211 2260 if util.safehasattr(gen, 'apply'):
2212 2261 gen.apply(target)
2213 2262 else:
2214 2263 with target.transaction(b"perf::stream-consume") as tr:
2215 2264 bundle2.applybundle(
2216 2265 target,
2217 2266 gen,
2218 2267 tr,
2219 2268 source=b'unbundle',
2220 2269 url=filename,
2221 2270 )
2222 2271
2223 2272 timer(runone, context=context, title=b"consume")
2224 2273 fm.end()
2225 2274
2226 2275
2227 2276 @command(b'perf::parents|perfparents', formatteropts)
2228 2277 def perfparents(ui, repo, **opts):
2229 2278 """benchmark the time necessary to fetch one changeset's parents.
2230 2279
2231 2280 The fetch is done using the `node identifier`, traversing all object layers
2232 2281 from the repository object. The first N revisions will be used for this
2233 2282 benchmark. N is controlled by the ``perf.parentscount`` config option
2234 2283 (default: 1000).
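# Illustrative invocation; the number of sampled revisions can be tuned through
# the `perf.parentscount` configuration mentioned above:
#   hg perf::parents --config perf.parentscount=500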
2235 2284 """
2236 2285 opts = _byteskwargs(opts)
2237 2286 timer, fm = gettimer(ui, opts)
2238 2287 # control the number of commits perfparents iterates over
2239 2288 # experimental config: perf.parentscount
2240 2289 count = getint(ui, b"perf", b"parentscount", 1000)
2241 2290 if len(repo.changelog) < count:
2242 2291 raise error.Abort(b"repo needs %d commits for this test" % count)
2243 2292 repo = repo.unfiltered()
2244 2293 nl = [repo.changelog.node(i) for i in _xrange(count)]
2245 2294
2246 2295 def d():
2247 2296 for n in nl:
2248 2297 repo.changelog.parents(n)
2249 2298
2250 2299 timer(d)
2251 2300 fm.end()
2252 2301
2253 2302
2254 2303 @command(b'perf::ctxfiles|perfctxfiles', formatteropts)
2255 2304 def perfctxfiles(ui, repo, x, **opts):
2256 2305 opts = _byteskwargs(opts)
2257 2306 x = int(x)
2258 2307 timer, fm = gettimer(ui, opts)
2259 2308
2260 2309 def d():
2261 2310 len(repo[x].files())
2262 2311
2263 2312 timer(d)
2264 2313 fm.end()
2265 2314
2266 2315
2267 2316 @command(b'perf::rawfiles|perfrawfiles', formatteropts)
2268 2317 def perfrawfiles(ui, repo, x, **opts):
2269 2318 opts = _byteskwargs(opts)
2270 2319 x = int(x)
2271 2320 timer, fm = gettimer(ui, opts)
2272 2321 cl = repo.changelog
2273 2322
2274 2323 def d():
2275 2324 len(cl.read(x)[3])
2276 2325
2277 2326 timer(d)
2278 2327 fm.end()
2279 2328
2280 2329
2281 2330 @command(b'perf::lookup|perflookup', formatteropts)
2282 2331 def perflookup(ui, repo, rev, **opts):
2283 2332 opts = _byteskwargs(opts)
2284 2333 timer, fm = gettimer(ui, opts)
2285 2334 timer(lambda: len(repo.lookup(rev)))
2286 2335 fm.end()
2287 2336
2288 2337
2289 2338 @command(
2290 2339 b'perf::linelogedits|perflinelogedits',
2291 2340 [
2292 2341 (b'n', b'edits', 10000, b'number of edits'),
2293 2342 (b'', b'max-hunk-lines', 10, b'max lines in a hunk'),
2294 2343 ],
2295 2344 norepo=True,
2296 2345 )
2297 2346 def perflinelogedits(ui, **opts):
2298 2347 from mercurial import linelog
2299 2348
2300 2349 opts = _byteskwargs(opts)
2301 2350
2302 2351 edits = opts[b'edits']
2303 2352 maxhunklines = opts[b'max_hunk_lines']
2304 2353
2305 2354 maxb1 = 100000
2306 2355 random.seed(0)
2307 2356 randint = random.randint
2308 2357 currentlines = 0
2309 2358 arglist = []
2310 2359 for rev in _xrange(edits):
2311 2360 a1 = randint(0, currentlines)
2312 2361 a2 = randint(a1, min(currentlines, a1 + maxhunklines))
2313 2362 b1 = randint(0, maxb1)
2314 2363 b2 = randint(b1, b1 + maxhunklines)
2315 2364 currentlines += (b2 - b1) - (a2 - a1)
2316 2365 arglist.append((rev, a1, a2, b1, b2))
2317 2366
2318 2367 def d():
2319 2368 ll = linelog.linelog()
2320 2369 for args in arglist:
2321 2370 ll.replacelines(*args)
2322 2371
2323 2372 timer, fm = gettimer(ui, opts)
2324 2373 timer(d)
2325 2374 fm.end()
2326 2375
2327 2376
2328 2377 @command(b'perf::revrange|perfrevrange', formatteropts)
2329 2378 def perfrevrange(ui, repo, *specs, **opts):
2330 2379 opts = _byteskwargs(opts)
2331 2380 timer, fm = gettimer(ui, opts)
2332 2381 revrange = scmutil.revrange
2333 2382 timer(lambda: len(revrange(repo, specs)))
2334 2383 fm.end()
2335 2384
2336 2385
2337 2386 @command(b'perf::nodelookup|perfnodelookup', formatteropts)
2338 2387 def perfnodelookup(ui, repo, rev, **opts):
2339 2388 opts = _byteskwargs(opts)
2340 2389 timer, fm = gettimer(ui, opts)
2341 2390 import mercurial.revlog
2342 2391
2343 2392 mercurial.revlog._prereadsize = 2 ** 24 # disable lazy parser in old hg
2344 2393 n = scmutil.revsingle(repo, rev).node()
2345 2394
2346 2395 try:
2347 2396 cl = revlog(getsvfs(repo), radix=b"00changelog")
2348 2397 except TypeError:
2349 2398 cl = revlog(getsvfs(repo), indexfile=b"00changelog.i")
2350 2399
2351 2400 def d():
2352 2401 cl.rev(n)
2353 2402 clearcaches(cl)
2354 2403
2355 2404 timer(d)
2356 2405 fm.end()
2357 2406
2358 2407
2359 2408 @command(
2360 2409 b'perf::log|perflog',
2361 2410 [(b'', b'rename', False, b'ask log to follow renames')] + formatteropts,
2362 2411 )
2363 2412 def perflog(ui, repo, rev=None, **opts):
2364 2413 opts = _byteskwargs(opts)
2365 2414 if rev is None:
2366 2415 rev = []
2367 2416 timer, fm = gettimer(ui, opts)
2368 2417 ui.pushbuffer()
2369 2418 timer(
2370 2419 lambda: commands.log(
2371 2420 ui, repo, rev=rev, date=b'', user=b'', copies=opts.get(b'rename')
2372 2421 )
2373 2422 )
2374 2423 ui.popbuffer()
2375 2424 fm.end()
2376 2425
2377 2426
2378 2427 @command(b'perf::moonwalk|perfmoonwalk', formatteropts)
2379 2428 def perfmoonwalk(ui, repo, **opts):
2380 2429 """benchmark walking the changelog backwards
2381 2430
2382 2431 This also loads the changelog data for each revision in the changelog.
2383 2432 """
2384 2433 opts = _byteskwargs(opts)
2385 2434 timer, fm = gettimer(ui, opts)
2386 2435
2387 2436 def moonwalk():
2388 2437 for i in repo.changelog.revs(start=(len(repo) - 1), stop=-1):
2389 2438 ctx = repo[i]
2390 2439 ctx.branch() # read changelog data (in addition to the index)
2391 2440
2392 2441 timer(moonwalk)
2393 2442 fm.end()
2394 2443
2395 2444
2396 2445 @command(
2397 2446 b'perf::templating|perftemplating',
2398 2447 [
2399 2448 (b'r', b'rev', [], b'revisions to run the template on'),
2400 2449 ]
2401 2450 + formatteropts,
2402 2451 )
2403 2452 def perftemplating(ui, repo, testedtemplate=None, **opts):
2404 2453 """test the rendering time of a given template"""
2405 2454 if makelogtemplater is None:
2406 2455 raise error.Abort(
2407 2456 b"perftemplating not available with this Mercurial",
2408 2457 hint=b"use 4.3 or later",
2409 2458 )
2410 2459
2411 2460 opts = _byteskwargs(opts)
2412 2461
2413 2462 nullui = ui.copy()
2414 2463 nullui.fout = open(os.devnull, 'wb')
2415 2464 nullui.disablepager()
2416 2465 revs = opts.get(b'rev')
2417 2466 if not revs:
2418 2467 revs = [b'all()']
2419 2468 revs = list(scmutil.revrange(repo, revs))
2420 2469
2421 2470 defaulttemplate = (
2422 2471 b'{date|shortdate} [{rev}:{node|short}]'
2423 2472 b' {author|person}: {desc|firstline}\n'
2424 2473 )
2425 2474 if testedtemplate is None:
2426 2475 testedtemplate = defaulttemplate
2427 2476 displayer = makelogtemplater(nullui, repo, testedtemplate)
2428 2477
2429 2478 def format():
2430 2479 for r in revs:
2431 2480 ctx = repo[r]
2432 2481 displayer.show(ctx)
2433 2482 displayer.flush(ctx)
2434 2483
2435 2484 timer, fm = gettimer(ui, opts)
2436 2485 timer(format)
2437 2486 fm.end()
2438 2487
2439 2488
2440 2489 def _displaystats(ui, opts, entries, data):
2441 2490 # use a second formatter because the data are quite different, not sure
2442 2491 # how it flies with the templater.
2443 2492 fm = ui.formatter(b'perf-stats', opts)
2444 2493 for key, title in entries:
2445 2494 values = data[key]
2446 2495 nbvalues = len(values)
2447 2496 values.sort()
2448 2497 stats = {
2449 2498 'key': key,
2450 2499 'title': title,
2451 2500 'nbitems': len(values),
2452 2501 'min': values[0][0],
2453 2502 '10%': values[(nbvalues * 10) // 100][0],
2454 2503 '25%': values[(nbvalues * 25) // 100][0],
2455 2504 '50%': values[(nbvalues * 50) // 100][0],
2456 2505 '75%': values[(nbvalues * 75) // 100][0],
2457 2506 '80%': values[(nbvalues * 80) // 100][0],
2458 2507 '85%': values[(nbvalues * 85) // 100][0],
2459 2508 '90%': values[(nbvalues * 90) // 100][0],
2460 2509 '95%': values[(nbvalues * 95) // 100][0],
2461 2510 '99%': values[(nbvalues * 99) // 100][0],
2462 2511 'max': values[-1][0],
2463 2512 }
2464 2513 fm.startitem()
2465 2514 fm.data(**stats)
2466 2515 # make node pretty for the human output
2467 2516 fm.plain('### %s (%d items)\n' % (title, len(values)))
2468 2517 lines = [
2469 2518 'min',
2470 2519 '10%',
2471 2520 '25%',
2472 2521 '50%',
2473 2522 '75%',
2474 2523 '80%',
2475 2524 '85%',
2476 2525 '90%',
2477 2526 '95%',
2478 2527 '99%',
2479 2528 'max',
2480 2529 ]
2481 2530 for l in lines:
2482 2531 fm.plain('%s: %s\n' % (l, stats[l]))
2483 2532 fm.end()
2484 2533
2485 2534
2486 2535 @command(
2487 2536 b'perf::helper-mergecopies|perfhelper-mergecopies',
2488 2537 formatteropts
2489 2538 + [
2490 2539 (b'r', b'revs', [], b'restrict search to these revisions'),
2491 2540 (b'', b'timing', False, b'provides extra data (costly)'),
2492 2541 (b'', b'stats', False, b'provides statistic about the measured data'),
2493 2542 ],
2494 2543 )
2495 2544 def perfhelpermergecopies(ui, repo, revs=[], **opts):
2496 2545 """find statistics about potential parameters for `perfmergecopies`
2497 2546
2498 2547 This command finds (base, p1, p2) triplets relevant for copy tracing
2499 2548 benchmarking in the context of a merge. It reports values for some of the
2500 2549 parameters that impact merge copy tracing time during merge.
2501 2550
2502 2551 If `--timing` is set, rename detection is run and the associated timing
2503 2552 will be reported. The extra details come at the cost of slower command
2504 2553 execution.
2505 2554
2506 2555 Since rename detection is only run once, other factors might easily
2507 2556 affect the precision of the timing. However it should give a good
2508 2557 approximation of which revision triplets are very costly.
2509 2558 """
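# Illustrative invocation (assuming the perf extension is enabled):
#   hg perf::helper-mergecopies --revs 'last(all(), 100)' --timing --stats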
2510 2559 opts = _byteskwargs(opts)
2511 2560 fm = ui.formatter(b'perf', opts)
2512 2561 dotiming = opts[b'timing']
2513 2562 dostats = opts[b'stats']
2514 2563
2515 2564 output_template = [
2516 2565 ("base", "%(base)12s"),
2517 2566 ("p1", "%(p1.node)12s"),
2518 2567 ("p2", "%(p2.node)12s"),
2519 2568 ("p1.nb-revs", "%(p1.nbrevs)12d"),
2520 2569 ("p1.nb-files", "%(p1.nbmissingfiles)12d"),
2521 2570 ("p1.renames", "%(p1.renamedfiles)12d"),
2522 2571 ("p1.time", "%(p1.time)12.3f"),
2523 2572 ("p2.nb-revs", "%(p2.nbrevs)12d"),
2524 2573 ("p2.nb-files", "%(p2.nbmissingfiles)12d"),
2525 2574 ("p2.renames", "%(p2.renamedfiles)12d"),
2526 2575 ("p2.time", "%(p2.time)12.3f"),
2527 2576 ("renames", "%(nbrenamedfiles)12d"),
2528 2577 ("total.time", "%(time)12.3f"),
2529 2578 ]
2530 2579 if not dotiming:
2531 2580 output_template = [
2532 2581 i
2533 2582 for i in output_template
2534 2583 if not ('time' in i[0] or 'renames' in i[0])
2535 2584 ]
2536 2585 header_names = [h for (h, v) in output_template]
2537 2586 output = ' '.join([v for (h, v) in output_template]) + '\n'
2538 2587 header = ' '.join(['%12s'] * len(header_names)) + '\n'
2539 2588 fm.plain(header % tuple(header_names))
2540 2589
2541 2590 if not revs:
2542 2591 revs = ['all()']
2543 2592 revs = scmutil.revrange(repo, revs)
2544 2593
2545 2594 if dostats:
2546 2595 alldata = {
2547 2596 'nbrevs': [],
2548 2597 'nbmissingfiles': [],
2549 2598 }
2550 2599 if dotiming:
2551 2600 alldata['parentnbrenames'] = []
2552 2601 alldata['totalnbrenames'] = []
2553 2602 alldata['parenttime'] = []
2554 2603 alldata['totaltime'] = []
2555 2604
2556 2605 roi = repo.revs('merge() and %ld', revs)
2557 2606 for r in roi:
2558 2607 ctx = repo[r]
2559 2608 p1 = ctx.p1()
2560 2609 p2 = ctx.p2()
2561 2610 bases = repo.changelog._commonancestorsheads(p1.rev(), p2.rev())
2562 2611 for b in bases:
2563 2612 b = repo[b]
2564 2613 p1missing = copies._computeforwardmissing(b, p1)
2565 2614 p2missing = copies._computeforwardmissing(b, p2)
2566 2615 data = {
2567 2616 b'base': b.hex(),
2568 2617 b'p1.node': p1.hex(),
2569 2618 b'p1.nbrevs': len(repo.revs('only(%d, %d)', p1.rev(), b.rev())),
2570 2619 b'p1.nbmissingfiles': len(p1missing),
2571 2620 b'p2.node': p2.hex(),
2572 2621 b'p2.nbrevs': len(repo.revs('only(%d, %d)', p2.rev(), b.rev())),
2573 2622 b'p2.nbmissingfiles': len(p2missing),
2574 2623 }
2575 2624 if dostats:
2576 2625 if p1missing:
2577 2626 alldata['nbrevs'].append(
2578 2627 (data['p1.nbrevs'], b.hex(), p1.hex())
2579 2628 )
2580 2629 alldata['nbmissingfiles'].append(
2581 2630 (data['p1.nbmissingfiles'], b.hex(), p1.hex())
2582 2631 )
2583 2632 if p2missing:
2584 2633 alldata['nbrevs'].append(
2585 2634 (data['p2.nbrevs'], b.hex(), p2.hex())
2586 2635 )
2587 2636 alldata['nbmissingfiles'].append(
2588 2637 (data['p2.nbmissingfiles'], b.hex(), p2.hex())
2589 2638 )
2590 2639 if dotiming:
2591 2640 begin = util.timer()
2592 2641 mergedata = copies.mergecopies(repo, p1, p2, b)
2593 2642 end = util.timer()
2594 2643 # not very stable timing since we did only one run
2595 2644 data['time'] = end - begin
2596 2645 # mergedata contains five dicts: "copy", "movewithdir",
2597 2646 # "diverge", "renamedelete" and "dirmove".
2598 2647 # The first 4 are about renamed files, so let's count those.
2599 2648 renames = len(mergedata[0])
2600 2649 renames += len(mergedata[1])
2601 2650 renames += len(mergedata[2])
2602 2651 renames += len(mergedata[3])
2603 2652 data['nbrenamedfiles'] = renames
2604 2653 begin = util.timer()
2605 2654 p1renames = copies.pathcopies(b, p1)
2606 2655 end = util.timer()
2607 2656 data['p1.time'] = end - begin
2608 2657 begin = util.timer()
2609 2658 p2renames = copies.pathcopies(b, p2)
2610 2659 end = util.timer()
2611 2660 data['p2.time'] = end - begin
2612 2661 data['p1.renamedfiles'] = len(p1renames)
2613 2662 data['p2.renamedfiles'] = len(p2renames)
2614 2663
2615 2664 if dostats:
2616 2665 if p1missing:
2617 2666 alldata['parentnbrenames'].append(
2618 2667 (data['p1.renamedfiles'], b.hex(), p1.hex())
2619 2668 )
2620 2669 alldata['parenttime'].append(
2621 2670 (data['p1.time'], b.hex(), p1.hex())
2622 2671 )
2623 2672 if p2missing:
2624 2673 alldata['parentnbrenames'].append(
2625 2674 (data['p2.renamedfiles'], b.hex(), p2.hex())
2626 2675 )
2627 2676 alldata['parenttime'].append(
2628 2677 (data['p2.time'], b.hex(), p2.hex())
2629 2678 )
2630 2679 if p1missing or p2missing:
2631 2680 alldata['totalnbrenames'].append(
2632 2681 (
2633 2682 data['nbrenamedfiles'],
2634 2683 b.hex(),
2635 2684 p1.hex(),
2636 2685 p2.hex(),
2637 2686 )
2638 2687 )
2639 2688 alldata['totaltime'].append(
2640 2689 (data['time'], b.hex(), p1.hex(), p2.hex())
2641 2690 )
2642 2691 fm.startitem()
2643 2692 fm.data(**data)
2644 2693 # make node pretty for the human output
2645 2694 out = data.copy()
2646 2695 out['base'] = fm.hexfunc(b.node())
2647 2696 out['p1.node'] = fm.hexfunc(p1.node())
2648 2697 out['p2.node'] = fm.hexfunc(p2.node())
2649 2698 fm.plain(output % out)
2650 2699
2651 2700 fm.end()
2652 2701 if dostats:
2653 2702 # use a second formatter because the data are quite different, not sure
2654 2703 # how it flies with the templater.
2655 2704 entries = [
2656 2705 ('nbrevs', 'number of revisions covered'),
2657 2706 ('nbmissingfiles', 'number of missing files at head'),
2658 2707 ]
2659 2708 if dotiming:
2660 2709 entries.append(
2661 2710 ('parentnbrenames', 'rename from one parent to base')
2662 2711 )
2663 2712 entries.append(('totalnbrenames', 'total number of renames'))
2664 2713 entries.append(('parenttime', 'time for one parent'))
2665 2714 entries.append(('totaltime', 'time for both parents'))
2666 2715 _displaystats(ui, opts, entries, alldata)
2667 2716
2668 2717
2669 2718 @command(
2670 2719 b'perf::helper-pathcopies|perfhelper-pathcopies',
2671 2720 formatteropts
2672 2721 + [
2673 2722 (b'r', b'revs', [], b'restrict search to these revisions'),
2674 2723 (b'', b'timing', False, b'provides extra data (costly)'),
2675 2724 (b'', b'stats', False, b'provides statistic about the measured data'),
2676 2725 ],
2677 2726 )
2678 2727 def perfhelperpathcopies(ui, repo, revs=[], **opts):
2679 2728 """find statistic about potential parameters for the `perftracecopies`
2680 2729
2681 2730 This command finds source-destination pairs relevant for copy tracing testing.
2682 2731 It reports values for some of the parameters that impact copy tracing time.
2683 2732
2684 2733 If `--timing` is set, rename detection is run and the associated timing
2685 2734 will be reported. The extra details come at the cost of a slower command
2686 2735 execution.
2687 2736
2688 2737 Since the rename detection is only run once, other factors might easily
2689 2738 affect the precision of the timing. However it should give a good
2690 2739 approximation of which revision pairs are very costly.
2691 2740 """
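# Illustrative invocation (assuming the perf extension is enabled):
#   hg perf::helper-pathcopies --revs 'last(all(), 100)' --timing --stats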
2692 2741 opts = _byteskwargs(opts)
2693 2742 fm = ui.formatter(b'perf', opts)
2694 2743 dotiming = opts[b'timing']
2695 2744 dostats = opts[b'stats']
2696 2745
2697 2746 if dotiming:
2698 2747 header = '%12s %12s %12s %12s %12s %12s\n'
2699 2748 output = (
2700 2749 "%(source)12s %(destination)12s "
2701 2750 "%(nbrevs)12d %(nbmissingfiles)12d "
2702 2751 "%(nbrenamedfiles)12d %(time)18.5f\n"
2703 2752 )
2704 2753 header_names = (
2705 2754 "source",
2706 2755 "destination",
2707 2756 "nb-revs",
2708 2757 "nb-files",
2709 2758 "nb-renames",
2710 2759 "time",
2711 2760 )
2712 2761 fm.plain(header % header_names)
2713 2762 else:
2714 2763 header = '%12s %12s %12s %12s\n'
2715 2764 output = (
2716 2765 "%(source)12s %(destination)12s "
2717 2766 "%(nbrevs)12d %(nbmissingfiles)12d\n"
2718 2767 )
2719 2768 fm.plain(header % ("source", "destination", "nb-revs", "nb-files"))
2720 2769
2721 2770 if not revs:
2722 2771 revs = ['all()']
2723 2772 revs = scmutil.revrange(repo, revs)
2724 2773
2725 2774 if dostats:
2726 2775 alldata = {
2727 2776 'nbrevs': [],
2728 2777 'nbmissingfiles': [],
2729 2778 }
2730 2779 if dotiming:
2731 2780 alldata['nbrenames'] = []
2732 2781 alldata['time'] = []
2733 2782
2734 2783 roi = repo.revs('merge() and %ld', revs)
2735 2784 for r in roi:
2736 2785 ctx = repo[r]
2737 2786 p1 = ctx.p1().rev()
2738 2787 p2 = ctx.p2().rev()
2739 2788 bases = repo.changelog._commonancestorsheads(p1, p2)
2740 2789 for p in (p1, p2):
2741 2790 for b in bases:
2742 2791 base = repo[b]
2743 2792 parent = repo[p]
2744 2793 missing = copies._computeforwardmissing(base, parent)
2745 2794 if not missing:
2746 2795 continue
2747 2796 data = {
2748 2797 b'source': base.hex(),
2749 2798 b'destination': parent.hex(),
2750 2799 b'nbrevs': len(repo.revs('only(%d, %d)', p, b)),
2751 2800 b'nbmissingfiles': len(missing),
2752 2801 }
2753 2802 if dostats:
2754 2803 alldata['nbrevs'].append(
2755 2804 (
2756 2805 data['nbrevs'],
2757 2806 base.hex(),
2758 2807 parent.hex(),
2759 2808 )
2760 2809 )
2761 2810 alldata['nbmissingfiles'].append(
2762 2811 (
2763 2812 data['nbmissingfiles'],
2764 2813 base.hex(),
2765 2814 parent.hex(),
2766 2815 )
2767 2816 )
2768 2817 if dotiming:
2769 2818 begin = util.timer()
2770 2819 renames = copies.pathcopies(base, parent)
2771 2820 end = util.timer()
2772 2821 # not very stable timing since we did only one run
2773 2822 data['time'] = end - begin
2774 2823 data['nbrenamedfiles'] = len(renames)
2775 2824 if dostats:
2776 2825 alldata['time'].append(
2777 2826 (
2778 2827 data['time'],
2779 2828 base.hex(),
2780 2829 parent.hex(),
2781 2830 )
2782 2831 )
2783 2832 alldata['nbrenames'].append(
2784 2833 (
2785 2834 data['nbrenamedfiles'],
2786 2835 base.hex(),
2787 2836 parent.hex(),
2788 2837 )
2789 2838 )
2790 2839 fm.startitem()
2791 2840 fm.data(**data)
2792 2841 out = data.copy()
2793 2842 out['source'] = fm.hexfunc(base.node())
2794 2843 out['destination'] = fm.hexfunc(parent.node())
2795 2844 fm.plain(output % out)
2796 2845
2797 2846 fm.end()
2798 2847 if dostats:
2799 2848 entries = [
2800 2849 ('nbrevs', 'number of revisions covered'),
2801 2850 ('nbmissingfiles', 'number of missing files at head'),
2802 2851 ]
2803 2852 if dotiming:
2804 2853 entries.append(('nbrenames', 'renamed files'))
2805 2854 entries.append(('time', 'time'))
2806 2855 _displaystats(ui, opts, entries, alldata)
2807 2856
2808 2857
2809 2858 @command(b'perf::cca|perfcca', formatteropts)
2810 2859 def perfcca(ui, repo, **opts):
2811 2860 opts = _byteskwargs(opts)
2812 2861 timer, fm = gettimer(ui, opts)
2813 2862 timer(lambda: scmutil.casecollisionauditor(ui, False, repo.dirstate))
2814 2863 fm.end()
2815 2864
2816 2865
2817 2866 @command(b'perf::fncacheload|perffncacheload', formatteropts)
2818 2867 def perffncacheload(ui, repo, **opts):
2819 2868 opts = _byteskwargs(opts)
2820 2869 timer, fm = gettimer(ui, opts)
2821 2870 s = repo.store
2822 2871
2823 2872 def d():
2824 2873 s.fncache._load()
2825 2874
2826 2875 timer(d)
2827 2876 fm.end()
2828 2877
2829 2878
2830 2879 @command(b'perf::fncachewrite|perffncachewrite', formatteropts)
2831 2880 def perffncachewrite(ui, repo, **opts):
2832 2881 opts = _byteskwargs(opts)
2833 2882 timer, fm = gettimer(ui, opts)
2834 2883 s = repo.store
2835 2884 lock = repo.lock()
2836 2885 s.fncache._load()
2837 2886 tr = repo.transaction(b'perffncachewrite')
2838 2887 tr.addbackup(b'fncache')
2839 2888
2840 2889 def d():
2841 2890 s.fncache._dirty = True
2842 2891 s.fncache.write(tr)
2843 2892
2844 2893 timer(d)
2845 2894 tr.close()
2846 2895 lock.release()
2847 2896 fm.end()
2848 2897
2849 2898
2850 2899 @command(b'perf::fncacheencode|perffncacheencode', formatteropts)
2851 2900 def perffncacheencode(ui, repo, **opts):
2852 2901 opts = _byteskwargs(opts)
2853 2902 timer, fm = gettimer(ui, opts)
2854 2903 s = repo.store
2855 2904 s.fncache._load()
2856 2905
2857 2906 def d():
2858 2907 for p in s.fncache.entries:
2859 2908 s.encode(p)
2860 2909
2861 2910 timer(d)
2862 2911 fm.end()
2863 2912
2864 2913
2865 2914 def _bdiffworker(q, blocks, xdiff, ready, done):
2866 2915 while not done.is_set():
2867 2916 pair = q.get()
2868 2917 while pair is not None:
2869 2918 if xdiff:
2870 2919 mdiff.bdiff.xdiffblocks(*pair)
2871 2920 elif blocks:
2872 2921 mdiff.bdiff.blocks(*pair)
2873 2922 else:
2874 2923 mdiff.textdiff(*pair)
2875 2924 q.task_done()
2876 2925 pair = q.get()
2877 2926 q.task_done() # for the None one
2878 2927 with ready:
2879 2928 ready.wait()
2880 2929
2881 2930
2882 2931 def _manifestrevision(repo, mnode):
2883 2932 ml = repo.manifestlog
2884 2933
2885 2934 if util.safehasattr(ml, b'getstorage'):
2886 2935 store = ml.getstorage(b'')
2887 2936 else:
2888 2937 store = ml._revlog
2889 2938
2890 2939 return store.revision(mnode)
2891 2940
2892 2941
2893 2942 @command(
2894 2943 b'perf::bdiff|perfbdiff',
2895 2944 revlogopts
2896 2945 + formatteropts
2897 2946 + [
2898 2947 (
2899 2948 b'',
2900 2949 b'count',
2901 2950 1,
2902 2951 b'number of revisions to test (when using --startrev)',
2903 2952 ),
2904 2953 (b'', b'alldata', False, b'test bdiffs for all associated revisions'),
2905 2954 (b'', b'threads', 0, b'number of thread to use (disable with 0)'),
2906 2955 (b'', b'blocks', False, b'test computing diffs into blocks'),
2907 2956 (b'', b'xdiff', False, b'use xdiff algorithm'),
2908 2957 ],
2909 2958 b'-c|-m|FILE REV',
2910 2959 )
2911 2960 def perfbdiff(ui, repo, file_, rev=None, count=None, threads=0, **opts):
2912 2961 """benchmark a bdiff between revisions
2913 2962
2914 2963 By default, benchmark a bdiff between its delta parent and itself.
2915 2964
2916 2965 With ``--count``, benchmark bdiffs between delta parents and self for N
2917 2966 revisions starting at the specified revision.
2918 2967
2919 2968 With ``--alldata``, assume the requested revision is a changeset and
2920 2969 measure bdiffs for all changes related to that changeset (manifest
2921 2970 and filelogs).
2922 2971 """
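# Illustrative invocations, assuming the perf extension is enabled:
#   hg perf::bdiff -c 10                    # one changelog revision vs its delta parent
#   hg perf::bdiff -c 10 --alldata          # also include manifest and filelog pairs
#   hg perf::bdiff -m 0 --count 50 --blocks --xdiff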
2923 2972 opts = _byteskwargs(opts)
2924 2973
2925 2974 if opts[b'xdiff'] and not opts[b'blocks']:
2926 2975 raise error.CommandError(b'perfbdiff', b'--xdiff requires --blocks')
2927 2976
2928 2977 if opts[b'alldata']:
2929 2978 opts[b'changelog'] = True
2930 2979
2931 2980 if opts.get(b'changelog') or opts.get(b'manifest'):
2932 2981 file_, rev = None, file_
2933 2982 elif rev is None:
2934 2983 raise error.CommandError(b'perfbdiff', b'invalid arguments')
2935 2984
2936 2985 blocks = opts[b'blocks']
2937 2986 xdiff = opts[b'xdiff']
2938 2987 textpairs = []
2939 2988
2940 2989 r = cmdutil.openrevlog(repo, b'perfbdiff', file_, opts)
2941 2990
2942 2991 startrev = r.rev(r.lookup(rev))
2943 2992 for rev in range(startrev, min(startrev + count, len(r) - 1)):
2944 2993 if opts[b'alldata']:
2945 2994 # Load revisions associated with changeset.
2946 2995 ctx = repo[rev]
2947 2996 mtext = _manifestrevision(repo, ctx.manifestnode())
2948 2997 for pctx in ctx.parents():
2949 2998 pman = _manifestrevision(repo, pctx.manifestnode())
2950 2999 textpairs.append((pman, mtext))
2951 3000
2952 3001 # Load filelog revisions by iterating manifest delta.
2953 3002 man = ctx.manifest()
2954 3003 pman = ctx.p1().manifest()
2955 3004 for filename, change in pman.diff(man).items():
2956 3005 fctx = repo.file(filename)
2957 3006 f1 = fctx.revision(change[0][0] or -1)
2958 3007 f2 = fctx.revision(change[1][0] or -1)
2959 3008 textpairs.append((f1, f2))
2960 3009 else:
2961 3010 dp = r.deltaparent(rev)
2962 3011 textpairs.append((r.revision(dp), r.revision(rev)))
2963 3012
2964 3013 withthreads = threads > 0
2965 3014 if not withthreads:
2966 3015
2967 3016 def d():
2968 3017 for pair in textpairs:
2969 3018 if xdiff:
2970 3019 mdiff.bdiff.xdiffblocks(*pair)
2971 3020 elif blocks:
2972 3021 mdiff.bdiff.blocks(*pair)
2973 3022 else:
2974 3023 mdiff.textdiff(*pair)
2975 3024
2976 3025 else:
2977 3026 q = queue()
2978 3027 for i in _xrange(threads):
2979 3028 q.put(None)
2980 3029 ready = threading.Condition()
2981 3030 done = threading.Event()
2982 3031 for i in _xrange(threads):
2983 3032 threading.Thread(
2984 3033 target=_bdiffworker, args=(q, blocks, xdiff, ready, done)
2985 3034 ).start()
2986 3035 q.join()
2987 3036
2988 3037 def d():
2989 3038 for pair in textpairs:
2990 3039 q.put(pair)
2991 3040 for i in _xrange(threads):
2992 3041 q.put(None)
2993 3042 with ready:
2994 3043 ready.notify_all()
2995 3044 q.join()
2996 3045
2997 3046 timer, fm = gettimer(ui, opts)
2998 3047 timer(d)
2999 3048 fm.end()
3000 3049
3001 3050 if withthreads:
3002 3051 done.set()
3003 3052 for i in _xrange(threads):
3004 3053 q.put(None)
3005 3054 with ready:
3006 3055 ready.notify_all()
3007 3056
3008 3057
3009 3058 @command(
3010 3059 b'perf::unbundle',
3011 3060 [
3012 3061 (b'', b'as-push', None, b'pretend the bundle comes from a push'),
3013 3062 ]
3014 3063 + formatteropts,
3015 3064 b'BUNDLE_FILE',
3016 3065 )
3017 3066 def perf_unbundle(ui, repo, fname, **opts):
3018 3067 """benchmark application of a bundle in a repository.
3019 3068
3020 3069 This does not include the final transaction processing
3021 3070
3022 3071 The --as-push option makes the unbundle operation appear as if it comes from
3023 3072 a client push. It changes some aspects of the processing and the associated
3024 3073 performance profile.
3025 3074 """
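# Illustrative invocations (BUNDLE_FILE created beforehand, e.g. with `hg bundle`):
#   hg perf::unbundle bundle.hg
#   hg perf::unbundle --as-push bundle.hg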
3026 3075
3027 3076 from mercurial import exchange
3028 3077 from mercurial import bundle2
3029 3078 from mercurial import transaction
3030 3079
3031 3080 opts = _byteskwargs(opts)
3032 3081
3033 3082 ### some compatibility hotfix
3034 3083 #
3035 3084 # the data attribute is dropped in 63edc384d3b7, a changeset introducing a
3036 3085 # critical regression that breaks transaction rollback for files that are
3037 3086 # de-inlined.
3038 3087 method = transaction.transaction._addentry
3039 3088 pre_63edc384d3b7 = "data" in getargspec(method).args
3040 3089 # the `detailed_exit_code` attribute is introduced in 33c0c25d0b0f,
3041 3090 # a changeset that is a close descendant of 18415fc918a1, the changeset
3042 3091 # that concludes the fix run for the bug introduced in 63edc384d3b7.
3043 3092 args = getargspec(error.Abort.__init__).args
3044 3093 post_18415fc918a1 = "detailed_exit_code" in args
3045 3094
3046 3095 unbundle_source = b'perf::unbundle'
3047 3096 if opts[b'as_push']:
3048 3097 unbundle_source = b'push'
3049 3098
3050 3099 old_max_inline = None
3051 3100 try:
3052 3101 if not (pre_63edc384d3b7 or post_18415fc918a1):
3053 3102 # disable inlining
3054 3103 old_max_inline = mercurial.revlog._maxinline
3055 3104 # large enough to never happen
3056 3105 mercurial.revlog._maxinline = 2 ** 50
3057 3106
3058 3107 with repo.lock():
3059 3108 bundle = [None, None]
3060 3109 orig_quiet = repo.ui.quiet
3061 3110 try:
3062 3111 repo.ui.quiet = True
3063 3112 with open(fname, mode="rb") as f:
3064 3113
3065 3114 def noop_report(*args, **kwargs):
3066 3115 pass
3067 3116
3068 3117 def setup():
3069 3118 gen, tr = bundle
3070 3119 if tr is not None:
3071 3120 tr.abort()
3072 3121 bundle[:] = [None, None]
3073 3122 f.seek(0)
3074 3123 bundle[0] = exchange.readbundle(ui, f, fname)
3075 3124 bundle[1] = repo.transaction(b'perf::unbundle')
3076 3125 # silence the transaction
3077 3126 bundle[1]._report = noop_report
3078 3127
3079 3128 def apply():
3080 3129 gen, tr = bundle
3081 3130 bundle2.applybundle(
3082 3131 repo,
3083 3132 gen,
3084 3133 tr,
3085 3134 source=unbundle_source,
3086 3135 url=fname,
3087 3136 )
3088 3137
3089 3138 timer, fm = gettimer(ui, opts)
3090 3139 timer(apply, setup=setup)
3091 3140 fm.end()
3092 3141 finally:
3093 3142 repo.ui.quiet = orig_quiet  # restore the original verbosity
3094 3143 gen, tr = bundle
3095 3144 if tr is not None:
3096 3145 tr.abort()
3097 3146 finally:
3098 3147 if old_max_inline is not None:
3099 3148 mercurial.revlog._maxinline = old_max_inline
3100 3149
3101 3150
3102 3151 @command(
3103 3152 b'perf::unidiff|perfunidiff',
3104 3153 revlogopts
3105 3154 + formatteropts
3106 3155 + [
3107 3156 (
3108 3157 b'',
3109 3158 b'count',
3110 3159 1,
3111 3160 b'number of revisions to test (when using --startrev)',
3112 3161 ),
3113 3162 (b'', b'alldata', False, b'test unidiffs for all associated revisions'),
3114 3163 ],
3115 3164 b'-c|-m|FILE REV',
3116 3165 )
3117 3166 def perfunidiff(ui, repo, file_, rev=None, count=None, **opts):
3118 3167 """benchmark a unified diff between revisions
3119 3168
3120 3169 This doesn't include any copy tracing - it's just a unified diff
3121 3170 of the texts.
3122 3171
3123 3172 By default, benchmark a diff between its delta parent and itself.
3124 3173
3125 3174 With ``--count``, benchmark diffs between delta parents and self for N
3126 3175 revisions starting at the specified revision.
3127 3176
3128 3177 With ``--alldata``, assume the requested revision is a changeset and
3129 3178 measure diffs for all changes related to that changeset (manifest
3130 3179 and filelogs).
3131 3180 """
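# Illustrative invocation (assuming the perf extension is enabled):
#   hg perf::unidiff -c 10 --count 10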
3132 3181 opts = _byteskwargs(opts)
3133 3182 if opts[b'alldata']:
3134 3183 opts[b'changelog'] = True
3135 3184
3136 3185 if opts.get(b'changelog') or opts.get(b'manifest'):
3137 3186 file_, rev = None, file_
3138 3187 elif rev is None:
3139 3188 raise error.CommandError(b'perfunidiff', b'invalid arguments')
3140 3189
3141 3190 textpairs = []
3142 3191
3143 3192 r = cmdutil.openrevlog(repo, b'perfunidiff', file_, opts)
3144 3193
3145 3194 startrev = r.rev(r.lookup(rev))
3146 3195 for rev in range(startrev, min(startrev + count, len(r) - 1)):
3147 3196 if opts[b'alldata']:
3148 3197 # Load revisions associated with changeset.
3149 3198 ctx = repo[rev]
3150 3199 mtext = _manifestrevision(repo, ctx.manifestnode())
3151 3200 for pctx in ctx.parents():
3152 3201 pman = _manifestrevision(repo, pctx.manifestnode())
3153 3202 textpairs.append((pman, mtext))
3154 3203
3155 3204 # Load filelog revisions by iterating manifest delta.
3156 3205 man = ctx.manifest()
3157 3206 pman = ctx.p1().manifest()
3158 3207 for filename, change in pman.diff(man).items():
3159 3208 fctx = repo.file(filename)
3160 3209 f1 = fctx.revision(change[0][0] or -1)
3161 3210 f2 = fctx.revision(change[1][0] or -1)
3162 3211 textpairs.append((f1, f2))
3163 3212 else:
3164 3213 dp = r.deltaparent(rev)
3165 3214 textpairs.append((r.revision(dp), r.revision(rev)))
3166 3215
3167 3216 def d():
3168 3217 for left, right in textpairs:
3169 3218 # The date strings don't matter, so we pass empty strings.
3170 3219 headerlines, hunks = mdiff.unidiff(
3171 3220 left, b'', right, b'', b'left', b'right', binary=False
3172 3221 )
3173 3222 # consume iterators in roughly the way patch.py does
3174 3223 b'\n'.join(headerlines)
3175 3224 b''.join(sum((list(hlines) for hrange, hlines in hunks), []))
3176 3225
3177 3226 timer, fm = gettimer(ui, opts)
3178 3227 timer(d)
3179 3228 fm.end()
3180 3229
3181 3230
3182 3231 @command(b'perf::diffwd|perfdiffwd', formatteropts)
3183 3232 def perfdiffwd(ui, repo, **opts):
3184 3233 """Profile diff of working directory changes"""
3185 3234 opts = _byteskwargs(opts)
3186 3235 timer, fm = gettimer(ui, opts)
3187 3236 options = {
3188 3237 'w': 'ignore_all_space',
3189 3238 'b': 'ignore_space_change',
3190 3239 'B': 'ignore_blank_lines',
3191 3240 }
3192 3241
3193 3242 for diffopt in ('', 'w', 'b', 'B', 'wB'):
3194 3243 opts = {options[c]: b'1' for c in diffopt}
3195 3244
3196 3245 def d():
3197 3246 ui.pushbuffer()
3198 3247 commands.diff(ui, repo, **opts)
3199 3248 ui.popbuffer()
3200 3249
3201 3250 diffopt = diffopt.encode('ascii')
3202 3251 title = b'diffopts: %s' % (diffopt and (b'-' + diffopt) or b'none')
3203 3252 timer(d, title=title)
3204 3253 fm.end()
3205 3254
3206 3255
3207 3256 @command(
3208 3257 b'perf::revlogindex|perfrevlogindex',
3209 3258 revlogopts + formatteropts,
3210 3259 b'-c|-m|FILE',
3211 3260 )
3212 3261 def perfrevlogindex(ui, repo, file_=None, **opts):
3213 3262 """Benchmark operations against a revlog index.
3214 3263
3215 3264 This tests constructing a revlog instance, reading index data,
3216 3265 parsing index data, and performing various operations related to
3217 3266 index data.
3218 3267 """
3219 3268
3220 3269 opts = _byteskwargs(opts)
3221 3270
3222 3271 rl = cmdutil.openrevlog(repo, b'perfrevlogindex', file_, opts)
3223 3272
3224 3273 opener = getattr(rl, 'opener') # trick linter
3225 3274 # compat with hg <= 5.8
3226 3275 radix = getattr(rl, 'radix', None)
3227 3276 indexfile = getattr(rl, '_indexfile', None)
3228 3277 if indexfile is None:
3229 3278 # compatibility with <= hg-5.8
3230 3279 indexfile = getattr(rl, 'indexfile')
3231 3280 data = opener.read(indexfile)
3232 3281
3233 3282 header = struct.unpack(b'>I', data[0:4])[0]
3234 3283 version = header & 0xFFFF
3235 3284 if version == 1:
3236 3285 inline = header & (1 << 16)
3237 3286 else:
3238 3287 raise error.Abort(b'unsupported revlog version: %d' % version)
3239 3288
3240 3289 parse_index_v1 = getattr(mercurial.revlog, 'parse_index_v1', None)
3241 3290 if parse_index_v1 is None:
3242 3291 parse_index_v1 = mercurial.revlog.revlogio().parseindex
3243 3292
3244 3293 rllen = len(rl)
3245 3294
3246 3295 node0 = rl.node(0)
3247 3296 node25 = rl.node(rllen // 4)
3248 3297 node50 = rl.node(rllen // 2)
3249 3298 node75 = rl.node(rllen // 4 * 3)
3250 3299 node100 = rl.node(rllen - 1)
3251 3300
3252 3301 allrevs = range(rllen)
3253 3302 allrevsrev = list(reversed(allrevs))
3254 3303 allnodes = [rl.node(rev) for rev in range(rllen)]
3255 3304 allnodesrev = list(reversed(allnodes))
3256 3305
3257 3306 def constructor():
3258 3307 if radix is not None:
3259 3308 revlog(opener, radix=radix)
3260 3309 else:
3261 3310 # hg <= 5.8
3262 3311 revlog(opener, indexfile=indexfile)
3263 3312
3264 3313 def read():
3265 3314 with opener(indexfile) as fh:
3266 3315 fh.read()
3267 3316
3268 3317 def parseindex():
3269 3318 parse_index_v1(data, inline)
3270 3319
3271 3320 def getentry(revornode):
3272 3321 index = parse_index_v1(data, inline)[0]
3273 3322 index[revornode]
3274 3323
3275 3324 def getentries(revs, count=1):
3276 3325 index = parse_index_v1(data, inline)[0]
3277 3326
3278 3327 for i in range(count):
3279 3328 for rev in revs:
3280 3329 index[rev]
3281 3330
3282 3331 def resolvenode(node):
3283 3332 index = parse_index_v1(data, inline)[0]
3284 3333 rev = getattr(index, 'rev', None)
3285 3334 if rev is None:
3286 3335 nodemap = getattr(parse_index_v1(data, inline)[0], 'nodemap', None)
3287 3336 # This only works for the C code.
3288 3337 if nodemap is None:
3289 3338 return
3290 3339 rev = nodemap.__getitem__
3291 3340
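# A node absent from the index raises RevlogError; swallowing it keeps the
# timing focused on the lookup cost itself (the 'look up missing node'
# bench depends on this).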
3292 3341 try:
3293 3342 rev(node)
3294 3343 except error.RevlogError:
3295 3344 pass
3296 3345
3297 3346 def resolvenodes(nodes, count=1):
3298 3347 index = parse_index_v1(data, inline)[0]
3299 3348 rev = getattr(index, 'rev', None)
3300 3349 if rev is None:
3301 3350 nodemap = getattr(parse_index_v1(data, inline)[0], 'nodemap', None)
3302 3351 # This only works for the C code.
3303 3352 if nodemap is None:
3304 3353 return
3305 3354 rev = nodemap.__getitem__
3306 3355
3307 3356 for i in range(count):
3308 3357 for node in nodes:
3309 3358 try:
3310 3359 rev(node)
3311 3360 except error.RevlogError:
3312 3361 pass
3313 3362
3314 3363 benches = [
3315 3364 (constructor, b'revlog constructor'),
3316 3365 (read, b'read'),
3317 3366 (parseindex, b'create index object'),
3318 3367 (lambda: getentry(0), b'retrieve index entry for rev 0'),
3319 3368 (lambda: resolvenode(b'a' * 20), b'look up missing node'),
3320 3369 (lambda: resolvenode(node0), b'look up node at rev 0'),
3321 3370 (lambda: resolvenode(node25), b'look up node at 1/4 len'),
3322 3371 (lambda: resolvenode(node50), b'look up node at 1/2 len'),
3323 3372 (lambda: resolvenode(node75), b'look up node at 3/4 len'),
3324 3373 (lambda: resolvenode(node100), b'look up node at tip'),
3325 3374 # 2x variation is to measure caching impact.
3326 3375 (lambda: resolvenodes(allnodes), b'look up all nodes (forward)'),
3327 3376 (lambda: resolvenodes(allnodes, 2), b'look up all nodes 2x (forward)'),
3328 3377 (lambda: resolvenodes(allnodesrev), b'look up all nodes (reverse)'),
3329 3378 (
3330 3379 lambda: resolvenodes(allnodesrev, 2),
3331 3380 b'look up all nodes 2x (reverse)',
3332 3381 ),
3333 3382 (lambda: getentries(allrevs), b'retrieve all index entries (forward)'),
3334 3383 (
3335 3384 lambda: getentries(allrevs, 2),
3336 3385 b'retrieve all index entries 2x (forward)',
3337 3386 ),
3338 3387 (
3339 3388 lambda: getentries(allrevsrev),
3340 3389 b'retrieve all index entries (reverse)',
3341 3390 ),
3342 3391 (
3343 3392 lambda: getentries(allrevsrev, 2),
3344 3393 b'retrieve all index entries 2x (reverse)',
3345 3394 ),
3346 3395 ]
3347 3396
3348 3397 for fn, title in benches:
3349 3398 timer, fm = gettimer(ui, opts)
3350 3399 timer(fn, title=title)
3351 3400 fm.end()
3352 3401
3353 3402
3354 3403 @command(
3355 3404 b'perf::revlogrevisions|perfrevlogrevisions',
3356 3405 revlogopts
3357 3406 + formatteropts
3358 3407 + [
3359 3408 (b'd', b'dist', 100, b'distance between the revisions'),
3360 3409 (b's', b'startrev', 0, b'revision to start reading at'),
3361 3410 (b'', b'reverse', False, b'read in reverse'),
3362 3411 ],
3363 3412 b'-c|-m|FILE',
3364 3413 )
3365 3414 def perfrevlogrevisions(
3366 3415 ui, repo, file_=None, startrev=0, reverse=False, **opts
3367 3416 ):
3368 3417 """Benchmark reading a series of revisions from a revlog.
3369 3418
3370 3419 By default, we read every ``-d/--dist`` revision from 0 to tip of
3371 3420 the specified revlog.
3372 3421
3373 3422 The start revision can be defined via ``-s/--startrev``.
3374 3423 """
3375 3424 opts = _byteskwargs(opts)
3376 3425
3377 3426 rl = cmdutil.openrevlog(repo, b'perfrevlogrevisions', file_, opts)
3378 3427 rllen = getlen(ui)(rl)
3379 3428
3380 3429 if startrev < 0:
3381 3430 startrev = rllen + startrev
3382 3431
3383 3432 def d():
3384 3433 rl.clearcaches()
3385 3434
3386 3435 beginrev = startrev
3387 3436 endrev = rllen
3388 3437 dist = opts[b'dist']
3389 3438
3390 3439 if reverse:
3391 3440 beginrev, endrev = endrev - 1, beginrev - 1
3392 3441 dist = -1 * dist
3393 3442
3394 3443 for x in _xrange(beginrev, endrev, dist):
3395 3444 # Old revisions don't support passing int.
3396 3445 n = rl.node(x)
3397 3446 rl.revision(n)
3398 3447
3399 3448 timer, fm = gettimer(ui, opts)
3400 3449 timer(d)
3401 3450 fm.end()
3402 3451
3403 3452
3404 3453 @command(
3405 3454 b'perf::revlogwrite|perfrevlogwrite',
3406 3455 revlogopts
3407 3456 + formatteropts
3408 3457 + [
3409 3458 (b's', b'startrev', 1000, b'revision to start writing at'),
3410 3459 (b'', b'stoprev', -1, b'last revision to write'),
3411 3460 (b'', b'count', 3, b'number of passes to perform'),
3412 3461 (b'', b'details', False, b'print timing for every revision tested'),
3413 3462 (b'', b'source', b'full', b'the kind of data fed into the revlog'),
3414 3463 (b'', b'lazydeltabase', True, b'try the provided delta first'),
3415 3464 (b'', b'clear-caches', True, b'clear revlog cache between calls'),
3416 3465 ],
3417 3466 b'-c|-m|FILE',
3418 3467 )
3419 3468 def perfrevlogwrite(ui, repo, file_=None, startrev=1000, stoprev=-1, **opts):
3420 3469 """Benchmark writing a series of revisions to a revlog.
3421 3470
3422 3471 Possible source values are:
3423 3472 * `full`: add from a full text (default).
3424 3473 * `parent-1`: add from a delta to the first parent
3425 3474 * `parent-2`: add from a delta to the second parent if it exists
3426 3475 (use a delta from the first parent otherwise)
3427 3476 * `parent-smallest`: add from the smallest delta (either p1 or p2)
3428 3477 * `storage`: add from the existing precomputed deltas
3429 3478
3430 3479 Note: This command measures performance in a custom way. As a
3431 3480 result, some of the global configuration of the 'perf' command does not
3432 3481 apply to it:
3433 3482
3434 3483 * ``pre-run``: disabled
3435 3484
3436 3485 * ``profile-benchmark``: disabled
3437 3486
3438 3487 * ``run-limits``: disabled, use --count instead
3439 3488 """
3440 3489 opts = _byteskwargs(opts)
3441 3490
3442 3491 rl = cmdutil.openrevlog(repo, b'perfrevlogwrite', file_, opts)
3443 3492 rllen = getlen(ui)(rl)
3444 3493 if startrev < 0:
3445 3494 startrev = rllen + startrev
3446 3495 if stoprev < 0:
3447 3496 stoprev = rllen + stoprev
3448 3497
3449 3498 lazydeltabase = opts['lazydeltabase']
3450 3499 source = opts['source']
3451 3500 clearcaches = opts['clear_caches']
3452 3501 validsource = (
3453 3502 b'full',
3454 3503 b'parent-1',
3455 3504 b'parent-2',
3456 3505 b'parent-smallest',
3457 3506 b'storage',
3458 3507 )
3459 3508 if source not in validsource:
3460 3509 raise error.Abort('invalid source type: %s' % source)
3461 3510
3462 3511 ### actually gather results
3463 3512 count = opts['count']
3464 3513 if count <= 0:
3465 3514 raise error.Abort('invalid run count: %d' % count)
3466 3515 allresults = []
3467 3516 for c in range(count):
3468 3517 timing = _timeonewrite(
3469 3518 ui,
3470 3519 rl,
3471 3520 source,
3472 3521 startrev,
3473 3522 stoprev,
3474 3523 c + 1,
3475 3524 lazydeltabase=lazydeltabase,
3476 3525 clearcaches=clearcaches,
3477 3526 )
3478 3527 allresults.append(timing)
3479 3528
3480 3529 ### consolidate the results in a single list
3481 3530 results = []
3482 3531 for idx, (rev, t) in enumerate(allresults[0]):
3483 3532 ts = [t]
3484 3533 for other in allresults[1:]:
3485 3534 orev, ot = other[idx]
3486 3535 assert orev == rev
3487 3536 ts.append(ot)
3488 3537 results.append((rev, ts))
3489 3538 resultcount = len(results)
3490 3539
3491 3540 ### Compute and display relevant statistics
3492 3541
3493 3542 # get a formatter
3494 3543 fm = ui.formatter(b'perf', opts)
3495 3544 displayall = ui.configbool(b"perf", b"all-timing", True)
3496 3545
3497 3546 # print individual details if requested
3498 3547 if opts['details']:
3499 3548 for idx, item in enumerate(results, 1):
3500 3549 rev, data = item
3501 3550 title = 'revisions #%d of %d, rev %d' % (idx, resultcount, rev)
3502 3551 formatone(fm, data, title=title, displayall=displayall)
3503 3552
3504 3553 # sorts results by median time
3505 3554 results.sort(key=lambda x: sorted(x[1])[len(x[1]) // 2])
3506 3555 # list of (name, index) to display
3507 3556 relevants = [
3508 3557 ("min", 0),
3509 3558 ("10%", resultcount * 10 // 100),
3510 3559 ("25%", resultcount * 25 // 100),
3511 3560 ("50%", resultcount * 70 // 100),
3512 3561 ("75%", resultcount * 75 // 100),
3513 3562 ("90%", resultcount * 90 // 100),
3514 3563 ("95%", resultcount * 95 // 100),
3515 3564 ("99%", resultcount * 99 // 100),
3516 3565 ("99.9%", resultcount * 999 // 1000),
3517 3566 ("99.99%", resultcount * 9999 // 10000),
3518 3567 ("99.999%", resultcount * 99999 // 100000),
3519 3568 ("max", -1),
3520 3569 ]
3521 3570 if not ui.quiet:
3522 3571 for name, idx in relevants:
3523 3572 data = results[idx]
3524 3573 title = '%s of %d, rev %d' % (name, resultcount, data[0])
3525 3574 formatone(fm, data[1], title=title, displayall=displayall)
3526 3575
3527 3576 # XXX summing that many float will not be very precise, we ignore this fact
3528 3577 # for now
3529 3578 totaltime = []
3530 3579 for item in allresults:
3531 3580 totaltime.append(
3532 3581 (
3533 3582 sum(x[1][0] for x in item),
3534 3583 sum(x[1][1] for x in item),
3535 3584 sum(x[1][2] for x in item),
3536 3585 )
3537 3586 )
3538 3587 formatone(
3539 3588 fm,
3540 3589 totaltime,
3541 3590 title="total time (%d revs)" % resultcount,
3542 3591 displayall=displayall,
3543 3592 )
3544 3593 fm.end()
3545 3594
3546 3595
3547 3596 class _faketr:
3548 3597 def add(s, x, y, z=None):
3549 3598 return None
3550 3599
3551 3600
3552 3601 def _timeonewrite(
3553 3602 ui,
3554 3603 orig,
3555 3604 source,
3556 3605 startrev,
3557 3606 stoprev,
3558 3607 runidx=None,
3559 3608 lazydeltabase=True,
3560 3609 clearcaches=True,
3561 3610 ):
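# Replay the selected revisions into a throw-away copy of the revlog and
# time each addrawrevision() call individually, returning (rev, timing) pairs.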
3562 3611 timings = []
3563 3612 tr = _faketr()
3564 3613 with _temprevlog(ui, orig, startrev) as dest:
3565 3614 if hasattr(dest, "delta_config"):
3566 3615 dest.delta_config.lazy_delta_base = lazydeltabase
3567 3616 else:
3568 3617 dest._lazydeltabase = lazydeltabase
3569 3618 revs = list(orig.revs(startrev, stoprev))
3570 3619 total = len(revs)
3571 3620 topic = 'adding'
3572 3621 if runidx is not None:
3573 3622 topic += ' (run #%d)' % runidx
3574 3623 # Support both old and new progress API
3575 3624 if util.safehasattr(ui, 'makeprogress'):
3576 3625 progress = ui.makeprogress(topic, unit='revs', total=total)
3577 3626
3578 3627 def updateprogress(pos):
3579 3628 progress.update(pos)
3580 3629
3581 3630 def completeprogress():
3582 3631 progress.complete()
3583 3632
3584 3633 else:
3585 3634
3586 3635 def updateprogress(pos):
3587 3636 ui.progress(topic, pos, unit='revs', total=total)
3588 3637
3589 3638 def completeprogress():
3590 3639 ui.progress(topic, None, unit='revs', total=total)
3591 3640
3592 3641 for idx, rev in enumerate(revs):
3593 3642 updateprogress(idx)
3594 3643 addargs, addkwargs = _getrevisionseed(orig, rev, tr, source)
3595 3644 if clearcaches:
3596 3645 dest.index.clearcaches()
3597 3646 dest.clearcaches()
3598 3647 with timeone() as r:
3599 3648 dest.addrawrevision(*addargs, **addkwargs)
3600 3649 timings.append((rev, r[0]))
3601 3650 updateprogress(total)
3602 3651 completeprogress()
3603 3652 return timings
3604 3653
3605 3654
3606 3655 def _getrevisionseed(orig, rev, tr, source):
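# Build the positional and keyword arguments expected by addrawrevision(),
# deriving either a fulltext or a cached delta according to the requested
# `source` strategy.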
3607 3656 from mercurial.node import nullid
3608 3657
3609 3658 linkrev = orig.linkrev(rev)
3610 3659 node = orig.node(rev)
3611 3660 p1, p2 = orig.parents(node)
3612 3661 flags = orig.flags(rev)
3613 3662 cachedelta = None
3614 3663 text = None
3615 3664
3616 3665 if source == b'full':
3617 3666 text = orig.revision(rev)
3618 3667 elif source == b'parent-1':
3619 3668 baserev = orig.rev(p1)
3620 3669 cachedelta = (baserev, orig.revdiff(p1, rev))
3621 3670 elif source == b'parent-2':
3622 3671 parent = p2
3623 3672 if p2 == nullid:
3624 3673 parent = p1
3625 3674 baserev = orig.rev(parent)
3626 3675 cachedelta = (baserev, orig.revdiff(parent, rev))
3627 3676 elif source == b'parent-smallest':
3628 3677 p1diff = orig.revdiff(p1, rev)
3629 3678 parent = p1
3630 3679 diff = p1diff
3631 3680 if p2 != nullid:
3632 3681 p2diff = orig.revdiff(p2, rev)
3633 3682 if len(p1diff) > len(p2diff):
3634 3683 parent = p2
3635 3684 diff = p2diff
3636 3685 baserev = orig.rev(parent)
3637 3686 cachedelta = (baserev, diff)
3638 3687 elif source == b'storage':
3639 3688 baserev = orig.deltaparent(rev)
3640 3689 cachedelta = (baserev, orig.revdiff(orig.node(baserev), rev))
3641 3690
3642 3691 return (
3643 3692 (text, tr, linkrev, p1, p2),
3644 3693 {'node': node, 'flags': flags, 'cachedelta': cachedelta},
3645 3694 )
3646 3695
3647 3696
3648 3697 @contextlib.contextmanager
3649 3698 def _temprevlog(ui, orig, truncaterev):
3650 3699 from mercurial import vfs as vfsmod
3651 3700
3652 3701 if orig._inline:
3653 3702 raise error.Abort('not supporting inline revlog (yet)')
3654 3703 revlogkwargs = {}
3655 3704 k = 'upperboundcomp'
3656 3705 if util.safehasattr(orig, k):
3657 3706 revlogkwargs[k] = getattr(orig, k)
3658 3707
3659 3708 indexfile = getattr(orig, '_indexfile', None)
3660 3709 if indexfile is None:
3661 3710 # compatibility with <= hg-5.8
3662 3711 indexfile = getattr(orig, 'indexfile')
3663 3712 origindexpath = orig.opener.join(indexfile)
3664 3713
3665 3714 datafile = getattr(orig, '_datafile', getattr(orig, 'datafile'))
3666 3715 origdatapath = orig.opener.join(datafile)
3667 3716 radix = b'revlog'
3668 3717 indexname = b'revlog.i'
3669 3718 dataname = b'revlog.d'
3670 3719
3671 3720 tmpdir = tempfile.mkdtemp(prefix='tmp-hgperf-')
3672 3721 try:
3673 3722 # copy the data file in a temporary directory
3674 3723 ui.debug('copying data in %s\n' % tmpdir)
3675 3724 destindexpath = os.path.join(tmpdir, 'revlog.i')
3676 3725 destdatapath = os.path.join(tmpdir, 'revlog.d')
3677 3726 shutil.copyfile(origindexpath, destindexpath)
3678 3727 shutil.copyfile(origdatapath, destdatapath)
3679 3728
3680 3729 # remove the data we want to add again
3681 3730 ui.debug('truncating data to be rewritten\n')
3682 3731 with open(destindexpath, 'ab') as index:
3683 3732 index.seek(0)
3684 3733 index.truncate(truncaterev * orig._io.size)
3685 3734 with open(destdatapath, 'ab') as data:
3686 3735 data.seek(0)
3687 3736 data.truncate(orig.start(truncaterev))
3688 3737
3689 3738 # instantiate a new revlog from the temporary copy
3690 3739 ui.debug('instantiating revlog from the truncated copy\n')
3691 3740 vfs = vfsmod.vfs(tmpdir)
3692 3741 vfs.options = getattr(orig.opener, 'options', None)
3693 3742
3694 3743 try:
3695 3744 dest = revlog(vfs, radix=radix, **revlogkwargs)
3696 3745 except TypeError:
3697 3746 dest = revlog(
3698 3747 vfs, indexfile=indexname, datafile=dataname, **revlogkwargs
3699 3748 )
3700 3749 if dest._inline:
3701 3750 raise error.Abort('not supporting inline revlog (yet)')
3702 3751 # make sure internals are initialized
3703 3752 dest.revision(len(dest) - 1)
3704 3753 yield dest
3705 3754 del dest, vfs
3706 3755 finally:
3707 3756 shutil.rmtree(tmpdir, True)
3708 3757
3709 3758
3710 3759 @command(
3711 3760 b'perf::revlogchunks|perfrevlogchunks',
3712 3761 revlogopts
3713 3762 + formatteropts
3714 3763 + [
3715 3764 (b'e', b'engines', b'', b'compression engines to use'),
3716 3765 (b's', b'startrev', 0, b'revision to start at'),
3717 3766 ],
3718 3767 b'-c|-m|FILE',
3719 3768 )
3720 3769 def perfrevlogchunks(ui, repo, file_=None, engines=None, startrev=0, **opts):
3721 3770 """Benchmark operations on revlog chunks.
3722 3771
3723 3772 Logically, each revlog is a collection of fulltext revisions. However,
3724 3773 stored within each revlog are "chunks" of possibly compressed data. This
3725 3774 data needs to be read and decompressed or compressed and written.
3726 3775
3727 3776 This command measures the time it takes to read+decompress and recompress
3728 3777 chunks in a revlog. It effectively isolates I/O and compression performance.
3729 3778 For measurements of higher-level operations like resolving revisions,
3730 3779 see ``perfrevlogrevisions`` and ``perfrevlogrevision``.
3731 3780 """
3732 3781 opts = _byteskwargs(opts)
3733 3782
3734 3783 rl = cmdutil.openrevlog(repo, b'perfrevlogchunks', file_, opts)
3735 3784
3736 3785 # - _chunkraw was renamed to _getsegmentforrevs
3737 3786 # - _getsegmentforrevs was moved onto the inner object
3738 3787 try:
3739 3788 segmentforrevs = rl._inner.get_segment_for_revs
3740 3789 except AttributeError:
3741 3790 try:
3742 3791 segmentforrevs = rl._getsegmentforrevs
3743 3792 except AttributeError:
3744 3793 segmentforrevs = rl._chunkraw
3745 3794
3746 3795 # Verify engines argument.
3747 3796 if engines:
3748 3797 engines = {e.strip() for e in engines.split(b',')}
3749 3798 for engine in engines:
3750 3799 try:
3751 3800 util.compressionengines[engine]
3752 3801 except KeyError:
3753 3802 raise error.Abort(b'unknown compression engine: %s' % engine)
3754 3803 else:
3755 3804 engines = []
3756 3805 for e in util.compengines:
3757 3806 engine = util.compengines[e]
3758 3807 try:
3759 3808 if engine.available():
3760 3809 engine.revlogcompressor().compress(b'dummy')
3761 3810 engines.append(e)
3762 3811 except NotImplementedError:
3763 3812 pass
3764 3813
3765 3814 revs = list(rl.revs(startrev, len(rl) - 1))
3766 3815
3767 3816 @contextlib.contextmanager
3768 3817 def reading(rl):
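# Compatibility shim: newer revlogs manage their own file handles through
# reading() (yield None in that case); older ones need an explicitly opened
# index file (inline revlogs) or data file.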
3769 3818 if getattr(rl, 'reading', None) is not None:
3770 3819 with rl.reading():
3771 3820 yield None
3772 3821 elif rl._inline:
3773 3822 indexfile = getattr(rl, '_indexfile', None)
3774 3823 if indexfile is None:
3775 3824 # compatibility with <= hg-5.8
3776 3825 indexfile = getattr(rl, 'indexfile')
3777 3826 yield getsvfs(repo)(indexfile)
3778 3827 else:
3779 3828 datafile = getattr(rl, '_datafile', getattr(rl, 'datafile'))
3780 3829 yield getsvfs(repo)(datafile)
3781 3830
3782 3831 if getattr(rl, 'reading', None) is not None:
3783 3832
3784 3833 @contextlib.contextmanager
3785 3834 def lazy_reading(rl):
3786 3835 with rl.reading():
3787 3836 yield
3788 3837
3789 3838 else:
3790 3839
3791 3840 @contextlib.contextmanager
3792 3841 def lazy_reading(rl):
3793 3842 yield
3794 3843
3795 3844 def doread():
3796 3845 rl.clearcaches()
3797 3846 for rev in revs:
3798 3847 with lazy_reading(rl):
3799 3848 segmentforrevs(rev, rev)
3800 3849
3801 3850 def doreadcachedfh():
3802 3851 rl.clearcaches()
3803 3852 with reading(rl) as fh:
3804 3853 if fh is not None:
3805 3854 for rev in revs:
3806 3855 segmentforrevs(rev, rev, df=fh)
3807 3856 else:
3808 3857 for rev in revs:
3809 3858 segmentforrevs(rev, rev)
3810 3859
3811 3860 def doreadbatch():
3812 3861 rl.clearcaches()
3813 3862 with lazy_reading(rl):
3814 3863 segmentforrevs(revs[0], revs[-1])
3815 3864
3816 3865 def doreadbatchcachedfh():
3817 3866 rl.clearcaches()
3818 3867 with reading(rl) as fh:
3819 3868 if fh is not None:
3820 3869 segmentforrevs(revs[0], revs[-1], df=fh)
3821 3870 else:
3822 3871 segmentforrevs(revs[0], revs[-1])
3823 3872
3824 3873 def dochunk():
3825 3874 rl.clearcaches()
3826 3875 # chunk used to be available directly on the revlog
3827 3876 _chunk = getattr(rl, '_inner', rl)._chunk
3828 3877 with reading(rl) as fh:
3829 3878 if fh is not None:
3830 3879 for rev in revs:
3831 3880 _chunk(rev, df=fh)
3832 3881 else:
3833 3882 for rev in revs:
3834 3883 _chunk(rev)
3835 3884
3836 3885 chunks = [None]
3837 3886
3838 3887 def dochunkbatch():
3839 3888 rl.clearcaches()
3840 3889 _chunks = getattr(rl, '_inner', rl)._chunks
3841 3890 with reading(rl) as fh:
3842 3891 if fh is not None:
3843 3892 # Save chunks as a side-effect.
3844 3893 chunks[0] = _chunks(revs, df=fh)
3845 3894 else:
3846 3895 # Save chunks as a side-effect.
3847 3896 chunks[0] = _chunks(revs)
3848 3897
3849 3898 def docompress(compressor):
3850 3899 rl.clearcaches()
3851 3900
3852 3901 compressor_holder = getattr(rl, '_inner', rl)
3853 3902
3854 3903 try:
3855 3904 # Swap in the requested compression engine.
3856 3905 oldcompressor = compressor_holder._compressor
3857 3906 compressor_holder._compressor = compressor
3858 3907 for chunk in chunks[0]:
3859 3908 rl.compress(chunk)
3860 3909 finally:
3861 3910 compressor_holder._compressor = oldcompressor
3862 3911
3863 3912 benches = [
3864 3913 (lambda: doread(), b'read'),
3865 3914 (lambda: doreadcachedfh(), b'read w/ reused fd'),
3866 3915 (lambda: doreadbatch(), b'read batch'),
3867 3916 (lambda: doreadbatchcachedfh(), b'read batch w/ reused fd'),
3868 3917 (lambda: dochunk(), b'chunk'),
3869 3918 (lambda: dochunkbatch(), b'chunk batch'),
3870 3919 ]
3871 3920
3872 3921 for engine in sorted(engines):
3873 3922 compressor = util.compengines[engine].revlogcompressor()
3874 3923 benches.append(
3875 3924 (
3876 3925 functools.partial(docompress, compressor),
3877 3926 b'compress w/ %s' % engine,
3878 3927 )
3879 3928 )
3880 3929
3881 3930 for fn, title in benches:
3882 3931 timer, fm = gettimer(ui, opts)
3883 3932 timer(fn, title=title)
3884 3933 fm.end()
3885 3934
3886 3935
3887 3936 @command(
3888 3937 b'perf::revlogrevision|perfrevlogrevision',
3889 3938 revlogopts
3890 3939 + formatteropts
3891 3940 + [(b'', b'cache', False, b'use caches instead of clearing')],
3892 3941 b'-c|-m|FILE REV',
3893 3942 )
3894 3943 def perfrevlogrevision(ui, repo, file_, rev=None, cache=None, **opts):
3895 3944 """Benchmark obtaining a revlog revision.
3896 3945
3897 3946 Obtaining a revlog revision consists of roughly the following steps:
3898 3947
3899 3948 1. Compute the delta chain
3900 3949 2. Slice the delta chain if applicable
3901 3950 3. Obtain the raw chunks for that delta chain
3902 3951 4. Decompress each raw chunk
3903 3952 5. Apply binary patches to obtain fulltext
3904 3953 6. Verify hash of fulltext
3905 3954
3906 3955 This command measures the time spent in each of these phases.
3907 3956 """
3908 3957 opts = _byteskwargs(opts)
3909 3958
3910 3959 if opts.get(b'changelog') or opts.get(b'manifest'):
3911 3960 file_, rev = None, file_
3912 3961 elif rev is None:
3913 3962 raise error.CommandError(b'perfrevlogrevision', b'invalid arguments')
3914 3963
3915 3964 r = cmdutil.openrevlog(repo, b'perfrevlogrevision', file_, opts)
3916 3965
3917 3966 # _chunkraw was renamed to _getsegmentforrevs.
3918 3967 try:
3919 3968 segmentforrevs = r._inner.get_segment_for_revs
3920 3969 except AttributeError:
3921 3970 try:
3922 3971 segmentforrevs = r._getsegmentforrevs
3923 3972 except AttributeError:
3924 3973 segmentforrevs = r._chunkraw
3925 3974
3926 3975 node = r.lookup(rev)
3927 3976 rev = r.rev(node)
3928 3977
3929 3978 if getattr(r, 'reading', None) is not None:
3930 3979
3931 3980 @contextlib.contextmanager
3932 3981 def lazy_reading(r):
3933 3982 with r.reading():
3934 3983 yield
3935 3984
3936 3985 else:
3937 3986
3938 3987 @contextlib.contextmanager
3939 3988 def lazy_reading(r):
3940 3989 yield
3941 3990
3942 3991 def getrawchunks(data, chain):
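# Carve the raw, still-compressed chunk of every revision in the delta chain
# out of the pre-read segments; inline revlogs interleave index entries with
# the data, hence the (rev + 1) * iosize offset below.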
3943 3992 start = r.start
3944 3993 length = r.length
3945 3994 inline = r._inline
3946 3995 try:
3947 3996 iosize = r.index.entry_size
3948 3997 except AttributeError:
3949 3998 iosize = r._io.size
3950 3999 buffer = util.buffer
3951 4000
3952 4001 chunks = []
3953 4002 ladd = chunks.append
3954 4003 for idx, item in enumerate(chain):
3955 4004 offset = start(item[0])
3956 4005 bits = data[idx]
3957 4006 for rev in item:
3958 4007 chunkstart = start(rev)
3959 4008 if inline:
3960 4009 chunkstart += (rev + 1) * iosize
3961 4010 chunklength = length(rev)
3962 4011 ladd(buffer(bits, chunkstart - offset, chunklength))
3963 4012
3964 4013 return chunks
3965 4014
3966 4015 def dodeltachain(rev):
3967 4016 if not cache:
3968 4017 r.clearcaches()
3969 4018 r._deltachain(rev)
3970 4019
3971 4020 def doread(chain):
3972 4021 if not cache:
3973 4022 r.clearcaches()
3974 4023 for item in slicedchain:
3975 4024 with lazy_reading(r):
3976 4025 segmentforrevs(item[0], item[-1])
3977 4026
3978 4027 def doslice(r, chain, size):
3979 4028 for s in slicechunk(r, chain, targetsize=size):
3980 4029 pass
3981 4030
3982 4031 def dorawchunks(data, chain):
3983 4032 if not cache:
3984 4033 r.clearcaches()
3985 4034 getrawchunks(data, chain)
3986 4035
3987 4036 def dodecompress(chunks):
3988 4037 decomp = r.decompress
3989 4038 for chunk in chunks:
3990 4039 decomp(chunk)
3991 4040
3992 4041 def dopatch(text, bins):
3993 4042 if not cache:
3994 4043 r.clearcaches()
3995 4044 mdiff.patches(text, bins)
3996 4045
3997 4046 def dohash(text):
3998 4047 if not cache:
3999 4048 r.clearcaches()
4000 4049 r.checkhash(text, node, rev=rev)
4001 4050
4002 4051 def dorevision():
4003 4052 if not cache:
4004 4053 r.clearcaches()
4005 4054 r.revision(node)
4006 4055
4007 4056 try:
4008 4057 from mercurial.revlogutils.deltas import slicechunk
4009 4058 except ImportError:
4010 4059 slicechunk = getattr(revlog, '_slicechunk', None)
4011 4060
4012 4061 size = r.length(rev)
4013 4062 chain = r._deltachain(rev)[0]
4014 4063
4015 4064 with_sparse_read = False
4016 4065 if hasattr(r, 'data_config'):
4017 4066 with_sparse_read = r.data_config.with_sparse_read
4018 4067 elif hasattr(r, '_withsparseread'):
4019 4068 with_sparse_read = r._withsparseread
4020 4069 if with_sparse_read:
4021 4070 slicedchain = tuple(slicechunk(r, chain, targetsize=size))
4022 4071 else:
4023 4072 slicedchain = (chain,)
4024 4073 data = [segmentforrevs(seg[0], seg[-1])[1] for seg in slicedchain]
4025 4074 rawchunks = getrawchunks(data, slicedchain)
4026 4075 bins = r._inner._chunks(chain)
4027 4076 text = bytes(bins[0])
4028 4077 bins = bins[1:]
4029 4078 text = mdiff.patches(text, bins)
4030 4079
4031 4080 benches = [
4032 4081 (lambda: dorevision(), b'full'),
4033 4082 (lambda: dodeltachain(rev), b'deltachain'),
4034 4083 (lambda: doread(chain), b'read'),
4035 4084 ]
4036 4085
4037 4086 if with_sparse_read:
4038 4087 slicing = (lambda: doslice(r, chain, size), b'slice-sparse-chain')
4039 4088 benches.append(slicing)
4040 4089
4041 4090 benches.extend(
4042 4091 [
4043 4092 (lambda: dorawchunks(data, slicedchain), b'rawchunks'),
4044 4093 (lambda: dodecompress(rawchunks), b'decompress'),
4045 4094 (lambda: dopatch(text, bins), b'patch'),
4046 4095 (lambda: dohash(text), b'hash'),
4047 4096 ]
4048 4097 )
4049 4098
4050 4099 timer, fm = gettimer(ui, opts)
4051 4100 for fn, title in benches:
4052 4101 timer(fn, title=title)
4053 4102 fm.end()
4054 4103
4055 4104
4056 4105 @command(
4057 4106 b'perf::revset|perfrevset',
4058 4107 [
4059 4108 (b'C', b'clear', False, b'clear volatile cache between each call.'),
4060 4109 (b'', b'contexts', False, b'obtain changectx for each revision'),
4061 4110 ]
4062 4111 + formatteropts,
4063 4112 b"REVSET",
4064 4113 )
4065 4114 def perfrevset(ui, repo, expr, clear=False, contexts=False, **opts):
4066 4115 """benchmark the execution time of a revset
4067 4116
4068 4117 Use the --clean option if need to evaluate the impact of build volatile
4069 4118 revisions set cache on the revset execution. Volatile cache hold filtered
4070 4119 and obsolete related cache."""
4071 4120 opts = _byteskwargs(opts)
4072 4121
4073 4122 timer, fm = gettimer(ui, opts)
4074 4123
4075 4124 def d():
4076 4125 if clear:
4077 4126 repo.invalidatevolatilesets()
4078 4127 if contexts:
4079 4128 for ctx in repo.set(expr):
4080 4129 pass
4081 4130 else:
4082 4131 for r in repo.revs(expr):
4083 4132 pass
4084 4133
4085 4134 timer(d)
4086 4135 fm.end()
4087 4136
4088 4137
4089 4138 @command(
4090 4139 b'perf::volatilesets|perfvolatilesets',
4091 4140 [
4092 4141 (b'', b'clear-obsstore', False, b'drop obsstore between each call.'),
4093 4142 ]
4094 4143 + formatteropts,
4095 4144 )
4096 4145 def perfvolatilesets(ui, repo, *names, **opts):
4097 4146 """benchmark the computation of various volatile set
4098 4147
4099 4148 Volatile set computes element related to filtering and obsolescence."""
4100 4149 opts = _byteskwargs(opts)
4101 4150 timer, fm = gettimer(ui, opts)
4102 4151 repo = repo.unfiltered()
4103 4152
4104 4153 def getobs(name):
4105 4154 def d():
4106 4155 repo.invalidatevolatilesets()
4107 4156 if opts[b'clear_obsstore']:
4108 4157 clearfilecache(repo, b'obsstore')
4109 4158 obsolete.getrevs(repo, name)
4110 4159
4111 4160 return d
4112 4161
4113 4162 allobs = sorted(obsolete.cachefuncs)
4114 4163 if names:
4115 4164 allobs = [n for n in allobs if n in names]
4116 4165
4117 4166 for name in allobs:
4118 4167 timer(getobs(name), title=name)
4119 4168
4120 4169 def getfiltered(name):
4121 4170 def d():
4122 4171 repo.invalidatevolatilesets()
4123 4172 if opts[b'clear_obsstore']:
4124 4173 clearfilecache(repo, b'obsstore')
4125 4174 repoview.filterrevs(repo, name)
4126 4175
4127 4176 return d
4128 4177
4129 4178 allfilter = sorted(repoview.filtertable)
4130 4179 if names:
4131 4180 allfilter = [n for n in allfilter if n in names]
4132 4181
4133 4182 for name in allfilter:
4134 4183 timer(getfiltered(name), title=name)
4135 4184 fm.end()
4136 4185
4137 4186
4138 4187 @command(
4139 4188 b'perf::branchmap|perfbranchmap',
4140 4189 [
4141 4190 (b'f', b'full', False, b'Includes build time of subset'),
4142 4191 (
4143 4192 b'',
4144 4193 b'clear-revbranch',
4145 4194 False,
4146 4195 b'purge the revbranch cache between computation',
4147 4196 ),
4148 4197 ]
4149 4198 + formatteropts,
4150 4199 )
4151 4200 def perfbranchmap(ui, repo, *filternames, **opts):
4152 4201 """benchmark the update of a branchmap
4153 4202
4154 4203 This benchmarks the full repo.branchmap() call with read and write disabled
4155 4204 """
4156 4205 opts = _byteskwargs(opts)
4157 4206 full = opts.get(b"full", False)
4158 4207 clear_revbranch = opts.get(b"clear_revbranch", False)
4159 4208 timer, fm = gettimer(ui, opts)
4160 4209
4161 4210 def getbranchmap(filtername):
4162 4211 """generate a benchmark function for the filtername"""
4163 4212 if filtername is None:
4164 4213 view = repo
4165 4214 else:
4166 4215 view = repo.filtered(filtername)
4167 4216 if util.safehasattr(view._branchcaches, '_per_filter'):
4168 4217 filtered = view._branchcaches._per_filter
4169 4218 else:
4170 4219 # older versions
4171 4220 filtered = view._branchcaches
4172 4221
4173 4222 def d():
4174 4223 if clear_revbranch:
4175 4224 repo.revbranchcache()._clear()
4176 4225 if full:
4177 4226 view._branchcaches.clear()
4178 4227 else:
4179 4228 filtered.pop(filtername, None)
4180 4229 view.branchmap()
4181 4230
4182 4231 return d
4183 4232
4184 4233 # order filters from the smaller subsets to the bigger ones
4185 4234 possiblefilters = set(repoview.filtertable)
4186 4235 if filternames:
4187 4236 possiblefilters &= set(filternames)
4188 4237 subsettable = getbranchmapsubsettable()
4189 4238 allfilters = []
4190 4239 while possiblefilters:
4191 4240 for name in possiblefilters:
4192 4241 subset = subsettable.get(name)
4193 4242 if subset not in possiblefilters:
4194 4243 break
4195 4244 else:
4196 4245 assert False, b'subset cycle %s!' % possiblefilters
4197 4246 allfilters.append(name)
4198 4247 possiblefilters.remove(name)
4199 4248
4200 4249 # warm the cache
4201 4250 if not full:
4202 4251 for name in allfilters:
4203 4252 repo.filtered(name).branchmap()
4204 4253 if not filternames or b'unfiltered' in filternames:
4205 4254 # add unfiltered
4206 4255 allfilters.append(None)
4207 4256
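# Disable on-disk branch cache reads and writes so the benchmark measures the
# in-memory branchmap computation only; which attributes need stubbing out
# depends on the Mercurial version.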
4208 if util.safehasattr(branchmap.branchcache, 'fromfile'):
4257 old_branch_cache_from_file = None
4258 branchcacheread = None
4259 if util.safehasattr(branchmap, 'branch_cache_from_file'):
4260 old_branch_cache_from_file = branchmap.branch_cache_from_file
4261 branchmap.branch_cache_from_file = lambda *args: None
4262 elif util.safehasattr(branchmap.branchcache, 'fromfile'):
4209 4263 branchcacheread = safeattrsetter(branchmap.branchcache, b'fromfile')
4210 4264 branchcacheread.set(classmethod(lambda *args: None))
4211 4265 else:
4212 4266 # older versions
4213 4267 branchcacheread = safeattrsetter(branchmap, b'read')
4214 4268 branchcacheread.set(lambda *args: None)
4269 if util.safehasattr(branchmap, '_LocalBranchCache'):
4270 branchcachewrite = safeattrsetter(branchmap._LocalBranchCache, b'write')
4271 branchcachewrite.set(lambda *args: None)
4272 else:
4215 4273 branchcachewrite = safeattrsetter(branchmap.branchcache, b'write')
4216 4274 branchcachewrite.set(lambda *args: None)
4217 4275 try:
4218 4276 for name in allfilters:
4219 4277 printname = name
4220 4278 if name is None:
4221 4279 printname = b'unfiltered'
4222 4280 timer(getbranchmap(name), title=printname)
4223 4281 finally:
4282 if old_branch_cache_from_file is not None:
4283 branchmap.branch_cache_from_file = old_branch_cache_from_file
4284 if branchcacheread is not None:
4224 4285 branchcacheread.restore()
4225 4286 branchcachewrite.restore()
4226 4287 fm.end()
4227 4288
4228 4289
4229 4290 @command(
4230 4291 b'perf::branchmapupdate|perfbranchmapupdate',
4231 4292 [
4232 4293 (b'', b'base', [], b'subset of revision to start from'),
4233 4294 (b'', b'target', [], b'subset of revision to end with'),
4234 4295 (b'', b'clear-caches', False, b'clear cache between each runs'),
4235 4296 ]
4236 4297 + formatteropts,
4237 4298 )
4238 4299 def perfbranchmapupdate(ui, repo, base=(), target=(), **opts):
4239 4300 """benchmark branchmap update from for <base> revs to <target> revs
4240 4301
4241 4302 If `--clear-caches` is passed, the following items will be reset before
4242 4303 each update:
4243 4304 * the changelog instance and associated indexes
4244 4305 * the rev-branch-cache instance
4245 4306
4246 4307 Examples:
4247 4308
4248 4309 # update for the last revision only
4249 4310 $ hg perfbranchmapupdate --base 'not tip' --target 'tip'
4250 4311
4251 4312 # update for changes coming with a new branch
4252 4313 $ hg perfbranchmapupdate --base 'stable' --target 'default'
4253 4314 """
4254 4315 from mercurial import branchmap
4255 4316 from mercurial import repoview
4256 4317
4257 4318 opts = _byteskwargs(opts)
4258 4319 timer, fm = gettimer(ui, opts)
4259 4320 clearcaches = opts[b'clear_caches']
4260 4321 unfi = repo.unfiltered()
4261 4322 x = [None] # used to pass data between closure
4262 4323
4263 4324 # we use a `list` here to avoid possible side effect from smartset
4264 4325 baserevs = list(scmutil.revrange(repo, base))
4265 4326 targetrevs = list(scmutil.revrange(repo, target))
4266 4327 if not baserevs:
4267 4328 raise error.Abort(b'no revisions selected for --base')
4268 4329 if not targetrevs:
4269 4330 raise error.Abort(b'no revisions selected for --target')
4270 4331
4271 4332 # make sure the target branchmap also contains the one in the base
4272 4333 targetrevs = list(set(baserevs) | set(targetrevs))
4273 4334 targetrevs.sort()
4274 4335
4275 4336 cl = repo.changelog
4276 4337 allbaserevs = list(cl.ancestors(baserevs, inclusive=True))
4277 4338 allbaserevs.sort()
4278 4339 alltargetrevs = frozenset(cl.ancestors(targetrevs, inclusive=True))
4279 4340
4280 4341 newrevs = list(alltargetrevs.difference(allbaserevs))
4281 4342 newrevs.sort()
4282 4343
4283 4344 allrevs = frozenset(unfi.changelog.revs())
4284 4345 basefilterrevs = frozenset(allrevs.difference(allbaserevs))
4285 4346 targetfilterrevs = frozenset(allrevs.difference(alltargetrevs))
4286 4347
4287 4348 def basefilter(repo, visibilityexceptions=None):
4288 4349 return basefilterrevs
4289 4350
4290 4351 def targetfilter(repo, visibilityexceptions=None):
4291 4352 return targetfilterrevs
4292 4353
4293 4354 msg = b'benchmark of branchmap with %d revisions with %d new ones\n'
4294 4355 ui.status(msg % (len(allbaserevs), len(newrevs)))
4295 4356 if targetfilterrevs:
4296 4357 msg = b'(%d revisions still filtered)\n'
4297 4358 ui.status(msg % len(targetfilterrevs))
4298 4359
4299 4360 try:
4300 4361 repoview.filtertable[b'__perf_branchmap_update_base'] = basefilter
4301 4362 repoview.filtertable[b'__perf_branchmap_update_target'] = targetfilter
4302 4363
4303 4364 baserepo = repo.filtered(b'__perf_branchmap_update_base')
4304 4365 targetrepo = repo.filtered(b'__perf_branchmap_update_target')
4305 4366
4367 bcache = repo.branchmap()
4368 copy_method = 'copy'
4369
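# Compatibility dance: recent branchmap caches (which no longer expose
# copy()) are duplicated via inherit_for(repo); older ones use copy(), whose
# signature may or may not take a repo argument.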
4370 copy_base_kwargs = copy_target_kwargs = {}
4371 if hasattr(bcache, 'copy'):
4372 if 'repo' in getargspec(bcache.copy).args:
4373 copy_base_kwargs = {"repo": baserepo}
4374 copy_target_kwargs = {"repo": targetrepo}
4375 else:
4376 copy_method = 'inherit_for'
4377 copy_base_kwargs = {"repo": baserepo}
4378 copy_target_kwargs = {"repo": targetrepo}
4379
4306 4380 # try to find an existing branchmap to reuse
4307 4381 subsettable = getbranchmapsubsettable()
4308 4382 candidatefilter = subsettable.get(None)
4309 4383 while candidatefilter is not None:
4310 4384 candidatebm = repo.filtered(candidatefilter).branchmap()
4311 4385 if candidatebm.validfor(baserepo):
4312 4386 filtered = repoview.filterrevs(repo, candidatefilter)
4313 4387 missing = [r for r in allbaserevs if r in filtered]
4314 base = candidatebm.copy()
4388 base = getattr(candidatebm, copy_method)(**copy_base_kwargs)
4315 4389 base.update(baserepo, missing)
4316 4390 break
4317 4391 candidatefilter = subsettable.get(candidatefilter)
4318 4392 else:
4319 4393 # no suitable subset was found
4320 4394 base = branchmap.branchcache()
4321 4395 base.update(baserepo, allbaserevs)
4322 4396
4323 4397 def setup():
4324 x[0] = base.copy()
4398 x[0] = getattr(base, copy_method)(**copy_target_kwargs)
4325 4399 if clearcaches:
4326 4400 unfi._revbranchcache = None
4327 4401 clearchangelog(repo)
4328 4402
4329 4403 def bench():
4330 4404 x[0].update(targetrepo, newrevs)
4331 4405
4332 4406 timer(bench, setup=setup)
4333 4407 fm.end()
4334 4408 finally:
4335 4409 repoview.filtertable.pop(b'__perf_branchmap_update_base', None)
4336 4410 repoview.filtertable.pop(b'__perf_branchmap_update_target', None)
4337 4411
4338 4412
4339 4413 @command(
4340 4414 b'perf::branchmapload|perfbranchmapload',
4341 4415 [
4342 4416 (b'f', b'filter', b'', b'Specify repoview filter'),
4343 4417 (b'', b'list', False, b'List branchmap filter caches'),
4344 4418 (b'', b'clear-revlogs', False, b'refresh changelog and manifest'),
4345 4419 ]
4346 4420 + formatteropts,
4347 4421 )
4348 4422 def perfbranchmapload(ui, repo, filter=b'', list=False, **opts):
4349 4423 """benchmark reading the branchmap"""
4350 4424 opts = _byteskwargs(opts)
4351 4425 clearrevlogs = opts[b'clear_revlogs']
4352 4426
4353 4427 if list:
4354 4428 for name, kind, st in repo.cachevfs.readdir(stat=True):
4355 4429 if name.startswith(b'branch2'):
4356 4430 filtername = name.partition(b'-')[2] or b'unfiltered'
4357 4431 ui.status(
4358 4432 b'%s - %s\n' % (filtername, util.bytecount(st.st_size))
4359 4433 )
4360 4434 return
4361 4435 if not filter:
4362 4436 filter = None
4363 4437 subsettable = getbranchmapsubsettable()
4364 4438 if filter is None:
4365 4439 repo = repo.unfiltered()
4366 4440 else:
4367 4441 repo = repoview.repoview(repo, filter)
4368 4442
4369 4443 repo.branchmap() # make sure we have a relevant, up to date branchmap
4370 4444
4371 try:
4372 fromfile = branchmap.branchcache.fromfile
4373 except AttributeError:
4374 # older versions
4445 fromfile = getattr(branchmap, 'branch_cache_from_file', None)
4446 if fromfile is None:
4447 fromfile = getattr(branchmap.branchcache, 'fromfile', None)
4448 if fromfile is None:
4375 4449 fromfile = branchmap.read
4376 4450
4377 4451 currentfilter = filter
4378 4452 # try once without timer, the filter may not be cached
4379 4453 while fromfile(repo) is None:
4380 4454 currentfilter = subsettable.get(currentfilter)
4381 4455 if currentfilter is None:
4382 4456 raise error.Abort(
4383 4457 b'No branchmap cached for %s repo' % (filter or b'unfiltered')
4384 4458 )
4385 4459 repo = repo.filtered(currentfilter)
4386 4460 timer, fm = gettimer(ui, opts)
4387 4461
4388 4462 def setup():
4389 4463 if clearrevlogs:
4390 4464 clearchangelog(repo)
4391 4465
4392 4466 def bench():
4393 4467 fromfile(repo)
4394 4468
4395 4469 timer(bench, setup=setup)
4396 4470 fm.end()
4397 4471
4398 4472
4399 4473 @command(b'perf::loadmarkers|perfloadmarkers')
4400 4474 def perfloadmarkers(ui, repo):
4401 4475 """benchmark the time to parse the on-disk markers for a repo
4402 4476
4403 4477 Result is the number of markers in the repo."""
4404 4478 timer, fm = gettimer(ui)
4405 4479 svfs = getsvfs(repo)
4406 4480 timer(lambda: len(obsolete.obsstore(repo, svfs)))
4407 4481 fm.end()
4408 4482
4409 4483
4410 4484 @command(
4411 4485 b'perf::lrucachedict|perflrucachedict',
4412 4486 formatteropts
4413 4487 + [
4414 4488 (b'', b'costlimit', 0, b'maximum total cost of items in cache'),
4415 4489 (b'', b'mincost', 0, b'smallest cost of items in cache'),
4416 4490 (b'', b'maxcost', 100, b'maximum cost of items in cache'),
4417 4491 (b'', b'size', 4, b'size of cache'),
4418 4492 (b'', b'gets', 10000, b'number of key lookups'),
4419 4493 (b'', b'sets', 10000, b'number of key sets'),
4420 4494 (b'', b'mixed', 10000, b'number of mixed mode operations'),
4421 4495 (
4422 4496 b'',
4423 4497 b'mixedgetfreq',
4424 4498 50,
4425 4499 b'frequency of get vs set ops in mixed mode',
4426 4500 ),
4427 4501 ],
4428 4502 norepo=True,
4429 4503 )
4430 4504 def perflrucache(
4431 4505 ui,
4432 4506 mincost=0,
4433 4507 maxcost=100,
4434 4508 costlimit=0,
4435 4509 size=4,
4436 4510 gets=10000,
4437 4511 sets=10000,
4438 4512 mixed=10000,
4439 4513 mixedgetfreq=50,
4440 4514 **opts
4441 4515 ):
4442 4516 opts = _byteskwargs(opts)
4443 4517
4444 4518 def doinit():
4445 4519 for i in _xrange(10000):
4446 4520 util.lrucachedict(size)
4447 4521
4448 4522 costrange = list(range(mincost, maxcost + 1))
4449 4523
4450 4524 values = []
4451 4525 for i in _xrange(size):
4452 4526 values.append(random.randint(0, _maxint))
4453 4527
4454 4528 # Get mode fills the cache and tests raw lookup performance with no
4455 4529 # eviction.
4456 4530 getseq = []
4457 4531 for i in _xrange(gets):
4458 4532 getseq.append(random.choice(values))
4459 4533
4460 4534 def dogets():
4461 4535 d = util.lrucachedict(size)
4462 4536 for v in values:
4463 4537 d[v] = v
4464 4538 for key in getseq:
4465 4539 value = d[key]
4466 4540 value # silence pyflakes warning
4467 4541
4468 4542 def dogetscost():
4469 4543 d = util.lrucachedict(size, maxcost=costlimit)
4470 4544 for i, v in enumerate(values):
4471 4545 d.insert(v, v, cost=costs[i])
4472 4546 for key in getseq:
4473 4547 try:
4474 4548 value = d[key]
4475 4549 value # silence pyflakes warning
4476 4550 except KeyError:
4477 4551 pass
4478 4552
4479 4553 # Set mode tests insertion speed with cache eviction.
4480 4554 setseq = []
4481 4555 costs = []
4482 4556 for i in _xrange(sets):
4483 4557 setseq.append(random.randint(0, _maxint))
4484 4558 costs.append(random.choice(costrange))
4485 4559
4486 4560 def doinserts():
4487 4561 d = util.lrucachedict(size)
4488 4562 for v in setseq:
4489 4563 d.insert(v, v)
4490 4564
4491 4565 def doinsertscost():
4492 4566 d = util.lrucachedict(size, maxcost=costlimit)
4493 4567 for i, v in enumerate(setseq):
4494 4568 d.insert(v, v, cost=costs[i])
4495 4569
4496 4570 def dosets():
4497 4571 d = util.lrucachedict(size)
4498 4572 for v in setseq:
4499 4573 d[v] = v
4500 4574
4501 4575 # Mixed mode randomly performs gets and sets with eviction.
4502 4576 mixedops = []
4503 4577 for i in _xrange(mixed):
4504 4578 r = random.randint(0, 100)
4505 4579 if r < mixedgetfreq:
4506 4580 op = 0
4507 4581 else:
4508 4582 op = 1
4509 4583
4510 4584 mixedops.append(
4511 4585 (op, random.randint(0, size * 2), random.choice(costrange))
4512 4586 )
4513 4587
4514 4588 def domixed():
4515 4589 d = util.lrucachedict(size)
4516 4590
4517 4591 for op, v, cost in mixedops:
4518 4592 if op == 0:
4519 4593 try:
4520 4594 d[v]
4521 4595 except KeyError:
4522 4596 pass
4523 4597 else:
4524 4598 d[v] = v
4525 4599
4526 4600 def domixedcost():
4527 4601 d = util.lrucachedict(size, maxcost=costlimit)
4528 4602
4529 4603 for op, v, cost in mixedops:
4530 4604 if op == 0:
4531 4605 try:
4532 4606 d[v]
4533 4607 except KeyError:
4534 4608 pass
4535 4609 else:
4536 4610 d.insert(v, v, cost=cost)
4537 4611
4538 4612 benches = [
4539 4613 (doinit, b'init'),
4540 4614 ]
4541 4615
4542 4616 if costlimit:
4543 4617 benches.extend(
4544 4618 [
4545 4619 (dogetscost, b'gets w/ cost limit'),
4546 4620 (doinsertscost, b'inserts w/ cost limit'),
4547 4621 (domixedcost, b'mixed w/ cost limit'),
4548 4622 ]
4549 4623 )
4550 4624 else:
4551 4625 benches.extend(
4552 4626 [
4553 4627 (dogets, b'gets'),
4554 4628 (doinserts, b'inserts'),
4555 4629 (dosets, b'sets'),
4556 4630 (domixed, b'mixed'),
4557 4631 ]
4558 4632 )
4559 4633
4560 4634 for fn, title in benches:
4561 4635 timer, fm = gettimer(ui, opts)
4562 4636 timer(fn, title=title)
4563 4637 fm.end()
4564 4638
4565 4639
4566 4640 @command(
4567 4641 b'perf::write|perfwrite',
4568 4642 formatteropts
4569 4643 + [
4570 4644 (b'', b'write-method', b'write', b'ui write method'),
4571 4645 (b'', b'nlines', 100, b'number of lines'),
4572 4646 (b'', b'nitems', 100, b'number of items (per line)'),
4573 4647 (b'', b'item', b'x', b'item that is written'),
4574 4648 (b'', b'batch-line', None, b'pass whole line to write method at once'),
4575 4649 (b'', b'flush-line', None, b'flush after each line'),
4576 4650 ],
4577 4651 )
4578 4652 def perfwrite(ui, repo, **opts):
4579 4653 """microbenchmark ui.write (and others)"""
4580 4654 opts = _byteskwargs(opts)
4581 4655
4582 4656 write = getattr(ui, _sysstr(opts[b'write_method']))
4583 4657 nlines = int(opts[b'nlines'])
4584 4658 nitems = int(opts[b'nitems'])
4585 4659 item = opts[b'item']
4586 4660 batch_line = opts.get(b'batch_line')
4587 4661 flush_line = opts.get(b'flush_line')
4588 4662
4589 4663 if batch_line:
4590 4664 line = item * nitems + b'\n'
4591 4665
4592 4666 def benchmark():
4593 4667 for i in pycompat.xrange(nlines):
4594 4668 if batch_line:
4595 4669 write(line)
4596 4670 else:
4597 4671 for i in pycompat.xrange(nitems):
4598 4672 write(item)
4599 4673 write(b'\n')
4600 4674 if flush_line:
4601 4675 ui.flush()
4602 4676 ui.flush()
4603 4677
4604 4678 timer, fm = gettimer(ui, opts)
4605 4679 timer(benchmark)
4606 4680 fm.end()
4607 4681
4608 4682
4609 4683 def uisetup(ui):
4610 4684 if util.safehasattr(cmdutil, b'openrevlog') and not util.safehasattr(
4611 4685 commands, b'debugrevlogopts'
4612 4686 ):
4613 4687 # for "historical portability":
4614 4688 # In this case, Mercurial should be 1.9 (or a79fea6b3e77) -
4615 4689 # 3.7 (or 5606f7d0d063). Therefore, '--dir' option for
4616 4690 # openrevlog() should cause failure, because it has been
4617 4691 # available since 3.5 (or 49c583ca48c4).
4618 4692 def openrevlog(orig, repo, cmd, file_, opts):
4619 4693 if opts.get(b'dir') and not util.safehasattr(repo, b'dirlog'):
4620 4694 raise error.Abort(
4621 4695 b"This version doesn't support --dir option",
4622 4696 hint=b"use 3.5 or later",
4623 4697 )
4624 4698 return orig(repo, cmd, file_, opts)
4625 4699
4626 4700 name = _sysstr(b'openrevlog')
4627 4701 extensions.wrapfunction(cmdutil, name, openrevlog)
4628 4702
4629 4703
4630 4704 @command(
4631 4705 b'perf::progress|perfprogress',
4632 4706 formatteropts
4633 4707 + [
4634 4708 (b'', b'topic', b'topic', b'topic for progress messages'),
4635 4709 (b'c', b'total', 1000000, b'total value we are progressing to'),
4636 4710 ],
4637 4711 norepo=True,
4638 4712 )
4639 4713 def perfprogress(ui, topic=None, total=None, **opts):
4640 4714 """printing of progress bars"""
4641 4715 opts = _byteskwargs(opts)
4642 4716
4643 4717 timer, fm = gettimer(ui, opts)
4644 4718
4645 4719 def doprogress():
4646 4720 with ui.makeprogress(topic, total=total) as progress:
4647 4721 for i in _xrange(total):
4648 4722 progress.increment()
4649 4723
4650 4724 timer(doprogress)
4651 4725 fm.end()
@@ -1,823 +1,826 b''
1 1 # Copyright 2009-2010 Gregory P. Ward
2 2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
3 3 # Copyright 2010-2011 Fog Creek Software
4 4 # Copyright 2010-2011 Unity Technologies
5 5 #
6 6 # This software may be used and distributed according to the terms of the
7 7 # GNU General Public License version 2 or any later version.
8 8
9 9 '''largefiles utility code: must not import other modules in this package.'''
10 10
11 11 import contextlib
12 12 import copy
13 13 import os
14 14 import stat
15 15
16 16 from mercurial.i18n import _
17 17 from mercurial.node import hex
18 18 from mercurial.pycompat import open
19 19
20 20 from mercurial import (
21 21 dirstate,
22 22 encoding,
23 23 error,
24 24 httpconnection,
25 25 match as matchmod,
26 26 pycompat,
27 27 requirements,
28 28 scmutil,
29 29 sparse,
30 30 util,
31 31 vfs as vfsmod,
32 32 )
33 33 from mercurial.utils import hashutil
34 34 from mercurial.dirstateutils import timestamp
35 35
36 36 shortname = b'.hglf'
37 37 shortnameslash = shortname + b'/'
38 38 longname = b'largefiles'
39 39
40 40 # -- Private worker functions ------------------------------------------
41 41
42 42
43 43 @contextlib.contextmanager
44 44 def lfstatus(repo, value=True):
45 45 oldvalue = getattr(repo, 'lfstatus', False)
46 46 repo.lfstatus = value
47 47 try:
48 48 yield
49 49 finally:
50 50 repo.lfstatus = oldvalue
51 51
52 52
53 53 def getminsize(ui, assumelfiles, opt, default=10):
54 54 lfsize = opt
55 55 if not lfsize and assumelfiles:
56 56 lfsize = ui.config(longname, b'minsize', default=default)
57 57 if lfsize:
58 58 try:
59 59 lfsize = float(lfsize)
60 60 except ValueError:
61 61 raise error.Abort(
62 62 _(b'largefiles: size must be a number (not %s)\n') % lfsize
63 63 )
64 64 if lfsize is None:
65 65 raise error.Abort(_(b'minimum size for largefiles must be specified'))
66 66 return lfsize
67 67
68 68
69 69 def link(src, dest):
70 70 """Try to create hardlink - if that fails, efficiently make a copy."""
71 71 util.makedirs(os.path.dirname(dest))
72 72 try:
73 73 util.oslink(src, dest)
74 74 except OSError:
75 75 # if hardlinks fail, fallback on atomic copy
76 76 with open(src, b'rb') as srcf, util.atomictempfile(dest) as dstf:
77 77 for chunk in util.filechunkiter(srcf):
78 78 dstf.write(chunk)
79 79 os.chmod(dest, os.stat(src).st_mode)
80 80
81 81
82 82 def usercachepath(ui, hash):
83 83 """Return the correct location in the "global" largefiles cache for a file
84 84 with the given hash.
85 85 This cache is used for sharing of largefiles across repositories - both
86 86 to preserve download bandwidth and storage space."""
87 87 return os.path.join(_usercachedir(ui), hash)
88 88
89 89
90 90 def _usercachedir(ui, name=longname):
91 91 '''Return the location of the "global" largefiles cache.'''
92 92 path = ui.configpath(name, b'usercache')
93 93 if path:
94 94 return path
95 95
96 96 hint = None
97 97
98 98 if pycompat.iswindows:
99 99 appdata = encoding.environ.get(
100 100 b'LOCALAPPDATA', encoding.environ.get(b'APPDATA')
101 101 )
102 102 if appdata:
103 103 return os.path.join(appdata, name)
104 104
105 105 hint = _(b"define %s or %s in the environment, or set %s.usercache") % (
106 106 b"LOCALAPPDATA",
107 107 b"APPDATA",
108 108 name,
109 109 )
110 110 elif pycompat.isdarwin:
111 111 home = encoding.environ.get(b'HOME')
112 112 if home:
113 113 return os.path.join(home, b'Library', b'Caches', name)
114 114
115 115 hint = _(b"define %s in the environment, or set %s.usercache") % (
116 116 b"HOME",
117 117 name,
118 118 )
119 119 elif pycompat.isposix:
120 120 path = encoding.environ.get(b'XDG_CACHE_HOME')
121 121 if path:
122 122 return os.path.join(path, name)
123 123 home = encoding.environ.get(b'HOME')
124 124 if home:
125 125 return os.path.join(home, b'.cache', name)
126 126
127 127 hint = _(b"define %s or %s in the environment, or set %s.usercache") % (
128 128 b"XDG_CACHE_HOME",
129 129 b"HOME",
130 130 name,
131 131 )
132 132 else:
133 133 raise error.Abort(
134 134 _(b'unknown operating system: %s\n') % pycompat.osname
135 135 )
136 136
137 137 raise error.Abort(_(b'unknown %s usercache location') % name, hint=hint)
138 138
139 139
140 140 def inusercache(ui, hash):
141 141 path = usercachepath(ui, hash)
142 142 return os.path.exists(path)
143 143
144 144
145 145 def findfile(repo, hash):
146 146 """Return store path of the largefile with the specified hash.
147 147 As a side effect, the file might be linked from user cache.
148 148 Return None if the file can't be found locally."""
149 149 path, exists = findstorepath(repo, hash)
150 150 if exists:
151 151 repo.ui.note(_(b'found %s in store\n') % hash)
152 152 return path
153 153 elif inusercache(repo.ui, hash):
154 154 repo.ui.note(_(b'found %s in system cache\n') % hash)
155 155 path = storepath(repo, hash)
156 156 link(usercachepath(repo.ui, hash), path)
157 157 return path
158 158 return None
159 159
160 160
161 161 class largefilesdirstate(dirstate.dirstate):
162 162 _large_file_dirstate = True
163 163 _tr_key_suffix = b'-large-files'
164 164
165 165 def __getitem__(self, key):
166 166 return super(largefilesdirstate, self).__getitem__(unixpath(key))
167 167
168 168 def set_tracked(self, f):
169 169 return super(largefilesdirstate, self).set_tracked(unixpath(f))
170 170
171 171 def set_untracked(self, f):
172 172 return super(largefilesdirstate, self).set_untracked(unixpath(f))
173 173
174 174 def normal(self, f, parentfiledata=None):
175 175 # not sure if we should pass the `parentfiledata` down or throw it
176 176 # away. So throwing it away to stay on the safe side.
177 177 return super(largefilesdirstate, self).normal(unixpath(f))
178 178
179 179 def remove(self, f):
180 180 return super(largefilesdirstate, self).remove(unixpath(f))
181 181
182 182 def add(self, f):
183 183 return super(largefilesdirstate, self).add(unixpath(f))
184 184
185 185 def drop(self, f):
186 186 return super(largefilesdirstate, self).drop(unixpath(f))
187 187
188 188 def forget(self, f):
189 189 return super(largefilesdirstate, self).forget(unixpath(f))
190 190
191 191 def normallookup(self, f):
192 192 return super(largefilesdirstate, self).normallookup(unixpath(f))
193 193
194 194 def _ignore(self, f):
195 195 return False
196 196
197 197 def write(self, tr):
198 198 # (1) disable PENDING mode always
199 199 # (lfdirstate isn't yet managed as a part of the transaction)
200 200 # (2) avoid develwarn 'use dirstate.write with ....'
201 201 if tr:
202 202 tr.addbackup(b'largefiles/dirstate', location=b'plain')
203 203 super(largefilesdirstate, self).write(None)
204 204
205 205
206 206 def openlfdirstate(ui, repo, create=True):
207 207 """
208 208 Return a dirstate object that tracks largefiles: i.e. its root is
209 209 the repo root, but it is saved in .hg/largefiles/dirstate.
210 210
211 211 If a dirstate object already exists and is being used for a 'changing_*'
212 212 context, it will be returned.
213 213 """
214 214 sub_dirstate = getattr(repo.dirstate, '_sub_dirstate', None)
215 215 if sub_dirstate is not None:
216 216 return sub_dirstate
217 217 vfs = repo.vfs
218 218 lfstoredir = longname
219 219 opener = vfsmod.vfs(vfs.join(lfstoredir))
220 220 use_dirstate_v2 = requirements.DIRSTATE_V2_REQUIREMENT in repo.requirements
221 221 lfdirstate = largefilesdirstate(
222 222 opener,
223 223 ui,
224 224 repo.root,
225 225 repo.dirstate._validate,
226 226 lambda: sparse.matcher(repo),
227 227 repo.nodeconstants,
228 228 use_dirstate_v2,
229 229 )
230 230
231 231 # If the largefiles dirstate does not exist, populate and create
232 232 # it. This ensures that we create it on the first meaningful
233 233 # largefiles operation in a new clone.
234 234 if create and not vfs.exists(vfs.join(lfstoredir, b'dirstate')):
235 235 try:
236 236 with repo.wlock(wait=False), lfdirstate.changing_files(repo):
237 237 matcher = getstandinmatcher(repo)
238 238 standins = repo.dirstate.walk(
239 239 matcher, subrepos=[], unknown=False, ignored=False
240 240 )
241 241
242 242 if len(standins) > 0:
243 243 vfs.makedirs(lfstoredir)
244 244
245 245 for standin in standins:
246 246 lfile = splitstandin(standin)
247 247 lfdirstate.hacky_extension_update_file(
248 248 lfile,
249 249 p1_tracked=True,
250 250 wc_tracked=True,
251 251 possibly_dirty=True,
252 252 )
253 253 except error.LockError:
254 254 # Assume that whatever was holding the lock was important.
255 255 # If we were doing something important, we would already have
256 256 # either the lock or a largefile dirstate.
257 257 pass
258 258 return lfdirstate
259 259
260 260
261 261 def lfdirstatestatus(lfdirstate, repo):
262 262 pctx = repo[b'.']
263 263 match = matchmod.always()
264 264 unsure, s, mtime_boundary = lfdirstate.status(
265 265 match, subrepos=[], ignored=False, clean=False, unknown=False
266 266 )
267 267 modified, clean = s.modified, s.clean
268 268 wctx = repo[None]
269 269 for lfile in unsure:
270 270 try:
271 271 fctx = pctx[standin(lfile)]
272 272 except LookupError:
273 273 fctx = None
274 274 if not fctx or readasstandin(fctx) != hashfile(repo.wjoin(lfile)):
275 275 modified.append(lfile)
276 276 else:
277 277 clean.append(lfile)
278 278 st = wctx[lfile].lstat()
279 279 mode = st.st_mode
280 280 size = st.st_size
281 281 mtime = timestamp.reliable_mtime_of(st, mtime_boundary)
282 282 if mtime is not None:
283 283 cache_data = (mode, size, mtime)
284 284 lfdirstate.set_clean(lfile, cache_data)
285 285 return s
286 286
287 287
288 288 def listlfiles(repo, rev=None, matcher=None):
289 289 """return a list of largefiles in the working copy or the
290 290 specified changeset"""
291 291
292 292 if matcher is None:
293 293 matcher = getstandinmatcher(repo)
294 294
295 295 # ignore unknown files in working directory
296 296 return [
297 297 splitstandin(f)
298 298 for f in repo[rev].walk(matcher)
299 299 if rev is not None or repo.dirstate.get_entry(f).any_tracked
300 300 ]
301 301
302 302
303 303 def instore(repo, hash, forcelocal=False):
304 304 '''Return true if a largefile with the given hash exists in the store'''
305 305 return os.path.exists(storepath(repo, hash, forcelocal))
306 306
307 307
308 308 def storepath(repo, hash, forcelocal=False):
309 309 """Return the correct location in the repository largefiles store for a
310 310 file with the given hash."""
311 311 if not forcelocal and repo.shared():
312 312 return repo.vfs.reljoin(repo.sharedpath, longname, hash)
313 313 return repo.vfs.join(longname, hash)
314 314
315 315
316 316 def findstorepath(repo, hash):
317 317 """Search through the local store path(s) to find the file for the given
318 318 hash. If the file is not found, its path in the primary store is returned.
319 319 The return value is a tuple of (path, exists(path)).
320 320 """
321 321 # For shared repos, the primary store is in the share source. But for
322 322 # backward compatibility, force a lookup in the local store if it wasn't
323 323 # found in the share source.
324 324 path = storepath(repo, hash, False)
325 325
326 326 if instore(repo, hash):
327 327 return (path, True)
328 328 elif repo.shared() and instore(repo, hash, True):
329 329 return storepath(repo, hash, True), True
330 330
331 331 return (path, False)
332 332
333 333
334 334 def copyfromcache(repo, hash, filename):
335 335 """Copy the specified largefile from the repo or system cache to
336 336 filename in the repository. Return true on success or false if the
337 337 file was not found in either cache (which should not happen:
338 338 this is meant to be called only after ensuring that the needed
339 339 largefile exists in the cache)."""
340 340 wvfs = repo.wvfs
341 341 path = findfile(repo, hash)
342 342 if path is None:
343 343 return False
344 344 wvfs.makedirs(wvfs.dirname(wvfs.join(filename)))
345 345 # The write may fail before the file is fully written, but we
346 346 # don't use atomic writes in the working copy.
347 347 with open(path, b'rb') as srcfd, wvfs(filename, b'wb') as destfd:
348 348 gothash = copyandhash(util.filechunkiter(srcfd), destfd)
349 349 if gothash != hash:
350 350 repo.ui.warn(
351 351 _(b'%s: data corruption in %s with hash %s\n')
352 352 % (filename, path, gothash)
353 353 )
354 354 wvfs.unlink(filename)
355 355 return False
356 356 return True
357 357
358 358
359 359 def copytostore(repo, ctx, file, fstandin):
360 360 wvfs = repo.wvfs
361 361 hash = readasstandin(ctx[fstandin])
362 362 if instore(repo, hash):
363 363 return
364 364 if wvfs.exists(file):
365 365 copytostoreabsolute(repo, wvfs.join(file), hash)
366 366 else:
367 367 repo.ui.warn(
368 368 _(b"%s: largefile %s not available from local store\n")
369 369 % (file, hash)
370 370 )
371 371
372 372
373 373 def copyalltostore(repo, node):
374 374 '''Copy all largefiles in a given revision to the store'''
375 375
376 376 ctx = repo[node]
377 377 for filename in ctx.files():
378 378 realfile = splitstandin(filename)
379 379 if realfile is not None and filename in ctx.manifest():
380 380 copytostore(repo, ctx, realfile, filename)
381 381
382 382
383 383 def copytostoreabsolute(repo, file, hash):
384 384 if inusercache(repo.ui, hash):
385 385 link(usercachepath(repo.ui, hash), storepath(repo, hash))
386 386 else:
387 387 util.makedirs(os.path.dirname(storepath(repo, hash)))
388 388 with open(file, b'rb') as srcf:
389 389 with util.atomictempfile(
390 390 storepath(repo, hash), createmode=repo.store.createmode
391 391 ) as dstf:
392 392 for chunk in util.filechunkiter(srcf):
393 393 dstf.write(chunk)
394 394 linktousercache(repo, hash)
395 395
396 396
397 397 def linktousercache(repo, hash):
398 398 """Link / copy the largefile with the specified hash from the store
399 399 to the cache."""
400 400 path = usercachepath(repo.ui, hash)
401 401 link(storepath(repo, hash), path)
402 402
403 403
404 404 def getstandinmatcher(repo, rmatcher=None):
405 405 '''Return a match object that applies rmatcher to the standin directory'''
406 406 wvfs = repo.wvfs
407 407 standindir = shortname
408 408
409 409 # no warnings about missing files or directories
410 410 badfn = lambda f, msg: None
411 411
412 412 if rmatcher and not rmatcher.always():
413 413 pats = [wvfs.join(standindir, pat) for pat in rmatcher.files()]
414 414 if not pats:
415 415 pats = [wvfs.join(standindir)]
416 416 match = scmutil.match(repo[None], pats, badfn=badfn)
417 417 else:
418 418 # no patterns: relative to repo root
419 419 match = scmutil.match(repo[None], [wvfs.join(standindir)], badfn=badfn)
420 420 return match
421 421
422 422
423 423 def composestandinmatcher(repo, rmatcher):
424 424 """Return a matcher that accepts standins corresponding to the
425 425 files accepted by rmatcher. Pass the list of files in the matcher
426 426 as the paths specified by the user."""
427 427 smatcher = getstandinmatcher(repo, rmatcher)
428 428 isstandin = smatcher.matchfn
429 429
430 430 def composedmatchfn(f):
431 431 return isstandin(f) and rmatcher.matchfn(splitstandin(f))
432 432
433 smatcher._was_tampered_with = True
433 434 smatcher.matchfn = composedmatchfn
434 435
435 436 return smatcher
436 437
437 438
438 439 def standin(filename):
439 440 """Return the repo-relative path to the standin for the specified big
440 441 file."""
441 442 # Notes:
442 443 # 1) Some callers want an absolute path, but for instance addlargefiles
443 444 # needs it repo-relative so it can be passed to repo[None].add(). So
444 445 # leave it up to the caller to use repo.wjoin() to get an absolute path.
445 446 # 2) Join with '/' because that's what dirstate always uses, even on
446 447 # Windows. Change existing separator to '/' first in case we are
447 448 # passed filenames from an external source (like the command line).
448 449 return shortnameslash + util.pconvert(filename)
449 450
450 451
451 452 def isstandin(filename):
452 453 """Return true if filename is a big file standin. filename must be
453 454 in Mercurial's internal form (slash-separated)."""
454 455 return filename.startswith(shortnameslash)
455 456
456 457
457 458 def splitstandin(filename):
458 459 # Split on / because that's what dirstate always uses, even on Windows.
459 460 # Change local separator to / first just in case we are passed filenames
460 461 # from an external source (like the command line).
461 462 bits = util.pconvert(filename).split(b'/', 1)
462 463 if len(bits) == 2 and bits[0] == shortname:
463 464 return bits[1]
464 465 else:
465 466 return None
466 467
467 468
468 469 def updatestandin(repo, lfile, standin):
469 470 """Re-calculate hash value of lfile and write it into standin
470 471
471 472 This assumes that "lfutil.standin(lfile) == standin", for efficiency.
472 473 """
473 474 file = repo.wjoin(lfile)
474 475 if repo.wvfs.exists(lfile):
475 476 hash = hashfile(file)
476 477 executable = getexecutable(file)
477 478 writestandin(repo, standin, hash, executable)
478 479 else:
479 480 raise error.Abort(_(b'%s: file not found!') % lfile)
480 481
481 482
482 483 def readasstandin(fctx):
483 484 """read hex hash from given filectx of standin file
484 485
485 486 This encapsulates how "standin" data is stored into storage layer."""
486 487 return fctx.data().strip()
487 488
488 489
489 490 def writestandin(repo, standin, hash, executable):
490 491 '''write hash to <repo.root>/<standin>'''
491 492 repo.wwrite(standin, hash + b'\n', executable and b'x' or b'')
492 493
493 494
494 495 def copyandhash(instream, outfile):
495 496 """Read bytes from instream (iterable) and write them to outfile,
496 497 computing the SHA-1 hash of the data along the way. Return the hash."""
497 498 hasher = hashutil.sha1(b'')
498 499 for data in instream:
499 500 hasher.update(data)
500 501 outfile.write(data)
501 502 return hex(hasher.digest())
502 503
503 504
504 505 def hashfile(file):
505 506 if not os.path.exists(file):
506 507 return b''
507 508 with open(file, b'rb') as fd:
508 509 return hexsha1(fd)
509 510
510 511
511 512 def getexecutable(filename):
512 513 mode = os.stat(filename).st_mode
513 514 return (
514 515 (mode & stat.S_IXUSR)
515 516 and (mode & stat.S_IXGRP)
516 517 and (mode & stat.S_IXOTH)
517 518 )
518 519
519 520
520 521 def urljoin(first, second, *arg):
521 522 def join(left, right):
522 523 if not left.endswith(b'/'):
523 524 left += b'/'
524 525 if right.startswith(b'/'):
525 526 right = right[1:]
526 527 return left + right
527 528
528 529 url = join(first, second)
529 530 for a in arg:
530 531 url = join(url, a)
531 532 return url
532 533
533 534
534 535 def hexsha1(fileobj):
535 536 """hexsha1 returns the hex-encoded sha1 sum of the data in the file-like
536 537 object data"""
537 538 h = hashutil.sha1()
538 539 for chunk in util.filechunkiter(fileobj):
539 540 h.update(chunk)
540 541 return hex(h.digest())
541 542
542 543
543 544 def httpsendfile(ui, filename):
544 545 return httpconnection.httpsendfile(ui, filename, b'rb')
545 546
546 547
547 548 def unixpath(path):
548 549 '''Return a version of path normalized for use with the lfdirstate.'''
549 550 return util.pconvert(os.path.normpath(path))
550 551
551 552
552 553 def islfilesrepo(repo):
553 554 '''Return true if the repo is a largefile repo.'''
554 555 if b'largefiles' in repo.requirements:
555 556 for entry in repo.store.data_entries():
556 557 if entry.is_revlog and shortnameslash in entry.target_id:
557 558 return True
558 559
559 560 return any(openlfdirstate(repo.ui, repo, False))
560 561
561 562
562 563 class storeprotonotcapable(Exception):
563 564 def __init__(self, storetypes):
564 565 self.storetypes = storetypes
565 566
566 567
567 568 def getstandinsstate(repo):
568 569 standins = []
569 570 matcher = getstandinmatcher(repo)
570 571 wctx = repo[None]
571 572 for standin in repo.dirstate.walk(
572 573 matcher, subrepos=[], unknown=False, ignored=False
573 574 ):
574 575 lfile = splitstandin(standin)
575 576 try:
576 577 hash = readasstandin(wctx[standin])
577 578 except IOError:
578 579 hash = None
579 580 standins.append((lfile, hash))
580 581 return standins
581 582
582 583
583 584 def synclfdirstate(repo, lfdirstate, lfile, normallookup):
584 585 lfstandin = standin(lfile)
585 586 if lfstandin not in repo.dirstate:
586 587 lfdirstate.hacky_extension_update_file(
587 588 lfile,
588 589 p1_tracked=False,
589 590 wc_tracked=False,
590 591 )
591 592 else:
592 593 entry = repo.dirstate.get_entry(lfstandin)
593 594 lfdirstate.hacky_extension_update_file(
594 595 lfile,
595 596 wc_tracked=entry.tracked,
596 597 p1_tracked=entry.p1_tracked,
597 598 p2_info=entry.p2_info,
598 599 possibly_dirty=True,
599 600 )
600 601
601 602
602 603 def markcommitted(orig, ctx, node):
603 604 repo = ctx.repo()
604 605
605 606 with repo.dirstate.changing_parents(repo):
606 607 orig(node)
607 608
608 609 # ATTENTION: "ctx.files()" may differ from "repo[node].files()"
609 610 # because files coming from the 2nd parent are omitted in the latter.
610 611 #
611 612 # The former should be used to get targets of "synclfdirstate",
612 613 # because such files:
613 614 # - are marked as "a" by "patch.patch()" (e.g. via transplant), and
614 615 # - have to be marked as "n" after commit, but
615 616 # - aren't listed in "repo[node].files()"
616 617
617 618 lfdirstate = openlfdirstate(repo.ui, repo)
618 619 for f in ctx.files():
619 620 lfile = splitstandin(f)
620 621 if lfile is not None:
621 622 synclfdirstate(repo, lfdirstate, lfile, False)
622 623
623 624 # As part of committing, copy all of the largefiles into the cache.
624 625 #
625 626 # Using "node" instead of "ctx" implies additional "repo[node]"
626 627 # lookup during copyalltostore(), but can omit redundant check for
627 628 # files coming from the 2nd parent, which should exist in store
628 629 # at merging.
629 630 copyalltostore(repo, node)
630 631
631 632
632 633 def getlfilestoupdate(oldstandins, newstandins):
633 634 changedstandins = set(oldstandins).symmetric_difference(set(newstandins))
634 635 filelist = []
635 636 for f in changedstandins:
636 637 if f[0] not in filelist:
637 638 filelist.append(f[0])
638 639 return filelist
639 640
640 641
641 642 def getlfilestoupload(repo, missing, addfunc):
642 643 makeprogress = repo.ui.makeprogress
643 644 with makeprogress(
644 645 _(b'finding outgoing largefiles'),
645 646 unit=_(b'revisions'),
646 647 total=len(missing),
647 648 ) as progress:
648 649 for i, n in enumerate(missing):
649 650 progress.update(i)
650 651 parents = [p for p in repo[n].parents() if p != repo.nullid]
651 652
652 653 with lfstatus(repo, value=False):
653 654 ctx = repo[n]
654 655
655 656 files = set(ctx.files())
656 657 if len(parents) == 2:
657 658 mc = ctx.manifest()
658 659 mp1 = ctx.p1().manifest()
659 660 mp2 = ctx.p2().manifest()
660 661 for f in mp1:
661 662 if f not in mc:
662 663 files.add(f)
663 664 for f in mp2:
664 665 if f not in mc:
665 666 files.add(f)
666 667 for f in mc:
667 668 if mc[f] != mp1.get(f, None) or mc[f] != mp2.get(f, None):
668 669 files.add(f)
669 670 for fn in files:
670 671 if isstandin(fn) and fn in ctx:
671 672 addfunc(fn, readasstandin(ctx[fn]))
672 673
673 674
674 675 def updatestandinsbymatch(repo, match):
675 676 """Update standins in the working directory according to specified match
676 677
677 678 This returns (possibly modified) ``match`` object to be used for
678 679 subsequent commit process.
679 680 """
680 681
681 682 ui = repo.ui
682 683
683 684 # Case 1: user calls commit with no specific files or
684 685 # include/exclude patterns: refresh and commit all files that
685 686 # are "dirty".
686 687 if match is None or match.always():
687 688 # Spend a bit of time here to get a list of files we know
688 689 # are modified so we can compare only against those.
689 690 # Otherwise it can cost a lot of time (several seconds)
690 691 # to update all standins if the largefiles are
691 692 # large.
692 693 dirtymatch = matchmod.always()
693 694 with repo.dirstate.running_status(repo):
694 695 lfdirstate = openlfdirstate(ui, repo)
695 696 unsure, s, mtime_boundary = lfdirstate.status(
696 697 dirtymatch,
697 698 subrepos=[],
698 699 ignored=False,
699 700 clean=False,
700 701 unknown=False,
701 702 )
702 703 modifiedfiles = unsure + s.modified + s.added + s.removed
703 704 lfiles = listlfiles(repo)
704 705 # this only loops through largefiles that exist (not
705 706 # removed/renamed)
706 707 for lfile in lfiles:
707 708 if lfile in modifiedfiles:
708 709 fstandin = standin(lfile)
709 710 if repo.wvfs.exists(fstandin):
710 711 # this handles the case where a rebase is being
711 712 # performed and the working copy is not updated
712 713 # yet.
713 714 if repo.wvfs.exists(lfile):
714 715 updatestandin(repo, lfile, fstandin)
715 716
716 717 return match
717 718
718 719 lfiles = listlfiles(repo)
720 match._was_tampered_with = True
719 721 match._files = repo._subdirlfs(match.files(), lfiles)
720 722
721 723 # Case 2: user calls commit with specified patterns: refresh
722 724 # any matching big files.
723 725 smatcher = composestandinmatcher(repo, match)
724 726 standins = repo.dirstate.walk(
725 727 smatcher, subrepos=[], unknown=False, ignored=False
726 728 )
727 729
728 730 # No matching big files: get out of the way and pass control to
729 731 # the usual commit() method.
730 732 if not standins:
731 733 return match
732 734
733 735 # Refresh all matching big files. It's possible that the
734 736 # commit will end up failing, in which case the big files will
735 737 # stay refreshed. No harm done: the user modified them and
736 738 # asked to commit them, so sooner or later we're going to
737 739 # refresh the standins. Might as well leave them refreshed.
738 740 lfdirstate = openlfdirstate(ui, repo)
739 741 for fstandin in standins:
740 742 lfile = splitstandin(fstandin)
741 743 if lfdirstate.get_entry(lfile).tracked:
742 744 updatestandin(repo, lfile, fstandin)
743 745
744 746 # Cook up a new matcher that only matches regular files or
745 747 # standins corresponding to the big files requested by the
746 748 # user. Have to modify _files to prevent commit() from
747 749 # complaining "not tracked" for big files.
748 750 match = copy.copy(match)
751 match._was_tampered_with = True
749 752 origmatchfn = match.matchfn
750 753
751 754 # Check both the list of largefiles and the list of
752 755 # standins because if a largefile was removed, it
753 756 # won't be in the list of largefiles at this point
754 757 match._files += sorted(standins)
755 758
756 759 actualfiles = []
757 760 for f in match._files:
758 761 fstandin = standin(f)
759 762
760 763 # For largefiles, only one of the normal and standin should be
761 764 # committed (except if one of them is a remove). In the case of a
762 765 # standin removal, drop the normal file if it is unknown to dirstate.
763 766 # Thus, skip plain largefile names but keep the standin.
764 767 if f in lfiles or fstandin in standins:
765 768 if not repo.dirstate.get_entry(fstandin).removed:
766 769 if not repo.dirstate.get_entry(f).removed:
767 770 continue
768 771 elif not repo.dirstate.get_entry(f).any_tracked:
769 772 continue
770 773
771 774 actualfiles.append(f)
772 775 match._files = actualfiles
773 776
774 777 def matchfn(f):
775 778 if origmatchfn(f):
776 779 return f not in lfiles
777 780 else:
778 781 return f in standins
779 782
780 783 match.matchfn = matchfn
781 784
782 785 return match
783 786
784 787
785 788 class automatedcommithook:
786 789 """Stateful hook to update standins at the 1st commit of resuming
787 790
788 791 For efficiency, updating standins in the working directory should
789 792 be avoided while automated committing (like rebase, transplant and
790 793 so on), because they should be updated before committing.
791 794
792 795 But the 1st commit of resuming automated committing (e.g. ``rebase
793 796 --continue``) should update them, because largefiles may be
794 797 modified manually.
795 798 """
796 799
797 800 def __init__(self, resuming):
798 801 self.resuming = resuming
799 802
800 803 def __call__(self, repo, match):
801 804 if self.resuming:
802 805 self.resuming = False # avoids updating at subsequent commits
803 806 return updatestandinsbymatch(repo, match)
804 807 else:
805 808 return match
806 809
807 810
808 811 def getstatuswriter(ui, repo, forcibly=None):
809 812 """Return the function to write largefiles specific status out
810 813
811 814 If ``forcibly`` is ``None``, this returns the last element of
812 815 ``repo._lfstatuswriters`` as "default" writer function.
813 816
814 817 Otherwise, this returns the function to always write out (or
815 818 ignore if ``not forcibly``) status.
816 819 """
817 820 if forcibly is None and hasattr(repo, '_largefilesenabled'):
818 821 return repo._lfstatuswriters[-1]
819 822 else:
820 823 if forcibly:
821 824 return ui.status # forcibly WRITE OUT
822 825 else:
823 826 return lambda *msg, **opts: None # forcibly IGNORE
@@ -1,1924 +1,1932 b''
1 1 # Copyright 2009-2010 Gregory P. Ward
2 2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
3 3 # Copyright 2010-2011 Fog Creek Software
4 4 # Copyright 2010-2011 Unity Technologies
5 5 #
6 6 # This software may be used and distributed according to the terms of the
7 7 # GNU General Public License version 2 or any later version.
8 8
9 9 '''Overridden Mercurial commands and functions for the largefiles extension'''
10 10
11 11 import contextlib
12 12 import copy
13 13 import os
14 14
15 15 from mercurial.i18n import _
16 16
17 17 from mercurial.pycompat import open
18 18
19 19 from mercurial.hgweb import webcommands
20 20
21 21 from mercurial import (
22 22 archival,
23 23 cmdutil,
24 24 copies as copiesmod,
25 25 dirstate,
26 26 error,
27 27 exchange,
28 28 extensions,
29 29 exthelper,
30 30 filemerge,
31 31 hg,
32 32 logcmdutil,
33 33 match as matchmod,
34 34 merge,
35 35 mergestate as mergestatemod,
36 36 pathutil,
37 37 pycompat,
38 38 scmutil,
39 39 smartset,
40 40 subrepo,
41 41 url as urlmod,
42 42 util,
43 43 )
44 44
45 45 from mercurial.upgrade_utils import (
46 46 actions as upgrade_actions,
47 47 )
48 48
49 49 from . import (
50 50 lfcommands,
51 51 lfutil,
52 52 storefactory,
53 53 )
54 54
55 55 ACTION_ADD = mergestatemod.ACTION_ADD
56 56 ACTION_DELETED_CHANGED = mergestatemod.ACTION_DELETED_CHANGED
57 57 ACTION_GET = mergestatemod.ACTION_GET
58 58 ACTION_KEEP = mergestatemod.ACTION_KEEP
59 59 ACTION_REMOVE = mergestatemod.ACTION_REMOVE
60 60
61 61 eh = exthelper.exthelper()
62 62
63 63 lfstatus = lfutil.lfstatus
64 64
65 65 MERGE_ACTION_LARGEFILE_MARK_REMOVED = mergestatemod.MergeAction('lfmr')
66 66
67 67 # -- Utility functions: commonly/repeatedly needed functionality ---------------
68 68
69 69
70 70 def composelargefilematcher(match, manifest):
71 71 """create a matcher that matches only the largefiles in the original
72 72 matcher"""
73 73 m = copy.copy(match)
74 m._was_tampered_with = True
74 75 lfile = lambda f: lfutil.standin(f) in manifest
75 76 m._files = [lf for lf in m._files if lfile(lf)]
76 77 m._fileset = set(m._files)
77 78 m.always = lambda: False
78 79 origmatchfn = m.matchfn
79 80 m.matchfn = lambda f: lfile(f) and origmatchfn(f)
80 81 return m
81 82
82 83
83 84 def composenormalfilematcher(match, manifest, exclude=None):
84 85 excluded = set()
85 86 if exclude is not None:
86 87 excluded.update(exclude)
87 88
88 89 m = copy.copy(match)
90 m._was_tampered_with = True
89 91 notlfile = lambda f: not (
90 92 lfutil.isstandin(f) or lfutil.standin(f) in manifest or f in excluded
91 93 )
92 94 m._files = [lf for lf in m._files if notlfile(lf)]
93 95 m._fileset = set(m._files)
94 96 m.always = lambda: False
95 97 origmatchfn = m.matchfn
96 98 m.matchfn = lambda f: notlfile(f) and origmatchfn(f)
97 99 return m
98 100
99 101
100 102 def addlargefiles(ui, repo, isaddremove, matcher, uipathfn, **opts):
101 103 large = opts.get('large')
102 104 lfsize = lfutil.getminsize(
103 105 ui, lfutil.islfilesrepo(repo), opts.get('lfsize')
104 106 )
105 107
106 108 lfmatcher = None
107 109 if lfutil.islfilesrepo(repo):
108 110 lfpats = ui.configlist(lfutil.longname, b'patterns')
109 111 if lfpats:
110 112 lfmatcher = matchmod.match(repo.root, b'', list(lfpats))
111 113
112 114 lfnames = []
113 115 m = matcher
114 116
115 117 wctx = repo[None]
116 118 for f in wctx.walk(matchmod.badmatch(m, lambda x, y: None)):
117 119 exact = m.exact(f)
118 120 lfile = lfutil.standin(f) in wctx
119 121 nfile = f in wctx
120 122 exists = lfile or nfile
121 123
122 124 # Don't warn the user when they attempt to add a normal tracked file.
123 125 # The normal add code will do that for us.
124 126 if exact and exists:
125 127 if lfile:
126 128 ui.warn(_(b'%s already a largefile\n') % uipathfn(f))
127 129 continue
128 130
129 131 if (exact or not exists) and not lfutil.isstandin(f):
130 132 # In case the file was removed previously, but not committed
131 133 # (issue3507)
132 134 if not repo.wvfs.exists(f):
133 135 continue
134 136
135 137 abovemin = (
136 138 lfsize and repo.wvfs.lstat(f).st_size >= lfsize * 1024 * 1024
137 139 )
138 140 if large or abovemin or (lfmatcher and lfmatcher(f)):
139 141 lfnames.append(f)
140 142 if ui.verbose or not exact:
141 143 ui.status(_(b'adding %s as a largefile\n') % uipathfn(f))
142 144
143 145 bad = []
144 146
145 147 # Need to lock, otherwise there could be a race condition between
146 148 # when standins are created and added to the repo.
147 149 with repo.wlock():
148 150 if not opts.get('dry_run'):
149 151 standins = []
150 152 lfdirstate = lfutil.openlfdirstate(ui, repo)
151 153 for f in lfnames:
152 154 standinname = lfutil.standin(f)
153 155 lfutil.writestandin(
154 156 repo,
155 157 standinname,
156 158 hash=b'',
157 159 executable=lfutil.getexecutable(repo.wjoin(f)),
158 160 )
159 161 standins.append(standinname)
160 162 lfdirstate.set_tracked(f)
161 163 lfdirstate.write(repo.currenttransaction())
162 164 bad += [
163 165 lfutil.splitstandin(f)
164 166 for f in repo[None].add(standins)
165 167 if f in m.files()
166 168 ]
167 169
168 170 added = [f for f in lfnames if f not in bad]
169 171 return added, bad
170 172
171 173
172 174 def removelargefiles(ui, repo, isaddremove, matcher, uipathfn, dryrun, **opts):
173 175 after = opts.get('after')
174 176 m = composelargefilematcher(matcher, repo[None].manifest())
175 177 with lfstatus(repo):
176 178 s = repo.status(match=m, clean=not isaddremove)
177 179 manifest = repo[None].manifest()
178 180 modified, added, deleted, clean = [
179 181 [f for f in list if lfutil.standin(f) in manifest]
180 182 for list in (s.modified, s.added, s.deleted, s.clean)
181 183 ]
182 184
183 185 def warn(files, msg):
184 186 for f in files:
185 187 ui.warn(msg % uipathfn(f))
186 188 return int(len(files) > 0)
187 189
188 190 if after:
189 191 remove = deleted
190 192 result = warn(
191 193 modified + added + clean, _(b'not removing %s: file still exists\n')
192 194 )
193 195 else:
194 196 remove = deleted + clean
195 197 result = warn(
196 198 modified,
197 199 _(
198 200 b'not removing %s: file is modified (use -f'
199 201 b' to force removal)\n'
200 202 ),
201 203 )
202 204 result = (
203 205 warn(
204 206 added,
205 207 _(
206 208 b'not removing %s: file has been marked for add'
207 209 b' (use forget to undo)\n'
208 210 ),
209 211 )
210 212 or result
211 213 )
212 214
213 215 # Need to lock because standin files are deleted then removed from the
214 216 # repository and we could race in-between.
215 217 with repo.wlock():
216 218 lfdirstate = lfutil.openlfdirstate(ui, repo)
217 219 for f in sorted(remove):
218 220 if ui.verbose or not m.exact(f):
219 221 ui.status(_(b'removing %s\n') % uipathfn(f))
220 222
221 223 if not dryrun:
222 224 if not after:
223 225 repo.wvfs.unlinkpath(f, ignoremissing=True)
224 226
225 227 if dryrun:
226 228 return result
227 229
228 230 remove = [lfutil.standin(f) for f in remove]
229 231 # If this is being called by addremove, let the original addremove
230 232 # function handle this.
231 233 if not isaddremove:
232 234 for f in remove:
233 235 repo.wvfs.unlinkpath(f, ignoremissing=True)
234 236 repo[None].forget(remove)
235 237
236 238 for f in remove:
237 239 lfdirstate.set_untracked(lfutil.splitstandin(f))
238 240
239 241 lfdirstate.write(repo.currenttransaction())
240 242
241 243 return result
242 244
243 245
244 246 # For overriding mercurial.hgweb.webcommands so that largefiles will
245 247 # appear at their right place in the manifests.
246 248 @eh.wrapfunction(webcommands, 'decodepath')
247 249 def decodepath(orig, path):
248 250 return lfutil.splitstandin(path) or path
249 251
250 252
251 253 # -- Wrappers: modify existing commands --------------------------------
252 254
253 255
254 256 @eh.wrapcommand(
255 257 b'add',
256 258 opts=[
257 259 (b'', b'large', None, _(b'add as largefile')),
258 260 (b'', b'normal', None, _(b'add as normal file')),
259 261 (
260 262 b'',
261 263 b'lfsize',
262 264 b'',
263 265 _(
264 266 b'add all files above this size (in megabytes) '
265 267 b'as largefiles (default: 10)'
266 268 ),
267 269 ),
268 270 ],
269 271 )
270 272 def overrideadd(orig, ui, repo, *pats, **opts):
271 273 if opts.get('normal') and opts.get('large'):
272 274 raise error.Abort(_(b'--normal cannot be used with --large'))
273 275 return orig(ui, repo, *pats, **opts)
274 276
275 277
276 278 @eh.wrapfunction(cmdutil, 'add')
277 279 def cmdutiladd(orig, ui, repo, matcher, prefix, uipathfn, explicitonly, **opts):
278 280 # The --normal flag short circuits this override
279 281 if opts.get('normal'):
280 282 return orig(ui, repo, matcher, prefix, uipathfn, explicitonly, **opts)
281 283
282 284 ladded, lbad = addlargefiles(ui, repo, False, matcher, uipathfn, **opts)
283 285 normalmatcher = composenormalfilematcher(
284 286 matcher, repo[None].manifest(), ladded
285 287 )
286 288 bad = orig(ui, repo, normalmatcher, prefix, uipathfn, explicitonly, **opts)
287 289
288 290 bad.extend(f for f in lbad)
289 291 return bad
290 292
291 293
292 294 @eh.wrapfunction(cmdutil, 'remove')
293 295 def cmdutilremove(
294 296 orig, ui, repo, matcher, prefix, uipathfn, after, force, subrepos, dryrun
295 297 ):
296 298 normalmatcher = composenormalfilematcher(matcher, repo[None].manifest())
297 299 result = orig(
298 300 ui,
299 301 repo,
300 302 normalmatcher,
301 303 prefix,
302 304 uipathfn,
303 305 after,
304 306 force,
305 307 subrepos,
306 308 dryrun,
307 309 )
308 310 return (
309 311 removelargefiles(
310 312 ui, repo, False, matcher, uipathfn, dryrun, after=after, force=force
311 313 )
312 314 or result
313 315 )
314 316
315 317
316 318 @eh.wrapfunction(dirstate.dirstate, '_changing')
317 319 @contextlib.contextmanager
318 320 def _changing(orig, self, repo, change_type):
319 321 pre = sub_dirstate = getattr(self, '_sub_dirstate', None)
320 322 try:
321 323 lfd = getattr(self, '_large_file_dirstate', False)
322 324 if sub_dirstate is None and not lfd:
323 325 sub_dirstate = lfutil.openlfdirstate(repo.ui, repo)
324 326 self._sub_dirstate = sub_dirstate
325 327 if not lfd:
326 328 assert self._sub_dirstate is not None
327 329 with orig(self, repo, change_type):
328 330 if sub_dirstate is None:
329 331 yield
330 332 else:
331 333 with sub_dirstate._changing(repo, change_type):
332 334 yield
333 335 finally:
334 336 self._sub_dirstate = pre
335 337
336 338
337 339 @eh.wrapfunction(dirstate.dirstate, 'running_status')
338 340 @contextlib.contextmanager
339 341 def running_status(orig, self, repo):
340 342 pre = sub_dirstate = getattr(self, '_sub_dirstate', None)
341 343 try:
342 344 lfd = getattr(self, '_large_file_dirstate', False)
343 345 if sub_dirstate is None and not lfd:
344 346 sub_dirstate = lfutil.openlfdirstate(repo.ui, repo)
345 347 self._sub_dirstate = sub_dirstate
346 348 if not lfd:
347 349 assert self._sub_dirstate is not None
348 350 with orig(self, repo):
349 351 if sub_dirstate is None:
350 352 yield
351 353 else:
352 354 with sub_dirstate.running_status(repo):
353 355 yield
354 356 finally:
355 357 self._sub_dirstate = pre
356 358
357 359
358 360 @eh.wrapfunction(subrepo.hgsubrepo, 'status')
359 361 def overridestatusfn(orig, repo, rev2, **opts):
360 362 with lfstatus(repo._repo):
361 363 return orig(repo, rev2, **opts)
362 364
363 365
364 366 @eh.wrapcommand(b'status')
365 367 def overridestatus(orig, ui, repo, *pats, **opts):
366 368 with lfstatus(repo):
367 369 return orig(ui, repo, *pats, **opts)
368 370
369 371
370 372 @eh.wrapfunction(subrepo.hgsubrepo, 'dirty')
371 373 def overridedirty(orig, repo, ignoreupdate=False, missing=False):
372 374 with lfstatus(repo._repo):
373 375 return orig(repo, ignoreupdate=ignoreupdate, missing=missing)
374 376
375 377
376 378 @eh.wrapcommand(b'log')
377 379 def overridelog(orig, ui, repo, *pats, **opts):
378 380 def overridematchandpats(
379 381 orig,
380 382 ctx,
381 383 pats=(),
382 384 opts=None,
383 385 globbed=False,
384 386 default=b'relpath',
385 387 badfn=None,
386 388 ):
387 389 """Matcher that merges root directory with .hglf, suitable for log.
388 390 It is still possible to match .hglf directly.
389 391 For any listed files run log on the standin too.
390 392 matchfn tries both the given filename and with .hglf stripped.
391 393 """
392 394 if opts is None:
393 395 opts = {}
394 396 matchandpats = orig(ctx, pats, opts, globbed, default, badfn=badfn)
395 397 m, p = copy.copy(matchandpats)
396 398
397 399 if m.always():
398 400 # We want to match everything anyway, so there's no benefit trying
399 401 # to add standins.
400 402 return matchandpats
401 403
402 404 pats = set(p)
403 405
404 406 def fixpats(pat, tostandin=lfutil.standin):
405 407 if pat.startswith(b'set:'):
406 408 return pat
407 409
408 410 kindpat = matchmod._patsplit(pat, None)
409 411
410 412 if kindpat[0] is not None:
411 413 return kindpat[0] + b':' + tostandin(kindpat[1])
412 414 return tostandin(kindpat[1])
413 415
414 416 cwd = repo.getcwd()
415 417 if cwd:
416 418 hglf = lfutil.shortname
417 419 back = util.pconvert(repo.pathto(hglf)[: -len(hglf)])
418 420
419 421 def tostandin(f):
420 422 # The file may already be a standin, so truncate the back
421 423 # prefix and test before mangling it. This avoids turning
422 424 # 'glob:../.hglf/foo*' into 'glob:../.hglf/../.hglf/foo*'.
423 425 if f.startswith(back) and lfutil.splitstandin(f[len(back) :]):
424 426 return f
425 427
426 428 # An absolute path is from outside the repo, so truncate the
427 429 # path to the root before building the standin. Otherwise cwd
428 430 # is somewhere in the repo, relative to root, and needs to be
429 431 # prepended before building the standin.
430 432 if os.path.isabs(cwd):
431 433 f = f[len(back) :]
432 434 else:
433 435 f = cwd + b'/' + f
434 436 return back + lfutil.standin(f)
435 437
436 438 else:
437 439
438 440 def tostandin(f):
439 441 if lfutil.isstandin(f):
440 442 return f
441 443 return lfutil.standin(f)
442 444
443 445 pats.update(fixpats(f, tostandin) for f in p)
444 446
447 m._was_tampered_with = True
448
445 449 for i in range(0, len(m._files)):
446 450 # Don't add '.hglf' to m.files, since that is already covered by '.'
447 451 if m._files[i] == b'.':
448 452 continue
449 453 standin = lfutil.standin(m._files[i])
450 454 # If the "standin" is a directory, append instead of replace to
451 455 # support naming a directory on the command line with only
452 456 # largefiles. The original directory is kept to support normal
453 457 # files.
454 458 if standin in ctx:
455 459 m._files[i] = standin
456 460 elif m._files[i] not in ctx and repo.wvfs.isdir(standin):
457 461 m._files.append(standin)
458 462
459 463 m._fileset = set(m._files)
460 464 m.always = lambda: False
461 465 origmatchfn = m.matchfn
462 466
463 467 def lfmatchfn(f):
464 468 lf = lfutil.splitstandin(f)
465 469 if lf is not None and origmatchfn(lf):
466 470 return True
467 471 r = origmatchfn(f)
468 472 return r
469 473
470 474 m.matchfn = lfmatchfn
471 475
472 476 ui.debug(b'updated patterns: %s\n' % b', '.join(sorted(pats)))
473 477 return m, pats
474 478
475 479 # For hg log --patch, the match object is used in two different senses:
476 480 # (1) to determine what revisions should be printed out, and
477 481 # (2) to determine what files to print out diffs for.
478 482 # The magic matchandpats override should be used for case (1) but not for
479 483 # case (2).
480 484 oldmatchandpats = scmutil.matchandpats
481 485
482 486 def overridemakefilematcher(orig, repo, pats, opts, badfn=None):
483 487 wctx = repo[None]
484 488 match, pats = oldmatchandpats(wctx, pats, opts, badfn=badfn)
485 489 return lambda ctx: match
486 490
487 491 wrappedmatchandpats = extensions.wrappedfunction(
488 492 scmutil, 'matchandpats', overridematchandpats
489 493 )
490 494 wrappedmakefilematcher = extensions.wrappedfunction(
491 495 logcmdutil, '_makenofollowfilematcher', overridemakefilematcher
492 496 )
493 497 with wrappedmatchandpats, wrappedmakefilematcher:
494 498 return orig(ui, repo, *pats, **opts)
495 499
496 500
497 501 @eh.wrapcommand(
498 502 b'verify',
499 503 opts=[
500 504 (
501 505 b'',
502 506 b'large',
503 507 None,
504 508 _(b'verify that all largefiles in current revision exist'),
505 509 ),
506 510 (
507 511 b'',
508 512 b'lfa',
509 513 None,
510 514 _(b'verify largefiles in all revisions, not just current'),
511 515 ),
512 516 (
513 517 b'',
514 518 b'lfc',
515 519 None,
516 520 _(b'verify local largefile contents, not just existence'),
517 521 ),
518 522 ],
519 523 )
520 524 def overrideverify(orig, ui, repo, *pats, **opts):
521 525 large = opts.pop('large', False)
522 526 all = opts.pop('lfa', False)
523 527 contents = opts.pop('lfc', False)
524 528
525 529 result = orig(ui, repo, *pats, **opts)
526 530 if large or all or contents:
527 531 result = result or lfcommands.verifylfiles(ui, repo, all, contents)
528 532 return result
529 533
530 534
531 535 @eh.wrapcommand(
532 536 b'debugstate',
533 537 opts=[(b'', b'large', None, _(b'display largefiles dirstate'))],
534 538 )
535 539 def overridedebugstate(orig, ui, repo, *pats, **opts):
536 540 large = opts.pop('large', False)
537 541 if large:
538 542
539 543 class fakerepo:
540 544 dirstate = lfutil.openlfdirstate(ui, repo)
541 545
542 546 orig(ui, fakerepo, *pats, **opts)
543 547 else:
544 548 orig(ui, repo, *pats, **opts)
545 549
546 550
547 551 # Before starting the manifest merge, merge.updates will call
548 552 # _checkunknownfile to check if there are any files in the merged-in
549 553 # changeset that collide with unknown files in the working copy.
550 554 #
551 555 # The largefiles are seen as unknown, so this prevents us from merging
552 556 # in a file 'foo' if we already have a largefile with the same name.
553 557 #
554 558 # The overridden function filters the unknown files by removing any
555 559 # largefiles. This makes the merge proceed and we can then handle this
556 560 # case further in the overridden calculateupdates function below.
557 561 @eh.wrapfunction(merge, '_checkunknownfile')
558 562 def overridecheckunknownfile(
559 563 origfn, dirstate, wvfs, dircache, wctx, mctx, f, f2=None
560 564 ):
561 565 if lfutil.standin(dirstate.normalize(f)) in wctx:
562 566 return False
563 567 return origfn(dirstate, wvfs, dircache, wctx, mctx, f, f2)
564 568
565 569
566 570 # The manifest merge handles conflicts on the manifest level. We want
567 571 # to handle changes in largefile-ness of files at this level too.
568 572 #
569 573 # The strategy is to run the original calculateupdates and then process
570 574 # the action list it outputs. There are two cases we need to deal with:
571 575 #
572 576 # 1. Normal file in p1, largefile in p2. Here the largefile is
573 577 # detected via its standin file, which will enter the working copy
574 578 # with a "get" action. It is not "merge" since the standin is all
575 579 # Mercurial is concerned with at this level -- the link to the
576 580 # existing normal file is not relevant here.
577 581 #
578 582 # 2. Largefile in p1, normal file in p2. Here we get a "merge" action
579 583 # since the largefile will be present in the working copy and
580 584 # different from the normal file in p2. Mercurial therefore
581 585 # triggers a merge action.
582 586 #
583 587 # In both cases, we prompt the user and emit new actions to either
584 588 # remove the standin (if the normal file was kept) or to remove the
585 589 # normal file and get the standin (if the largefile was kept). The
586 590 # default prompt answer is to use the largefile version since it was
587 591 # presumably changed on purpose.
588 592 #
589 593 # Finally, the merge.applyupdates function will then take care of
590 594 # writing the files into the working copy and lfcommands.updatelfiles
591 595 # will update the largefiles.
592 596 @eh.wrapfunction(merge, 'calculateupdates')
593 597 def overridecalculateupdates(
594 598 origfn, repo, p1, p2, pas, branchmerge, force, acceptremote, *args, **kwargs
595 599 ):
596 600 overwrite = force and not branchmerge
597 601 mresult = origfn(
598 602 repo, p1, p2, pas, branchmerge, force, acceptremote, *args, **kwargs
599 603 )
600 604
601 605 if overwrite:
602 606 return mresult
603 607
604 608 # Convert to dictionary with filename as key and action as value.
605 609 lfiles = set()
606 610 for f in mresult.files():
607 611 splitstandin = lfutil.splitstandin(f)
608 612 if splitstandin is not None and splitstandin in p1:
609 613 lfiles.add(splitstandin)
610 614 elif lfutil.standin(f) in p1:
611 615 lfiles.add(f)
612 616
613 617 for lfile in sorted(lfiles):
614 618 standin = lfutil.standin(lfile)
615 619 (lm, largs, lmsg) = mresult.getfile(lfile, (None, None, None))
616 620 (sm, sargs, smsg) = mresult.getfile(standin, (None, None, None))
617 621
618 622 if sm in (ACTION_GET, ACTION_DELETED_CHANGED) and lm != ACTION_REMOVE:
619 623 if sm == ACTION_DELETED_CHANGED:
620 624 f1, f2, fa, move, anc = sargs
621 625 sargs = (p2[f2].flags(), False)
622 626 # Case 1: normal file in the working copy, largefile in
623 627 # the second parent
624 628 usermsg = (
625 629 _(
626 630 b'remote turned local normal file %s into a largefile\n'
627 631 b'use (l)argefile or keep (n)ormal file?'
628 632 b'$$ &Largefile $$ &Normal file'
629 633 )
630 634 % lfile
631 635 )
632 636 if repo.ui.promptchoice(usermsg, 0) == 0: # pick remote largefile
633 637 mresult.addfile(
634 638 lfile, ACTION_REMOVE, None, b'replaced by standin'
635 639 )
636 640 mresult.addfile(standin, ACTION_GET, sargs, b'replaces standin')
637 641 else: # keep local normal file
638 642 mresult.addfile(lfile, ACTION_KEEP, None, b'replaces standin')
639 643 if branchmerge:
640 644 mresult.addfile(
641 645 standin,
642 646 ACTION_KEEP,
643 647 None,
644 648 b'replaced by non-standin',
645 649 )
646 650 else:
647 651 mresult.addfile(
648 652 standin,
649 653 ACTION_REMOVE,
650 654 None,
651 655 b'replaced by non-standin',
652 656 )
653 657 if lm in (ACTION_GET, ACTION_DELETED_CHANGED) and sm != ACTION_REMOVE:
654 658 if lm == ACTION_DELETED_CHANGED:
655 659 f1, f2, fa, move, anc = largs
656 660 largs = (p2[f2].flags(), False)
657 661 # Case 2: largefile in the working copy, normal file in
658 662 # the second parent
659 663 usermsg = (
660 664 _(
661 665 b'remote turned local largefile %s into a normal file\n'
662 666 b'keep (l)argefile or use (n)ormal file?'
663 667 b'$$ &Largefile $$ &Normal file'
664 668 )
665 669 % lfile
666 670 )
667 671 if repo.ui.promptchoice(usermsg, 0) == 0: # keep local largefile
668 672 if branchmerge:
669 673 # largefile can be restored from standin safely
670 674 mresult.addfile(
671 675 lfile,
672 676 ACTION_KEEP,
673 677 None,
674 678 b'replaced by standin',
675 679 )
676 680 mresult.addfile(
677 681 standin, ACTION_KEEP, None, b'replaces standin'
678 682 )
679 683 else:
680 684 # "lfile" should be marked as "removed" without
681 685 # removal of itself
682 686 mresult.addfile(
683 687 lfile,
684 688 MERGE_ACTION_LARGEFILE_MARK_REMOVED,
685 689 None,
686 690 b'forget non-standin largefile',
687 691 )
688 692
689 693 # linear-merge should treat this largefile as 're-added'
690 694 mresult.addfile(standin, ACTION_ADD, None, b'keep standin')
691 695 else: # pick remote normal file
692 696 mresult.addfile(lfile, ACTION_GET, largs, b'replaces standin')
693 697 mresult.addfile(
694 698 standin,
695 699 ACTION_REMOVE,
696 700 None,
697 701 b'replaced by non-standin',
698 702 )
699 703
700 704 return mresult
701 705
702 706
703 707 @eh.wrapfunction(mergestatemod, 'recordupdates')
704 708 def mergerecordupdates(orig, repo, actions, branchmerge, getfiledata):
705 709 if MERGE_ACTION_LARGEFILE_MARK_REMOVED in actions:
706 710 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
707 711 for lfile, args, msg in actions[MERGE_ACTION_LARGEFILE_MARK_REMOVED]:
708 712 # this should be executed before 'orig', to execute 'remove'
709 713 # before all other actions
710 714 repo.dirstate.update_file(lfile, p1_tracked=True, wc_tracked=False)
711 715 # make sure lfile doesn't get synclfdirstate'd as normal
712 716 lfdirstate.update_file(lfile, p1_tracked=False, wc_tracked=True)
713 717
714 718 return orig(repo, actions, branchmerge, getfiledata)
715 719
716 720
717 721 # Override filemerge to prompt the user about how they wish to merge
718 722 # largefiles. This will handle identical edits without prompting the user.
719 723 @eh.wrapfunction(filemerge, 'filemerge')
720 724 def overridefilemerge(
721 725 origfn, repo, wctx, mynode, orig, fcd, fco, fca, labels=None
722 726 ):
723 727 if not lfutil.isstandin(orig) or fcd.isabsent() or fco.isabsent():
724 728 return origfn(repo, wctx, mynode, orig, fcd, fco, fca, labels=labels)
725 729
726 730 ahash = lfutil.readasstandin(fca).lower()
727 731 dhash = lfutil.readasstandin(fcd).lower()
728 732 ohash = lfutil.readasstandin(fco).lower()
729 733 if (
730 734 ohash != ahash
731 735 and ohash != dhash
732 736 and (
733 737 dhash == ahash
734 738 or repo.ui.promptchoice(
735 739 _(
736 740 b'largefile %s has a merge conflict\nancestor was %s\n'
737 741 b'you can keep (l)ocal %s or take (o)ther %s.\n'
738 742 b'what do you want to do?'
739 743 b'$$ &Local $$ &Other'
740 744 )
741 745 % (lfutil.splitstandin(orig), ahash, dhash, ohash),
742 746 0,
743 747 )
744 748 == 1
745 749 )
746 750 ):
747 751 repo.wwrite(fcd.path(), fco.data(), fco.flags())
748 752 return 0, False
749 753
750 754
751 755 @eh.wrapfunction(copiesmod, 'pathcopies')
752 756 def copiespathcopies(orig, ctx1, ctx2, match=None):
753 757 copies = orig(ctx1, ctx2, match=match)
754 758 updated = {}
755 759
756 760 for k, v in copies.items():
757 761 updated[lfutil.splitstandin(k) or k] = lfutil.splitstandin(v) or v
758 762
759 763 return updated
760 764
761 765
762 766 # Copy first changes the matchers to match standins instead of
763 767 # largefiles. Then it overrides util.copyfile; in that function it
764 768 # checks if the destination largefile already exists. It also keeps a
765 769 # list of copied files so that the largefiles can be copied and the
766 770 # dirstate updated.
767 771 @eh.wrapfunction(cmdutil, 'copy')
768 772 def overridecopy(orig, ui, repo, pats, opts, rename=False):
769 773 # doesn't remove largefile on rename
770 774 if len(pats) < 2:
771 775 # this isn't legal, let the original function deal with it
772 776 return orig(ui, repo, pats, opts, rename)
773 777
774 778 # This could copy both lfiles and normal files in one command,
775 779 # but we don't want to do that. First replace their matcher to
776 780 # only match normal files and run it, then replace it to just
777 781 # match largefiles and run it again.
778 782 nonormalfiles = False
779 783 nolfiles = False
780 784 manifest = repo[None].manifest()
781 785
782 786 def normalfilesmatchfn(
783 787 orig,
784 788 ctx,
785 789 pats=(),
786 790 opts=None,
787 791 globbed=False,
788 792 default=b'relpath',
789 793 badfn=None,
790 794 ):
791 795 if opts is None:
792 796 opts = {}
793 797 match = orig(ctx, pats, opts, globbed, default, badfn=badfn)
794 798 return composenormalfilematcher(match, manifest)
795 799
796 800 with extensions.wrappedfunction(scmutil, 'match', normalfilesmatchfn):
797 801 try:
798 802 result = orig(ui, repo, pats, opts, rename)
799 803 except error.Abort as e:
800 804 if e.message != _(b'no files to copy'):
801 805 raise e
802 806 else:
803 807 nonormalfiles = True
804 808 result = 0
805 809
806 810 # The first rename can cause our current working directory to be removed.
807 811 # In that case there is nothing left to copy/rename so just quit.
808 812 try:
809 813 repo.getcwd()
810 814 except OSError:
811 815 return result
812 816
813 817 def makestandin(relpath):
814 818 path = pathutil.canonpath(repo.root, repo.getcwd(), relpath)
815 819 return repo.wvfs.join(lfutil.standin(path))
816 820
817 821 fullpats = scmutil.expandpats(pats)
818 822 dest = fullpats[-1]
819 823
820 824 if os.path.isdir(dest):
821 825 if not os.path.isdir(makestandin(dest)):
822 826 os.makedirs(makestandin(dest))
823 827
824 828 try:
825 829 # When we call orig below it creates the standins but we don't add
826 830 # them to the dir state until later so lock during that time.
827 831 wlock = repo.wlock()
828 832
829 833 manifest = repo[None].manifest()
830 834
831 835 def overridematch(
832 836 orig,
833 837 ctx,
834 838 pats=(),
835 839 opts=None,
836 840 globbed=False,
837 841 default=b'relpath',
838 842 badfn=None,
839 843 ):
840 844 if opts is None:
841 845 opts = {}
842 846 newpats = []
843 847 # The patterns were previously mangled to add the standin
844 848 # directory; we need to remove that now
845 849 for pat in pats:
846 850 if matchmod.patkind(pat) is None and lfutil.shortname in pat:
847 851 newpats.append(pat.replace(lfutil.shortname, b''))
848 852 else:
849 853 newpats.append(pat)
850 854 match = orig(ctx, newpats, opts, globbed, default, badfn=badfn)
851 855 m = copy.copy(match)
856 m._was_tampered_with = True
852 857 lfile = lambda f: lfutil.standin(f) in manifest
853 858 m._files = [lfutil.standin(f) for f in m._files if lfile(f)]
854 859 m._fileset = set(m._files)
855 860 origmatchfn = m.matchfn
856 861
857 862 def matchfn(f):
858 863 lfile = lfutil.splitstandin(f)
859 864 return (
860 865 lfile is not None
861 866 and (f in manifest)
862 867 and origmatchfn(lfile)
863 868 or None
864 869 )
865 870
866 871 m.matchfn = matchfn
867 872 return m
868 873
869 874 listpats = []
870 875 for pat in pats:
871 876 if matchmod.patkind(pat) is not None:
872 877 listpats.append(pat)
873 878 else:
874 879 listpats.append(makestandin(pat))
875 880
876 881 copiedfiles = []
877 882
878 883 def overridecopyfile(orig, src, dest, *args, **kwargs):
879 884 if lfutil.shortname in src and dest.startswith(
880 885 repo.wjoin(lfutil.shortname)
881 886 ):
882 887 destlfile = dest.replace(lfutil.shortname, b'')
883 888 if not opts[b'force'] and os.path.exists(destlfile):
884 889 raise IOError(
885 890 b'', _(b'destination largefile already exists')
886 891 )
887 892 copiedfiles.append((src, dest))
888 893 orig(src, dest, *args, **kwargs)
889 894
890 895 with extensions.wrappedfunction(util, 'copyfile', overridecopyfile):
891 896 with extensions.wrappedfunction(scmutil, 'match', overridematch):
892 897 result += orig(ui, repo, listpats, opts, rename)
893 898
894 899 lfdirstate = lfutil.openlfdirstate(ui, repo)
895 900 for (src, dest) in copiedfiles:
896 901 if lfutil.shortname in src and dest.startswith(
897 902 repo.wjoin(lfutil.shortname)
898 903 ):
899 904 srclfile = src.replace(repo.wjoin(lfutil.standin(b'')), b'')
900 905 destlfile = dest.replace(repo.wjoin(lfutil.standin(b'')), b'')
901 906 destlfiledir = repo.wvfs.dirname(repo.wjoin(destlfile)) or b'.'
902 907 if not os.path.isdir(destlfiledir):
903 908 os.makedirs(destlfiledir)
904 909 if rename:
905 910 os.rename(repo.wjoin(srclfile), repo.wjoin(destlfile))
906 911
907 912 # The file is gone, but this deletes any empty parent
908 913 # directories as a side-effect.
909 914 repo.wvfs.unlinkpath(srclfile, ignoremissing=True)
910 915 lfdirstate.set_untracked(srclfile)
911 916 else:
912 917 util.copyfile(repo.wjoin(srclfile), repo.wjoin(destlfile))
913 918
914 919 lfdirstate.set_tracked(destlfile)
915 920 lfdirstate.write(repo.currenttransaction())
916 921 except error.Abort as e:
917 922 if e.message != _(b'no files to copy'):
918 923 raise e
919 924 else:
920 925 nolfiles = True
921 926 finally:
922 927 wlock.release()
923 928
924 929 if nolfiles and nonormalfiles:
925 930 raise error.Abort(_(b'no files to copy'))
926 931
927 932 return result
928 933
929 934
930 935 # When the user calls revert, we have to be careful to not revert any
931 936 # changes to other largefiles accidentally. This means we have to keep
932 937 # track of the largefiles that are being reverted so we only pull down
933 938 # the necessary largefiles.
934 939 #
935 940 # Standins are only updated (to match the hash of largefiles) before
936 941 # commits. Update the standins then run the original revert, changing
937 942 # the matcher to hit standins instead of largefiles. Based on the
938 943 # resulting standins update the largefiles.
939 944 @eh.wrapfunction(cmdutil, 'revert')
940 945 def overriderevert(orig, ui, repo, ctx, *pats, **opts):
941 946 # Because we put the standins in a bad state (by updating them)
942 947 # and then return them to a correct state we need to lock to
943 948 # prevent others from changing them in their incorrect state.
944 949 with repo.wlock(), repo.dirstate.running_status(repo):
945 950 lfdirstate = lfutil.openlfdirstate(ui, repo)
946 951 s = lfutil.lfdirstatestatus(lfdirstate, repo)
947 952 lfdirstate.write(repo.currenttransaction())
948 953 for lfile in s.modified:
949 954 lfutil.updatestandin(repo, lfile, lfutil.standin(lfile))
950 955 for lfile in s.deleted:
951 956 fstandin = lfutil.standin(lfile)
952 957 if repo.wvfs.exists(fstandin):
953 958 repo.wvfs.unlink(fstandin)
954 959
955 960 oldstandins = lfutil.getstandinsstate(repo)
956 961
957 962 def overridematch(
958 963 orig,
959 964 mctx,
960 965 pats=(),
961 966 opts=None,
962 967 globbed=False,
963 968 default=b'relpath',
964 969 badfn=None,
965 970 ):
966 971 if opts is None:
967 972 opts = {}
968 973 match = orig(mctx, pats, opts, globbed, default, badfn=badfn)
969 974 m = copy.copy(match)
975 m._was_tampered_with = True
970 976
971 977 # revert supports recursing into subrepos, and though largefiles
972 978 # currently doesn't work correctly in that case, this match is
973 979 # called, so the lfdirstate above may not be the correct one for
974 980 # this invocation of match.
975 981 lfdirstate = lfutil.openlfdirstate(
976 982 mctx.repo().ui, mctx.repo(), False
977 983 )
978 984
979 985 wctx = repo[None]
980 986 matchfiles = []
981 987 for f in m._files:
982 988 standin = lfutil.standin(f)
983 989 if standin in ctx or standin in mctx:
984 990 matchfiles.append(standin)
985 991 elif standin in wctx or lfdirstate.get_entry(f).removed:
986 992 continue
987 993 else:
988 994 matchfiles.append(f)
989 995 m._files = matchfiles
990 996 m._fileset = set(m._files)
991 997 origmatchfn = m.matchfn
992 998
993 999 def matchfn(f):
994 1000 lfile = lfutil.splitstandin(f)
995 1001 if lfile is not None:
996 1002 return origmatchfn(lfile) and (f in ctx or f in mctx)
997 1003 return origmatchfn(f)
998 1004
999 1005 m.matchfn = matchfn
1000 1006 return m
1001 1007
1002 1008 with extensions.wrappedfunction(scmutil, 'match', overridematch):
1003 1009 orig(ui, repo, ctx, *pats, **opts)
1004 1010
1005 1011 newstandins = lfutil.getstandinsstate(repo)
1006 1012 filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
1007 1013 # lfdirstate should be 'normallookup'-ed for updated files,
1008 1014 # because reverting doesn't touch dirstate for 'normal' files
1009 1015 # when the target revision is explicitly specified: in such a case,
1010 1016 # 'n' and a valid timestamp in dirstate don't ensure 'clean'
1011 1017 # of target (standin) file.
1012 1018 lfcommands.updatelfiles(
1013 1019 ui, repo, filelist, printmessage=False, normallookup=True
1014 1020 )
1015 1021
1016 1022
1017 1023 # after pulling changesets, we need to take some extra care to get
1018 1024 # largefiles updated remotely
1019 1025 @eh.wrapcommand(
1020 1026 b'pull',
1021 1027 opts=[
1022 1028 (
1023 1029 b'',
1024 1030 b'all-largefiles',
1025 1031 None,
1026 1032 _(b'download all pulled versions of largefiles (DEPRECATED)'),
1027 1033 ),
1028 1034 (
1029 1035 b'',
1030 1036 b'lfrev',
1031 1037 [],
1032 1038 _(b'download largefiles for these revisions'),
1033 1039 _(b'REV'),
1034 1040 ),
1035 1041 ],
1036 1042 )
1037 1043 def overridepull(orig, ui, repo, source=None, **opts):
1038 1044 revsprepull = len(repo)
1039 1045 if not source:
1040 1046 source = b'default'
1041 1047 repo.lfpullsource = source
1042 1048 result = orig(ui, repo, source, **opts)
1043 1049 revspostpull = len(repo)
1044 1050 lfrevs = opts.get('lfrev', [])
1045 1051 if opts.get('all_largefiles'):
1046 1052 lfrevs.append(b'pulled()')
1047 1053 if lfrevs and revspostpull > revsprepull:
1048 1054 numcached = 0
1049 1055 repo.firstpulled = revsprepull # for pulled() revset expression
1050 1056 try:
1051 1057 for rev in logcmdutil.revrange(repo, lfrevs):
1052 1058 ui.note(_(b'pulling largefiles for revision %d\n') % rev)
1053 1059 (cached, missing) = lfcommands.cachelfiles(ui, repo, rev)
1054 1060 numcached += len(cached)
1055 1061 finally:
1056 1062 del repo.firstpulled
1057 1063 ui.status(_(b"%d largefiles cached\n") % numcached)
1058 1064 return result
1059 1065
1060 1066
1061 1067 @eh.wrapcommand(
1062 1068 b'push',
1063 1069 opts=[
1064 1070 (
1065 1071 b'',
1066 1072 b'lfrev',
1067 1073 [],
1068 1074 _(b'upload largefiles for these revisions'),
1069 1075 _(b'REV'),
1070 1076 )
1071 1077 ],
1072 1078 )
1073 1079 def overridepush(orig, ui, repo, *args, **kwargs):
1074 1080 """Override push command and store --lfrev parameters in opargs"""
1075 1081 lfrevs = kwargs.pop('lfrev', None)
1076 1082 if lfrevs:
1077 1083 opargs = kwargs.setdefault('opargs', {})
1078 1084 opargs[b'lfrevs'] = logcmdutil.revrange(repo, lfrevs)
1079 1085 return orig(ui, repo, *args, **kwargs)
1080 1086
1081 1087
1082 1088 @eh.wrapfunction(exchange, 'pushoperation')
1083 1089 def exchangepushoperation(orig, *args, **kwargs):
1084 1090 """Override pushoperation constructor and store lfrevs parameter"""
1085 1091 lfrevs = kwargs.pop('lfrevs', None)
1086 1092 pushop = orig(*args, **kwargs)
1087 1093 pushop.lfrevs = lfrevs
1088 1094 return pushop
1089 1095
1090 1096
1091 1097 @eh.revsetpredicate(b'pulled()')
1092 1098 def pulledrevsetsymbol(repo, subset, x):
1093 1099 """Changesets that just has been pulled.
1094 1100
1095 1101 Only available with largefiles, for use in pull --lfrev expressions.
1096 1102
1097 1103 .. container:: verbose
1098 1104
1099 1105 Some examples:
1100 1106
1101 1107 - pull largefiles for all new changesets::
1102 1108
1103 1109 hg pull --lfrev "pulled()"
1104 1110
1105 1111 - pull largefiles for all new branch heads::
1106 1112
1107 1113 hg pull --lfrev "head(pulled()) and not closed()"
1108 1114
1109 1115 """
1110 1116
1111 1117 try:
1112 1118 firstpulled = repo.firstpulled
1113 1119 except AttributeError:
1114 1120 raise error.Abort(_(b"pulled() only available in --lfrev"))
1115 1121 return smartset.baseset([r for r in subset if r >= firstpulled])
1116 1122
1117 1123
1118 1124 @eh.wrapcommand(
1119 1125 b'clone',
1120 1126 opts=[
1121 1127 (
1122 1128 b'',
1123 1129 b'all-largefiles',
1124 1130 None,
1125 1131 _(b'download all versions of all largefiles'),
1126 1132 )
1127 1133 ],
1128 1134 )
1129 1135 def overrideclone(orig, ui, source, dest=None, **opts):
1130 1136 d = dest
1131 1137 if d is None:
1132 1138 d = hg.defaultdest(source)
1133 1139 if opts.get('all_largefiles') and not hg.islocal(d):
1134 1140 raise error.Abort(
1135 1141 _(b'--all-largefiles is incompatible with non-local destination %s')
1136 1142 % d
1137 1143 )
1138 1144
1139 1145 return orig(ui, source, dest, **opts)
1140 1146
1141 1147
1142 1148 @eh.wrapfunction(hg, 'clone')
1143 1149 def hgclone(orig, ui, opts, *args, **kwargs):
1144 1150 result = orig(ui, opts, *args, **kwargs)
1145 1151
1146 1152 if result is not None:
1147 1153 sourcerepo, destrepo = result
1148 1154 repo = destrepo.local()
1149 1155
1150 1156 # When cloning to a remote repo (like through SSH), no repo is available
1151 1157 # from the peer. Therefore the largefiles can't be downloaded and the
1152 1158 # hgrc can't be updated.
1153 1159 if not repo:
1154 1160 return result
1155 1161
1156 1162     # Caching is implicitly limited to the 'rev' option, since the dest repo was
1157 1163     # truncated at that point. The user may expect a download count with
1158 1164     # this option, so attempt it whether or not this is a largefile repo.
1159 1165 if opts.get(b'all_largefiles'):
1160 1166 success, missing = lfcommands.downloadlfiles(ui, repo)
1161 1167
1162 1168 if missing != 0:
1163 1169 return None
1164 1170
1165 1171 return result
1166 1172
1167 1173
1168 1174 @eh.wrapcommand(b'rebase', extension=b'rebase')
1169 1175 def overriderebasecmd(orig, ui, repo, **opts):
1170 1176 if not hasattr(repo, '_largefilesenabled'):
1171 1177 return orig(ui, repo, **opts)
1172 1178
1173 1179 resuming = opts.get('continue')
1174 1180 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
1175 1181 repo._lfstatuswriters.append(lambda *msg, **opts: None)
1176 1182 try:
1177 1183 with ui.configoverride(
1178 1184 {(b'rebase', b'experimental.inmemory'): False}, b"largefiles"
1179 1185 ):
1180 1186 return orig(ui, repo, **opts)
1181 1187 finally:
1182 1188 repo._lfstatuswriters.pop()
1183 1189 repo._lfcommithooks.pop()
1184 1190
1185 1191
1186 1192 @eh.extsetup
1187 1193 def overriderebase(ui):
1188 1194 try:
1189 1195 rebase = extensions.find(b'rebase')
1190 1196 except KeyError:
1191 1197 pass
1192 1198 else:
1193 1199
1194 1200 def _dorebase(orig, *args, **kwargs):
1195 1201 kwargs['inmemory'] = False
1196 1202 return orig(*args, **kwargs)
1197 1203
1198 1204 extensions.wrapfunction(rebase, '_dorebase', _dorebase)
1199 1205
1200 1206
1201 1207 @eh.wrapcommand(b'archive')
1202 1208 def overridearchivecmd(orig, ui, repo, dest, **opts):
1203 1209 with lfstatus(repo.unfiltered()):
1204 1210 return orig(ui, repo.unfiltered(), dest, **opts)
1205 1211
1206 1212
1207 1213 @eh.wrapfunction(webcommands, 'archive')
1208 1214 def hgwebarchive(orig, web):
1209 1215 with lfstatus(web.repo):
1210 1216 return orig(web)
1211 1217
1212 1218
1213 1219 @eh.wrapfunction(archival, 'archive')
1214 1220 def overridearchive(
1215 1221 orig,
1216 1222 repo,
1217 1223 dest,
1218 1224 node,
1219 1225 kind,
1220 1226 decode=True,
1221 1227 match=None,
1222 1228 prefix=b'',
1223 1229 mtime=None,
1224 1230 subrepos=None,
1225 1231 ):
1226 1232 # For some reason setting repo.lfstatus in hgwebarchive only changes the
1227 1233 # unfiltered repo's attr, so check that as well.
1228 1234 if not repo.lfstatus and not repo.unfiltered().lfstatus:
1229 1235 return orig(
1230 1236 repo, dest, node, kind, decode, match, prefix, mtime, subrepos
1231 1237 )
1232 1238
1233 1239 # No need to lock because we are only reading history and
1234 1240 # largefile caches, neither of which are modified.
1235 1241 if node is not None:
1236 1242 lfcommands.cachelfiles(repo.ui, repo, node)
1237 1243
1238 1244 if kind not in archival.archivers:
1239 1245 raise error.Abort(_(b"unknown archive type '%s'") % kind)
1240 1246
1241 1247 ctx = repo[node]
1242 1248
1243 1249 if kind == b'files':
1244 1250 if prefix:
1245 1251 raise error.Abort(_(b'cannot give prefix when archiving to files'))
1246 1252 else:
1247 1253 prefix = archival.tidyprefix(dest, kind, prefix)
1248 1254
1249 1255 def write(name, mode, islink, getdata):
1250 1256 if match and not match(name):
1251 1257 return
1252 1258 data = getdata()
1253 1259 if decode:
1254 1260 data = repo.wwritedata(name, data)
1255 1261 archiver.addfile(prefix + name, mode, islink, data)
1256 1262
1257 1263 archiver = archival.archivers[kind](dest, mtime or ctx.date()[0])
1258 1264
1259 1265 if repo.ui.configbool(b"ui", b"archivemeta"):
1260 1266 write(
1261 1267 b'.hg_archival.txt',
1262 1268 0o644,
1263 1269 False,
1264 1270 lambda: archival.buildmetadata(ctx),
1265 1271 )
1266 1272
1267 1273 for f in ctx:
1268 1274 ff = ctx.flags(f)
1269 1275 getdata = ctx[f].data
1270 1276 lfile = lfutil.splitstandin(f)
1271 1277 if lfile is not None:
1272 1278 if node is not None:
1273 1279 path = lfutil.findfile(repo, getdata().strip())
1274 1280
1275 1281 if path is None:
1276 1282 raise error.Abort(
1277 1283 _(
1278 1284 b'largefile %s not found in repo store or system cache'
1279 1285 )
1280 1286 % lfile
1281 1287 )
1282 1288 else:
1283 1289 path = lfile
1284 1290
1285 1291 f = lfile
1286 1292
1287 1293 getdata = lambda: util.readfile(path)
1288 1294 write(f, b'x' in ff and 0o755 or 0o644, b'l' in ff, getdata)
1289 1295
1290 1296 if subrepos:
1291 1297 for subpath in sorted(ctx.substate):
1292 1298 sub = ctx.workingsub(subpath)
1293 1299 submatch = matchmod.subdirmatcher(subpath, match)
1294 1300 subprefix = prefix + subpath + b'/'
1295 1301
1296 1302 # TODO: Only hgsubrepo instances have `_repo`, so figure out how to
1297 1303 # infer and possibly set lfstatus in hgsubrepoarchive. That would
1298 1304 # allow only hgsubrepos to set this, instead of the current scheme
1299 1305 # where the parent sets this for the child.
1300 1306 with (
1301 1307 hasattr(sub, '_repo')
1302 1308 and lfstatus(sub._repo)
1303 1309 or util.nullcontextmanager()
1304 1310 ):
1305 1311 sub.archive(archiver, subprefix, submatch)
1306 1312
1307 1313 archiver.done()
1308 1314
1309 1315
1310 1316 @eh.wrapfunction(subrepo.hgsubrepo, 'archive')
1311 1317 def hgsubrepoarchive(orig, repo, archiver, prefix, match=None, decode=True):
1312 1318 lfenabled = hasattr(repo._repo, '_largefilesenabled')
1313 1319 if not lfenabled or not repo._repo.lfstatus:
1314 1320 return orig(repo, archiver, prefix, match, decode)
1315 1321
1316 1322 repo._get(repo._state + (b'hg',))
1317 1323 rev = repo._state[1]
1318 1324 ctx = repo._repo[rev]
1319 1325
1320 1326 if ctx.node() is not None:
1321 1327 lfcommands.cachelfiles(repo.ui, repo._repo, ctx.node())
1322 1328
1323 1329 def write(name, mode, islink, getdata):
1324 1330 # At this point, the standin has been replaced with the largefile name,
1325 1331 # so the normal matcher works here without the lfutil variants.
1326 1332         if match and not match(name):
1327 1333 return
1328 1334 data = getdata()
1329 1335 if decode:
1330 1336 data = repo._repo.wwritedata(name, data)
1331 1337
1332 1338 archiver.addfile(prefix + name, mode, islink, data)
1333 1339
1334 1340 for f in ctx:
1335 1341 ff = ctx.flags(f)
1336 1342 getdata = ctx[f].data
1337 1343 lfile = lfutil.splitstandin(f)
1338 1344 if lfile is not None:
1339 1345 if ctx.node() is not None:
1340 1346 path = lfutil.findfile(repo._repo, getdata().strip())
1341 1347
1342 1348 if path is None:
1343 1349 raise error.Abort(
1344 1350 _(
1345 1351 b'largefile %s not found in repo store or system cache'
1346 1352 )
1347 1353 % lfile
1348 1354 )
1349 1355 else:
1350 1356 path = lfile
1351 1357
1352 1358 f = lfile
1353 1359
1354 1360 getdata = lambda: util.readfile(os.path.join(prefix, path))
1355 1361
1356 1362 write(f, b'x' in ff and 0o755 or 0o644, b'l' in ff, getdata)
1357 1363
1358 1364 for subpath in sorted(ctx.substate):
1359 1365 sub = ctx.workingsub(subpath)
1360 1366 submatch = matchmod.subdirmatcher(subpath, match)
1361 1367 subprefix = prefix + subpath + b'/'
1362 1368 # TODO: Only hgsubrepo instances have `_repo`, so figure out how to
1363 1369 # infer and possibly set lfstatus at the top of this function. That
1364 1370 # would allow only hgsubrepos to set this, instead of the current scheme
1365 1371 # where the parent sets this for the child.
1366 1372 with (
1367 1373 hasattr(sub, '_repo')
1368 1374 and lfstatus(sub._repo)
1369 1375 or util.nullcontextmanager()
1370 1376 ):
1371 1377 sub.archive(archiver, subprefix, submatch, decode)
1372 1378
1373 1379
1374 1380 # If a largefile is modified, the change is not reflected in its
1375 1381 # standin until a commit. cmdutil.bailifchanged() raises an exception
1376 1382 # if the repo has uncommitted changes. Wrap it to also check if
1377 1383 # largefiles were changed. This is used by bisect, backout and fetch.
1378 1384 @eh.wrapfunction(cmdutil, 'bailifchanged')
1379 1385 def overridebailifchanged(orig, repo, *args, **kwargs):
1380 1386 orig(repo, *args, **kwargs)
1381 1387 with lfstatus(repo):
1382 1388 s = repo.status()
1383 1389 if s.modified or s.added or s.removed or s.deleted:
1384 1390 raise error.Abort(_(b'uncommitted changes'))
1385 1391
1386 1392
1387 1393 @eh.wrapfunction(cmdutil, 'postcommitstatus')
1388 1394 def postcommitstatus(orig, repo, *args, **kwargs):
1389 1395 with lfstatus(repo):
1390 1396 return orig(repo, *args, **kwargs)
1391 1397
1392 1398
1393 1399 @eh.wrapfunction(cmdutil, 'forget')
1394 1400 def cmdutilforget(
1395 1401 orig, ui, repo, match, prefix, uipathfn, explicitonly, dryrun, interactive
1396 1402 ):
1397 1403 normalmatcher = composenormalfilematcher(match, repo[None].manifest())
1398 1404 bad, forgot = orig(
1399 1405 ui,
1400 1406 repo,
1401 1407 normalmatcher,
1402 1408 prefix,
1403 1409 uipathfn,
1404 1410 explicitonly,
1405 1411 dryrun,
1406 1412 interactive,
1407 1413 )
1408 1414 m = composelargefilematcher(match, repo[None].manifest())
1409 1415
1410 1416 with lfstatus(repo):
1411 1417 s = repo.status(match=m, clean=True)
1412 1418 manifest = repo[None].manifest()
1413 1419 forget = sorted(s.modified + s.added + s.deleted + s.clean)
1414 1420 forget = [f for f in forget if lfutil.standin(f) in manifest]
1415 1421
1416 1422 for f in forget:
1417 1423 fstandin = lfutil.standin(f)
1418 1424 if fstandin not in repo.dirstate and not repo.wvfs.isdir(fstandin):
1419 1425 ui.warn(
1420 1426 _(b'not removing %s: file is already untracked\n') % uipathfn(f)
1421 1427 )
1422 1428 bad.append(f)
1423 1429
1424 1430 for f in forget:
1425 1431 if ui.verbose or not m.exact(f):
1426 1432 ui.status(_(b'removing %s\n') % uipathfn(f))
1427 1433
1428 1434 # Need to lock because standin files are deleted then removed from the
1429 1435 # repository and we could race in-between.
1430 1436 with repo.wlock():
1431 1437 lfdirstate = lfutil.openlfdirstate(ui, repo)
1432 1438 for f in forget:
1433 1439 lfdirstate.set_untracked(f)
1434 1440 lfdirstate.write(repo.currenttransaction())
1435 1441 standins = [lfutil.standin(f) for f in forget]
1436 1442 for f in standins:
1437 1443 repo.wvfs.unlinkpath(f, ignoremissing=True)
1438 1444 rejected = repo[None].forget(standins)
1439 1445
1440 1446 bad.extend(f for f in rejected if f in m.files())
1441 1447 forgot.extend(f for f in forget if f not in rejected)
1442 1448 return bad, forgot
1443 1449
1444 1450
1445 1451 def _getoutgoings(repo, other, missing, addfunc):
1446 1452 """get pairs of filename and largefile hash in outgoing revisions
1447 1453 in 'missing'.
1448 1454
1449 1455     largefiles already existing on the 'other' repository are ignored.
1450 1456
1451 1457 'addfunc' is invoked with each unique pairs of filename and
1452 1458 largefile hash value.
1453 1459 """
1454 1460 knowns = set()
1455 1461 lfhashes = set()
1456 1462
1457 1463 def dedup(fn, lfhash):
1458 1464 k = (fn, lfhash)
1459 1465 if k not in knowns:
1460 1466 knowns.add(k)
1461 1467 lfhashes.add(lfhash)
1462 1468
1463 1469 lfutil.getlfilestoupload(repo, missing, dedup)
1464 1470 if lfhashes:
1465 1471 lfexists = storefactory.openstore(repo, other).exists(lfhashes)
1466 1472 for fn, lfhash in knowns:
1467 1473 if not lfexists[lfhash]: # lfhash doesn't exist on "other"
1468 1474 addfunc(fn, lfhash)
1469 1475
1470 1476
1471 1477 def outgoinghook(ui, repo, other, opts, missing):
1472 1478 if opts.pop(b'large', None):
1473 1479 lfhashes = set()
1474 1480 if ui.debugflag:
1475 1481 toupload = {}
1476 1482
1477 1483 def addfunc(fn, lfhash):
1478 1484 if fn not in toupload:
1479 1485 toupload[fn] = [] # pytype: disable=unsupported-operands
1480 1486 toupload[fn].append(lfhash)
1481 1487 lfhashes.add(lfhash)
1482 1488
1483 1489 def showhashes(fn):
1484 1490 for lfhash in sorted(toupload[fn]):
1485 1491 ui.debug(b' %s\n' % lfhash)
1486 1492
1487 1493 else:
1488 1494 toupload = set()
1489 1495
1490 1496 def addfunc(fn, lfhash):
1491 1497 toupload.add(fn)
1492 1498 lfhashes.add(lfhash)
1493 1499
1494 1500 def showhashes(fn):
1495 1501 pass
1496 1502
1497 1503 _getoutgoings(repo, other, missing, addfunc)
1498 1504
1499 1505 if not toupload:
1500 1506 ui.status(_(b'largefiles: no files to upload\n'))
1501 1507 else:
1502 1508 ui.status(
1503 1509 _(b'largefiles to upload (%d entities):\n') % (len(lfhashes))
1504 1510 )
1505 1511 for file in sorted(toupload):
1506 1512 ui.status(lfutil.splitstandin(file) + b'\n')
1507 1513 showhashes(file)
1508 1514 ui.status(b'\n')
1509 1515
1510 1516
1511 1517 @eh.wrapcommand(
1512 1518 b'outgoing', opts=[(b'', b'large', None, _(b'display outgoing largefiles'))]
1513 1519 )
1514 1520 def _outgoingcmd(orig, *args, **kwargs):
1515 1521     # Nothing to do here other than add the extra help option; the hook above
1516 1522 # processes it.
1517 1523 return orig(*args, **kwargs)
1518 1524
1519 1525
1520 1526 def summaryremotehook(ui, repo, opts, changes):
1521 1527 largeopt = opts.get(b'large', False)
1522 1528 if changes is None:
1523 1529 if largeopt:
1524 1530 return (False, True) # only outgoing check is needed
1525 1531 else:
1526 1532 return (False, False)
1527 1533 elif largeopt:
1528 1534 url, branch, peer, outgoing = changes[1]
1529 1535 if peer is None:
1530 1536 # i18n: column positioning for "hg summary"
1531 1537 ui.status(_(b'largefiles: (no remote repo)\n'))
1532 1538 return
1533 1539
1534 1540 toupload = set()
1535 1541 lfhashes = set()
1536 1542
1537 1543 def addfunc(fn, lfhash):
1538 1544 toupload.add(fn)
1539 1545 lfhashes.add(lfhash)
1540 1546
1541 1547 _getoutgoings(repo, peer, outgoing.missing, addfunc)
1542 1548
1543 1549 if not toupload:
1544 1550 # i18n: column positioning for "hg summary"
1545 1551 ui.status(_(b'largefiles: (no files to upload)\n'))
1546 1552 else:
1547 1553 # i18n: column positioning for "hg summary"
1548 1554 ui.status(
1549 1555 _(b'largefiles: %d entities for %d files to upload\n')
1550 1556 % (len(lfhashes), len(toupload))
1551 1557 )
1552 1558
1553 1559
1554 1560 @eh.wrapcommand(
1555 1561 b'summary', opts=[(b'', b'large', None, _(b'display outgoing largefiles'))]
1556 1562 )
1557 1563 def overridesummary(orig, ui, repo, *pats, **opts):
1558 1564 with lfstatus(repo):
1559 1565 orig(ui, repo, *pats, **opts)
1560 1566
1561 1567
1562 1568 @eh.wrapfunction(scmutil, 'addremove')
1563 1569 def scmutiladdremove(
1564 1570 orig,
1565 1571 repo,
1566 1572 matcher,
1567 1573 prefix,
1568 1574 uipathfn,
1569 1575 opts=None,
1570 1576 open_tr=None,
1571 1577 ):
1572 1578 if opts is None:
1573 1579 opts = {}
1574 1580 if not lfutil.islfilesrepo(repo):
1575 1581 return orig(repo, matcher, prefix, uipathfn, opts, open_tr=open_tr)
1576 1582
1577 1583 # open the transaction and changing_files context
1578 1584 if open_tr is not None:
1579 1585 open_tr()
1580 1586
1581 1587 # Get the list of missing largefiles so we can remove them
1582 1588 with repo.dirstate.running_status(repo):
1583 1589 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1584 1590 unsure, s, mtime_boundary = lfdirstate.status(
1585 1591 matchmod.always(),
1586 1592 subrepos=[],
1587 1593 ignored=False,
1588 1594 clean=False,
1589 1595 unknown=False,
1590 1596 )
1591 1597
1592 1598     # Call into the normal remove code, but let the original addremove handle
1593 1599     # removing the standin. Monkey patching here makes sure we don't remove
1594 1600     # the standin in the largefiles code, preventing a very confused state
1595 1601     # later.
1596 1602 if s.deleted:
1597 1603 m = copy.copy(matcher)
1604 m._was_tampered_with = True
1598 1605
1599 1606 # The m._files and m._map attributes are not changed to the deleted list
1600 1607 # because that affects the m.exact() test, which in turn governs whether
1601 1608 # or not the file name is printed, and how. Simply limit the original
1602 1609 # matches to those in the deleted status list.
1603 1610 matchfn = m.matchfn
1604 1611 m.matchfn = lambda f: f in s.deleted and matchfn(f)
1605 1612
1606 1613 removelargefiles(
1607 1614 repo.ui,
1608 1615 repo,
1609 1616 True,
1610 1617 m,
1611 1618 uipathfn,
1612 1619 opts.get(b'dry_run'),
1613 1620 **pycompat.strkwargs(opts)
1614 1621 )
1615 1622 # Call into the normal add code, and any files that *should* be added as
1616 1623 # largefiles will be
1617 1624 added, bad = addlargefiles(
1618 1625 repo.ui, repo, True, matcher, uipathfn, **pycompat.strkwargs(opts)
1619 1626 )
1620 1627 # Now that we've handled largefiles, hand off to the original addremove
1621 1628 # function to take care of the rest. Make sure it doesn't do anything with
1622 1629 # largefiles by passing a matcher that will ignore them.
1623 1630 matcher = composenormalfilematcher(matcher, repo[None].manifest(), added)
1624 1631
1625 1632 return orig(repo, matcher, prefix, uipathfn, opts, open_tr=open_tr)
1626 1633
1627 1634
1628 1635 # Calling purge with --all will cause the largefiles to be deleted.
1629 1636 # Override repo.status to prevent this from happening.
1630 1637 @eh.wrapcommand(b'purge')
1631 1638 def overridepurge(orig, ui, repo, *dirs, **opts):
1632 1639 # XXX Monkey patching a repoview will not work. The assigned attribute will
1633 1640     # be set on the unfiltered repo, but we will only look up attributes in the
1634 1641     # unfiltered repo if the lookup in the repoview object itself fails. As the
1635 1642     # method being patched already exists on the repoview class, the lookup will
1636 1643     # not fail. As a result, the original version will shadow the monkey patched
1637 1644     # one, defeating the monkey patch.
1638 1645 #
1639 1646 # As a work around we use an unfiltered repo here. We should do something
1640 1647 # cleaner instead.
1641 1648 repo = repo.unfiltered()
1642 1649 oldstatus = repo.status
1643 1650
1644 1651 def overridestatus(
1645 1652 node1=b'.',
1646 1653 node2=None,
1647 1654 match=None,
1648 1655 ignored=False,
1649 1656 clean=False,
1650 1657 unknown=False,
1651 1658 listsubrepos=False,
1652 1659 ):
1653 1660 r = oldstatus(
1654 1661 node1, node2, match, ignored, clean, unknown, listsubrepos
1655 1662 )
1656 1663 lfdirstate = lfutil.openlfdirstate(ui, repo)
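        # filter largefiles tracked in lfdirstate out of the unknown and
        # ignored lists so that 'purge --all' does not delete them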
1657 1664 unknown = [
1658 1665 f for f in r.unknown if not lfdirstate.get_entry(f).any_tracked
1659 1666 ]
1660 1667 ignored = [
1661 1668 f for f in r.ignored if not lfdirstate.get_entry(f).any_tracked
1662 1669 ]
1663 1670 return scmutil.status(
1664 1671 r.modified, r.added, r.removed, r.deleted, unknown, ignored, r.clean
1665 1672 )
1666 1673
1667 1674 repo.status = overridestatus
1668 1675 orig(ui, repo, *dirs, **opts)
1669 1676 repo.status = oldstatus
1670 1677
1671 1678
1672 1679 @eh.wrapcommand(b'rollback')
1673 1680 def overriderollback(orig, ui, repo, **opts):
1674 1681 with repo.wlock():
1675 1682 before = repo.dirstate.parents()
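        # remember the standins tracked before the rollback; any that are no
        # longer in the dirstate afterwards are orphans and are unlinked below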
1676 1683 orphans = {
1677 1684 f
1678 1685 for f in repo.dirstate
1679 1686 if lfutil.isstandin(f) and not repo.dirstate.get_entry(f).removed
1680 1687 }
1681 1688 result = orig(ui, repo, **opts)
1682 1689 after = repo.dirstate.parents()
1683 1690 if before == after:
1684 1691 return result # no need to restore standins
1685 1692
1686 1693 pctx = repo[b'.']
1687 1694 for f in repo.dirstate:
1688 1695 if lfutil.isstandin(f):
1689 1696 orphans.discard(f)
1690 1697 if repo.dirstate.get_entry(f).removed:
1691 1698 repo.wvfs.unlinkpath(f, ignoremissing=True)
1692 1699 elif f in pctx:
1693 1700 fctx = pctx[f]
1694 1701 repo.wwrite(f, fctx.data(), fctx.flags())
1695 1702 else:
1696 1703 # content of standin is not so important in 'a',
1697 1704 # 'm' or 'n' (coming from the 2nd parent) cases
1698 1705 lfutil.writestandin(repo, f, b'', False)
1699 1706 for standin in orphans:
1700 1707 repo.wvfs.unlinkpath(standin, ignoremissing=True)
1701 1708
1702 1709 return result
1703 1710
1704 1711
1705 1712 @eh.wrapcommand(b'transplant', extension=b'transplant')
1706 1713 def overridetransplant(orig, ui, repo, *revs, **opts):
1707 1714 resuming = opts.get('continue')
1708 1715 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
1709 1716 repo._lfstatuswriters.append(lambda *msg, **opts: None)
1710 1717 try:
1711 1718 result = orig(ui, repo, *revs, **opts)
1712 1719 finally:
1713 1720 repo._lfstatuswriters.pop()
1714 1721 repo._lfcommithooks.pop()
1715 1722 return result
1716 1723
1717 1724
1718 1725 @eh.wrapcommand(b'cat')
1719 1726 def overridecat(orig, ui, repo, file1, *pats, **opts):
1720 1727 ctx = logcmdutil.revsingle(repo, opts.get('rev'))
1721 1728 err = 1
1722 1729 notbad = set()
1723 1730 m = scmutil.match(ctx, (file1,) + pats, pycompat.byteskwargs(opts))
1731 m._was_tampered_with = True
1724 1732 origmatchfn = m.matchfn
1725 1733
1726 1734 def lfmatchfn(f):
1727 1735 if origmatchfn(f):
1728 1736 return True
1729 1737 lf = lfutil.splitstandin(f)
1730 1738 if lf is None:
1731 1739 return False
1732 1740 notbad.add(lf)
1733 1741 return origmatchfn(lf)
1734 1742
1735 1743 m.matchfn = lfmatchfn
1736 1744 origbadfn = m.bad
1737 1745
1738 1746 def lfbadfn(f, msg):
1739 1747         if f not in notbad:
1740 1748 origbadfn(f, msg)
1741 1749
1742 1750 m.bad = lfbadfn
1743 1751
1744 1752 origvisitdirfn = m.visitdir
1745 1753
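    # always visit the standin directory itself; for paths below it, consult
    # the original matcher with the standin prefix stripped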
1746 1754 def lfvisitdirfn(dir):
1747 1755 if dir == lfutil.shortname:
1748 1756 return True
1749 1757 ret = origvisitdirfn(dir)
1750 1758 if ret:
1751 1759 return ret
1752 1760 lf = lfutil.splitstandin(dir)
1753 1761 if lf is None:
1754 1762 return False
1755 1763 return origvisitdirfn(lf)
1756 1764
1757 1765 m.visitdir = lfvisitdirfn
1758 1766
1759 1767 for f in ctx.walk(m):
1760 1768 with cmdutil.makefileobj(ctx, opts.get('output'), pathname=f) as fp:
1761 1769 lf = lfutil.splitstandin(f)
1762 1770 if lf is None or origmatchfn(f):
1763 1771 # duplicating unreachable code from commands.cat
1764 1772 data = ctx[f].data()
1765 1773 if opts.get('decode'):
1766 1774 data = repo.wwritedata(f, data)
1767 1775 fp.write(data)
1768 1776 else:
1769 1777 hash = lfutil.readasstandin(ctx[f])
1770 1778 if not lfutil.inusercache(repo.ui, hash):
1771 1779 store = storefactory.openstore(repo)
1772 1780 success, missing = store.get([(lf, hash)])
1773 1781 if len(success) != 1:
1774 1782 raise error.Abort(
1775 1783 _(
1776 1784 b'largefile %s is not in cache and could not be '
1777 1785 b'downloaded'
1778 1786 )
1779 1787 % lf
1780 1788 )
1781 1789 path = lfutil.usercachepath(repo.ui, hash)
1782 1790 with open(path, b"rb") as fpin:
1783 1791 for chunk in util.filechunkiter(fpin):
1784 1792 fp.write(chunk)
1785 1793 err = 0
1786 1794 return err
1787 1795
1788 1796
1789 1797 @eh.wrapfunction(merge, '_update')
1790 1798 def mergeupdate(orig, repo, node, branchmerge, force, *args, **kwargs):
1791 1799 matcher = kwargs.get('matcher', None)
1792 1800 # note if this is a partial update
1793 1801 partial = matcher and not matcher.always()
1794 1802 with repo.wlock(), repo.dirstate.changing_parents(repo):
1795 1803 # branch | | |
1796 1804 # merge | force | partial | action
1797 1805 # -------+-------+---------+--------------
1798 1806 # x | x | x | linear-merge
1799 1807 # o | x | x | branch-merge
1800 1808 # x | o | x | overwrite (as clean update)
1801 1809 # o | o | x | force-branch-merge (*1)
1802 1810 # x | x | o | (*)
1803 1811 # o | x | o | (*)
1804 1812 # x | o | o | overwrite (as revert)
1805 1813 # o | o | o | (*)
1806 1814 #
1807 1815 # (*) don't care
1808 1816 # (*1) deprecated, but used internally (e.g: "rebase --collapse")
1809 1817 with repo.dirstate.running_status(repo):
1810 1818 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1811 1819 unsure, s, mtime_boundary = lfdirstate.status(
1812 1820 matchmod.always(),
1813 1821 subrepos=[],
1814 1822 ignored=False,
1815 1823 clean=True,
1816 1824 unknown=False,
1817 1825 )
1818 1826 oldclean = set(s.clean)
1819 1827 pctx = repo[b'.']
1820 1828 dctx = repo[node]
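            # refresh standins for largefiles that may have changed in the
            # working copy, remembering which ones are still clean w.r.t. '.'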
1821 1829 for lfile in unsure + s.modified:
1822 1830 lfileabs = repo.wvfs.join(lfile)
1823 1831 if not repo.wvfs.exists(lfileabs):
1824 1832 continue
1825 1833 lfhash = lfutil.hashfile(lfileabs)
1826 1834 standin = lfutil.standin(lfile)
1827 1835 lfutil.writestandin(
1828 1836 repo, standin, lfhash, lfutil.getexecutable(lfileabs)
1829 1837 )
1830 1838 if standin in pctx and lfhash == lfutil.readasstandin(
1831 1839 pctx[standin]
1832 1840 ):
1833 1841 oldclean.add(lfile)
1834 1842 for lfile in s.added:
1835 1843 fstandin = lfutil.standin(lfile)
1836 1844 if fstandin not in dctx:
1837 1845 # in this case, content of standin file is meaningless
1838 1846 # (in dctx, lfile is unknown, or normal file)
1839 1847 continue
1840 1848 lfutil.updatestandin(repo, lfile, fstandin)
1841 1849 # mark all clean largefiles as dirty, just in case the update gets
1842 1850 # interrupted before largefiles and lfdirstate are synchronized
1843 1851 for lfile in oldclean:
1844 1852 entry = lfdirstate.get_entry(lfile)
1845 1853 lfdirstate.hacky_extension_update_file(
1846 1854 lfile,
1847 1855 wc_tracked=entry.tracked,
1848 1856 p1_tracked=entry.p1_tracked,
1849 1857 p2_info=entry.p2_info,
1850 1858 possibly_dirty=True,
1851 1859 )
1852 1860 lfdirstate.write(repo.currenttransaction())
1853 1861
1854 1862 oldstandins = lfutil.getstandinsstate(repo)
1855 1863 wc = kwargs.get('wc')
1856 1864 if wc and wc.isinmemory():
1857 1865 # largefiles is not a good candidate for in-memory merge (large
1858 1866 # files, custom dirstate, matcher usage).
1859 1867 raise error.ProgrammingError(
1860 1868 b'largefiles is not compatible with in-memory merge'
1861 1869 )
1862 1870 result = orig(repo, node, branchmerge, force, *args, **kwargs)
1863 1871
1864 1872 newstandins = lfutil.getstandinsstate(repo)
1865 1873 filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
1866 1874
1867 1875     # to avoid leaving all largefiles as dirty and thus rehashing them, mark
1868 1876 # all the ones that didn't change as clean
1869 1877 for lfile in oldclean.difference(filelist):
1870 1878 lfdirstate.update_file(lfile, p1_tracked=True, wc_tracked=True)
1871 1879
1872 1880 if branchmerge or force or partial:
1873 1881 filelist.extend(s.deleted + s.removed)
1874 1882
1875 1883 lfcommands.updatelfiles(
1876 1884 repo.ui, repo, filelist=filelist, normallookup=partial
1877 1885 )
1878 1886
1879 1887 return result
1880 1888
1881 1889
1882 1890 @eh.wrapfunction(scmutil, 'marktouched')
1883 1891 def scmutilmarktouched(orig, repo, files, *args, **kwargs):
1884 1892 result = orig(repo, files, *args, **kwargs)
1885 1893
1886 1894 filelist = []
1887 1895 for f in files:
1888 1896 lf = lfutil.splitstandin(f)
1889 1897 if lf is not None:
1890 1898 filelist.append(lf)
1891 1899 if filelist:
1892 1900 lfcommands.updatelfiles(
1893 1901 repo.ui,
1894 1902 repo,
1895 1903 filelist=filelist,
1896 1904 printmessage=False,
1897 1905 normallookup=True,
1898 1906 )
1899 1907
1900 1908 return result
1901 1909
1902 1910
1903 1911 @eh.wrapfunction(upgrade_actions, 'preservedrequirements')
1904 1912 @eh.wrapfunction(upgrade_actions, 'supporteddestrequirements')
1905 1913 def upgraderequirements(orig, repo):
1906 1914 reqs = orig(repo)
1907 1915 if b'largefiles' in repo.requirements:
1908 1916 reqs.add(b'largefiles')
1909 1917 return reqs
1910 1918
1911 1919
1912 1920 _lfscheme = b'largefile://'
1913 1921
1914 1922
1915 1923 @eh.wrapfunction(urlmod, 'open')
1916 1924 def openlargefile(orig, ui, url_, data=None, **kwargs):
1917 1925 if url_.startswith(_lfscheme):
1918 1926 if data:
1919 1927 msg = b"cannot use data on a 'largefile://' url"
1920 1928 raise error.ProgrammingError(msg)
1921 1929 lfid = url_[len(_lfscheme) :]
1922 1930 return storefactory.getlfile(ui, lfid)
1923 1931 else:
1924 1932 return orig(ui, url_, data=data, **kwargs)
@@ -1,474 +1,476 @@
1 1 # Copyright 2009-2010 Gregory P. Ward
2 2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
3 3 # Copyright 2010-2011 Fog Creek Software
4 4 # Copyright 2010-2011 Unity Technologies
5 5 #
6 6 # This software may be used and distributed according to the terms of the
7 7 # GNU General Public License version 2 or any later version.
8 8
9 9 '''setup for largefiles repositories: reposetup'''
10 10
11 11 import copy
12 12
13 13 from mercurial.i18n import _
14 14
15 15 from mercurial import (
16 16 error,
17 17 extensions,
18 18 localrepo,
19 19 match as matchmod,
20 20 scmutil,
21 21 util,
22 22 )
23 23
24 24 from mercurial.dirstateutils import timestamp
25 25
26 26 from . import (
27 27 lfcommands,
28 28 lfutil,
29 29 )
30 30
31 31
32 32 def reposetup(ui, repo):
33 33 # wire repositories should be given new wireproto functions
34 34 # by "proto.wirereposetup()" via "hg.wirepeersetupfuncs"
35 35 if not repo.local():
36 36 return
37 37
38 38 class lfilesrepo(repo.__class__):
39 39         # the marker used to check whether the "repo" object has largefiles enabled
40 40 _largefilesenabled = True
41 41
42 42 lfstatus = False
43 43
44 44 # When lfstatus is set, return a context that gives the names
45 45 # of largefiles instead of their corresponding standins and
46 46 # identifies the largefiles as always binary, regardless of
47 47 # their actual contents.
48 48 def __getitem__(self, changeid):
49 49 ctx = super(lfilesrepo, self).__getitem__(changeid)
50 50 if self.lfstatus:
51 51
52 52 def files(orig):
53 53 filenames = orig()
54 54 return [lfutil.splitstandin(f) or f for f in filenames]
55 55
56 56 extensions.wrapfunction(ctx, 'files', files)
57 57
58 58 def manifest(orig):
59 59 man1 = orig()
60 60
61 61 class lfilesmanifest(man1.__class__):
62 62 def __contains__(self, filename):
63 63 orig = super(lfilesmanifest, self).__contains__
64 64 return orig(filename) or orig(
65 65 lfutil.standin(filename)
66 66 )
67 67
68 68 man1.__class__ = lfilesmanifest
69 69 return man1
70 70
71 71 extensions.wrapfunction(ctx, 'manifest', manifest)
72 72
73 73 def filectx(orig, path, fileid=None, filelog=None):
74 74 try:
75 75 if filelog is not None:
76 76 result = orig(path, fileid, filelog)
77 77 else:
78 78 result = orig(path, fileid)
79 79 except error.LookupError:
80 80 # Adding a null character will cause Mercurial to
81 81 # identify this as a binary file.
82 82 if filelog is not None:
83 83 result = orig(lfutil.standin(path), fileid, filelog)
84 84 else:
85 85 result = orig(lfutil.standin(path), fileid)
86 86 olddata = result.data
87 87 result.data = lambda: olddata() + b'\0'
88 88 return result
89 89
90 90 extensions.wrapfunction(ctx, 'filectx', filectx)
91 91
92 92 return ctx
93 93
94 94 # Figure out the status of big files and insert them into the
95 95 # appropriate list in the result. Also removes standin files
96 96 # from the listing. Revert to the original status if
97 97 # self.lfstatus is False.
98 98 # XXX large file status is buggy when used on repo proxy.
99 99 # XXX this needs to be investigated.
100 100 @localrepo.unfilteredmethod
101 101 def status(
102 102 self,
103 103 node1=b'.',
104 104 node2=None,
105 105 match=None,
106 106 ignored=False,
107 107 clean=False,
108 108 unknown=False,
109 109 listsubrepos=False,
110 110 ):
111 111 listignored, listclean, listunknown = ignored, clean, unknown
112 112 orig = super(lfilesrepo, self).status
113 113 if not self.lfstatus:
114 114 return orig(
115 115 node1,
116 116 node2,
117 117 match,
118 118 listignored,
119 119 listclean,
120 120 listunknown,
121 121 listsubrepos,
122 122 )
123 123
124 124 # some calls in this function rely on the old version of status
125 125 self.lfstatus = False
126 126 ctx1 = self[node1]
127 127 ctx2 = self[node2]
128 128 working = ctx2.rev() is None
129 129 parentworking = working and ctx1 == self[b'.']
130 130
131 131 if match is None:
132 132 match = matchmod.always()
133 133
134 134 try:
135 135 # updating the dirstate is optional
136 136 # so we don't wait on the lock
137 137 wlock = self.wlock(False)
138 138 gotlock = True
139 139 except error.LockError:
140 140 wlock = util.nullcontextmanager()
141 141 gotlock = False
142 142 with wlock, self.dirstate.running_status(self):
143 143
144 144 # First check if paths or patterns were specified on the
145 145 # command line. If there were, and they don't match any
146 146 # largefiles, we should just bail here and let super
147 147 # handle it -- thus gaining a big performance boost.
148 148 lfdirstate = lfutil.openlfdirstate(ui, self)
149 149 if not match.always():
150 150 for f in lfdirstate:
151 151 if match(f):
152 152 break
153 153 else:
154 154 return orig(
155 155 node1,
156 156 node2,
157 157 match,
158 158 listignored,
159 159 listclean,
160 160 listunknown,
161 161 listsubrepos,
162 162 )
163 163
164 164 # Create a copy of match that matches standins instead
165 165 # of largefiles.
166 166 def tostandins(files):
167 167 if not working:
168 168 return files
169 169 newfiles = []
170 170 dirstate = self.dirstate
171 171 for f in files:
172 172 sf = lfutil.standin(f)
173 173 if sf in dirstate:
174 174 newfiles.append(sf)
175 175 elif dirstate.hasdir(sf):
176 176 # Directory entries could be regular or
177 177 # standin, check both
178 178 newfiles.extend((f, sf))
179 179 else:
180 180 newfiles.append(f)
181 181 return newfiles
182 182
183 183 m = copy.copy(match)
184 m._was_tampered_with = True
184 185 m._files = tostandins(m._files)
185 186
186 187 result = orig(
187 188 node1, node2, m, ignored, clean, unknown, listsubrepos
188 189 )
189 190 if working:
190 191
191 192 def sfindirstate(f):
192 193 sf = lfutil.standin(f)
193 194 dirstate = self.dirstate
194 195 return sf in dirstate or dirstate.hasdir(sf)
195 196
197 match._was_tampered_with = True
196 198 match._files = [f for f in match._files if sfindirstate(f)]
197 199 # Don't waste time getting the ignored and unknown
198 200 # files from lfdirstate
199 201 unsure, s, mtime_boundary = lfdirstate.status(
200 202 match,
201 203 subrepos=[],
202 204 ignored=False,
203 205 clean=listclean,
204 206 unknown=False,
205 207 )
206 208 (modified, added, removed, deleted, clean) = (
207 209 s.modified,
208 210 s.added,
209 211 s.removed,
210 212 s.deleted,
211 213 s.clean,
212 214 )
213 215 if parentworking:
214 216 wctx = repo[None]
215 217 for lfile in unsure:
216 218 standin = lfutil.standin(lfile)
217 219 if standin not in ctx1:
218 220 # from second parent
219 221 modified.append(lfile)
220 222 elif lfutil.readasstandin(
221 223 ctx1[standin]
222 224 ) != lfutil.hashfile(self.wjoin(lfile)):
223 225 modified.append(lfile)
224 226 else:
225 227 if listclean:
226 228 clean.append(lfile)
227 229 s = wctx[lfile].lstat()
228 230 mode = s.st_mode
229 231 size = s.st_size
230 232 mtime = timestamp.reliable_mtime_of(
231 233 s, mtime_boundary
232 234 )
233 235 if mtime is not None:
234 236 cache_data = (mode, size, mtime)
235 237 lfdirstate.set_clean(lfile, cache_data)
236 238 else:
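                    # ctx1 is not the dirstate parent: recheck each candidate
                    # largefile against the standin recorded in ctx1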
237 239 tocheck = unsure + modified + added + clean
238 240 modified, added, clean = [], [], []
239 241 checkexec = self.dirstate._checkexec
240 242
241 243 for lfile in tocheck:
242 244 standin = lfutil.standin(lfile)
243 245 if standin in ctx1:
244 246 abslfile = self.wjoin(lfile)
245 247 if (
246 248 lfutil.readasstandin(ctx1[standin])
247 249 != lfutil.hashfile(abslfile)
248 250 ) or (
249 251 checkexec
250 252 and (b'x' in ctx1.flags(standin))
251 253 != bool(lfutil.getexecutable(abslfile))
252 254 ):
253 255 modified.append(lfile)
254 256 elif listclean:
255 257 clean.append(lfile)
256 258 else:
257 259 added.append(lfile)
258 260
259 261 # at this point, 'removed' contains largefiles
260 262 # marked as 'R' in the working context.
261 263                     # largefiles that are not also managed in the target
262 264                     # context should then be excluded from 'removed'.
263 265 removed = [
264 266 lfile
265 267 for lfile in removed
266 268 if lfutil.standin(lfile) in ctx1
267 269 ]
268 270
269 271 # Standins no longer found in lfdirstate have been deleted
270 272 for standin in ctx1.walk(lfutil.getstandinmatcher(self)):
271 273 lfile = lfutil.splitstandin(standin)
272 274 if not match(lfile):
273 275 continue
274 276 if lfile not in lfdirstate:
275 277 deleted.append(lfile)
276 278 # Sync "largefile has been removed" back to the
277 279 # standin. Removing a file as a side effect of
278 280 # running status is gross, but the alternatives (if
279 281 # any) are worse.
280 282 self.wvfs.unlinkpath(standin, ignoremissing=True)
281 283
282 284 # Filter result lists
283 285 result = list(result)
284 286
285 287 # Largefiles are not really removed when they're
286 288 # still in the normal dirstate. Likewise, normal
287 289 # files are not really removed if they are still in
288 290 # lfdirstate. This happens in merges where files
289 291 # change type.
290 292 removed = [f for f in removed if f not in self.dirstate]
291 293 result[2] = [f for f in result[2] if f not in lfdirstate]
292 294
293 295 lfiles = set(lfdirstate)
294 296 # Unknown files
295 297 result[4] = set(result[4]).difference(lfiles)
296 298 # Ignored files
297 299 result[5] = set(result[5]).difference(lfiles)
298 300 # combine normal files and largefiles
299 301 normals = [
300 302 [fn for fn in filelist if not lfutil.isstandin(fn)]
301 303 for filelist in result
302 304 ]
303 305 lfstatus = (
304 306 modified,
305 307 added,
306 308 removed,
307 309 deleted,
308 310 [],
309 311 [],
310 312 clean,
311 313 )
312 314 result = [
313 315 sorted(list1 + list2)
314 316 for (list1, list2) in zip(normals, lfstatus)
315 317 ]
316 318 else: # not against working directory
317 319 result = [
318 320 [lfutil.splitstandin(f) or f for f in items]
319 321 for items in result
320 322 ]
321 323
322 324 if gotlock:
323 325 lfdirstate.write(self.currenttransaction())
324 326 else:
325 327 lfdirstate.invalidate()
326 328
327 329 self.lfstatus = True
328 330 return scmutil.status(*result)
329 331
330 332 def commitctx(self, ctx, *args, **kwargs):
331 333 node = super(lfilesrepo, self).commitctx(ctx, *args, **kwargs)
332 334
333 335 class lfilesctx(ctx.__class__):
334 336 def markcommitted(self, node):
335 337 orig = super(lfilesctx, self).markcommitted
336 338 return lfutil.markcommitted(orig, self, node)
337 339
338 340 ctx.__class__ = lfilesctx
339 341 return node
340 342
341 343 # Before commit, largefile standins have not had their
342 344 # contents updated to reflect the hash of their largefile.
343 345 # Do that here.
344 346 def commit(
345 347 self,
346 348 text=b"",
347 349 user=None,
348 350 date=None,
349 351 match=None,
350 352 force=False,
351 353 editor=False,
352 354 extra=None,
353 355 ):
354 356 if extra is None:
355 357 extra = {}
356 358 orig = super(lfilesrepo, self).commit
357 359
358 360 with self.wlock():
359 361 lfcommithook = self._lfcommithooks[-1]
360 362 match = lfcommithook(self, match)
361 363 result = orig(
362 364 text=text,
363 365 user=user,
364 366 date=date,
365 367 match=match,
366 368 force=force,
367 369 editor=editor,
368 370 extra=extra,
369 371 )
370 372 return result
371 373
372 374 # TODO: _subdirlfs should be moved into "lfutil.py", because
373 375     # it is only referenced from "lfutil.updatestandinsbymatch"
374 376 def _subdirlfs(self, files, lfiles):
375 377 """
376 378 Adjust matched file list
377 379 If we pass a directory to commit whose only committable files
378 380 are largefiles, the core commit code aborts before finding
379 381 the largefiles.
380 382 So we do the following:
381 383 For directories that only have largefiles as matches,
382 384 we explicitly add the largefiles to the match list and remove
383 385 the directory.
384 386 In other cases, we leave the match list unmodified.
385 387 """
386 388 actualfiles = []
387 389 dirs = []
388 390 regulars = []
389 391
390 392 for f in files:
391 393 if lfutil.isstandin(f + b'/'):
392 394 raise error.Abort(
393 395 _(b'file "%s" is a largefile standin') % f,
394 396 hint=b'commit the largefile itself instead',
395 397 )
396 398 # Scan directories
397 399 if self.wvfs.isdir(f):
398 400 dirs.append(f)
399 401 else:
400 402 regulars.append(f)
401 403
402 404 for f in dirs:
403 405 matcheddir = False
404 406 d = self.dirstate.normalize(f) + b'/'
405 407 # Check for matched normal files
406 408 for mf in regulars:
407 409 if self.dirstate.normalize(mf).startswith(d):
408 410 actualfiles.append(f)
409 411 matcheddir = True
410 412 break
411 413 if not matcheddir:
412 414 # If no normal match, manually append
413 415 # any matching largefiles
414 416 for lf in lfiles:
415 417 if self.dirstate.normalize(lf).startswith(d):
416 418 actualfiles.append(lf)
417 419 if not matcheddir:
418 420 # There may still be normal files in the dir, so
419 421 # add a directory to the list, which
420 422 # forces status/dirstate to walk all files and
421 423 # call the match function on the matcher, even
422 424 # on case sensitive filesystems.
423 425 actualfiles.append(b'.')
424 426 matcheddir = True
425 427 # Nothing in dir, so readd it
426 428 # and let commit reject it
427 429 if not matcheddir:
428 430 actualfiles.append(f)
429 431
430 432 # Always add normal files
431 433 actualfiles += regulars
432 434 return actualfiles
433 435
434 436 repo.__class__ = lfilesrepo
435 437
436 438 # stack of hooks being executed before committing.
437 439 # only the last element ("_lfcommithooks[-1]") is used for each commit.
438 440 repo._lfcommithooks = [lfutil.updatestandinsbymatch]
439 441
440 442 # Stack of status writer functions taking "*msg, **opts" arguments
441 443 # like "ui.status()". Only last element ("_lfstatuswriters[-1]")
442 444 # is used to write status out.
443 445 repo._lfstatuswriters = [ui.status]
444 446
445 447 def prepushoutgoinghook(pushop):
446 448 """Push largefiles for pushop before pushing revisions."""
447 449 lfrevs = pushop.lfrevs
448 450 if lfrevs is None:
449 451 lfrevs = pushop.outgoing.missing
450 452 if lfrevs:
451 453 toupload = set()
452 454 addfunc = lambda fn, lfhash: toupload.add(lfhash)
453 455 lfutil.getlfilestoupload(pushop.repo, lfrevs, addfunc)
454 456 lfcommands.uploadlfiles(ui, pushop.repo, pushop.remote, toupload)
455 457
456 458 repo.prepushoutgoinghooks.add(b"largefiles", prepushoutgoinghook)
457 459
458 460 def checkrequireslfiles(ui, repo, **kwargs):
459 461 with repo.lock():
460 462 if b'largefiles' in repo.requirements:
461 463 return
462 464 marker = lfutil.shortnameslash
463 465 for entry in repo.store.data_entries():
464 466 # XXX note that this match is not rooted and can wrongly match
465 467                 # a directory ending with ".hglf"
466 468 if entry.is_revlog and marker in entry.target_id:
467 469 repo.requirements.add(b'largefiles')
468 470 scmutil.writereporequirements(repo)
469 471 break
470 472
471 473 ui.setconfig(
472 474 b'hooks', b'changegroup.lfiles', checkrequireslfiles, b'largefiles'
473 475 )
474 476 ui.setconfig(b'hooks', b'commit.lfiles', checkrequireslfiles, b'largefiles')
@@ -1,2281 +1,2281 @@
1 1 # rebase.py - rebasing feature for mercurial
2 2 #
3 3 # Copyright 2008 Stefano Tortarolo <stefano.tortarolo at gmail dot com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 '''command to move sets of revisions to a different ancestor
9 9
10 10 This extension lets you rebase changesets in an existing Mercurial
11 11 repository.
12 12
13 13 For more information:
14 14 https://mercurial-scm.org/wiki/RebaseExtension
15 15 '''
16 16
17 17
18 18 import os
19 19
20 20 from mercurial.i18n import _
21 21 from mercurial.node import (
22 22 nullrev,
23 23 short,
24 24 wdirrev,
25 25 )
26 26 from mercurial.pycompat import open
27 27 from mercurial import (
28 28 bookmarks,
29 29 cmdutil,
30 30 commands,
31 31 copies,
32 32 destutil,
33 33 error,
34 34 extensions,
35 35 logcmdutil,
36 36 merge as mergemod,
37 37 mergestate as mergestatemod,
38 38 mergeutil,
39 39 obsolete,
40 40 obsutil,
41 41 patch,
42 42 phases,
43 43 pycompat,
44 44 registrar,
45 45 repair,
46 46 revset,
47 47 revsetlang,
48 48 rewriteutil,
49 49 scmutil,
50 50 smartset,
51 51 state as statemod,
52 52 util,
53 53 )
54 54
55 55
56 56 # The following constants are used throughout the rebase module. The ordering of
57 57 # their values must be maintained.
58 58
59 59 # Indicates that a revision needs to be rebased
60 60 revtodo = -1
61 61 revtodostr = b'-1'
62 62
63 63 # legacy revstates no longer needed in current code
64 64 # -2: nullmerge, -3: revignored, -4: revprecursor, -5: revpruned
65 65 legacystates = {b'-2', b'-3', b'-4', b'-5'}
66 66
67 67 cmdtable = {}
68 68 command = registrar.command(cmdtable)
69 69
70 70 configtable = {}
71 71 configitem = registrar.configitem(configtable)
72 72 configitem(
73 73 b'devel',
74 74 b'rebase.force-in-memory-merge',
75 75 default=False,
76 76 )
77 77 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
78 78 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
79 79 # be specifying the version(s) of Mercurial they are tested with, or
80 80 # leave the attribute unspecified.
81 81 testedwith = b'ships-with-hg-core'
82 82
83 83
84 84 def _nothingtorebase():
85 85 return 1
86 86
87 87
88 88 def _savebranch(ctx, extra):
89 89 extra[b'branch'] = ctx.branch()
90 90
91 91
92 92 def _destrebase(repo, sourceset, destspace=None):
93 93 """small wrapper around destmerge to pass the right extra args
94 94
95 95 Please wrap destutil.destmerge instead."""
96 96 return destutil.destmerge(
97 97 repo,
98 98 action=b'rebase',
99 99 sourceset=sourceset,
100 100 onheadcheck=False,
101 101 destspace=destspace,
102 102 )
103 103
104 104
105 105 revsetpredicate = registrar.revsetpredicate()
106 106
107 107
108 108 @revsetpredicate(b'_destrebase')
109 109 def _revsetdestrebase(repo, subset, x):
110 110 # ``_rebasedefaultdest()``
111 111
112 112 # default destination for rebase.
113 113 # # XXX: Currently private because I expect the signature to change.
114 114 # # XXX: - bailing out in case of ambiguity vs returning all data.
115 115 # i18n: "_rebasedefaultdest" is a keyword
116 116 sourceset = None
117 117 if x is not None:
118 118 sourceset = revset.getset(repo, smartset.fullreposet(repo), x)
119 119 return subset & smartset.baseset([_destrebase(repo, sourceset)])
120 120
121 121
122 122 @revsetpredicate(b'_destautoorphanrebase')
123 123 def _revsetdestautoorphanrebase(repo, subset, x):
124 124 # ``_destautoorphanrebase()``
125 125
126 126 # automatic rebase destination for a single orphan revision.
127 127 unfi = repo.unfiltered()
128 128 obsoleted = unfi.revs(b'obsolete()')
129 129
130 130 src = revset.getset(repo, subset, x).first()
131 131
132 132 # Empty src or already obsoleted - Do not return a destination
133 133 if not src or src in obsoleted:
134 134 return smartset.baseset()
135 135 dests = destutil.orphanpossibledestination(repo, src)
136 136 if len(dests) > 1:
137 137 raise error.StateError(
138 138 _(b"ambiguous automatic rebase: %r could end up on any of %r")
139 139 % (src, dests)
140 140 )
141 141 # We have zero or one destination, so we can just return here.
142 142 return smartset.baseset(dests)
143 143
144 144
145 145 def _ctxdesc(ctx):
146 146 """short description for a context"""
147 147 return cmdutil.format_changeset_summary(
148 148 ctx.repo().ui, ctx, command=b'rebase'
149 149 )
150 150
151 151
152 152 class rebaseruntime:
153 153 """This class is a container for rebase runtime state"""
154 154
155 155 def __init__(self, repo, ui, inmemory=False, dryrun=False, opts=None):
156 156 if opts is None:
157 157 opts = {}
158 158
159 159 # prepared: whether we have rebasestate prepared or not. Currently it
160 160 # decides whether "self.repo" is unfiltered or not.
161 161 # The rebasestate has explicit hash to hash instructions not depending
162 162 # on visibility. If rebasestate exists (in-memory or on-disk), use
163 163 # unfiltered repo to avoid visibility issues.
164 164 # Before knowing rebasestate (i.e. when starting a new rebase (not
165 165 # --continue or --abort)), the original repo should be used so
166 166 # visibility-dependent revsets are correct.
167 167 self.prepared = False
168 168 self.resume = False
169 169 self._repo = repo
170 170
171 171 self.ui = ui
172 172 self.opts = opts
173 173 self.originalwd = None
174 174 self.external = nullrev
175 175         # Mapping from the old revision id to either the new rebased revision
176 176         # or what needs to be done with the old revision. The state dict
177 177         # contains most of the rebase progress state.
178 178 self.state = {}
179 179 self.activebookmark = None
180 180 self.destmap = {}
181 181 self.skipped = set()
182 182
183 183 self.collapsef = opts.get('collapse', False)
184 184 self.collapsemsg = cmdutil.logmessage(ui, pycompat.byteskwargs(opts))
185 185 self.date = opts.get('date', None)
186 186
187 187 e = opts.get('extrafn') # internal, used by e.g. hgsubversion
188 188 self.extrafns = [rewriteutil.preserve_extras_on_rebase]
189 189 if e:
190 190 self.extrafns = [e]
191 191
192 192 self.backupf = ui.configbool(b'rewrite', b'backup-bundle')
193 193 self.keepf = opts.get('keep', False)
194 194 self.keepbranchesf = opts.get('keepbranches', False)
195 195 self.skipemptysuccessorf = rewriteutil.skip_empty_successor(
196 196 repo.ui, b'rebase'
197 197 )
198 198 self.obsolete_with_successor_in_destination = {}
199 199 self.obsolete_with_successor_in_rebase_set = set()
200 200 self.inmemory = inmemory
201 201 self.dryrun = dryrun
202 202 self.stateobj = statemod.cmdstate(repo, b'rebasestate')
203 203
204 204 @property
205 205 def repo(self):
206 206 if self.prepared:
207 207 return self._repo.unfiltered()
208 208 else:
209 209 return self._repo
210 210
211 211 def storestatus(self, tr=None):
212 212 """Store the current status to allow recovery"""
213 213 if tr:
214 214 tr.addfilegenerator(
215 215 b'rebasestate',
216 216 (b'rebasestate',),
217 217 self._writestatus,
218 218 location=b'plain',
219 219 )
220 220 else:
221 221 with self.repo.vfs(b"rebasestate", b"w") as f:
222 222 self._writestatus(f)
223 223
224 224 def _writestatus(self, f):
225 225 repo = self.repo
226 226 assert repo.filtername is None
227 227 f.write(repo[self.originalwd].hex() + b'\n')
228 228 # was "dest". we now write dest per src root below.
229 229 f.write(b'\n')
230 230 f.write(repo[self.external].hex() + b'\n')
231 231 f.write(b'%d\n' % int(self.collapsef))
232 232 f.write(b'%d\n' % int(self.keepf))
233 233 f.write(b'%d\n' % int(self.keepbranchesf))
234 234 f.write(b'%s\n' % (self.activebookmark or b''))
235 235 destmap = self.destmap
236 236 for d, v in self.state.items():
237 237 oldrev = repo[d].hex()
238 238 if v >= 0:
239 239 newrev = repo[v].hex()
240 240 else:
241 241 newrev = b"%d" % v
242 242 destnode = repo[destmap[d]].hex()
243 243 f.write(b"%s:%s:%s\n" % (oldrev, newrev, destnode))
244 244 repo.ui.debug(b'rebase status stored\n')
245 245
246 246 def restorestatus(self):
247 247 """Restore a previously stored status"""
248 248 if not self.stateobj.exists():
249 249 cmdutil.wrongtooltocontinue(self.repo, _(b'rebase'))
250 250
251 251 data = self._read()
252 252 self.repo.ui.debug(b'rebase status resumed\n')
253 253
254 254 self.originalwd = data[b'originalwd']
255 255 self.destmap = data[b'destmap']
256 256 self.state = data[b'state']
257 257 self.skipped = data[b'skipped']
258 258 self.collapsef = data[b'collapse']
259 259 self.keepf = data[b'keep']
260 260 self.keepbranchesf = data[b'keepbranches']
261 261 self.external = data[b'external']
262 262 self.activebookmark = data[b'activebookmark']
263 263
264 264 def _read(self):
265 265 self.prepared = True
266 266 repo = self.repo
267 267 assert repo.filtername is None
268 268 data = {
269 269 b'keepbranches': None,
270 270 b'collapse': None,
271 271 b'activebookmark': None,
272 272 b'external': nullrev,
273 273 b'keep': None,
274 274 b'originalwd': None,
275 275 }
276 276 legacydest = None
277 277 state = {}
278 278 destmap = {}
279 279
280 280 if True:
281 281 f = repo.vfs(b"rebasestate")
282 282 for i, l in enumerate(f.read().splitlines()):
283 283 if i == 0:
284 284 data[b'originalwd'] = repo[l].rev()
285 285 elif i == 1:
286 286                     # this line should be empty in newer versions, but legacy
287 287 # clients may still use it
288 288 if l:
289 289 legacydest = repo[l].rev()
290 290 elif i == 2:
291 291 data[b'external'] = repo[l].rev()
292 292 elif i == 3:
293 293 data[b'collapse'] = bool(int(l))
294 294 elif i == 4:
295 295 data[b'keep'] = bool(int(l))
296 296 elif i == 5:
297 297 data[b'keepbranches'] = bool(int(l))
298 298 elif i == 6 and not (len(l) == 81 and b':' in l):
299 299 # line 6 is a recent addition, so for backwards
300 300 # compatibility check that the line doesn't look like the
301 301 # oldrev:newrev lines
302 302 data[b'activebookmark'] = l
303 303 else:
304 304 args = l.split(b':')
305 305 oldrev = repo[args[0]].rev()
306 306 newrev = args[1]
307 307 if newrev in legacystates:
308 308 continue
309 309 if len(args) > 2:
310 310 destrev = repo[args[2]].rev()
311 311 else:
312 312 destrev = legacydest
313 313 destmap[oldrev] = destrev
314 314 if newrev == revtodostr:
315 315 state[oldrev] = revtodo
316 316 # Legacy compat special case
317 317 else:
318 318 state[oldrev] = repo[newrev].rev()
319 319
320 320 if data[b'keepbranches'] is None:
321 321 raise error.Abort(_(b'.hg/rebasestate is incomplete'))
322 322
323 323 data[b'destmap'] = destmap
324 324 data[b'state'] = state
325 325 skipped = set()
326 326 # recompute the set of skipped revs
327 327 if not data[b'collapse']:
328 328 seen = set(destmap.values())
329 329 for old, new in sorted(state.items()):
330 330 if new != revtodo and new in seen:
331 331 skipped.add(old)
332 332 seen.add(new)
333 333 data[b'skipped'] = skipped
334 334 repo.ui.debug(
335 335 b'computed skipped revs: %s\n'
336 336 % (b' '.join(b'%d' % r for r in sorted(skipped)) or b'')
337 337 )
338 338
339 339 return data
340 340
341 341 def _handleskippingobsolete(self):
342 342 """Compute structures necessary for skipping obsolete revisions"""
343 343 if self.keepf:
344 344 return
345 345 if not self.ui.configbool(b'experimental', b'rebaseskipobsolete'):
346 346 return
347 347 obsoleteset = {r for r in self.state if self.repo[r].obsolete()}
348 348 (
349 349 self.obsolete_with_successor_in_destination,
350 350 self.obsolete_with_successor_in_rebase_set,
351 351 ) = _compute_obsolete_sets(self.repo, obsoleteset, self.destmap)
352 352 skippedset = set(self.obsolete_with_successor_in_destination)
353 353 skippedset.update(self.obsolete_with_successor_in_rebase_set)
354 354 _checkobsrebase(self.repo, self.ui, obsoleteset, skippedset)
355 355 if obsolete.isenabled(self.repo, obsolete.allowdivergenceopt):
356 356 self.obsolete_with_successor_in_rebase_set = set()
357 357 else:
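# Divergence is not allowed: keep the divergence-causing revisions in
# self.state so _rebasenode can report and skip them, but drop their
# descendants from the rebase set entirely.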
358 358 for rev in self.repo.revs(
359 359 b'descendants(%ld) and not %ld',
360 360 self.obsolete_with_successor_in_rebase_set,
361 361 self.obsolete_with_successor_in_rebase_set,
362 362 ):
363 363 self.state.pop(rev, None)
364 364 self.destmap.pop(rev, None)
365 365
366 366 def _prepareabortorcontinue(
367 367 self, isabort, backup=True, suppwarns=False, dryrun=False, confirm=False
368 368 ):
369 369 self.resume = True
370 370 try:
371 371 self.restorestatus()
372 372 # Calculate self.obsolete_* sets
373 373 self._handleskippingobsolete()
374 374 self.collapsemsg = restorecollapsemsg(self.repo, isabort)
375 375 except error.RepoLookupError:
376 376 if isabort:
377 377 clearstatus(self.repo)
378 378 clearcollapsemsg(self.repo)
379 379 self.repo.ui.warn(
380 380 _(
381 381 b'rebase aborted (no revision is removed,'
382 382 b' only broken state is cleared)\n'
383 383 )
384 384 )
385 385 return 0
386 386 else:
387 387 msg = _(b'cannot continue inconsistent rebase')
388 388 hint = _(b'use "hg rebase --abort" to clear broken state')
389 389 raise error.Abort(msg, hint=hint)
390 390
391 391 if isabort:
392 392 backup = backup and self.backupf
393 393 return self._abort(
394 394 backup=backup,
395 395 suppwarns=suppwarns,
396 396 dryrun=dryrun,
397 397 confirm=confirm,
398 398 )
399 399
400 400 def _preparenewrebase(self, destmap):
401 401 if not destmap:
402 402 return _nothingtorebase()
403 403
404 404 result = buildstate(self.repo, destmap, self.collapsef)
405 405
406 406 if not result:
407 407 # Empty state built, nothing to rebase
408 408 self.ui.status(_(b'nothing to rebase\n'))
409 409 return _nothingtorebase()
410 410
411 411 (self.originalwd, self.destmap, self.state) = result
412 412 if self.collapsef:
413 413 dests = set(self.destmap.values())
414 414 if len(dests) != 1:
415 415 raise error.InputError(
416 416 _(b'--collapse does not work with multiple destinations')
417 417 )
418 418 destrev = next(iter(dests))
419 419 destancestors = self.repo.changelog.ancestors(
420 420 [destrev], inclusive=True
421 421 )
422 422 self.external = externalparent(self.repo, self.state, destancestors)
423 423
424 424 for destrev in sorted(set(destmap.values())):
425 425 dest = self.repo[destrev]
426 426 if dest.closesbranch() and not self.keepbranchesf:
427 427 self.ui.status(_(b'reopening closed branch head %s\n') % dest)
428 428
429 429 # Calculate self.obsolete_* sets
430 430 self._handleskippingobsolete()
431 431
432 432 if not self.keepf:
433 433 rebaseset = set(destmap.keys())
434 434 rebaseset -= set(self.obsolete_with_successor_in_destination)
435 435 rebaseset -= self.obsolete_with_successor_in_rebase_set
436 436 # We have our own divergence-checking in the rebase extension
437 437 overrides = {}
438 438 if obsolete.isenabled(self.repo, obsolete.createmarkersopt):
439 439 overrides = {
440 440 (b'experimental', b'evolution.allowdivergence'): b'true'
441 441 }
442 442 try:
443 443 with self.ui.configoverride(overrides):
444 444 rewriteutil.precheck(self.repo, rebaseset, action=b'rebase')
445 445 except error.Abort as e:
446 446 if e.hint is None:
447 447 e.hint = _(b'use --keep to keep original changesets')
448 448 raise e
449 449
450 450 self.prepared = True
451 451
452 452 def _assignworkingcopy(self):
453 453 if self.inmemory:
454 454 from mercurial.context import overlayworkingctx
455 455
456 456 self.wctx = overlayworkingctx(self.repo)
457 457 self.repo.ui.debug(b"rebasing in memory\n")
458 458 else:
459 459 self.wctx = self.repo[None]
460 460 self.repo.ui.debug(b"rebasing on disk\n")
461 461 self.repo.ui.log(
462 462 b"rebase",
463 463 b"using in-memory rebase: %r\n",
464 464 self.inmemory,
465 465 rebase_imm_used=self.inmemory,
466 466 )
467 467
468 468 def _performrebase(self, tr):
469 469 self._assignworkingcopy()
470 470 repo, ui = self.repo, self.ui
471 471 if self.keepbranchesf:
472 472 # insert _savebranch at the start of extrafns so if
473 473 # there's a user-provided extrafn it can clobber branch if
474 474 # desired
475 475 self.extrafns.insert(0, _savebranch)
476 476 if self.collapsef:
477 477 branches = set()
478 478 for rev in self.state:
479 479 branches.add(repo[rev].branch())
480 480 if len(branches) > 1:
481 481 raise error.InputError(
482 482 _(b'cannot collapse multiple named branches')
483 483 )
484 484
485 485 # Keep track of the active bookmarks in order to reset them later
486 486 self.activebookmark = self.activebookmark or repo._activebookmark
487 487 if self.activebookmark:
488 488 bookmarks.deactivate(repo)
489 489
490 490 # Store the state before we begin so users can run 'hg rebase --abort'
491 491 # if we fail before the transaction closes.
492 492 self.storestatus()
493 493 if tr:
494 494 # When using single transaction, store state when transaction
495 495 # commits.
496 496 self.storestatus(tr)
497 497
498 498 cands = [k for k, v in self.state.items() if v == revtodo]
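# revisions still marked revtodo; only these count toward the progress total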
499 499 p = repo.ui.makeprogress(
500 500 _(b"rebasing"), unit=_(b'changesets'), total=len(cands)
501 501 )
502 502
503 503 def progress(ctx):
504 504 p.increment(item=(b"%d:%s" % (ctx.rev(), ctx)))
505 505
506 506 for subset in sortsource(self.destmap):
507 507 sortedrevs = self.repo.revs(b'sort(%ld, -topo)', subset)
508 508 for rev in sortedrevs:
509 509 self._rebasenode(tr, rev, progress)
510 510 p.complete()
511 511 ui.note(_(b'rebase merging completed\n'))
512 512
513 513 def _concludenode(self, rev, editor, commitmsg=None):
514 514 """Commit the wd changes with parents p1 and p2.
515 515
516 516 Reuse commit info from rev but also store useful information in extra.
517 517 Return node of committed revision."""
518 518 repo = self.repo
519 519 ctx = repo[rev]
520 520 if commitmsg is None:
521 521 commitmsg = ctx.description()
522 522
523 523 # Skip replacement if collapsing, as that degenerates to p1 for all
524 524 # nodes.
525 525 if not self.collapsef:
526 526 cl = repo.changelog
527 527 commitmsg = rewriteutil.update_hash_refs(
528 528 repo,
529 529 commitmsg,
530 530 {
531 531 cl.node(oldrev): [cl.node(newrev)]
532 532 for oldrev, newrev in self.state.items()
533 533 if newrev != revtodo
534 534 },
535 535 )
536 536
537 537 date = self.date
538 538 if date is None:
539 539 date = ctx.date()
540 540 extra = {}
541 541 if repo.ui.configbool(b'rebase', b'store-source'):
542 542 extra = {b'rebase_source': ctx.hex()}
543 543 for c in self.extrafns:
544 544 c(ctx, extra)
545 545 destphase = max(ctx.phase(), phases.draft)
546 546 overrides = {
547 547 (b'phases', b'new-commit'): destphase,
548 548 (b'ui', b'allowemptycommit'): not self.skipemptysuccessorf,
549 549 }
550 550 with repo.ui.configoverride(overrides, b'rebase'):
551 551 if self.inmemory:
552 552 newnode = commitmemorynode(
553 553 repo,
554 554 wctx=self.wctx,
555 555 extra=extra,
556 556 commitmsg=commitmsg,
557 557 editor=editor,
558 558 user=ctx.user(),
559 559 date=date,
560 560 )
561 561 else:
562 562 newnode = commitnode(
563 563 repo,
564 564 extra=extra,
565 565 commitmsg=commitmsg,
566 566 editor=editor,
567 567 user=ctx.user(),
568 568 date=date,
569 569 )
570 570
571 571 return newnode
572 572
573 573 def _rebasenode(self, tr, rev, progressfn):
574 574 repo, ui, opts = self.repo, self.ui, self.opts
575 575 ctx = repo[rev]
576 576 desc = _ctxdesc(ctx)
577 577 if self.state[rev] == rev:
578 578 ui.status(_(b'already rebased %s\n') % desc)
579 579 elif rev in self.obsolete_with_successor_in_rebase_set:
580 580 msg = (
581 581 _(
582 582 b'note: not rebasing %s and its descendants as '
583 583 b'this would cause divergence\n'
584 584 )
585 585 % desc
586 586 )
587 587 repo.ui.status(msg)
588 588 self.skipped.add(rev)
589 589 elif rev in self.obsolete_with_successor_in_destination:
590 590 succ = self.obsolete_with_successor_in_destination[rev]
591 591 if succ is None:
592 592 msg = _(b'note: not rebasing %s, it has no successor\n') % desc
593 593 else:
594 594 succdesc = _ctxdesc(repo[succ])
595 595 msg = _(
596 596 b'note: not rebasing %s, already in destination as %s\n'
597 597 ) % (desc, succdesc)
598 598 repo.ui.status(msg)
599 599 # Make clearrebased aware that state[rev] is not a true successor
600 600 self.skipped.add(rev)
601 601 # Record rev as moved to its desired destination in self.state.
602 602 # This helps bookmark and working parent movement.
603 603 dest = max(
604 604 adjustdest(repo, rev, self.destmap, self.state, self.skipped)
605 605 )
606 606 self.state[rev] = dest
607 607 elif self.state[rev] == revtodo:
608 608 ui.status(_(b'rebasing %s\n') % desc)
609 609 progressfn(ctx)
610 610 p1, p2, base = defineparents(
611 611 repo,
612 612 rev,
613 613 self.destmap,
614 614 self.state,
615 615 self.skipped,
616 616 self.obsolete_with_successor_in_destination,
617 617 )
618 618 if self.resume and self.wctx.p1().rev() == p1:
619 619 repo.ui.debug(b'resuming interrupted rebase\n')
620 620 self.resume = False
621 621 else:
622 622 overrides = {(b'ui', b'forcemerge'): opts.get('tool', b'')}
623 623 with ui.configoverride(overrides, b'rebase'):
624 624 try:
625 625 rebasenode(
626 626 repo,
627 627 rev,
628 628 p1,
629 629 p2,
630 630 base,
631 631 self.collapsef,
632 632 wctx=self.wctx,
633 633 )
634 634 except error.InMemoryMergeConflictsError:
635 635 if self.dryrun:
636 636 raise error.ConflictResolutionRequired(b'rebase')
637 637 if self.collapsef:
638 638 # TODO: Make the overlayworkingctx reflected
639 639 # in the working copy here instead of re-raising
640 640 # so the entire rebase operation is retried.
641 641 raise
642 642 ui.status(
643 643 _(
644 644 b"hit merge conflicts; rebasing that "
645 645 b"commit again in the working copy\n"
646 646 )
647 647 )
648 648 try:
649 649 cmdutil.bailifchanged(repo)
650 650 except error.Abort:
651 651 clearstatus(repo)
652 652 clearcollapsemsg(repo)
653 653 raise
654 654 self.inmemory = False
655 655 self._assignworkingcopy()
656 656 mergemod.update(repo[p1], wc=self.wctx)
657 657 rebasenode(
658 658 repo,
659 659 rev,
660 660 p1,
661 661 p2,
662 662 base,
663 663 self.collapsef,
664 664 wctx=self.wctx,
665 665 )
666 666 if not self.collapsef:
667 667 merging = p2 != nullrev
668 668 editform = cmdutil.mergeeditform(merging, b'rebase')
669 669 editor = cmdutil.getcommiteditor(editform=editform, **opts)
670 670 # We need to set parents again here just in case we're continuing
671 671 # a rebase started with an old hg version (before 9c9cfecd4600),
672 672 # because those old versions would have left us with two dirstate
673 673 # parents, and we don't want to create a merge commit here (unless
674 674 # we're rebasing a merge commit).
675 675 self.wctx.setparents(repo[p1].node(), repo[p2].node())
676 676 newnode = self._concludenode(rev, editor)
677 677 else:
678 678 # Skip commit if we are collapsing
679 679 newnode = None
680 680 # Update the state
681 681 if newnode is not None:
682 682 self.state[rev] = repo[newnode].rev()
683 683 ui.debug(b'rebased as %s\n' % short(newnode))
684 684 if repo[newnode].isempty():
685 685 ui.warn(
686 686 _(
687 687 b'note: created empty successor for %s, its '
688 688 b'destination already has all its changes\n'
689 689 )
690 690 % desc
691 691 )
692 692 else:
693 693 if not self.collapsef:
694 694 ui.warn(
695 695 _(
696 696 b'note: not rebasing %s, its destination already '
697 697 b'has all its changes\n'
698 698 )
699 699 % desc
700 700 )
701 701 self.skipped.add(rev)
702 702 self.state[rev] = p1
703 703 ui.debug(b'next revision set to %d\n' % p1)
704 704 else:
705 705 ui.status(
706 706 _(b'already rebased %s as %s\n') % (desc, repo[self.state[rev]])
707 707 )
708 708 if not tr:
709 709 # When not using single transaction, store state after each
710 710 # commit is completely done. On InterventionRequired, we thus
711 711 # won't store the status. Instead, we'll hit the "len(parents) == 2"
712 712 # case and realize that the commit was in progress.
713 713 self.storestatus()
714 714
715 715 def _finishrebase(self):
716 716 repo, ui, opts = self.repo, self.ui, self.opts
717 717 fm = ui.formatter(b'rebase', pycompat.byteskwargs(opts))
718 718 fm.startitem()
719 719 if self.collapsef:
720 720 p1, p2, _base = defineparents(
721 721 repo,
722 722 min(self.state),
723 723 self.destmap,
724 724 self.state,
725 725 self.skipped,
726 726 self.obsolete_with_successor_in_destination,
727 727 )
728 728 editopt = opts.get('edit')
729 729 editform = b'rebase.collapse'
730 730 if self.collapsemsg:
731 731 commitmsg = self.collapsemsg
732 732 else:
733 733 commitmsg = b'Collapsed revision'
734 734 for rebased in sorted(self.state):
735 735 if rebased not in self.skipped:
736 736 commitmsg += b'\n* %s' % repo[rebased].description()
737 737 editopt = True
738 738 editor = cmdutil.getcommiteditor(edit=editopt, editform=editform)
739 739 revtoreuse = max(self.state)
740 740
741 741 self.wctx.setparents(repo[p1].node(), repo[self.external].node())
742 742 newnode = self._concludenode(
743 743 revtoreuse, editor, commitmsg=commitmsg
744 744 )
745 745
746 746 if newnode is not None:
747 747 newrev = repo[newnode].rev()
748 748 for oldrev in self.state:
749 749 self.state[oldrev] = newrev
750 750
751 751 if b'qtip' in repo.tags():
752 752 updatemq(repo, self.state, self.skipped, **opts)
753 753
754 754 # restore original working directory
755 755 # (we do this before stripping)
756 756 newwd = self.state.get(self.originalwd, self.originalwd)
757 757 if newwd < 0:
758 758 # original directory is a parent of rebase set root or ignored
759 759 newwd = self.originalwd
760 760 if newwd not in [c.rev() for c in repo[None].parents()]:
761 761 ui.note(_(b"update back to initial working directory parent\n"))
762 762 mergemod.update(repo[newwd])
763 763
764 764 collapsedas = None
765 765 if self.collapsef and not self.keepf:
766 766 collapsedas = newnode
767 767 clearrebased(
768 768 ui,
769 769 repo,
770 770 self.destmap,
771 771 self.state,
772 772 self.skipped,
773 773 collapsedas,
774 774 self.keepf,
775 775 fm=fm,
776 776 backup=self.backupf,
777 777 )
778 778
779 779 clearstatus(repo)
780 780 clearcollapsemsg(repo)
781 781
782 782 ui.note(_(b"rebase completed\n"))
783 783 util.unlinkpath(repo.sjoin(b'undo'), ignoremissing=True)
784 784 if self.skipped:
785 785 skippedlen = len(self.skipped)
786 786 ui.note(_(b"%d revisions have been skipped\n") % skippedlen)
787 787 fm.end()
788 788
789 789 if (
790 790 self.activebookmark
791 791 and self.activebookmark in repo._bookmarks
792 792 and repo[b'.'].node() == repo._bookmarks[self.activebookmark]
793 793 ):
794 794 bookmarks.activate(repo, self.activebookmark)
795 795
796 796 def _abort(self, backup=True, suppwarns=False, dryrun=False, confirm=False):
797 797 '''Restore the repository to its original state.'''
798 798
799 799 repo = self.repo
800 800 try:
801 801 # If the first commits in the rebased set get skipped during the
802 802 # rebase, their values within the state mapping will be the dest
803 803 # rev id. The rebased list must not contain the dest rev
804 804 # (issue4896)
805 805 rebased = [
806 806 s
807 807 for r, s in self.state.items()
808 808 if s >= 0 and s != r and s != self.destmap[r]
809 809 ]
810 810 immutable = [d for d in rebased if not repo[d].mutable()]
811 811 cleanup = True
812 812 if immutable:
813 813 repo.ui.warn(
814 814 _(b"warning: can't clean up public changesets %s\n")
815 815 % b', '.join(bytes(repo[r]) for r in immutable),
816 816 hint=_(b"see 'hg help phases' for details"),
817 817 )
818 818 cleanup = False
819 819
820 820 descendants = set()
821 821 if rebased:
822 822 descendants = set(repo.changelog.descendants(rebased))
823 823 if descendants - set(rebased):
824 824 repo.ui.warn(
825 825 _(
826 826 b"warning: new changesets detected on "
827 827 b"destination branch, can't strip\n"
828 828 )
829 829 )
830 830 cleanup = False
831 831
832 832 if cleanup:
833 833
834 834 if rebased:
835 835 strippoints = [
836 836 c.node() for c in repo.set(b'roots(%ld)', rebased)
837 837 ]
838 838
839 839 updateifonnodes = set(rebased)
840 840 updateifonnodes.update(self.destmap.values())
841 841
842 842 if not confirm:
843 843 # note: when dry run is set the `rebased` and `destmap`
844 844 # variables seem to contain "bad" contents, so do not
845 845 # rely on them. As dryrun does not need this part of
846 846 # the cleanup, this is "fine"
847 847 updateifonnodes.add(self.originalwd)
848 848
849 849 shouldupdate = repo[b'.'].rev() in updateifonnodes
850 850
851 851 # Update away from the rebase if necessary
852 852 if not dryrun and shouldupdate:
853 853 mergemod.clean_update(repo[self.originalwd])
854 854
855 855 # Strip from the first rebased revision
856 856 if rebased:
857 857 repair.strip(repo.ui, repo, strippoints, backup=backup)
858 858
859 859 if self.activebookmark and self.activebookmark in repo._bookmarks:
860 860 bookmarks.activate(repo, self.activebookmark)
861 861
862 862 finally:
863 863 clearstatus(repo)
864 864 clearcollapsemsg(repo)
865 865 if not suppwarns:
866 866 repo.ui.warn(_(b'rebase aborted\n'))
867 867 return 0
868 868
869 869
870 870 @command(
871 871 b'rebase',
872 872 [
873 873 (
874 874 b's',
875 875 b'source',
876 876 [],
877 877 _(b'rebase the specified changesets and their descendants'),
878 878 _(b'REV'),
879 879 ),
880 880 (
881 881 b'b',
882 882 b'base',
883 883 [],
884 884 _(b'rebase everything from branching point of specified changeset'),
885 885 _(b'REV'),
886 886 ),
887 887 (b'r', b'rev', [], _(b'rebase these revisions'), _(b'REV')),
888 888 (
889 889 b'd',
890 890 b'dest',
891 891 b'',
892 892 _(b'rebase onto the specified changeset'),
893 893 _(b'REV'),
894 894 ),
895 895 (b'', b'collapse', False, _(b'collapse the rebased changesets')),
896 896 (
897 897 b'm',
898 898 b'message',
899 899 b'',
900 900 _(b'use text as collapse commit message'),
901 901 _(b'TEXT'),
902 902 ),
903 903 (b'e', b'edit', False, _(b'invoke editor on commit messages')),
904 904 (
905 905 b'l',
906 906 b'logfile',
907 907 b'',
908 908 _(b'read collapse commit message from file'),
909 909 _(b'FILE'),
910 910 ),
911 911 (b'k', b'keep', False, _(b'keep original changesets')),
912 912 (b'', b'keepbranches', False, _(b'keep original branch names')),
913 913 (b'D', b'detach', False, _(b'(DEPRECATED)')),
914 914 (b'i', b'interactive', False, _(b'(DEPRECATED)')),
915 915 (b't', b'tool', b'', _(b'specify merge tool')),
916 916 (b'', b'stop', False, _(b'stop interrupted rebase')),
917 917 (b'c', b'continue', False, _(b'continue an interrupted rebase')),
918 918 (b'a', b'abort', False, _(b'abort an interrupted rebase')),
919 919 (
920 920 b'',
921 921 b'auto-orphans',
922 922 b'',
923 923 _(
924 924 b'automatically rebase orphan revisions '
925 925 b'in the specified revset (EXPERIMENTAL)'
926 926 ),
927 927 ),
928 928 ]
929 929 + cmdutil.dryrunopts
930 930 + cmdutil.formatteropts
931 931 + cmdutil.confirmopts,
932 932 _(b'[[-s REV]... | [-b REV]... | [-r REV]...] [-d REV] [OPTION]...'),
933 933 helpcategory=command.CATEGORY_CHANGE_MANAGEMENT,
934 934 )
935 935 def rebase(ui, repo, **opts):
936 936 """move changeset (and descendants) to a different branch
937 937
938 938 Rebase uses repeated merging to graft changesets from one part of
939 939 history (the source) onto another (the destination). This can be
940 940 useful for linearizing *local* changes relative to a master
941 941 development tree.
942 942
943 943 Published commits cannot be rebased (see :hg:`help phases`).
944 944 To copy commits, see :hg:`help graft`.
945 945
946 946 If you don't specify a destination changeset (``-d/--dest``), rebase
947 947 will use the same logic as :hg:`merge` to pick a destination. If
948 948 the current branch contains exactly one other head, the other head
949 949 is merged with by default. Otherwise, an explicit revision with
950 950 which to merge must be provided. (The destination changeset is not
951 951 modified by rebasing, but new changesets are added as its
952 952 descendants.)
953 953
954 954 Here are the ways to select changesets:
955 955
956 956 1. Explicitly select them using ``--rev``.
957 957
958 958 2. Use ``--source`` to select a root changeset and include all of its
959 959 descendants.
960 960
961 961 3. Use ``--base`` to select a changeset; rebase will find ancestors
962 962 and their descendants which are not also ancestors of the destination.
963 963
964 964 4. If you do not specify any of ``--rev``, ``--source``, or ``--base``,
965 965 rebase will use ``--base .`` as above.
966 966
967 967 If ``--source`` or ``--rev`` is used, special names ``SRC`` and ``ALLSRC``
968 968 can be used in ``--dest``. The destination is then calculated per source
969 969 revision, with ``SRC`` substituted by that single source revision and
970 970 ``ALLSRC`` substituted by all source revisions.
971 971
972 972 Rebase will destroy original changesets unless you use ``--keep``.
973 973 It will also move your bookmarks (even if you do).
974 974
975 975 Some changesets may be dropped if they do not contribute changes
976 976 (e.g. merges from the destination branch).
977 977
978 978 Unlike ``merge``, rebase will do nothing if you are at the branch tip of
979 979 a named branch with two heads. You will need to explicitly specify source
980 980 and/or destination.
981 981
982 982 If you need to use a tool to automate merge/conflict decisions, you
983 983 can specify one with ``--tool``, see :hg:`help merge-tools`.
984 984 As a caveat: the tool will not be used to mediate when a file was
985 985 deleted; there is no hook presently available for this.
986 986
987 987 If a rebase is interrupted to manually resolve a conflict, it can be
988 988 continued with --continue/-c, aborted with --abort/-a, or stopped with
989 989 --stop.
990 990
991 991 .. container:: verbose
992 992
993 993 Examples:
994 994
995 995 - move "local changes" (current commit back to branching point)
996 996 to the current branch tip after a pull::
997 997
998 998 hg rebase
999 999
1000 1000 - move a single changeset to the stable branch::
1001 1001
1002 1002 hg rebase -r 5f493448 -d stable
1003 1003
1004 1004 - splice a commit and all its descendants onto another part of history::
1005 1005
1006 1006 hg rebase --source c0c3 --dest 4cf9
1007 1007
1008 1008 - rebase everything on a branch marked by a bookmark onto the
1009 1009 default branch::
1010 1010
1011 1011 hg rebase --base myfeature --dest default
1012 1012
1013 1013 - collapse a sequence of changes into a single commit::
1014 1014
1015 1015 hg rebase --collapse -r 1520:1525 -d .
1016 1016
1017 1017 - move a named branch while preserving its name::
1018 1018
1019 1019 hg rebase -r "branch(featureX)" -d 1.3 --keepbranches
1020 1020
1021 1021 - stabilize orphaned changesets so history looks linear::
1022 1022
1023 1023 hg rebase -r 'orphan()-obsolete()'\
1024 1024 -d 'first(max((successors(max(roots(ALLSRC) & ::SRC)^)-obsolete())::) +\
1025 1025 max(::((roots(ALLSRC) & ::SRC)^)-obsolete()))'
1026 1026
1027 1027 Configuration Options:
1028 1028
1029 1029 You can make rebase require a destination if you set the following config
1030 1030 option::
1031 1031
1032 1032 [commands]
1033 1033 rebase.requiredest = True
1034 1034
1035 1035 By default, rebase will close the transaction after each commit. For
1036 1036 performance purposes, you can configure rebase to use a single transaction
1037 1037 across the entire rebase. WARNING: This setting introduces a significant
1038 1038 risk of losing the work you've done in a rebase if the rebase aborts
1039 1039 unexpectedly::
1040 1040
1041 1041 [rebase]
1042 1042 singletransaction = True
1043 1043
1044 1044 By default, rebase writes to the working copy, but you can configure it to
1045 1045 run in-memory for better performance. When the rebase is not moving the
1046 1046 parent(s) of the working copy (AKA the "currently checked out changesets"),
1047 1047 this may also allow it to run even if the working copy is dirty::
1048 1048
1049 1049 [rebase]
1050 1050 experimental.inmemory = True
1051 1051
1052 1052 Return Values:
1053 1053
1054 1054 Returns 0 on success, 1 if nothing to rebase or there are
1055 1055 unresolved conflicts.
1056 1056
1057 1057 """
1058 1058 inmemory = ui.configbool(b'rebase', b'experimental.inmemory')
1059 1059 action = cmdutil.check_at_most_one_arg(opts, 'abort', 'stop', 'continue')
1060 1060 if action:
1061 1061 cmdutil.check_incompatible_arguments(
1062 1062 opts, action, ['confirm', 'dry_run']
1063 1063 )
1064 1064 cmdutil.check_incompatible_arguments(
1065 1065 opts, action, ['rev', 'source', 'base', 'dest']
1066 1066 )
1067 1067 cmdutil.check_at_most_one_arg(opts, 'confirm', 'dry_run')
1068 1068 cmdutil.check_at_most_one_arg(opts, 'rev', 'source', 'base')
1069 1069
1070 1070 if action or repo.currenttransaction() is not None:
1071 1071 # in-memory rebase is not compatible with resuming rebases.
1072 1072 # (Or if it is run within a transaction, since the restart logic can
1073 1073 # fail the entire transaction.)
1074 1074 inmemory = False
1075 1075
1076 1076 if opts.get('auto_orphans'):
1077 1077 disallowed_opts = set(opts) - {'auto_orphans'}
1078 1078 cmdutil.check_incompatible_arguments(
1079 1079 opts, 'auto_orphans', disallowed_opts
1080 1080 )
1081 1081
1082 1082 userrevs = list(repo.revs(opts.get('auto_orphans')))
1083 1083 opts['rev'] = [revsetlang.formatspec(b'%ld and orphan()', userrevs)]
1084 1084 opts['dest'] = b'_destautoorphanrebase(SRC)'
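# (descriptive note: this limits the rebase set to the orphans in the
# requested revset and lets the _destautoorphanrebase(SRC) revset pick a
# destination per source revision, see _definedestmap below)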
1085 1085
1086 1086 if opts.get('dry_run') or opts.get('confirm'):
1087 1087 return _dryrunrebase(ui, repo, action, opts)
1088 1088 elif action == 'stop':
1089 1089 rbsrt = rebaseruntime(repo, ui)
1090 1090 with repo.wlock(), repo.lock():
1091 1091 rbsrt.restorestatus()
1092 1092 if rbsrt.collapsef:
1093 1093 raise error.StateError(_(b"cannot stop in --collapse session"))
1094 1094 allowunstable = obsolete.isenabled(repo, obsolete.allowunstableopt)
1095 1095 if not (rbsrt.keepf or allowunstable):
1096 1096 raise error.StateError(
1097 1097 _(
1098 1098 b"cannot remove original changesets with"
1099 1099 b" unrebased descendants"
1100 1100 ),
1101 1101 hint=_(
1102 1102 b'either enable obsmarkers to allow unstable '
1103 1103 b'revisions or use --keep to keep original '
1104 1104 b'changesets'
1105 1105 ),
1106 1106 )
1107 1107 # update to the current working revision
1108 1108 # to clear interrupted merge
1109 1109 mergemod.clean_update(repo[rbsrt.originalwd])
1110 1110 rbsrt._finishrebase()
1111 1111 return 0
1112 1112 elif inmemory:
1113 1113 try:
1114 1114 # in-memory merge doesn't support conflicts, so if we hit any, abort
1115 1115 # and re-run as an on-disk merge.
1116 1116 overrides = {(b'rebase', b'singletransaction'): True}
1117 1117 with ui.configoverride(overrides, b'rebase'):
1118 1118 return _dorebase(ui, repo, action, opts, inmemory=inmemory)
1119 1119 except error.InMemoryMergeConflictsError:
1120 1120 if ui.configbool(b'devel', b'rebase.force-in-memory-merge'):
1121 1121 raise
1122 1122 ui.warn(
1123 1123 _(
1124 1124 b'hit merge conflicts; re-running rebase without in-memory'
1125 1125 b' merge\n'
1126 1126 )
1127 1127 )
1128 1128 clearstatus(repo)
1129 1129 clearcollapsemsg(repo)
1130 1130 return _dorebase(ui, repo, action, opts, inmemory=False)
1131 1131 else:
1132 1132 return _dorebase(ui, repo, action, opts)
1133 1133
1134 1134
1135 1135 def _dryrunrebase(ui, repo, action, opts):
1136 1136 rbsrt = rebaseruntime(repo, ui, inmemory=True, dryrun=True, opts=opts)
1137 1137 confirm = opts.get('confirm')
1138 1138 if confirm:
1139 1139 ui.status(_(b'starting in-memory rebase\n'))
1140 1140 else:
1141 1141 ui.status(
1142 1142 _(b'starting dry-run rebase; repository will not be changed\n')
1143 1143 )
1144 1144 with repo.wlock(), repo.lock():
1145 1145 needsabort = True
1146 1146 try:
1147 1147 overrides = {(b'rebase', b'singletransaction'): True}
1148 1148 with ui.configoverride(overrides, b'rebase'):
1149 1149 res = _origrebase(
1150 1150 ui,
1151 1151 repo,
1152 1152 action,
1153 1153 opts,
1154 1154 rbsrt,
1155 1155 )
1156 1156 if res == _nothingtorebase():
1157 1157 needsabort = False
1158 1158 return res
1159 1159 except error.ConflictResolutionRequired:
1160 1160 ui.status(_(b'hit a merge conflict\n'))
1161 1161 return 1
1162 1162 except error.Abort:
1163 1163 needsabort = False
1164 1164 raise
1165 1165 else:
1166 1166 if confirm:
1167 1167 ui.status(_(b'rebase completed successfully\n'))
1168 1168 if not ui.promptchoice(_(b'apply changes (yn)?$$ &Yes $$ &No')):
1169 1169 # finish unfinished rebase
1170 1170 rbsrt._finishrebase()
1171 1171 else:
1172 1172 rbsrt._prepareabortorcontinue(
1173 1173 isabort=True,
1174 1174 backup=False,
1175 1175 suppwarns=True,
1176 1176 confirm=confirm,
1177 1177 )
1178 1178 needsabort = False
1179 1179 else:
1180 1180 ui.status(
1181 1181 _(
1182 1182 b'dry-run rebase completed successfully; run without'
1183 1183 b' -n/--dry-run to perform this rebase\n'
1184 1184 )
1185 1185 )
1186 1186 return 0
1187 1187 finally:
1188 1188 if needsabort:
1189 1189 # no need to store backup in case of dryrun
1190 1190 rbsrt._prepareabortorcontinue(
1191 1191 isabort=True,
1192 1192 backup=False,
1193 1193 suppwarns=True,
1194 1194 dryrun=opts.get('dry_run'),
1195 1195 )
1196 1196
1197 1197
1198 1198 def _dorebase(ui, repo, action, opts, inmemory=False):
1199 1199 rbsrt = rebaseruntime(repo, ui, inmemory, opts=opts)
1200 1200 return _origrebase(ui, repo, action, opts, rbsrt)
1201 1201
1202 1202
1203 1203 def _origrebase(ui, repo, action, opts, rbsrt):
1204 1204 assert action != 'stop'
1205 1205 with repo.wlock(), repo.lock():
1206 1206 if opts.get('interactive'):
1207 1207 try:
1208 1208 if extensions.find(b'histedit'):
1209 1209 enablehistedit = b''
1210 1210 except KeyError:
1211 1211 enablehistedit = b" --config extensions.histedit="
1212 1212 help = b"hg%s help -e histedit" % enablehistedit
1213 1213 msg = (
1214 1214 _(
1215 1215 b"interactive history editing is supported by the "
1216 1216 b"'histedit' extension (see \"%s\")"
1217 1217 )
1218 1218 % help
1219 1219 )
1220 1220 raise error.InputError(msg)
1221 1221
1222 1222 if rbsrt.collapsemsg and not rbsrt.collapsef:
1223 1223 raise error.InputError(
1224 1224 _(b'message can only be specified with collapse')
1225 1225 )
1226 1226
1227 1227 if action:
1228 1228 if rbsrt.collapsef:
1229 1229 raise error.InputError(
1230 1230 _(b'cannot use collapse with continue or abort')
1231 1231 )
1232 1232 if action == 'abort' and opts.get('tool', False):
1233 1233 ui.warn(_(b'tool option will be ignored\n'))
1234 1234 if action == 'continue':
1235 1235 ms = mergestatemod.mergestate.read(repo)
1236 1236 mergeutil.checkunresolved(ms)
1237 1237
1238 1238 retcode = rbsrt._prepareabortorcontinue(isabort=(action == 'abort'))
1239 1239 if retcode is not None:
1240 1240 return retcode
1241 1241 else:
1242 1242 # search default destination in this space
1243 1243 # used in the 'hg pull --rebase' case, see issue 5214.
1244 1244 destspace = opts.get('_destspace')
1245 1245 destmap = _definedestmap(
1246 1246 ui,
1247 1247 repo,
1248 1248 rbsrt.inmemory,
1249 1249 opts.get('dest', None),
1250 1250 opts.get('source', []),
1251 1251 opts.get('base', []),
1252 1252 opts.get('rev', []),
1253 1253 destspace=destspace,
1254 1254 )
1255 1255 retcode = rbsrt._preparenewrebase(destmap)
1256 1256 if retcode is not None:
1257 1257 return retcode
1258 1258 storecollapsemsg(repo, rbsrt.collapsemsg)
1259 1259
1260 1260 tr = None
1261 1261
1262 1262 singletr = ui.configbool(b'rebase', b'singletransaction')
1263 1263 if singletr:
1264 1264 tr = repo.transaction(b'rebase')
1265 1265
1266 1266 # If `rebase.singletransaction` is enabled, wrap the entire operation in
1267 1267 # one transaction here. Otherwise, transactions are obtained when
1268 1268 # committing each node, which is slower but allows partial success.
1269 1269 with util.acceptintervention(tr):
1270 1270 rbsrt._performrebase(tr)
1271 1271 if not rbsrt.dryrun:
1272 1272 rbsrt._finishrebase()
1273 1273
1274 1274
1275 1275 def _definedestmap(ui, repo, inmemory, destf, srcf, basef, revf, destspace):
1276 1276 """use revisions argument to define destmap {srcrev: destrev}"""
1277 1277 if revf is None:
1278 1278 revf = []
1279 1279
1280 1280 # destspace is here to work around issues with `hg pull --rebase` see
1281 1281 # issue5214 for details
1282 1282
1283 1283 cmdutil.checkunfinished(repo)
1284 1284 if not inmemory:
1285 1285 cmdutil.bailifchanged(repo)
1286 1286
1287 1287 if ui.configbool(b'commands', b'rebase.requiredest') and not destf:
1288 1288 raise error.InputError(
1289 1289 _(b'you must specify a destination'),
1290 1290 hint=_(b'use: hg rebase -d REV'),
1291 1291 )
1292 1292
1293 1293 dest = None
1294 1294
1295 1295 if revf:
1296 1296 rebaseset = logcmdutil.revrange(repo, revf)
1297 1297 if not rebaseset:
1298 1298 ui.status(_(b'empty "rev" revision set - nothing to rebase\n'))
1299 1299 return None
1300 1300 elif srcf:
1301 1301 src = logcmdutil.revrange(repo, srcf)
1302 1302 if not src:
1303 1303 ui.status(_(b'empty "source" revision set - nothing to rebase\n'))
1304 1304 return None
1305 1305 # `+ (%ld)` to work around `wdir()::` being empty
1306 1306 rebaseset = repo.revs(b'(%ld):: + (%ld)', src, src)
1307 1307 else:
1308 1308 base = logcmdutil.revrange(repo, basef or [b'.'])
1309 1309 if not base:
1310 1310 ui.status(
1311 1311 _(b'empty "base" revision set - ' b"can't compute rebase set\n")
1312 1312 )
1313 1313 return None
1314 1314 if destf:
1315 1315 # --base does not support multiple destinations
1316 1316 dest = logcmdutil.revsingle(repo, destf)
1317 1317 else:
1318 1318 dest = repo[_destrebase(repo, base, destspace=destspace)]
1319 1319 destf = bytes(dest)
1320 1320
1321 1321 roots = [] # selected children of branching points
1322 1322 bpbase = {} # {branchingpoint: [origbase]}
1323 1323 for b in base: # group bases by branching points
1324 1324 bp = repo.revs(b'ancestor(%d, %d)', b, dest.rev()).first()
1325 1325 bpbase[bp] = bpbase.get(bp, []) + [b]
1326 1326 if None in bpbase:
1327 1327 # emulate the old behavior, showing "nothing to rebase" (a better
1328 1328 # behavior may be to abort with a "cannot find branching point" error)
1329 1329 bpbase.clear()
1330 1330 for bp, bs in bpbase.items(): # calculate roots
1331 1331 roots += list(repo.revs(b'children(%d) & ancestors(%ld)', bp, bs))
1332 1332
1333 1333 rebaseset = repo.revs(b'%ld::', roots)
1334 1334
1335 1335 if not rebaseset:
1336 1336 # transform to list because smartsets are not comparable to
1337 1337 # lists. This should be improved to honor laziness of
1338 1338 # smartset.
1339 1339 if list(base) == [dest.rev()]:
1340 1340 if basef:
1341 1341 ui.status(
1342 1342 _(
1343 1343 b'nothing to rebase - %s is both "base"'
1344 1344 b' and destination\n'
1345 1345 )
1346 1346 % dest
1347 1347 )
1348 1348 else:
1349 1349 ui.status(
1350 1350 _(
1351 1351 b'nothing to rebase - working directory '
1352 1352 b'parent is also destination\n'
1353 1353 )
1354 1354 )
1355 1355 elif not repo.revs(b'%ld - ::%d', base, dest.rev()):
1356 1356 if basef:
1357 1357 ui.status(
1358 1358 _(
1359 1359 b'nothing to rebase - "base" %s is '
1360 1360 b'already an ancestor of destination '
1361 1361 b'%s\n'
1362 1362 )
1363 1363 % (b'+'.join(bytes(repo[r]) for r in base), dest)
1364 1364 )
1365 1365 else:
1366 1366 ui.status(
1367 1367 _(
1368 1368 b'nothing to rebase - working '
1369 1369 b'directory parent is already an '
1370 1370 b'ancestor of destination %s\n'
1371 1371 )
1372 1372 % dest
1373 1373 )
1374 1374 else: # can it happen?
1375 1375 ui.status(
1376 1376 _(b'nothing to rebase from %s to %s\n')
1377 1377 % (b'+'.join(bytes(repo[r]) for r in base), dest)
1378 1378 )
1379 1379 return None
1380 1380
1381 1381 if wdirrev in rebaseset:
1382 1382 raise error.InputError(_(b'cannot rebase the working copy'))
1383 1383 rebasingwcp = repo[b'.'].rev() in rebaseset
1384 1384 ui.log(
1385 1385 b"rebase",
1386 1386 b"rebasing working copy parent: %r\n",
1387 1387 rebasingwcp,
1388 1388 rebase_rebasing_wcp=rebasingwcp,
1389 1389 )
1390 1390 if inmemory and rebasingwcp:
1391 1391 # Check these since we did not before.
1392 1392 cmdutil.checkunfinished(repo)
1393 1393 cmdutil.bailifchanged(repo)
1394 1394
1395 1395 if not destf:
1396 1396 dest = repo[_destrebase(repo, rebaseset, destspace=destspace)]
1397 1397 destf = bytes(dest)
1398 1398
1399 1399 allsrc = revsetlang.formatspec(b'%ld', rebaseset)
1400 1400 alias = {b'ALLSRC': allsrc}
1401 1401
1402 1402 if dest is None:
1403 1403 try:
1404 1404 # fast path: try to resolve dest without SRC alias
1405 1405 dest = scmutil.revsingle(repo, destf, localalias=alias)
1406 1406 except error.RepoLookupError:
1407 1407 # multi-dest path: resolve dest for each SRC separately
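# (illustrative: the --auto-orphans handling above sets the destination to
# b'_destautoorphanrebase(SRC)'; such a spec cannot resolve without SRC and
# therefore ends up in this per-source loop)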
1408 1408 destmap = {}
1409 1409 for r in rebaseset:
1410 1410 alias[b'SRC'] = revsetlang.formatspec(b'%d', r)
1411 1411 # use repo.anyrevs instead of scmutil.revsingle because we
1412 1412 # don't want to abort if destset is empty.
1413 1413 destset = repo.anyrevs([destf], user=True, localalias=alias)
1414 1414 size = len(destset)
1415 1415 if size == 1:
1416 1416 destmap[r] = destset.first()
1417 1417 elif size == 0:
1418 1418 ui.note(_(b'skipping %s - empty destination\n') % repo[r])
1419 1419 else:
1420 1420 raise error.InputError(
1421 1421 _(b'rebase destination for %s is not unique') % repo[r]
1422 1422 )
1423 1423
1424 1424 if dest is not None:
1425 1425 # single-dest case: assign dest to each rev in rebaseset
1426 1426 destrev = dest.rev()
1427 1427 destmap = {r: destrev for r in rebaseset} # {srcrev: destrev}
1428 1428
1429 1429 if not destmap:
1430 1430 ui.status(_(b'nothing to rebase - empty destination\n'))
1431 1431 return None
1432 1432
1433 1433 return destmap
1434 1434
1435 1435
1436 1436 def externalparent(repo, state, destancestors):
1437 1437 """Return the revision that should be used as the second parent
1438 1438 when the revisions in state are collapsed on top of destancestors.
1439 1439 Abort if there is more than one parent.
1440 1440 """
1441 1441 parents = set()
1442 1442 source = min(state)
1443 1443 for rev in state:
1444 1444 if rev == source:
1445 1445 continue
1446 1446 for p in repo[rev].parents():
1447 1447 if p.rev() not in state and p.rev() not in destancestors:
1448 1448 parents.add(p.rev())
1449 1449 if not parents:
1450 1450 return nullrev
1451 1451 if len(parents) == 1:
1452 1452 return parents.pop()
1453 1453 raise error.StateError(
1454 1454 _(
1455 1455 b'unable to collapse on top of %d, there is more '
1456 1456 b'than one external parent: %s'
1457 1457 )
1458 1458 % (max(destancestors), b', '.join(b"%d" % p for p in sorted(parents)))
1459 1459 )
1460 1460
1461 1461
1462 1462 def commitmemorynode(repo, wctx, editor, extra, user, date, commitmsg):
1463 1463 """Commit the memory changes with parents p1 and p2.
1464 1464 Return node of committed revision."""
1465 1465 # By convention, ``extra['branch']`` (set by extrafn) clobbers
1466 1466 # ``branch`` (used when passing ``--keepbranches``).
1467 1467 branch = None
1468 1468 if b'branch' in extra:
1469 1469 branch = extra[b'branch']
1470 1470
1471 1471 # FIXME: We call _compact() because it's required to correctly detect
1472 1472 # changed files. This was added to fix a regression shortly before the 5.5
1473 1473 # release. A proper fix will be done in the default branch.
1474 1474 wctx._compact()
1475 1475 memctx = wctx.tomemctx(
1476 1476 commitmsg,
1477 1477 date=date,
1478 1478 extra=extra,
1479 1479 user=user,
1480 1480 branch=branch,
1481 1481 editor=editor,
1482 1482 )
1483 1483 if memctx.isempty() and not repo.ui.configbool(b'ui', b'allowemptycommit'):
1484 1484 return None
1485 1485 commitres = repo.commitctx(memctx)
1486 1486 wctx.clean() # Might be reused
1487 1487 return commitres
1488 1488
1489 1489
1490 1490 def commitnode(repo, editor, extra, user, date, commitmsg):
1491 1491 """Commit the wd changes with parents p1 and p2.
1492 1492 Return node of committed revision."""
1493 1493 tr = util.nullcontextmanager
1494 1494 if not repo.ui.configbool(b'rebase', b'singletransaction'):
1495 1495 tr = lambda: repo.transaction(b'rebase')
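# With rebase.singletransaction enabled, the outer transaction opened in
# _origrebase is still active and this commit happens inside it; otherwise
# each rebased node gets its own short-lived transaction (descriptive note).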
1496 1496 with tr():
1497 1497 # Commit might fail if unresolved files exist
1498 1498 newnode = repo.commit(
1499 1499 text=commitmsg, user=user, date=date, extra=extra, editor=editor
1500 1500 )
1501 1501
1502 1502 repo.dirstate.setbranch(
1503 1503 repo[newnode].branch(), repo.currenttransaction()
1504 1504 )
1505 1505 return newnode
1506 1506
1507 1507
1508 1508 def rebasenode(repo, rev, p1, p2, base, collapse, wctx):
1509 1509 """Rebase a single revision rev on top of p1 using base as merge ancestor"""
1510 1510 # Merge phase
1511 1511 # Update to destination and merge it with local
1512 1512 p1ctx = repo[p1]
1513 1513 if wctx.isinmemory():
1514 1514 wctx.setbase(p1ctx)
1515 1515 scope = util.nullcontextmanager
1516 1516 else:
1517 1517 if repo[b'.'].rev() != p1:
1518 1518 repo.ui.debug(b" update to %d:%s\n" % (p1, p1ctx))
1519 1519 mergemod.clean_update(p1ctx)
1520 1520 else:
1521 1521 repo.ui.debug(b" already in destination\n")
1522 1522 scope = lambda: repo.dirstate.changing_parents(repo)
1523 1523 # This is, alas, necessary to invalidate workingctx's manifest cache,
1524 1524 # as well as other data we litter on it in other places.
1525 1525 wctx = repo[None]
1526 1526 repo.dirstate.write(repo.currenttransaction())
1527 1527 ctx = repo[rev]
1528 1528 repo.ui.debug(b" merge against %d:%s\n" % (rev, ctx))
1529 1529 if base is not None:
1530 1530 repo.ui.debug(b" detach base %d:%s\n" % (base, repo[base]))
1531 1531
1532 1532 with scope():
1533 1533 # See explanation in merge.graft()
1534 1534 mergeancestor = repo.changelog.isancestor(p1ctx.node(), ctx.node())
1535 1535 stats = mergemod._update(
1536 1536 repo,
1537 1537 rev,
1538 1538 branchmerge=True,
1539 1539 force=True,
1540 1540 ancestor=base,
1541 1541 mergeancestor=mergeancestor,
1542 1542 labels=[b'dest', b'source', b'parent of source'],
1543 1543 wc=wctx,
1544 1544 )
1545 1545 wctx.setparents(p1ctx.node(), repo[p2].node())
1546 1546 if collapse:
1547 1547 copies.graftcopies(wctx, ctx, p1ctx)
1548 1548 else:
1549 1549 # If we're not using --collapse, we need to
1550 1550 # duplicate copies between the revision we're
1551 1551 # rebasing and its first parent.
1552 1552 copies.graftcopies(wctx, ctx, ctx.p1())
1553 1553
1554 1554 if stats.unresolvedcount > 0:
1555 1555 if wctx.isinmemory():
1556 1556 raise error.InMemoryMergeConflictsError()
1557 1557 else:
1558 1558 raise error.ConflictResolutionRequired(b'rebase')
1559 1559
1560 1560
1561 1561 def adjustdest(repo, rev, destmap, state, skipped):
1562 1562 r"""adjust rebase destination given the current rebase state
1563 1563
1564 1564 rev is what is being rebased. Return a list of two revs, which are the
1565 1565 adjusted destinations for rev's p1 and p2, respectively. If a parent is
1566 1566 nullrev, return dest without adjustment for it.
1567 1567
1568 1568 For example, when rebasing B+E to F and C to G, rebase will first move B
1569 1569 to B1, and E's destination will be adjusted from F to B1.
1570 1570
1571 1571 B1 <- written during rebasing B
1572 1572 |
1573 1573 F <- original destination of B, E
1574 1574 |
1575 1575 | E <- rev, which is being rebased
1576 1576 | |
1577 1577 | D <- prev, one parent of rev being checked
1578 1578 | |
1579 1579 | x <- skipped, ex. no successor or successor in (::dest)
1580 1580 | |
1581 1581 | C <- rebased as C', different destination
1582 1582 | |
1583 1583 | B <- rebased as B1 C'
1584 1584 |/ |
1585 1585 A G <- destination of C, different
1586 1586
1587 1587 Another example involves a merge changeset: with rebase -r C+G+H -d K, rebase will
1588 1588 first move C to C1 and G to G1, and when it checks H, the adjusted
1589 1589 destinations will be [C1, G1].
1590 1590
1591 1591 H C1 G1
1592 1592 /| | /
1593 1593 F G |/
1594 1594 K | | -> K
1595 1595 | C D |
1596 1596 | |/ |
1597 1597 | B | ...
1598 1598 |/ |/
1599 1599 A A
1600 1600
1601 1601 Besides, adjust dest according to existing rebase information. For example,
1602 1602
1603 1603 B C D B needs to be rebased on top of C, C needs to be rebased on top
1604 1604 \|/ of D. We will rebase C first.
1605 1605 A
1606 1606
1607 1607 C' After rebasing C, when considering B's destination, use C'
1608 1608 | instead of the original C.
1609 1609 B D
1610 1610 \ /
1611 1611 A
1612 1612 """
1613 1613 # pick already rebased revs with same dest from state as interesting source
1614 1614 dest = destmap[rev]
1615 1615 source = [
1616 1616 s
1617 1617 for s, d in state.items()
1618 1618 if d > 0 and destmap[s] == dest and s not in skipped
1619 1619 ]
1620 1620
1621 1621 result = []
1622 1622 for prev in repo.changelog.parentrevs(rev):
1623 1623 adjusted = dest
1624 1624 if prev != nullrev:
1625 1625 candidate = repo.revs(b'max(%ld and (::%d))', source, prev).first()
1626 1626 if candidate is not None:
1627 1627 adjusted = state[candidate]
1628 1628 if adjusted == dest and dest in state:
1629 1629 adjusted = state[dest]
1630 1630 if adjusted == revtodo:
1631 1631 # sortsource should produce an order that makes this impossible
1632 1632 raise error.ProgrammingError(
1633 1633 b'rev %d should be rebased already at this time' % dest
1634 1634 )
1635 1635 result.append(adjusted)
1636 1636 return result
1637 1637
1638 1638
1639 1639 def _checkobsrebase(repo, ui, rebaseobsrevs, rebaseobsskipped):
1640 1640 """
1641 1641 Abort if rebase will create divergence or rebase is noop because of markers
1642 1642
1643 1643 `rebaseobsrevs`: set of obsolete revision in source
1644 1644 `rebaseobsskipped`: set of revisions from source skipped because they have
1645 1645 successors in destination or no non-obsolete successor.
1646 1646 """
1647 1647 # Obsolete node with successors not in dest leads to divergence
1648 1648 divergenceok = obsolete.isenabled(repo, obsolete.allowdivergenceopt)
1649 1649 divergencebasecandidates = rebaseobsrevs - rebaseobsskipped
1650 1650
1651 1651 if divergencebasecandidates and not divergenceok:
1652 1652 divhashes = (bytes(repo[r]) for r in divergencebasecandidates)
1653 1653 msg = _(b"this rebase will cause divergences from: %s")
1654 1654 h = _(
1655 1655 b"to force the rebase please set "
1656 1656 b"experimental.evolution.allowdivergence=True"
1657 1657 )
1658 1658 raise error.StateError(msg % (b",".join(divhashes),), hint=h)
1659 1659
1660 1660
1661 1661 def successorrevs(unfi, rev):
1662 1662 """yield revision numbers for successors of rev"""
1663 1663 assert unfi.filtername is None
1664 1664 get_rev = unfi.changelog.index.get_rev
1665 1665 for s in obsutil.allsuccessors(unfi.obsstore, [unfi[rev].node()]):
1666 1666 r = get_rev(s)
1667 1667 if r is not None:
1668 1668 yield r
1669 1669
1670 1670
1671 1671 def defineparents(repo, rev, destmap, state, skipped, obsskipped):
1672 1672 """Return new parents and optionally a merge base for rev being rebased
1673 1673
1674 1674 The destination specified by "dest" cannot always be used directly because
1675 1675 a previous rebase result could affect the destination. For example,
1676 1676
1677 1677 D E rebase -r C+D+E -d B
1678 1678 |/ C will be rebased to C'
1679 1679 B C D's new destination will be C' instead of B
1680 1680 |/ E's new destination will be C' instead of B
1681 1681 A
1682 1682
1683 1683 The new parents of a merge are slightly more complicated. See the comment
1684 1684 block below.
1685 1685 """
1686 1686 # use unfiltered changelog since successorrevs may return filtered nodes
1687 1687 assert repo.filtername is None
1688 1688 cl = repo.changelog
1689 1689 isancestor = cl.isancestorrev
1690 1690
1691 1691 dest = destmap[rev]
1692 1692 oldps = repo.changelog.parentrevs(rev) # old parents
1693 1693 newps = [nullrev, nullrev] # new parents
1694 1694 dests = adjustdest(repo, rev, destmap, state, skipped)
1695 1695 bases = list(oldps) # merge base candidates, initially just old parents
1696 1696
1697 1697 if all(r == nullrev for r in oldps[1:]):
1698 1698 # For non-merge changeset, just move p to adjusted dest as requested.
1699 1699 newps[0] = dests[0]
1700 1700 else:
1701 1701 # For a merge changeset, if we move p to dests[i] unconditionally, both
1702 1702 # parents may change and the end result looks like "the merge loses a
1703 1703 # parent", which is a surprise. This is a limitation because "--dest" only
1704 1704 # accepts one dest per src.
1705 1705 #
1706 1706 # Therefore, only move p with reasonable conditions (in this order):
1707 1707 # 1. use dest, if dest is a descendant of (p or one of p's successors)
1708 1708 # 2. use p's rebased result, if p is rebased (state[p] > 0)
1709 1709 #
1710 1710 # Comparing with adjustdest, the logic here does some additional work:
1711 1711 # 1. decide which parents will not be moved towards dest
1712 1712 # 2. if the above decision is "no", should a parent still be moved
1713 1713 # because it was rebased?
1714 1714 #
1715 1715 # For example:
1716 1716 #
1717 1717 # C # "rebase -r C -d D" is an error since none of the parents
1718 1718 # /| # can be moved. "rebase -r B+C -d D" will move C's parent
1719 1719 # A B D # B (using rule "2."), since B will be rebased.
1720 1720 #
1721 1721 # The loop tries not to rely on the fact that a Mercurial node has
1722 1722 # at most 2 parents.
1723 1723 for i, p in enumerate(oldps):
1724 1724 np = p # new parent
1725 1725 if any(isancestor(x, dests[i]) for x in successorrevs(repo, p)):
1726 1726 np = dests[i]
1727 1727 elif p in state and state[p] > 0:
1728 1728 np = state[p]
1729 1729
1730 1730 # If one parent becomes an ancestor of the other, drop the ancestor
1731 1731 for j, x in enumerate(newps[:i]):
1732 1732 if x == nullrev:
1733 1733 continue
1734 1734 if isancestor(np, x): # CASE-1
1735 1735 np = nullrev
1736 1736 elif isancestor(x, np): # CASE-2
1737 1737 newps[j] = np
1738 1738 np = nullrev
1739 1739 # New parents forming an ancestor relationship does not
1740 1740 # mean the old parents have a similar relationship. Do not
1741 1741 # set bases[x] to nullrev.
1742 1742 bases[j], bases[i] = bases[i], bases[j]
1743 1743
1744 1744 newps[i] = np
1745 1745
1746 1746 # "rebasenode" updates to new p1, and the old p1 will be used as merge
1747 1747 # base. If only p2 changes, merging using unchanged p1 as merge base is
1748 1748 # suboptimal. Therefore swap parents to make the merge sane.
1749 1749 if newps[1] != nullrev and oldps[0] == newps[0]:
1750 1750 assert len(newps) == 2 and len(oldps) == 2
1751 1751 newps.reverse()
1752 1752 bases.reverse()
1753 1753
1754 1754 # No parent change might be an error because we fail to make rev a
1755 1755 # descendant of the requested dest. This can happen, for example:
1756 1756 #
1757 1757 # C # rebase -r C -d D
1758 1758 # /| # None of A and B will be changed to D and rebase fails.
1759 1759 # A B D
1760 1760 if set(newps) == set(oldps) and dest not in newps:
1761 1761 raise error.InputError(
1762 1762 _(
1763 1763 b'cannot rebase %d:%s without '
1764 1764 b'moving at least one of its parents'
1765 1765 )
1766 1766 % (rev, repo[rev])
1767 1767 )
1768 1768
1769 1769 # Source should not be ancestor of dest. The check here guarantees it's
1770 1770 # impossible. With multi-dest, the initial check does not cover complex
1771 1771 # cases since we don't have abstractions to dry-run rebase cheaply.
1772 1772 if any(p != nullrev and isancestor(rev, p) for p in newps):
1773 1773 raise error.InputError(_(b'source is ancestor of destination'))
1774 1774
1775 1775 # Check if the merge will contain unwanted changes. That may happen if
1776 1776 # there are multiple special (non-changelog ancestor) merge bases, which
1777 1777 # cannot be handled well by the 3-way merge algorithm. For example:
1778 1778 #
1779 1779 # F
1780 1780 # /|
1781 1781 # D E # "rebase -r D+E+F -d Z", when rebasing F, if "D" was chosen
1782 1782 # | | # as merge base, the difference between D and F will include
1783 1783 # B C # C, so the rebased F will contain C surprisingly. If "E" was
1784 1784 # |/ # chosen, the rebased F will contain B.
1785 1785 # A Z
1786 1786 #
1787 1787 # But our merge base candidates (D and E in above case) could still be
1788 1788 # better than the default (ancestor(F, Z) == null). Therefore still
1789 1789 # pick one (so choose p1 above).
1790 1790 if sum(1 for b in set(bases) if b != nullrev and b not in newps) > 1:
1791 1791 unwanted = [None, None] # unwanted[i]: unwanted revs if choose bases[i]
1792 1792 for i, base in enumerate(bases):
1793 1793 if base == nullrev or base in newps:
1794 1794 continue
1795 1795 # Revisions in the side (not chosen as merge base) branch that
1796 1796 # might contain "surprising" contents
1797 1797 other_bases = set(bases) - {base}
1798 1798 siderevs = list(
1799 1799 repo.revs(b'(%ld %% (%d+%d))', other_bases, base, dest)
1800 1800 )
1801 1801
1802 1802 # If those revisions are covered by rebaseset, the result is good.
1803 1803 # A merge in rebaseset would be considered to cover its ancestors.
1804 1804 if siderevs:
1805 1805 rebaseset = [
1806 1806 r for r, d in state.items() if d > 0 and r not in obsskipped
1807 1807 ]
1808 1808 merges = [
1809 1809 r for r in rebaseset if cl.parentrevs(r)[1] != nullrev
1810 1810 ]
1811 1811 unwanted[i] = list(
1812 1812 repo.revs(
1813 1813 b'%ld - (::%ld) - %ld', siderevs, merges, rebaseset
1814 1814 )
1815 1815 )
1816 1816
1817 1817 if any(revs is not None for revs in unwanted):
1818 1818 # Choose a merge base that has a minimal number of unwanted revs.
1819 1819 l, i = min(
1820 1820 (len(revs), i)
1821 1821 for i, revs in enumerate(unwanted)
1822 1822 if revs is not None
1823 1823 )
1824 1824
1825 1825 # The merge will include unwanted revisions. Abort now. Revisit this if
1826 1826 # we have a more advanced merge algorithm that handles multiple bases.
1827 1827 if l > 0:
1828 1828 unwanteddesc = _(b' or ').join(
1829 1829 (
1830 1830 b', '.join(b'%d:%s' % (r, repo[r]) for r in revs)
1831 1831 for revs in unwanted
1832 1832 if revs is not None
1833 1833 )
1834 1834 )
1835 1835 raise error.InputError(
1836 1836 _(b'rebasing %d:%s will include unwanted changes from %s')
1837 1837 % (rev, repo[rev], unwanteddesc)
1838 1838 )
1839 1839
1840 1840 # newps[0] should match merge base if possible. Currently, if newps[i]
1841 1841 # is nullrev, the only case is newps[i] and newps[j] (j < i), one is
1842 1842 # the other's ancestor. In that case, it's fine to not swap newps here.
1843 1843 # (see CASE-1 and CASE-2 above)
1844 1844 if i != 0:
1845 1845 if newps[i] != nullrev:
1846 1846 newps[0], newps[i] = newps[i], newps[0]
1847 1847 bases[0], bases[i] = bases[i], bases[0]
1848 1848
1849 1849 # "rebasenode" updates to new p1, use the corresponding merge base.
1850 1850 base = bases[0]
1851 1851
1852 1852 repo.ui.debug(b" future parents are %d and %d\n" % tuple(newps))
1853 1853
1854 1854 return newps[0], newps[1], base
1855 1855
1856 1856
1857 1857 def isagitpatch(repo, patchname):
1858 1858 """Return true if the given patch is in git format"""
1859 1859 mqpatch = os.path.join(repo.mq.path, patchname)
1860 1860 for line in patch.linereader(open(mqpatch, b'rb')):
1861 1861 if line.startswith(b'diff --git'):
1862 1862 return True
1863 1863 return False
1864 1864
1865 1865
1866 1866 def updatemq(repo, state, skipped, **opts):
1867 1867 """Update rebased mq patches - finalize and then import them"""
1868 1868 mqrebase = {}
1869 1869 mq = repo.mq
1870 1870 original_series = mq.fullseries[:]
1871 1871 skippedpatches = set()
1872 1872
1873 1873 for p in mq.applied:
1874 1874 rev = repo[p.node].rev()
1875 1875 if rev in state:
1876 1876 repo.ui.debug(
1877 1877 b'revision %d is an mq patch (%s), finalize it.\n'
1878 1878 % (rev, p.name)
1879 1879 )
1880 1880 mqrebase[rev] = (p.name, isagitpatch(repo, p.name))
1881 1881 else:
1882 1882 # Applied but not rebased, not sure this should happen
1883 1883 skippedpatches.add(p.name)
1884 1884
1885 1885 if mqrebase:
1886 1886 mq.finish(repo, mqrebase.keys())
1887 1887
1888 1888 # We must start importing from the newest revision
1889 1889 for rev in sorted(mqrebase, reverse=True):
1890 1890 if rev not in skipped:
1891 1891 name, isgit = mqrebase[rev]
1892 1892 repo.ui.note(
1893 1893 _(b'updating mq patch %s to %d:%s\n')
1894 1894 % (name, state[rev], repo[state[rev]])
1895 1895 )
1896 1896 mq.qimport(
1897 1897 repo,
1898 1898 (),
1899 1899 patchname=name,
1900 1900 git=isgit,
1901 1901 rev=[b"%d" % state[rev]],
1902 1902 )
1903 1903 else:
1904 1904 # Rebased and skipped
1905 1905 skippedpatches.add(mqrebase[rev][0])
1906 1906
1907 1907 # Patches were either applied and rebased and imported in
1908 1908 # order, applied and removed, or unapplied. Discard the removed
1909 1909 # ones while preserving the original series order and guards.
1910 1910 newseries = [
1911 1911 s
1912 1912 for s in original_series
1913 1913 if mq.guard_re.split(s, 1)[0] not in skippedpatches
1914 1914 ]
1915 1915 mq.fullseries[:] = newseries
1916 1916 mq.seriesdirty = True
1917 1917 mq.savedirty()
1918 1918
1919 1919
1920 1920 def storecollapsemsg(repo, collapsemsg):
1921 1921 """Store the collapse message to allow recovery"""
1922 1922 collapsemsg = collapsemsg or b''
1923 1923 f = repo.vfs(b"last-message.txt", b"w")
1924 1924 f.write(b"%s\n" % collapsemsg)
1925 1925 f.close()
1926 1926
1927 1927
1928 1928 def clearcollapsemsg(repo):
1929 1929 """Remove collapse message file"""
1930 1930 repo.vfs.unlinkpath(b"last-message.txt", ignoremissing=True)
1931 1931
1932 1932
1933 1933 def restorecollapsemsg(repo, isabort):
1934 1934 """Restore previously stored collapse message"""
1935 1935 try:
1936 1936 f = repo.vfs(b"last-message.txt")
1937 1937 collapsemsg = f.readline().strip()
1938 1938 f.close()
1939 1939 except FileNotFoundError:
1940 1940 if isabort:
1941 1941 # Oh well, just abort like normal
1942 1942 collapsemsg = b''
1943 1943 else:
1944 1944 raise error.Abort(_(b'missing .hg/last-message.txt for rebase'))
1945 1945 return collapsemsg
1946 1946
1947 1947
1948 1948 def clearstatus(repo):
1949 1949 """Remove the status files"""
1950 1950 # Make sure the active transaction won't write the state file
1951 1951 tr = repo.currenttransaction()
1952 1952 if tr:
1953 1953 tr.removefilegenerator(b'rebasestate')
1954 1954 repo.vfs.unlinkpath(b"rebasestate", ignoremissing=True)
1955 1955
1956 1956
1957 1957 def sortsource(destmap):
1958 1958 """yield source revisions in an order that we only rebase things once
1959 1959
1960 1960 If source and destination overlap, we should filter out revisions
1961 1961 depending on other revisions which haven't been rebased yet.
1962 1962
1963 1963 Yield a sorted list of revisions each time.
1964 1964
1965 1965 For example, when rebasing A onto B and B onto C, this function yields [B],
1966 1966 then [A], indicating B needs to be rebased first.
1967 1967
1968 1968 Raise if there is a cycle so the rebase is impossible.
1969 1969 """
1970 1970 srcset = set(destmap)
1971 1971 while srcset:
1972 1972 srclist = sorted(srcset)
1973 1973 result = []
1974 1974 for r in srclist:
1975 1975 if destmap[r] not in srcset:
1976 1976 result.append(r)
1977 1977 if not result:
1978 1978 raise error.InputError(_(b'source and destination form a cycle'))
1979 1979 srcset -= set(result)
1980 1980 yield result
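# Illustrative usage (hypothetical revision numbers): with
# destmap == {10: 20, 20: 30} the generator yields [20] first (its
# destination, 30, is outside the source set) and then [10], while
# destmap == {10: 20, 20: 10} raises the cycle error above.
#
#     for batch in sortsource(destmap):
#         ...  # rebase every revision in 'batch' before the next batch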
1981 1981
1982 1982
1983 1983 def buildstate(repo, destmap, collapse):
1984 1984 """Define which revisions are going to be rebased and where
1985 1985
1986 1986 repo: repo
1987 1987 destmap: {srcrev: destrev}
1988 1988 """
1989 1989 rebaseset = destmap.keys()
1990 1990 originalwd = repo[b'.'].rev()
1991 1991
1992 1992 # This check isn't strictly necessary, since mq detects commits over an
1993 1993 # applied patch. But it prevents messing up the working directory when
1994 1994 # a partially completed rebase is blocked by mq.
1995 1995 if b'qtip' in repo.tags():
1996 1996 mqapplied = {repo[s.node].rev() for s in repo.mq.applied}
1997 1997 if set(destmap.values()) & mqapplied:
1998 1998 raise error.StateError(_(b'cannot rebase onto an applied mq patch'))
1999 1999
2000 2000 # Get "cycle" error early by exhausting the generator.
2001 2001 sortedsrc = list(sortsource(destmap)) # a list of sorted revs
2002 2002 if not sortedsrc:
2003 2003 raise error.InputError(_(b'no matching revisions'))
2004 2004
2005 2005 # Only check the first batch of revisions to rebase, i.e. those not
2006 2006 # depending on the rest of the rebase set. This means the "source is
2007 2007 # ancestor of destination" check for the second (and following) batches
2008 2008 # is not done here. We rely on "defineparents" to do that check.
2009 2009 roots = list(repo.set(b'roots(%ld)', sortedsrc[0]))
2010 2010 if not roots:
2011 2011 raise error.InputError(_(b'no matching revisions'))
2012 2012
2013 2013 def revof(r):
2014 2014 return r.rev()
2015 2015
2016 2016 roots = sorted(roots, key=revof)
2017 2017 state = dict.fromkeys(rebaseset, revtodo)
2018 2018 emptyrebase = len(sortedsrc) == 1
2019 2019 for root in roots:
2020 2020 dest = repo[destmap[root.rev()]]
2021 2021 commonbase = root.ancestor(dest)
2022 2022 if commonbase == root:
2023 2023 raise error.InputError(_(b'source is ancestor of destination'))
2024 2024 if commonbase == dest:
2025 2025 wctx = repo[None]
2026 2026 if dest == wctx.p1():
2027 2027 # when rebasing to '.', it will use the current wd branch name
2028 2028 samebranch = root.branch() == wctx.branch()
2029 2029 else:
2030 2030 samebranch = root.branch() == dest.branch()
2031 2031 if not collapse and samebranch and dest in root.parents():
2032 2032 # mark the revision as done by setting its new revision
2033 2033 # equal to its old (current) revision
2034 2034 state[root.rev()] = root.rev()
2035 2035 repo.ui.debug(b'source is a child of destination\n')
2036 2036 continue
2037 2037
2038 2038 emptyrebase = False
2039 2039 repo.ui.debug(b'rebase onto %s starting from %s\n' % (dest, root))
2040 2040 if emptyrebase:
2041 2041 return None
2042 2042 for rev in sorted(state):
2043 2043 parents = [p for p in repo.changelog.parentrevs(rev) if p != nullrev]
2044 2044 # if all parents of this revision are done, then so is this revision
2045 2045 if parents and all((state.get(p) == p for p in parents)):
2046 2046 state[rev] = rev
2047 2047 return originalwd, destmap, state
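# buildstate() thus returns None when there is nothing to rebase, and
# otherwise the original working-directory revision, the destmap, and a
# state mapping in which revisions still to be rebased map to revtodo and
# revisions already in place map to themselves.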
2048 2048
2049 2049
2050 2050 def clearrebased(
2051 2051 ui,
2052 2052 repo,
2053 2053 destmap,
2054 2054 state,
2055 2055 skipped,
2056 2056 collapsedas=None,
2057 2057 keepf=False,
2058 2058 fm=None,
2059 2059 backup=True,
2060 2060 ):
2061 2061 """dispose of rebased revision at the end of the rebase
2062 2062
2063 2063 If `collapsedas` is not None, the rebase was a collapse whose result is the
2064 2064 `collapsedas` node.
2065 2065
2066 2066 If `keepf` is True, the rebase has --keep set and no nodes should be
2067 2067 removed (but bookmarks still need to be moved).
2068 2068
2069 2069 If `backup` is False, no backup will be stored when stripping rebased
2070 2070 revisions.
2071 2071 """
2072 2072 tonode = repo.changelog.node
2073 2073 replacements = {}
2074 2074 moves = {}
2075 2075 stripcleanup = not obsolete.isenabled(repo, obsolete.createmarkersopt)
2076 2076
2077 2077 collapsednodes = []
2078 2078 for rev, newrev in sorted(state.items()):
2079 2079 if newrev >= 0 and newrev != rev:
2080 2080 oldnode = tonode(rev)
2081 2081 newnode = collapsedas or tonode(newrev)
2082 2082 moves[oldnode] = newnode
2083 2083 succs = None
2084 2084 if rev in skipped:
2085 2085 if stripcleanup or not repo[rev].obsolete():
2086 2086 succs = ()
2087 2087 elif collapsedas:
2088 2088 collapsednodes.append(oldnode)
2089 2089 else:
2090 2090 succs = (newnode,)
2091 2091 if succs is not None:
2092 2092 replacements[(oldnode,)] = succs
2093 2093 if collapsednodes:
2094 2094 replacements[tuple(collapsednodes)] = (collapsedas,)
2095 2095 if fm:
2096 2096 hf = fm.hexfunc
2097 2097 fl = fm.formatlist
2098 2098 fd = fm.formatdict
2099 2099 changes = {}
2100 2100 for oldns, newn in replacements.items():
2101 2101 for oldn in oldns:
2102 2102 changes[hf(oldn)] = fl([hf(n) for n in newn], name=b'node')
2103 2103 nodechanges = fd(changes, key=b"oldnode", value=b"newnodes")
2104 2104 fm.data(nodechanges=nodechanges)
2105 2105 if keepf:
2106 2106 replacements = {}
2107 2107 scmutil.cleanupnodes(repo, replacements, b'rebase', moves, backup=backup)
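# cleanupnodes() records the replacements by creating obsolescence markers
# when the repository supports them and by stripping the rebased nodes
# otherwise; with --keep the emptied 'replacements' dict means only the
# bookmark moves in 'moves' are applied.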
2108 2108
2109 2109
2110 2110 def pullrebase(orig, ui, repo, *args, **opts):
2111 2111 """Call rebase after pull if the latter has been invoked with --rebase"""
2112 2112 if opts.get('rebase'):
2113 2113 if ui.configbool(b'commands', b'rebase.requiredest'):
2114 2114 msg = _(b'rebase destination required by configuration')
2115 2115 hint = _(b'use hg pull followed by hg rebase -d DEST')
2116 2116 raise error.InputError(msg, hint=hint)
2117 2117
2118 2118 with repo.wlock(), repo.lock():
2119 2119 if opts.get('update'):
2120 2120 del opts['update']
2121 2121 ui.debug(
2122 2122 b'--update and --rebase are not compatible, ignoring '
2123 2123 b'the update flag\n'
2124 2124 )
2125 2125
2126 2126 cmdutil.checkunfinished(repo, skipmerge=True)
2127 2127 cmdutil.bailifchanged(
2128 2128 repo,
2129 2129 hint=_(
2130 2130 b'cannot pull with rebase: '
2131 2131 b'please commit or shelve your changes first'
2132 2132 ),
2133 2133 )
2134 2134
2135 2135 revsprepull = len(repo)
2136 origpostincoming = commands.postincoming
2136 origpostincoming = cmdutil.postincoming
2137 2137
2138 2138 def _dummy(*args, **kwargs):
2139 2139 pass
2140 2140
2141 commands.postincoming = _dummy
2141 cmdutil.postincoming = _dummy
2142 2142 try:
2143 2143 ret = orig(ui, repo, *args, **opts)
2144 2144 finally:
2145 commands.postincoming = origpostincoming
2145 cmdutil.postincoming = origpostincoming
2146 2146 revspostpull = len(repo)
2147 2147 if revspostpull > revsprepull:
2148 2148 # --rev option from pull conflicts with rebase's own --rev
2149 2149 # dropping it
2150 2150 if 'rev' in opts:
2151 2151 del opts['rev']
2152 2152 # positional argument from pull conflicts with rebase's own
2153 2153 # --source.
2154 2154 if 'source' in opts:
2155 2155 del opts['source']
2156 2156 # revsprepull is the len of the repo, not revnum of tip.
2157 2157 destspace = list(repo.changelog.revs(start=revsprepull))
2158 2158 opts['_destspace'] = destspace
2159 2159 try:
2160 2160 rebase(ui, repo, **opts)
2161 2161 except error.NoMergeDestAbort:
2162 2162 # we can maybe update instead
2163 2163 rev, _a, _b = destutil.destupdate(repo)
2164 2164 if rev == repo[b'.'].rev():
2165 2165 ui.status(_(b'nothing to rebase\n'))
2166 2166 else:
2167 2167 ui.status(_(b'nothing to rebase - updating instead\n'))
2168 2168 # not passing argument to get the bare update behavior
2169 2169 # with warning and trumpets
2170 2170 commands.update(ui, repo)
2171 2171 else:
2172 2172 if opts.get('tool'):
2173 2173 raise error.InputError(_(b'--tool can only be used with --rebase'))
2174 2174 ret = orig(ui, repo, *args, **opts)
2175 2175
2176 2176 return ret
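# While the wrapped pull runs, cmdutil.postincoming is replaced with a no-op
# so that pull skips its normal post-incoming handling of the working copy;
# the rebase (or the fallback update) performed afterwards takes care of the
# working copy instead.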
2177 2177
2178 2178
2179 2179 def _compute_obsolete_sets(repo, rebaseobsrevs, destmap):
2180 2180 """Figure out what to do about about obsolete revisions
2181 2181
2182 2182 `obsolete_with_successor_in_destination` is a mapping obsolete => successor for all
2183 2183 obsolete nodes to be rebased given in `rebaseobsrevs`.
2184 2184
2185 2185 `obsolete_with_successor_in_rebase_set` is a set of obsolete revisions
2186 2186 without a successor in the destination whose rebase would cause divergence.
2187 2187 """
2188 2188 obsolete_with_successor_in_destination = {}
2189 2189 obsolete_with_successor_in_rebase_set = set()
2190 2190
2191 2191 cl = repo.changelog
2192 2192 get_rev = cl.index.get_rev
2193 2193 extinctrevs = set(repo.revs(b'extinct()'))
2194 2194 for srcrev in rebaseobsrevs:
2195 2195 srcnode = cl.node(srcrev)
2196 2196 # XXX: more advanced APIs are required to handle split correctly
2197 2197 successors = set(obsutil.allsuccessors(repo.obsstore, [srcnode]))
2198 2198 # obsutil.allsuccessors includes node itself
2199 2199 successors.remove(srcnode)
2200 2200 succrevs = {get_rev(s) for s in successors}
2201 2201 succrevs.discard(None)
2202 2202 if not successors or succrevs.issubset(extinctrevs):
2203 2203 # no successor, or all successors are extinct
2204 2204 obsolete_with_successor_in_destination[srcrev] = None
2205 2205 else:
2206 2206 dstrev = destmap[srcrev]
2207 2207 for succrev in succrevs:
2208 2208 if cl.isancestorrev(succrev, dstrev):
2209 2209 obsolete_with_successor_in_destination[srcrev] = succrev
2210 2210 break
2211 2211 else:
2212 2212 # If 'srcrev' has a successor in rebase set but none in
2213 2213 # destination (which would be caught above), we skip it
2214 2214 # and its descendants to avoid divergence.
2215 2215 if srcrev in extinctrevs or any(s in destmap for s in succrevs):
2216 2216 obsolete_with_successor_in_rebase_set.add(srcrev)
2217 2217
2218 2218 return (
2219 2219 obsolete_with_successor_in_destination,
2220 2220 obsolete_with_successor_in_rebase_set,
2221 2221 )
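# The first mapping identifies obsolete revisions that can simply be skipped
# because a successor is already an ancestor of the destination (or because
# they have no live successor at all); the second set identifies revisions
# that are skipped, together with their descendants, to avoid creating
# divergent changesets.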
2222 2222
2223 2223
2224 2224 def abortrebase(ui, repo):
2225 2225 with repo.wlock(), repo.lock():
2226 2226 rbsrt = rebaseruntime(repo, ui)
2227 2227 rbsrt._prepareabortorcontinue(isabort=True)
2228 2228
2229 2229
2230 2230 def continuerebase(ui, repo):
2231 2231 with repo.wlock(), repo.lock():
2232 2232 rbsrt = rebaseruntime(repo, ui)
2233 2233 ms = mergestatemod.mergestate.read(repo)
2234 2234 mergeutil.checkunresolved(ms)
2235 2235 retcode = rbsrt._prepareabortorcontinue(isabort=False)
2236 2236 if retcode is not None:
2237 2237 return retcode
2238 2238 rbsrt._performrebase(None)
2239 2239 rbsrt._finishrebase()
2240 2240
2241 2241
2242 2242 def summaryhook(ui, repo):
2243 2243 if not repo.vfs.exists(b'rebasestate'):
2244 2244 return
2245 2245 try:
2246 2246 rbsrt = rebaseruntime(repo, ui, {})
2247 2247 rbsrt.restorestatus()
2248 2248 state = rbsrt.state
2249 2249 except error.RepoLookupError:
2250 2250 # i18n: column positioning for "hg summary"
2251 2251 msg = _(b'rebase: (use "hg rebase --abort" to clear broken state)\n')
2252 2252 ui.write(msg)
2253 2253 return
2254 2254 numrebased = len([i for i in state.values() if i >= 0])
2255 2255 # i18n: column positioning for "hg summary"
2256 2256 ui.write(
2257 2257 _(b'rebase: %s, %s (rebase --continue)\n')
2258 2258 % (
2259 2259 ui.label(_(b'%d rebased'), b'rebase.rebased') % numrebased,
2260 2260 ui.label(_(b'%d remaining'), b'rebase.remaining')
2261 2261 % (len(state) - numrebased),
2262 2262 )
2263 2263 )
2264 2264
2265 2265
2266 2266 def uisetup(ui):
2267 2267 # Replace pull with a decorator to provide --rebase option
2268 2268 entry = extensions.wrapcommand(commands.table, b'pull', pullrebase)
2269 2269 entry[1].append(
2270 2270 (b'', b'rebase', None, _(b"rebase working directory to branch head"))
2271 2271 )
2272 2272 entry[1].append((b't', b'tool', b'', _(b"specify merge tool for rebase")))
2273 2273 cmdutil.summaryhooks.add(b'rebase', summaryhook)
2274 2274 statemod.addunfinished(
2275 2275 b'rebase',
2276 2276 fname=b'rebasestate',
2277 2277 stopflag=True,
2278 2278 continueflag=True,
2279 2279 abortfunc=abortrebase,
2280 2280 continuefunc=continuerebase,
2281 2281 )