remotefilelog: include file contents in bundles produced during strip...
Kyle Lippincott
r47606:47a95277 default
@@ -1,1260 +1,1262 b''
1 1 # __init__.py - remotefilelog extension
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """remotefilelog causes Mercurial to lazilly fetch file contents (EXPERIMENTAL)
8 8
9 9 This extension is HIGHLY EXPERIMENTAL. There are NO BACKWARDS COMPATIBILITY
10 10 GUARANTEES. This means that repositories created with this extension may
11 11 only be usable with the exact version of this extension/Mercurial that was
12 12 used. The extension attempts to enforce this in order to prevent repository
13 13 corruption.
14 14
15 15 remotefilelog works by fetching file contents lazily and storing them
16 16 in a cache on the client rather than in revlogs. This allows enormous
17 17 histories to be transferred only partially, making them easier to
18 18 operate on.
19 19
20 20 Configs:
21 21
22 22 ``packs.maxchainlen`` specifies the maximum delta chain length in pack files
23 23
24 24 ``packs.maxpacksize`` specifies the maximum pack file size
25 25
26 26 ``packs.maxpackfilecount`` specifies the maximum number of packs in the
27 27 shared cache (trees only for now)
28 28
29 29 ``remotefilelog.backgroundprefetch`` runs prefetch in background when True
30 30
31 31 ``remotefilelog.bgprefetchrevs`` specifies revisions to fetch on commit and
32 32 update, and on other commands that use them. Different from pullprefetch.
33 33
34 34 ``remotefilelog.gcrepack`` does garbage collection during repack when True
35 35
36 36 ``remotefilelog.nodettl`` specifies maximum TTL of a node in seconds before
37 37 it is garbage collected
38 38
39 39 ``remotefilelog.repackonhggc`` runs repack on hg gc when True
40 40
41 41 ``remotefilelog.prefetchdays`` specifies the maximum age of a commit in
42 42 days after which it is no longer prefetched.
43 43
44 44 ``remotefilelog.prefetchdelay`` specifies delay between background
45 45 prefetches in seconds after operations that change the working copy parent
46 46
47 47 ``remotefilelog.data.gencountlimit`` constrains the minimum number of data
48 48 pack files required to be considered part of a generation. In particular,
49 49 minimum number of pack files > gencountlimit.
50 50
51 51 ``remotefilelog.data.generations`` list for specifying the lower bound of
52 52 each generation of the data pack files. For example, list ['100MB','1MB']
53 53 or ['1MB', '100MB'] will lead to three generations: [0, 1MB),
54 54 [1MB, 100MB) and [100MB, infinity).
55 55
56 56 ``remotefilelog.data.maxrepackpacks`` the maximum number of pack files to
57 57 include in an incremental data repack.
58 58
59 59 ``remotefilelog.data.repackmaxpacksize`` the maximum size of a pack file for
60 60 it to be considered for an incremental data repack.
61 61
62 62 ``remotefilelog.data.repacksizelimit`` the maximum total size of pack files
63 63 to include in an incremental data repack.
64 64
65 65 ``remotefilelog.history.gencountlimit`` constrains the minimum number of
66 66 history pack files required to be considered part of a generation. In
67 67 particular, minimum number of pack files > gencountlimit.
68 68
69 69 ``remotefilelog.history.generations`` list for specifying the lower bound of
70 70 each generation of the history pack files. For example, list
71 71 ['100MB', '1MB'] or ['1MB', '100MB'] will lead to three generations:
72 72 [0, 1MB), [1MB, 100MB) and [100MB, infinity).
73 73
74 74 ``remotefilelog.history.maxrepackpacks`` the maximum number of pack files to
75 75 include in an incremental history repack.
76 76
77 77 ``remotefilelog.history.repackmaxpacksize`` the maximum size of a pack file
78 78 for it to be considered for an incremental history repack.
79 79
80 80 ``remotefilelog.history.repacksizelimit`` the maximum total size of pack
81 81 files to include in an incremental history repack.
82 82
83 83 ``remotefilelog.backgroundrepack`` automatically consolidate packs in the
84 84 background
85 85
86 86 ``remotefilelog.cachepath`` path to cache
87 87
88 88 ``remotefilelog.cachegroup`` if set, make cache directory sgid to this
89 89 group
90 90
91 91 ``remotefilelog.cacheprocess`` binary to invoke for fetching file data
92 92
93 93 ``remotefilelog.debug`` turn on remotefilelog-specific debug output
94 94
95 95 ``remotefilelog.excludepattern`` pattern of files to exclude from pulls
96 96
97 97 ``remotefilelog.includepattern`` pattern of files to include in pulls
98 98
99 99 ``remotefilelog.fetchwarning`` message to print when too many
100 100 single-file fetches occur
101 101
102 102 ``remotefilelog.getfilesstep`` number of files to request in a single RPC
103 103
104 104 ``remotefilelog.getfilestype`` if set to 'threaded' use threads to fetch
105 105 files, otherwise use optimistic fetching
106 106
107 107 ``remotefilelog.pullprefetch`` revset for selecting files that should be
108 108 eagerly downloaded rather than lazily
109 109
110 110 ``remotefilelog.reponame`` name of the repo. If set, used to partition
111 111 data from other repos in a shared store.
112 112
113 113 ``remotefilelog.server`` if true, enable server-side functionality
114 114
115 115 ``remotefilelog.servercachepath`` path for caching blobs on the server
116 116
117 117 ``remotefilelog.serverexpiration`` number of days to keep cached server
118 118 blobs
119 119
120 120 ``remotefilelog.validatecache`` if set, check cache entries for corruption
121 121 before returning blobs
122 122
123 123 ``remotefilelog.validatecachelog`` if set, check cache entries for
124 124 corruption before returning metadata
125 125
126 126 """
127 127 from __future__ import absolute_import
128 128
129 129 import os
130 130 import time
131 131 import traceback
132 132
133 133 from mercurial.node import (
134 134 hex,
135 135 wdirrev,
136 136 )
137 137 from mercurial.i18n import _
138 138 from mercurial.pycompat import open
139 139 from mercurial import (
140 140 changegroup,
141 141 changelog,
142 142 commands,
143 143 configitems,
144 144 context,
145 145 copies,
146 146 debugcommands as hgdebugcommands,
147 147 dispatch,
148 148 error,
149 149 exchange,
150 150 extensions,
151 151 hg,
152 152 localrepo,
153 153 match as matchmod,
154 154 merge,
155 155 mergestate as mergestatemod,
156 156 patch,
157 157 pycompat,
158 158 registrar,
159 159 repair,
160 160 repoview,
161 161 revset,
162 162 scmutil,
163 163 smartset,
164 164 streamclone,
165 165 util,
166 166 )
167 167 from . import (
168 168 constants,
169 169 debugcommands,
170 170 fileserverclient,
171 171 remotefilectx,
172 172 remotefilelog,
173 173 remotefilelogserver,
174 174 repack as repackmod,
175 175 shallowbundle,
176 176 shallowrepo,
177 177 shallowstore,
178 178 shallowutil,
179 179 shallowverifier,
180 180 )
181 181
182 182 # ensures debug commands are registered
183 183 hgdebugcommands.command
184 184
185 185 cmdtable = {}
186 186 command = registrar.command(cmdtable)
187 187
188 188 configtable = {}
189 189 configitem = registrar.configitem(configtable)
190 190
191 191 configitem(b'remotefilelog', b'debug', default=False)
192 192
193 193 configitem(b'remotefilelog', b'reponame', default=b'')
194 194 configitem(b'remotefilelog', b'cachepath', default=None)
195 195 configitem(b'remotefilelog', b'cachegroup', default=None)
196 196 configitem(b'remotefilelog', b'cacheprocess', default=None)
197 197 configitem(b'remotefilelog', b'cacheprocess.includepath', default=None)
198 198 configitem(b"remotefilelog", b"cachelimit", default=b"1000 GB")
199 199
200 200 configitem(
201 201 b'remotefilelog',
202 202 b'fallbackpath',
203 203 default=configitems.dynamicdefault,
204 204 alias=[(b'remotefilelog', b'fallbackrepo')],
205 205 )
206 206
207 207 configitem(b'remotefilelog', b'validatecachelog', default=None)
208 208 configitem(b'remotefilelog', b'validatecache', default=b'on')
209 209 configitem(b'remotefilelog', b'server', default=None)
210 210 configitem(b'remotefilelog', b'servercachepath', default=None)
211 211 configitem(b"remotefilelog", b"serverexpiration", default=30)
212 212 configitem(b'remotefilelog', b'backgroundrepack', default=False)
213 213 configitem(b'remotefilelog', b'bgprefetchrevs', default=None)
214 214 configitem(b'remotefilelog', b'pullprefetch', default=None)
215 215 configitem(b'remotefilelog', b'backgroundprefetch', default=False)
216 216 configitem(b'remotefilelog', b'prefetchdelay', default=120)
217 217 configitem(b'remotefilelog', b'prefetchdays', default=14)
218 # Other values include 'local' or 'none'. Any unrecognized value is 'all'.
219 configitem(b'remotefilelog', b'strip.includefiles', default='all')
218 220
219 221 configitem(b'remotefilelog', b'getfilesstep', default=10000)
220 222 configitem(b'remotefilelog', b'getfilestype', default=b'optimistic')
221 223 configitem(b'remotefilelog', b'batchsize', configitems.dynamicdefault)
222 224 configitem(b'remotefilelog', b'fetchwarning', default=b'')
223 225
224 226 configitem(b'remotefilelog', b'includepattern', default=None)
225 227 configitem(b'remotefilelog', b'excludepattern', default=None)
226 228
227 229 configitem(b'remotefilelog', b'gcrepack', default=False)
228 230 configitem(b'remotefilelog', b'repackonhggc', default=False)
229 231 configitem(b'repack', b'chainorphansbysize', default=True, experimental=True)
230 232
231 233 configitem(b'packs', b'maxpacksize', default=0)
232 234 configitem(b'packs', b'maxchainlen', default=1000)
233 235
234 236 configitem(b'devel', b'remotefilelog.bg-wait', default=False)
235 237
236 238 # default TTL limit is 30 days
237 239 _defaultlimit = 60 * 60 * 24 * 30
238 240 configitem(b'remotefilelog', b'nodettl', default=_defaultlimit)
239 241
240 242 configitem(b'remotefilelog', b'data.gencountlimit', default=2)
241 243 configitem(
242 244 b'remotefilelog', b'data.generations', default=[b'1GB', b'100MB', b'1MB']
243 245 )
244 246 configitem(b'remotefilelog', b'data.maxrepackpacks', default=50)
245 247 configitem(b'remotefilelog', b'data.repackmaxpacksize', default=b'4GB')
246 248 configitem(b'remotefilelog', b'data.repacksizelimit', default=b'100MB')
247 249
248 250 configitem(b'remotefilelog', b'history.gencountlimit', default=2)
249 251 configitem(b'remotefilelog', b'history.generations', default=[b'100MB'])
250 252 configitem(b'remotefilelog', b'history.maxrepackpacks', default=50)
251 253 configitem(b'remotefilelog', b'history.repackmaxpacksize', default=b'400MB')
252 254 configitem(b'remotefilelog', b'history.repacksizelimit', default=b'100MB')
253 255
254 256 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
255 257 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
256 258 # be specifying the version(s) of Mercurial they are tested with, or
257 259 # leave the attribute unspecified.
258 260 testedwith = b'ships-with-hg-core'
259 261
260 262 repoclass = localrepo.localrepository
261 263 repoclass._basesupported.add(constants.SHALLOWREPO_REQUIREMENT)
262 264
263 265 isenabled = shallowutil.isenabled
264 266
265 267
266 268 def uisetup(ui):
267 269 """Wraps user facing Mercurial commands to swap them out with shallow
268 270 versions.
269 271 """
270 272 hg.wirepeersetupfuncs.append(fileserverclient.peersetup)
271 273
272 274 entry = extensions.wrapcommand(commands.table, b'clone', cloneshallow)
273 275 entry[1].append(
274 276 (
275 277 b'',
276 278 b'shallow',
277 279 None,
278 280 _(b"create a shallow clone which uses remote file history"),
279 281 )
280 282 )
281 283
282 284 extensions.wrapcommand(
283 285 commands.table, b'debugindex', debugcommands.debugindex
284 286 )
285 287 extensions.wrapcommand(
286 288 commands.table, b'debugindexdot', debugcommands.debugindexdot
287 289 )
288 290 extensions.wrapcommand(commands.table, b'log', log)
289 291 extensions.wrapcommand(commands.table, b'pull', pull)
290 292
291 293 # Prevent 'hg manifest --all'
292 294 def _manifest(orig, ui, repo, *args, **opts):
293 295 if isenabled(repo) and opts.get('all'):
294 296 raise error.Abort(_(b"--all is not supported in a shallow repo"))
295 297
296 298 return orig(ui, repo, *args, **opts)
297 299
298 300 extensions.wrapcommand(commands.table, b"manifest", _manifest)
299 301
300 302 # Wrap remotefilelog with lfs code
301 303 def _lfsloaded(loaded=False):
302 304 lfsmod = None
303 305 try:
304 306 lfsmod = extensions.find(b'lfs')
305 307 except KeyError:
306 308 pass
307 309 if lfsmod:
308 310 lfsmod.wrapfilelog(remotefilelog.remotefilelog)
309 311 fileserverclient._lfsmod = lfsmod
310 312
311 313 extensions.afterloaded(b'lfs', _lfsloaded)
312 314
313 315 # debugdata needs remotefilelog.len to work
314 316 extensions.wrapcommand(commands.table, b'debugdata', debugdatashallow)
315 317
316 318 changegroup.cgpacker = shallowbundle.shallowcg1packer
317 319
318 320 extensions.wrapfunction(
319 321 changegroup, b'_addchangegroupfiles', shallowbundle.addchangegroupfiles
320 322 )
321 323 extensions.wrapfunction(
322 324 changegroup, b'makechangegroup', shallowbundle.makechangegroup
323 325 )
324 326 extensions.wrapfunction(localrepo, b'makestore', storewrapper)
325 327 extensions.wrapfunction(exchange, b'pull', exchangepull)
326 328 extensions.wrapfunction(merge, b'applyupdates', applyupdates)
327 329 extensions.wrapfunction(merge, b'_checkunknownfiles', checkunknownfiles)
328 330 extensions.wrapfunction(context.workingctx, b'_checklookup', checklookup)
329 331 extensions.wrapfunction(scmutil, b'_findrenames', findrenames)
330 332 extensions.wrapfunction(
331 333 copies, b'_computeforwardmissing', computeforwardmissing
332 334 )
333 335 extensions.wrapfunction(dispatch, b'runcommand', runcommand)
334 336 extensions.wrapfunction(repair, b'_collectbrokencsets', _collectbrokencsets)
335 337 extensions.wrapfunction(context.changectx, b'filectx', filectx)
336 338 extensions.wrapfunction(context.workingctx, b'filectx', workingfilectx)
337 339 extensions.wrapfunction(patch, b'trydiff', trydiff)
338 340 extensions.wrapfunction(hg, b'verify', _verify)
339 341 scmutil.fileprefetchhooks.add(b'remotefilelog', _fileprefetchhook)
340 342
341 343 # disappointing hacks below
342 344 extensions.wrapfunction(scmutil, b'getrenamedfn', getrenamedfn)
343 345 extensions.wrapfunction(revset, b'filelog', filelogrevset)
344 346 revset.symbols[b'filelog'] = revset.filelog
345 347
346 348
347 349 def cloneshallow(orig, ui, repo, *args, **opts):
348 350 if opts.get('shallow'):
349 351 repos = []
350 352
351 353 def pull_shallow(orig, self, *args, **kwargs):
352 354 if not isenabled(self):
353 355 repos.append(self.unfiltered())
354 356 # set up the client hooks so the post-clone update works
355 357 setupclient(self.ui, self.unfiltered())
356 358
357 359 # setupclient fixed the class on the repo itself
358 360 # but we also need to fix it on the repoview
359 361 if isinstance(self, repoview.repoview):
360 362 self.__class__.__bases__ = (
361 363 self.__class__.__bases__[0],
362 364 self.unfiltered().__class__,
363 365 )
364 366 self.requirements.add(constants.SHALLOWREPO_REQUIREMENT)
365 367 with self.lock():
366 368 # acquire store lock before writing requirements as some
367 369 # requirements might be written to .hg/store/requires
368 370 scmutil.writereporequirements(self)
369 371
370 372 # Since setupclient hadn't been called, exchange.pull was not
371 373 # wrapped. So we need to manually invoke our version of it.
372 374 return exchangepull(orig, self, *args, **kwargs)
373 375 else:
374 376 return orig(self, *args, **kwargs)
375 377
376 378 extensions.wrapfunction(exchange, b'pull', pull_shallow)
377 379
378 380 # Wrap the stream logic to add requirements and to pass include/exclude
379 381 # patterns around.
380 382 def setup_streamout(repo, remote):
381 383 # Replace remote.stream_out with a version that sends file
382 384 # patterns.
383 385 def stream_out_shallow(orig):
384 386 caps = remote.capabilities()
385 387 if constants.NETWORK_CAP_LEGACY_SSH_GETFILES in caps:
386 388 opts = {}
387 389 if repo.includepattern:
388 390 opts['includepattern'] = b'\0'.join(repo.includepattern)
389 391 if repo.excludepattern:
390 392 opts['excludepattern'] = b'\0'.join(repo.excludepattern)
391 393 return remote._callstream(b'stream_out_shallow', **opts)
392 394 else:
393 395 return orig()
394 396
395 397 extensions.wrapfunction(remote, b'stream_out', stream_out_shallow)
396 398
397 399 def stream_wrap(orig, op):
398 400 setup_streamout(op.repo, op.remote)
399 401 return orig(op)
400 402
401 403 extensions.wrapfunction(
402 404 streamclone, b'maybeperformlegacystreamclone', stream_wrap
403 405 )
404 406
405 407 def canperformstreamclone(orig, pullop, bundle2=False):
406 408 # remotefilelog is currently incompatible with the
407 409 # bundle2 flavor of streamclones, so force us to use
408 410 # v1 instead.
409 411 if b'v2' in pullop.remotebundle2caps.get(b'stream', []):
410 412 pullop.remotebundle2caps[b'stream'] = [
411 413 c for c in pullop.remotebundle2caps[b'stream'] if c != b'v2'
412 414 ]
413 415 if bundle2:
414 416 return False, None
415 417 supported, requirements = orig(pullop, bundle2=bundle2)
416 418 if requirements is not None:
417 419 requirements.add(constants.SHALLOWREPO_REQUIREMENT)
418 420 return supported, requirements
419 421
420 422 extensions.wrapfunction(
421 423 streamclone, b'canperformstreamclone', canperformstreamclone
422 424 )
423 425
424 426 try:
425 427 orig(ui, repo, *args, **opts)
426 428 finally:
427 429 if opts.get('shallow'):
428 430 for r in repos:
429 431 if util.safehasattr(r, b'fileservice'):
430 432 r.fileservice.close()
431 433
432 434
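# Annotation, not in the original: `hg debugdata` requires a non-empty
# revlog, and remotefilelog keeps no local revlog, so the wrapper below
# pretends there is exactly one revision while the wrapped command runs.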
433 435 def debugdatashallow(orig, *args, **kwds):
434 436 oldlen = remotefilelog.remotefilelog.__len__
435 437 try:
436 438 remotefilelog.remotefilelog.__len__ = lambda x: 1
437 439 return orig(*args, **kwds)
438 440 finally:
439 441 remotefilelog.remotefilelog.__len__ = oldlen
440 442
441 443
442 444 def reposetup(ui, repo):
443 445 if not repo.local():
444 446 return
445 447
446 448 # put here intentionally because this doesn't work in uisetup
447 449 ui.setconfig(b'hooks', b'update.prefetch', wcpprefetch)
448 450 ui.setconfig(b'hooks', b'commit.prefetch', wcpprefetch)
449 451
450 452 isserverenabled = ui.configbool(b'remotefilelog', b'server')
451 453 isshallowclient = isenabled(repo)
452 454
453 455 if isserverenabled and isshallowclient:
454 456 raise RuntimeError(b"Cannot be both a server and shallow client.")
455 457
456 458 if isshallowclient:
457 459 setupclient(ui, repo)
458 460
459 461 if isserverenabled:
460 462 remotefilelogserver.setupserver(ui, repo)
461 463
462 464
463 465 def setupclient(ui, repo):
464 466 if not isinstance(repo, localrepo.localrepository):
465 467 return
466 468
467 469 # Even clients get the server setup since they need to have the
468 470 # wireprotocol endpoints registered.
469 471 remotefilelogserver.onetimesetup(ui)
470 472 onetimeclientsetup(ui)
471 473
472 474 shallowrepo.wraprepo(repo)
473 475 repo.store = shallowstore.wrapstore(repo.store)
474 476
475 477
476 478 def storewrapper(orig, requirements, path, vfstype):
477 479 s = orig(requirements, path, vfstype)
478 480 if constants.SHALLOWREPO_REQUIREMENT in requirements:
479 481 s = shallowstore.wrapstore(s)
480 482
481 483 return s
482 484
483 485
484 486 # prefetch files before update
485 487 def applyupdates(
486 488 orig, repo, mresult, wctx, mctx, overwrite, wantfiledata, **opts
487 489 ):
488 490 if isenabled(repo):
489 491 manifest = mctx.manifest()
490 492 files = []
491 493 for f, args, msg in mresult.getactions([mergestatemod.ACTION_GET]):
492 494 files.append((f, hex(manifest[f])))
493 495 # batch fetch the needed files from the server
494 496 repo.fileservice.prefetch(files)
495 497 return orig(repo, mresult, wctx, mctx, overwrite, wantfiledata, **opts)
496 498
497 499
498 500 # Prefetch merge checkunknownfiles
499 501 def checkunknownfiles(orig, repo, wctx, mctx, force, mresult, *args, **kwargs):
500 502 if isenabled(repo):
501 503 files = []
502 504 sparsematch = repo.maybesparsematch(mctx.rev())
503 505 for f, (m, actionargs, msg) in mresult.filemap():
504 506 if sparsematch and not sparsematch(f):
505 507 continue
506 508 if m in (
507 509 mergestatemod.ACTION_CREATED,
508 510 mergestatemod.ACTION_DELETED_CHANGED,
509 511 mergestatemod.ACTION_CREATED_MERGE,
510 512 ):
511 513 files.append((f, hex(mctx.filenode(f))))
512 514 elif m == mergestatemod.ACTION_LOCAL_DIR_RENAME_GET:
513 515 f2 = actionargs[0]
514 516 files.append((f2, hex(mctx.filenode(f2))))
515 517 # batch fetch the needed files from the server
516 518 repo.fileservice.prefetch(files)
517 519 return orig(repo, wctx, mctx, force, mresult, *args, **kwargs)
518 520
519 521
520 522 # Prefetch files before status attempts to look at their size and contents
521 523 def checklookup(orig, self, files):
522 524 repo = self._repo
523 525 if isenabled(repo):
524 526 prefetchfiles = []
525 527 for parent in self._parents:
526 528 for f in files:
527 529 if f in parent:
528 530 prefetchfiles.append((f, hex(parent.filenode(f))))
529 531 # batch fetch the needed files from the server
530 532 repo.fileservice.prefetch(prefetchfiles)
531 533 return orig(self, files)
532 534
533 535
534 536 # Prefetch the logic that compares added and removed files for renames
535 537 def findrenames(orig, repo, matcher, added, removed, *args, **kwargs):
536 538 if isenabled(repo):
537 539 files = []
538 540 pmf = repo[b'.'].manifest()
539 541 for f in removed:
540 542 if f in pmf:
541 543 files.append((f, hex(pmf[f])))
542 544 # batch fetch the needed files from the server
543 545 repo.fileservice.prefetch(files)
544 546 return orig(repo, matcher, added, removed, *args, **kwargs)
545 547
546 548
547 549 # prefetch files before pathcopies check
548 550 def computeforwardmissing(orig, a, b, match=None):
549 551 missing = orig(a, b, match=match)
550 552 repo = a._repo
551 553 if isenabled(repo):
552 554 mb = b.manifest()
553 555
554 556 files = []
555 557 sparsematch = repo.maybesparsematch(b.rev())
556 558 if sparsematch:
557 559 sparsemissing = set()
558 560 for f in missing:
559 561 if sparsematch(f):
560 562 files.append((f, hex(mb[f])))
561 563 sparsemissing.add(f)
562 564 missing = sparsemissing
563 565
564 566 # batch fetch the needed files from the server
565 567 repo.fileservice.prefetch(files)
566 568 return missing
567 569
568 570
569 571 # close cache miss server connection after the command has finished
570 572 def runcommand(orig, lui, repo, *args, **kwargs):
571 573 fileservice = None
572 574 # repo can be None when running in chg:
573 575 # - at startup, reposetup was called because serve is not norepo
574 576 # - a norepo command like "help" is called
575 577 if repo and isenabled(repo):
576 578 fileservice = repo.fileservice
577 579 try:
578 580 return orig(lui, repo, *args, **kwargs)
579 581 finally:
580 582 if fileservice:
581 583 fileservice.close()
582 584
583 585
584 586 # prevent strip from stripping remotefilelogs
585 587 def _collectbrokencsets(orig, repo, files, striprev):
586 588 if isenabled(repo):
587 589 files = list([f for f in files if not repo.shallowmatch(f)])
588 590 return orig(repo, files, striprev)
589 591
590 592
591 593 # changectx wrappers
592 594 def filectx(orig, self, path, fileid=None, filelog=None):
593 595 if fileid is None:
594 596 fileid = self.filenode(path)
595 597 if isenabled(self._repo) and self._repo.shallowmatch(path):
596 598 return remotefilectx.remotefilectx(
597 599 self._repo, path, fileid=fileid, changectx=self, filelog=filelog
598 600 )
599 601 return orig(self, path, fileid=fileid, filelog=filelog)
600 602
601 603
602 604 def workingfilectx(orig, self, path, filelog=None):
603 605 if isenabled(self._repo) and self._repo.shallowmatch(path):
604 606 return remotefilectx.remoteworkingfilectx(
605 607 self._repo, path, workingctx=self, filelog=filelog
606 608 )
607 609 return orig(self, path, filelog=filelog)
608 610
609 611
610 612 # prefetch required revisions before a diff
611 613 def trydiff(
612 614 orig,
613 615 repo,
614 616 revs,
615 617 ctx1,
616 618 ctx2,
617 619 modified,
618 620 added,
619 621 removed,
620 622 copy,
621 623 getfilectx,
622 624 *args,
623 625 **kwargs
624 626 ):
625 627 if isenabled(repo):
626 628 prefetch = []
627 629 mf1 = ctx1.manifest()
628 630 for fname in modified + added + removed:
629 631 if fname in mf1:
630 632 fnode = getfilectx(fname, ctx1).filenode()
631 633 # fnode can be None if it's an edited working ctx file
632 634 if fnode:
633 635 prefetch.append((fname, hex(fnode)))
634 636 if fname not in removed:
635 637 fnode = getfilectx(fname, ctx2).filenode()
636 638 if fnode:
637 639 prefetch.append((fname, hex(fnode)))
638 640
639 641 repo.fileservice.prefetch(prefetch)
640 642
641 643 return orig(
642 644 repo,
643 645 revs,
644 646 ctx1,
645 647 ctx2,
646 648 modified,
647 649 added,
648 650 removed,
649 651 copy,
650 652 getfilectx,
651 653 *args,
652 654 **kwargs
653 655 )
654 656
655 657
656 658 # Prevent verify from processing files
657 659 # a stub for mercurial.hg.verify()
658 660 def _verify(orig, repo, level=None):
659 661 lock = repo.lock()
660 662 try:
661 663 return shallowverifier.shallowverifier(repo).verify()
662 664 finally:
663 665 lock.release()
664 666
665 667
666 668 clientonetime = False
667 669
668 670
669 671 def onetimeclientsetup(ui):
670 672 global clientonetime
671 673 if clientonetime:
672 674 return
673 675 clientonetime = True
674 676
675 677 # Don't commit filelogs until we know the commit hash, since the hash
676 678 # is present in the filelog blob.
677 679 # This violates Mercurial's filelog->manifest->changelog write order,
678 680 # but is generally fine for client repos.
679 681 pendingfilecommits = []
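# Annotation, not in the original: addrawrevision() below buffers any
# file revision whose linknode is still an integer changelog index
# (the changelog entry does not exist yet); changelogadd() then replays
# the buffered writes once the real changelog node is known.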
680 682
681 683 def addrawrevision(
682 684 orig,
683 685 self,
684 686 rawtext,
685 687 transaction,
686 688 link,
687 689 p1,
688 690 p2,
689 691 node,
690 692 flags,
691 693 cachedelta=None,
692 694 _metatuple=None,
693 695 ):
694 696 if isinstance(link, int):
695 697 pendingfilecommits.append(
696 698 (
697 699 self,
698 700 rawtext,
699 701 transaction,
700 702 link,
701 703 p1,
702 704 p2,
703 705 node,
704 706 flags,
705 707 cachedelta,
706 708 _metatuple,
707 709 )
708 710 )
709 711 return node
710 712 else:
711 713 return orig(
712 714 self,
713 715 rawtext,
714 716 transaction,
715 717 link,
716 718 p1,
717 719 p2,
718 720 node,
719 721 flags,
720 722 cachedelta,
721 723 _metatuple=_metatuple,
722 724 )
723 725
724 726 extensions.wrapfunction(
725 727 remotefilelog.remotefilelog, b'addrawrevision', addrawrevision
726 728 )
727 729
728 730 def changelogadd(orig, self, *args, **kwargs):
729 731 oldlen = len(self)
730 732 node = orig(self, *args, **kwargs)
731 733 newlen = len(self)
732 734 if oldlen != newlen:
733 735 for oldargs in pendingfilecommits:
734 736 log, rt, tr, link, p1, p2, n, fl, c, m = oldargs
735 737 linknode = self.node(link)
736 738 if linknode == node:
737 739 log.addrawrevision(rt, tr, linknode, p1, p2, n, fl, c, m)
738 740 else:
739 741 raise error.ProgrammingError(
740 742 b'pending multiple integer revisions are not supported'
741 743 )
742 744 else:
743 745 # "link" is actually wrong here (it is set to len(changelog))
744 746 # if changelog remains unchanged, skip writing file revisions
745 747 # but still do a sanity check about pending multiple revisions
746 748 if len({x[3] for x in pendingfilecommits}) > 1:
747 749 raise error.ProgrammingError(
748 750 b'pending multiple integer revisions are not supported'
749 751 )
750 752 del pendingfilecommits[:]
751 753 return node
752 754
753 755 extensions.wrapfunction(changelog.changelog, b'add', changelogadd)
754 756
755 757
756 758 def getrenamedfn(orig, repo, endrev=None):
757 759 if not isenabled(repo) or copies.usechangesetcentricalgo(repo):
758 760 return orig(repo, endrev)
759 761
760 762 rcache = {}
761 763
762 764 def getrenamed(fn, rev):
763 765 """looks up all renames for a file (up to endrev) the first
764 766 time the file is given. It indexes on the changerev and only
765 767 parses the manifest if linkrev != changerev.
766 768 Returns rename info for fn at changerev rev."""
767 769 if rev in rcache.setdefault(fn, {}):
768 770 return rcache[fn][rev]
769 771
770 772 try:
771 773 fctx = repo[rev].filectx(fn)
772 774 for ancestor in fctx.ancestors():
773 775 if ancestor.path() == fn:
774 776 renamed = ancestor.renamed()
775 777 rcache[fn][ancestor.rev()] = renamed and renamed[0]
776 778
777 779 renamed = fctx.renamed()
778 780 return renamed and renamed[0]
779 781 except error.LookupError:
780 782 return None
781 783
782 784 return getrenamed
783 785
784 786
785 787 def filelogrevset(orig, repo, subset, x):
786 788 """``filelog(pattern)``
787 789 Changesets connected to the specified filelog.
788 790
789 791 For performance reasons, ``filelog()`` does not show every changeset
790 792 that affects the requested file(s). See :hg:`help log` for details. For
791 793 a slower, more accurate result, use ``file()``.
792 794 """
793 795
794 796 if not isenabled(repo):
795 797 return orig(repo, subset, x)
796 798
797 799 # i18n: "filelog" is a keyword
798 800 pat = revset.getstring(x, _(b"filelog requires a pattern"))
799 801 m = matchmod.match(
800 802 repo.root, repo.getcwd(), [pat], default=b'relpath', ctx=repo[None]
801 803 )
802 804 s = set()
803 805
804 806 if not matchmod.patkind(pat):
805 807 # slow
806 808 for r in subset:
807 809 ctx = repo[r]
808 810 cfiles = ctx.files()
809 811 for f in m.files():
810 812 if f in cfiles:
811 813 s.add(ctx.rev())
812 814 break
813 815 else:
814 816 # partial
815 817 files = (f for f in repo[None] if m(f))
816 818 for f in files:
817 819 fctx = repo[None].filectx(f)
818 820 s.add(fctx.linkrev())
819 821 for actx in fctx.ancestors():
820 822 s.add(actx.linkrev())
821 823
822 824 return smartset.baseset([r for r in subset if r in s])
823 825
824 826
825 827 @command(b'gc', [], _(b'hg gc [REPO...]'), norepo=True)
826 828 def gc(ui, *args, **opts):
827 829 """garbage collect the client and server filelog caches"""
828 830 cachepaths = set()
829 831
830 832 # get the system client cache
831 833 systemcache = shallowutil.getcachepath(ui, allowempty=True)
832 834 if systemcache:
833 835 cachepaths.add(systemcache)
834 836
835 837 # get repo client and server cache
836 838 repopaths = []
837 839 pwd = ui.environ.get(b'PWD')
838 840 if pwd:
839 841 repopaths.append(pwd)
840 842
841 843 repopaths.extend(args)
842 844 repos = []
843 845 for repopath in repopaths:
844 846 try:
845 847 repo = hg.peer(ui, {}, repopath)
846 848 repos.append(repo)
847 849
848 850 repocache = shallowutil.getcachepath(repo.ui, allowempty=True)
849 851 if repocache:
850 852 cachepaths.add(repocache)
851 853 except error.RepoError:
852 854 pass
853 855
854 856 # gc client cache
855 857 for cachepath in cachepaths:
856 858 gcclient(ui, cachepath)
857 859
858 860 # gc server cache
859 861 for repo in repos:
860 862 remotefilelogserver.gcserver(ui, repo._repo)
861 863
862 864
863 865 def gcclient(ui, cachepath):
864 866 # get list of repos that use this cache
865 867 repospath = os.path.join(cachepath, b'repos')
866 868 if not os.path.exists(repospath):
867 869 ui.warn(_(b"no known cache at %s\n") % cachepath)
868 870 return
869 871
870 872 reposfile = open(repospath, b'rb')
871 873 repos = {r[:-1] for r in reposfile.readlines()}
872 874 reposfile.close()
873 875
874 876 # build list of useful files
875 877 validrepos = []
876 878 keepkeys = set()
877 879
878 880 sharedcache = None
879 881 filesrepacked = False
880 882
881 883 count = 0
882 884 progress = ui.makeprogress(
883 885 _(b"analyzing repositories"), unit=b"repos", total=len(repos)
884 886 )
885 887 for path in repos:
886 888 progress.update(count)
887 889 count += 1
888 890 try:
889 891 path = ui.expandpath(os.path.normpath(path))
890 892 except TypeError as e:
891 893 ui.warn(_(b"warning: malformed path: %r:%s\n") % (path, e))
892 894 traceback.print_exc()
893 895 continue
894 896 try:
895 897 peer = hg.peer(ui, {}, path)
896 898 repo = peer._repo
897 899 except error.RepoError:
898 900 continue
899 901
900 902 validrepos.append(path)
901 903
902 904 # Protect against any repo or config changes that have happened since
903 905 # this repo was added to the repos file. We'd rather this loop succeed
904 906 # and too much be deleted, than the loop fail and nothing gets deleted.
905 907 if not isenabled(repo):
906 908 continue
907 909
908 910 if not util.safehasattr(repo, b'name'):
909 911 ui.warn(
910 912 _(b"repo %s is a misconfigured remotefilelog repo\n") % path
911 913 )
912 914 continue
913 915
914 916 # If garbage collection on repack and repack on hg gc are enabled
915 917 # then loose files are repacked and garbage collected.
916 918 # Otherwise regular garbage collection is performed.
917 919 repackonhggc = repo.ui.configbool(b'remotefilelog', b'repackonhggc')
918 920 gcrepack = repo.ui.configbool(b'remotefilelog', b'gcrepack')
919 921 if repackonhggc and gcrepack:
920 922 try:
921 923 repackmod.incrementalrepack(repo)
922 924 filesrepacked = True
923 925 continue
924 926 except (IOError, repackmod.RepackAlreadyRunning):
925 927 # If repack cannot be performed due to not enough disk space
926 928 # continue doing garbage collection of loose files w/o repack
927 929 pass
928 930
929 931 reponame = repo.name
930 932 if not sharedcache:
931 933 sharedcache = repo.sharedstore
932 934
933 935 # Compute a keepset which is not garbage collected
934 936 def keyfn(fname, fnode):
935 937 return fileserverclient.getcachekey(reponame, fname, hex(fnode))
936 938
937 939 keepkeys = repackmod.keepset(repo, keyfn=keyfn, lastkeepkeys=keepkeys)
938 940
939 941 progress.complete()
940 942
941 943 # write list of valid repos back
942 944 oldumask = os.umask(0o002)
943 945 try:
944 946 reposfile = open(repospath, b'wb')
945 947 reposfile.writelines([(b"%s\n" % r) for r in validrepos])
946 948 reposfile.close()
947 949 finally:
948 950 os.umask(oldumask)
949 951
950 952 # prune cache
951 953 if sharedcache is not None:
952 954 sharedcache.gc(keepkeys)
953 955 elif not filesrepacked:
954 956 ui.warn(_(b"warning: no valid repos in repofile\n"))
955 957
956 958
957 959 def log(orig, ui, repo, *pats, **opts):
958 960 if not isenabled(repo):
959 961 return orig(ui, repo, *pats, **opts)
960 962
961 963 follow = opts.get('follow')
962 964 revs = opts.get('rev')
963 965 if pats:
964 966 # Force slowpath for non-follow patterns and follows that start from
965 967 # non-working-copy-parent revs.
966 968 if not follow or revs:
967 969 # This forces the slowpath
968 970 opts['removed'] = True
969 971
970 972 # If this is a non-follow log without any revs specified, recommend that
971 973 # the user add -f to speed it up.
972 974 if not follow and not revs:
973 975 match = scmutil.match(repo[b'.'], pats, pycompat.byteskwargs(opts))
974 976 isfile = not match.anypats()
975 977 if isfile:
976 978 for file in match.files():
977 979 if not os.path.isfile(repo.wjoin(file)):
978 980 isfile = False
979 981 break
980 982
981 983 if isfile:
982 984 ui.warn(
983 985 _(
984 986 b"warning: file log can be slow on large repos - "
985 987 + b"use -f to speed it up\n"
986 988 )
987 989 )
988 990
989 991 return orig(ui, repo, *pats, **opts)
990 992
991 993
992 994 def revdatelimit(ui, revset):
993 995 """Update revset so that only changesets no older than 'prefetchdays' days
994 996 are included. The default value is set to 14 days. If 'prefetchdays' is set
995 997 to zero or a negative value, the date restriction is not applied.
996 998 """
997 999 days = ui.configint(b'remotefilelog', b'prefetchdays')
998 1000 if days > 0:
999 1001 revset = b'(%s) & date(-%s)' % (revset, days)
1000 1002 return revset
1001 1003
1002 1004
1003 1005 def readytofetch(repo):
1004 1006 """Check that enough time has passed since the last background prefetch.
1005 1007 This only relates to prefetches after operations that change the working
1006 1008 copy parent. Default delay between background prefetches is 2 minutes.
1007 1009 """
1008 1010 timeout = repo.ui.configint(b'remotefilelog', b'prefetchdelay')
1009 1011 fname = repo.vfs.join(b'lastprefetch')
1010 1012
1011 1013 ready = False
1012 1014 with open(fname, b'a'):
1013 1015 # the with construct above is used to avoid race conditions
1014 1016 modtime = os.path.getmtime(fname)
1015 1017 if (time.time() - modtime) > timeout:
1016 1018 os.utime(fname, None)
1017 1019 ready = True
1018 1020
1019 1021 return ready
1020 1022
1021 1023
1022 1024 def wcpprefetch(ui, repo, **kwargs):
1023 1025 """Prefetches in background revisions specified by bgprefetchrevs revset.
1024 1026 Does background repack if backgroundrepack flag is set in config.
1025 1027 """
1026 1028 shallow = isenabled(repo)
1027 1029 bgprefetchrevs = ui.config(b'remotefilelog', b'bgprefetchrevs')
1028 1030 isready = readytofetch(repo)
1029 1031
1030 1032 if not (shallow and bgprefetchrevs and isready):
1031 1033 return
1032 1034
1033 1035 bgrepack = repo.ui.configbool(b'remotefilelog', b'backgroundrepack')
1034 1036 # update a revset with a date limit
1035 1037 bgprefetchrevs = revdatelimit(ui, bgprefetchrevs)
1036 1038
1037 1039 def anon(unused_success):
1038 1040 if util.safehasattr(repo, b'ranprefetch') and repo.ranprefetch:
1039 1041 return
1040 1042 repo.ranprefetch = True
1041 1043 repo.backgroundprefetch(bgprefetchrevs, repack=bgrepack)
1042 1044
1043 1045 repo._afterlock(anon)
1044 1046
1045 1047
1046 1048 def pull(orig, ui, repo, *pats, **opts):
1047 1049 result = orig(ui, repo, *pats, **opts)
1048 1050
1049 1051 if isenabled(repo):
1050 1052 # prefetch if it's configured
1051 1053 prefetchrevset = ui.config(b'remotefilelog', b'pullprefetch')
1052 1054 bgrepack = repo.ui.configbool(b'remotefilelog', b'backgroundrepack')
1053 1055 bgprefetch = repo.ui.configbool(b'remotefilelog', b'backgroundprefetch')
1054 1056
1055 1057 if prefetchrevset:
1056 1058 ui.status(_(b"prefetching file contents\n"))
1057 1059 revs = scmutil.revrange(repo, [prefetchrevset])
1058 1060 base = repo[b'.'].rev()
1059 1061 if bgprefetch:
1060 1062 repo.backgroundprefetch(prefetchrevset, repack=bgrepack)
1061 1063 else:
1062 1064 repo.prefetch(revs, base=base)
1063 1065 if bgrepack:
1064 1066 repackmod.backgroundrepack(repo, incremental=True)
1065 1067 elif bgrepack:
1066 1068 repackmod.backgroundrepack(repo, incremental=True)
1067 1069
1068 1070 return result
1069 1071
1070 1072
1071 1073 def exchangepull(orig, repo, remote, *args, **kwargs):
1072 1074 # Hook into the callstream/getbundle to insert bundle capabilities
1073 1075 # during a pull.
1074 1076 def localgetbundle(
1075 1077 orig, source, heads=None, common=None, bundlecaps=None, **kwargs
1076 1078 ):
1077 1079 if not bundlecaps:
1078 1080 bundlecaps = set()
1079 1081 bundlecaps.add(constants.BUNDLE2_CAPABLITY)
1080 1082 return orig(
1081 1083 source, heads=heads, common=common, bundlecaps=bundlecaps, **kwargs
1082 1084 )
1083 1085
1084 1086 if util.safehasattr(remote, b'_callstream'):
1085 1087 remote._localrepo = repo
1086 1088 elif util.safehasattr(remote, b'getbundle'):
1087 1089 extensions.wrapfunction(remote, b'getbundle', localgetbundle)
1088 1090
1089 1091 return orig(repo, remote, *args, **kwargs)
1090 1092
1091 1093
1092 1094 def _fileprefetchhook(repo, revmatches):
1093 1095 if isenabled(repo):
1094 1096 allfiles = []
1095 1097 for rev, match in revmatches:
1096 1098 if rev == wdirrev or rev is None:
1097 1099 continue
1098 1100 ctx = repo[rev]
1099 1101 mf = ctx.manifest()
1100 1102 sparsematch = repo.maybesparsematch(ctx.rev())
1101 1103 for path in ctx.walk(match):
1102 1104 if (not sparsematch or sparsematch(path)) and path in mf:
1103 1105 allfiles.append((path, hex(mf[path])))
1104 1106 repo.fileservice.prefetch(allfiles)
1105 1107
1106 1108
1107 1109 @command(
1108 1110 b'debugremotefilelog',
1109 1111 [
1110 1112 (b'd', b'decompress', None, _(b'decompress the filelog first')),
1111 1113 ],
1112 1114 _(b'hg debugremotefilelog <path>'),
1113 1115 norepo=True,
1114 1116 )
1115 1117 def debugremotefilelog(ui, path, **opts):
1116 1118 return debugcommands.debugremotefilelog(ui, path, **opts)
1117 1119
1118 1120
1119 1121 @command(
1120 1122 b'verifyremotefilelog',
1121 1123 [
1122 1124 (b'd', b'decompress', None, _(b'decompress the filelogs first')),
1123 1125 ],
1124 1126 _(b'hg verifyremotefilelogs <directory>'),
1125 1127 norepo=True,
1126 1128 )
1127 1129 def verifyremotefilelog(ui, path, **opts):
1128 1130 return debugcommands.verifyremotefilelog(ui, path, **opts)
1129 1131
1130 1132
1131 1133 @command(
1132 1134 b'debugdatapack',
1133 1135 [
1134 1136 (b'', b'long', None, _(b'print the long hashes')),
1135 1137 (b'', b'node', b'', _(b'dump the contents of node'), b'NODE'),
1136 1138 ],
1137 1139 _(b'hg debugdatapack <paths>'),
1138 1140 norepo=True,
1139 1141 )
1140 1142 def debugdatapack(ui, *paths, **opts):
1141 1143 return debugcommands.debugdatapack(ui, *paths, **opts)
1142 1144
1143 1145
1144 1146 @command(b'debughistorypack', [], _(b'hg debughistorypack <path>'), norepo=True)
1145 1147 def debughistorypack(ui, path, **opts):
1146 1148 return debugcommands.debughistorypack(ui, path)
1147 1149
1148 1150
1149 1151 @command(b'debugkeepset', [], _(b'hg debugkeepset'))
1150 1152 def debugkeepset(ui, repo, **opts):
1151 1153 # The command is used to measure keepset computation time
1152 1154 def keyfn(fname, fnode):
1153 1155 return fileserverclient.getcachekey(repo.name, fname, hex(fnode))
1154 1156
1155 1157 repackmod.keepset(repo, keyfn)
1156 1158 return
1157 1159
1158 1160
1159 1161 @command(b'debugwaitonrepack', [], _(b'hg debugwaitonrepack'))
1160 1162 def debugwaitonrepack(ui, repo, **opts):
1161 1163 return debugcommands.debugwaitonrepack(repo)
1162 1164
1163 1165
1164 1166 @command(b'debugwaitonprefetch', [], _(b'hg debugwaitonprefetch'))
1165 1167 def debugwaitonprefetch(ui, repo, **opts):
1166 1168 return debugcommands.debugwaitonprefetch(repo)
1167 1169
1168 1170
1169 1171 def resolveprefetchopts(ui, opts):
1170 1172 if not opts.get(b'rev'):
1171 1173 revset = [b'.', b'draft()']
1172 1174
1173 1175 prefetchrevset = ui.config(b'remotefilelog', b'pullprefetch', None)
1174 1176 if prefetchrevset:
1175 1177 revset.append(b'(%s)' % prefetchrevset)
1176 1178 bgprefetchrevs = ui.config(b'remotefilelog', b'bgprefetchrevs', None)
1177 1179 if bgprefetchrevs:
1178 1180 revset.append(b'(%s)' % bgprefetchrevs)
1179 1181 revset = b'+'.join(revset)
1180 1182
1181 1183 # update a revset with a date limit
1182 1184 revset = revdatelimit(ui, revset)
1183 1185
1184 1186 opts[b'rev'] = [revset]
1185 1187
1186 1188 if not opts.get(b'base'):
1187 1189 opts[b'base'] = None
1188 1190
1189 1191 return opts
1190 1192
1191 1193
1192 1194 @command(
1193 1195 b'prefetch',
1194 1196 [
1195 1197 (b'r', b'rev', [], _(b'prefetch the specified revisions'), _(b'REV')),
1196 1198 (b'', b'repack', False, _(b'run repack after prefetch')),
1197 1199 (b'b', b'base', b'', _(b"rev that is assumed to already be local")),
1198 1200 ]
1199 1201 + commands.walkopts,
1200 1202 _(b'hg prefetch [OPTIONS] [FILE...]'),
1201 1203 helpcategory=command.CATEGORY_MAINTENANCE,
1202 1204 )
1203 1205 def prefetch(ui, repo, *pats, **opts):
1204 1206 """prefetch file revisions from the server
1205 1207
1206 1208 Prefetches file revisions for the specified revs and stores them in the
1207 1209 local remotefilelog cache. If no rev is specified, a default is used,
1208 1210 which is the union of dot, draft, pullprefetch and bgprefetchrevs.
1209 1211 File names or patterns can be used to limit which files are downloaded.
1210 1212
1211 1213 Return 0 on success.
1212 1214 """
1213 1215 opts = pycompat.byteskwargs(opts)
1214 1216 if not isenabled(repo):
1215 1217 raise error.Abort(_(b"repo is not shallow"))
1216 1218
1217 1219 opts = resolveprefetchopts(ui, opts)
1218 1220 revs = scmutil.revrange(repo, opts.get(b'rev'))
1219 1221 repo.prefetch(revs, opts.get(b'base'), pats, opts)
1220 1222
1221 1223 # Run repack in background
1222 1224 if opts.get(b'repack'):
1223 1225 repackmod.backgroundrepack(repo, incremental=True)
1224 1226
1225 1227
1226 1228 @command(
1227 1229 b'repack',
1228 1230 [
1229 1231 (b'', b'background', None, _(b'run in a background process'), None),
1230 1232 (b'', b'incremental', None, _(b'do an incremental repack'), None),
1231 1233 (
1232 1234 b'',
1233 1235 b'packsonly',
1234 1236 None,
1235 1237 _(b'only repack packs (skip loose objects)'),
1236 1238 None,
1237 1239 ),
1238 1240 ],
1239 1241 _(b'hg repack [OPTIONS]'),
1240 1242 )
1241 1243 def repack_(ui, repo, *pats, **opts):
1242 1244 if opts.get('background'):
1243 1245 repackmod.backgroundrepack(
1244 1246 repo,
1245 1247 incremental=opts.get('incremental'),
1246 1248 packsonly=opts.get('packsonly', False),
1247 1249 )
1248 1250 return
1249 1251
1250 1252 options = {b'packsonly': opts.get('packsonly')}
1251 1253
1252 1254 try:
1253 1255 if opts.get('incremental'):
1254 1256 repackmod.incrementalrepack(repo, options=options)
1255 1257 else:
1256 1258 repackmod.fullrepack(repo, options=options)
1257 1259 except repackmod.RepackAlreadyRunning as ex:
1258 1260 # Don't propagate the exception if the repack is already in
1259 1261 # progress, since we want the command to exit 0.
1260 1262 repo.ui.warn(b'%s\n' % ex)
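The shallowbundle.py change below consults the `strip.includefiles` option registered above. As a hedged illustration, in the same hgrc style the tests use, a repository wanting the previous behavior of omitting remotefilelog file contents from strip backup bundles could presumably set:

    [remotefilelog]
    strip.includefiles = none

Per the comment at the registration site, 'local' would limit the backup to locally available file history, and any unrecognized value behaves like 'all'.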
@@ -1,307 +1,319 b''
1 1 # shallowbundle.py - bundle10 implementation for use with shallow repositories
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 from __future__ import absolute_import
8 8
9 9 from mercurial.i18n import _
10 10 from mercurial.node import bin, hex, nullid
11 11 from mercurial import (
12 12 bundlerepo,
13 13 changegroup,
14 14 error,
15 15 match,
16 16 mdiff,
17 17 pycompat,
18 18 )
19 19 from . import (
20 20 constants,
21 21 remotefilelog,
22 22 shallowutil,
23 23 )
24 24
25 25 NoFiles = 0
26 26 LocalFiles = 1
27 27 AllFiles = 2
28 28
29 29
30 30 def shallowgroup(cls, self, nodelist, rlog, lookup, units=None, reorder=None):
31 31 if not isinstance(rlog, remotefilelog.remotefilelog):
32 32 for c in super(cls, self).group(nodelist, rlog, lookup, units=units):
33 33 yield c
34 34 return
35 35
36 36 if len(nodelist) == 0:
37 37 yield self.close()
38 38 return
39 39
40 40 nodelist = shallowutil.sortnodes(nodelist, rlog.parents)
41 41
42 42 # add the parent of the first rev
43 43 p = rlog.parents(nodelist[0])[0]
44 44 nodelist.insert(0, p)
45 45
46 46 # build deltas
47 47 for i in pycompat.xrange(len(nodelist) - 1):
48 48 prev, curr = nodelist[i], nodelist[i + 1]
49 49 linknode = lookup(curr)
50 50 for c in self.nodechunk(rlog, curr, prev, linknode):
51 51 yield c
52 52
53 53 yield self.close()
54 54
55 55
56 56 class shallowcg1packer(changegroup.cgpacker):
57 57 def generate(self, commonrevs, clnodes, fastpathlinkrev, source, **kwargs):
58 58 if shallowutil.isenabled(self._repo):
59 59 fastpathlinkrev = False
60 60
61 61 return super(shallowcg1packer, self).generate(
62 62 commonrevs, clnodes, fastpathlinkrev, source, **kwargs
63 63 )
64 64
65 65 def group(self, nodelist, rlog, lookup, units=None, reorder=None):
66 66 return shallowgroup(
67 67 shallowcg1packer, self, nodelist, rlog, lookup, units=units
68 68 )
69 69
70 70 def generatefiles(self, changedfiles, *args, **kwargs):
71 71 try:
72 72 linknodes, commonrevs, source = args
73 73 except ValueError:
74 74 commonrevs, source, mfdicts, fastpathlinkrev, fnodes, clrevs = args
75 75 if shallowutil.isenabled(self._repo):
76 76 repo = self._repo
77 77 if isinstance(repo, bundlerepo.bundlerepository):
78 78 # If the bundle contains filelogs, we can't pull from it, since
79 79 # bundlerepo is heavily tied to revlogs. Require that the user
80 80 # run unbundle instead.
81 81 # Force load the filelog data.
82 82 bundlerepo.bundlerepository.file(repo, b'foo')
83 83 if repo._cgfilespos:
84 84 raise error.Abort(
85 85 b"cannot pull from full bundles",
86 86 hint=b"use `hg unbundle` instead",
87 87 )
88 88 return []
89 89 filestosend = self.shouldaddfilegroups(source)
90 90 if filestosend == NoFiles:
91 91 changedfiles = list(
92 92 [f for f in changedfiles if not repo.shallowmatch(f)]
93 93 )
94 94
95 95 return super(shallowcg1packer, self).generatefiles(
96 96 changedfiles, *args, **kwargs
97 97 )
98 98
99 99 def shouldaddfilegroups(self, source):
100 100 repo = self._repo
101 101 if not shallowutil.isenabled(repo):
102 102 return AllFiles
103 103
104 104 if source == b"push" or source == b"bundle":
105 105 return AllFiles
106 106
107 # We won't actually strip the files, but we should put them in any
108 # backup bundle generated by strip (especially for cases like narrow's
109 # `hg tracked --removeinclude`, as failing to do so means that the
110 # "saved" changesets during a strip won't have their files reapplied and
111 # thus their linknode adjusted, if necessary).
112 if source == b"strip":
113 cfg = repo.ui.config(b'remotefilelog', b'strip.includefiles')
114 if cfg == b'local':
115 return LocalFiles
116 elif cfg != b'none':
117 return AllFiles
118
107 119 caps = self._bundlecaps or []
108 120 if source == b"serve" or source == b"pull":
109 121 if constants.BUNDLE2_CAPABLITY in caps:
110 122 return LocalFiles
111 123 else:
112 124 # Serving to a full repo requires us to serve everything
113 125 repo.ui.warn(_(b"pulling from a shallow repo\n"))
114 126 return AllFiles
115 127
116 128 return NoFiles
117 129
118 130 def prune(self, rlog, missing, commonrevs):
119 131 if not isinstance(rlog, remotefilelog.remotefilelog):
120 132 return super(shallowcg1packer, self).prune(
121 133 rlog, missing, commonrevs
122 134 )
123 135
124 136 repo = self._repo
125 137 results = []
126 138 for fnode in missing:
127 139 fctx = repo.filectx(rlog.filename, fileid=fnode)
128 140 if fctx.linkrev() not in commonrevs:
129 141 results.append(fnode)
130 142 return results
131 143
132 144 def nodechunk(self, revlog, node, prevnode, linknode):
133 145 prefix = b''
134 146 if prevnode == nullid:
135 147 delta = revlog.rawdata(node)
136 148 prefix = mdiff.trivialdiffheader(len(delta))
137 149 else:
138 150 # Actually uses remotefilelog.revdiff which works on nodes, not revs
139 151 delta = revlog.revdiff(prevnode, node)
140 152 p1, p2 = revlog.parents(node)
141 153 flags = revlog.flags(node)
142 154 meta = self.builddeltaheader(node, p1, p2, prevnode, linknode, flags)
143 155 meta += prefix
144 156 l = len(meta) + len(delta)
145 157 yield changegroup.chunkheader(l)
146 158 yield meta
147 159 yield delta
148 160
149 161
150 162 def makechangegroup(orig, repo, outgoing, version, source, *args, **kwargs):
151 163 if not shallowutil.isenabled(repo):
152 164 return orig(repo, outgoing, version, source, *args, **kwargs)
153 165
154 166 original = repo.shallowmatch
155 167 try:
156 168 # if serving, only send files the client has patterns for
157 169 if source == b'serve':
158 170 bundlecaps = kwargs.get('bundlecaps')
159 171 includepattern = None
160 172 excludepattern = None
161 173 for cap in bundlecaps or []:
162 174 if cap.startswith(b"includepattern="):
163 175 raw = cap[len(b"includepattern=") :]
164 176 if raw:
165 177 includepattern = raw.split(b'\0')
166 178 elif cap.startswith(b"excludepattern="):
167 179 raw = cap[len(b"excludepattern=") :]
168 180 if raw:
169 181 excludepattern = raw.split(b'\0')
170 182 if includepattern or excludepattern:
171 183 repo.shallowmatch = match.match(
172 184 repo.root, b'', None, includepattern, excludepattern
173 185 )
174 186 else:
175 187 repo.shallowmatch = match.always()
176 188 return orig(repo, outgoing, version, source, *args, **kwargs)
177 189 finally:
178 190 repo.shallowmatch = original
179 191
180 192
181 193 def addchangegroupfiles(
182 194 orig, repo, source, revmap, trp, expectedfiles, *args, **kwargs
183 195 ):
184 196 if not shallowutil.isenabled(repo):
185 197 return orig(repo, source, revmap, trp, expectedfiles, *args, **kwargs)
186 198
187 199 newfiles = 0
188 200 visited = set()
189 201 revisiondatas = {}
190 202 queue = []
191 203
192 204 # Normal Mercurial processes each file one at a time, adding all
193 205 # the new revisions for that file at once. In remotefilelog a file
194 206 # revision may depend on a different file's revision (in the case
195 207 # of a rename/copy), so we must lay all revisions down across all
196 208 # files in topological order.
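# Annotation, not in the original, as a concrete example: if a revision
# of file "b" carries copy metadata pointing at node M of file "a",
# then "a"@M must be applied before that revision of "b", even though
# the two live in different filelogs; available() below re-queues work
# to enforce exactly this ordering.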
197 209
198 210 # read all the file chunks but don't add them
199 211 progress = repo.ui.makeprogress(_(b'files'), total=expectedfiles)
200 212 while True:
201 213 chunkdata = source.filelogheader()
202 214 if not chunkdata:
203 215 break
204 216 f = chunkdata[b"filename"]
205 217 repo.ui.debug(b"adding %s revisions\n" % f)
206 218 progress.increment()
207 219
208 220 if not repo.shallowmatch(f):
209 221 fl = repo.file(f)
210 222 deltas = source.deltaiter()
211 223 fl.addgroup(deltas, revmap, trp)
212 224 continue
213 225
214 226 chain = None
215 227 while True:
216 228 # returns: (node, p1, p2, cs, deltabase, delta, flags, sidedata) or None
217 229 revisiondata = source.deltachunk(chain)
218 230 if not revisiondata:
219 231 break
220 232
221 233 chain = revisiondata[0]
222 234
223 235 revisiondatas[(f, chain)] = revisiondata
224 236 queue.append((f, chain))
225 237
226 238 if f not in visited:
227 239 newfiles += 1
228 240 visited.add(f)
229 241
230 242 if chain is None:
231 243 raise error.Abort(_(b"received file revlog group is empty"))
232 244
233 245 processed = set()
234 246
235 247 def available(f, node, depf, depnode):
236 248 if depnode != nullid and (depf, depnode) not in processed:
237 249 if not (depf, depnode) in revisiondatas:
238 250 # It's not in the changegroup, assume it's already
239 251 # in the repo
240 252 return True
241 253 # re-add self to queue
242 254 queue.insert(0, (f, node))
243 255 # add dependency in front
244 256 queue.insert(0, (depf, depnode))
245 257 return False
246 258 return True
247 259
248 260 skipcount = 0
249 261
250 262 # Prefetch the non-bundled revisions that we will need
251 263 prefetchfiles = []
252 264 for f, node in queue:
253 265 revisiondata = revisiondatas[(f, node)]
254 266 # revisiondata: (node, p1, p2, cs, deltabase, delta, flags, sidedata)
255 267 dependents = [revisiondata[1], revisiondata[2], revisiondata[4]]
256 268
257 269 for dependent in dependents:
258 270 if dependent == nullid or (f, dependent) in revisiondatas:
259 271 continue
260 272 prefetchfiles.append((f, hex(dependent)))
261 273
262 274 repo.fileservice.prefetch(prefetchfiles)
263 275
264 276 # Apply the revisions in topological order such that a revision
265 277 # is only written once its deltabase and parents have been written.
266 278 while queue:
267 279 f, node = queue.pop(0)
268 280 if (f, node) in processed:
269 281 continue
270 282
271 283 skipcount += 1
272 284 if skipcount > len(queue) + 1:
273 285 raise error.Abort(_(b"circular node dependency"))
274 286
275 287 fl = repo.file(f)
276 288
277 289 revisiondata = revisiondatas[(f, node)]
278 290 # revisiondata: (node, p1, p2, cs, deltabase, delta, flags, sidedata)
279 291 node, p1, p2, linknode, deltabase, delta, flags, sidedata = revisiondata
280 292
281 293 if not available(f, node, f, deltabase):
282 294 continue
283 295
284 296 base = fl.rawdata(deltabase)
285 297 text = mdiff.patch(base, delta)
286 298 if not isinstance(text, bytes):
287 299 text = bytes(text)
288 300
289 301 meta, text = shallowutil.parsemeta(text)
290 302 if b'copy' in meta:
291 303 copyfrom = meta[b'copy']
292 304 copynode = bin(meta[b'copyrev'])
293 305 if not available(f, node, copyfrom, copynode):
294 306 continue
295 307
296 308 for p in [p1, p2]:
297 309 if p != nullid:
298 310 if not available(f, node, f, p):
299 311 continue
300 312
301 313 fl.add(text, meta, trp, linknode, p1, p2)
302 314 processed.add((f, node))
303 315 skipcount = 0
304 316
305 317 progress.complete()
306 318
307 319 return len(revisiondatas), newfiles
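For orientation between the diffs: a condensed, hedged sketch of the net effect of the new strip branch in shouldaddfilegroups. This is not the real method; the constants mirror NoFiles/LocalFiles/AllFiles from the top of shallowbundle.py, and the 'none' arm collapses the original fall-through (strip is neither serve nor pull, so the method would end at NoFiles).

    NoFiles, LocalFiles, AllFiles = 0, 1, 2

    def strip_file_policy(cfg):
        # cfg stands in for ui.config(b'remotefilelog', b'strip.includefiles')
        if cfg == b'local':
            return LocalFiles  # back up only locally available file history
        elif cfg != b'none':
            return AllFiles    # default: 'all' and any unrecognized value
        return NoFiles         # 'none' keeps the pre-change omit-files behavior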
@@ -1,353 +1,354 b''
1 1 #require no-windows
2 2
3 3 $ . "$TESTDIR/remotefilelog-library.sh"
4 4 $ cat >> $HGRCPATH <<EOF
5 5 > [devel]
6 6 > remotefilelog.bg-wait=True
7 7 > EOF
8 8
9 9 $ hg init master
10 10 $ cd master
11 11 $ cat >> .hg/hgrc <<EOF
12 12 > [remotefilelog]
13 13 > server=True
14 14 > EOF
15 15 $ echo x > x
16 16 $ echo z > z
17 17 $ hg commit -qAm x
18 18 $ echo x2 > x
19 19 $ echo y > y
20 20 $ hg commit -qAm y
21 21 $ echo w > w
22 22 $ rm z
23 23 $ hg commit -qAm w
24 24 $ hg bookmark foo
25 25
26 26 $ cd ..
27 27
28 28 # clone the repo
29 29
30 30 $ hgcloneshallow ssh://user@dummy/master shallow --noupdate
31 31 streaming all changes
32 32 2 files to transfer, 776 bytes of data
33 33 transferred 776 bytes in * seconds (*/sec) (glob)
34 34 searching for changes
35 35 no changes found
36 36
37 37 # Set the prefetchdays config to zero so that all commits are prefetched
38 38 # no matter what their creation date is. Also set prefetchdelay config
39 39 # to zero so that there is no delay between prefetches.
40 40 $ cd shallow
41 41 $ cat >> .hg/hgrc <<EOF
42 42 > [remotefilelog]
43 43 > prefetchdays=0
44 44 > prefetchdelay=0
45 45 > EOF
46 46 $ cd ..
47 47
48 48 # prefetch a revision
49 49 $ cd shallow
50 50
51 51 $ hg prefetch -r 0
52 52 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
53 53
54 54 $ hg cat -r 0 x
55 55 x
56 56
57 57 # background prefetch on pull when configured
58 58
59 59 $ cat >> .hg/hgrc <<EOF
60 60 > [remotefilelog]
61 61 > pullprefetch=bookmark()
62 62 > backgroundprefetch=True
63 63 > EOF
64 64 $ hg strip tip
65 65 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/6b4b6f66ef8c-b4b8bdaf-backup.hg (glob)
66 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
66 67
67 68 $ clearcache
68 69 $ hg pull
69 70 pulling from ssh://user@dummy/master
70 71 searching for changes
71 72 adding changesets
72 73 adding manifests
73 74 adding file changes
74 75 updating bookmark foo
75 76 added 1 changesets with 0 changes to 0 files
76 77 new changesets 6b4b6f66ef8c
77 78 (run 'hg update' to get a working copy)
78 79 prefetching file contents
79 80 $ find $CACHEDIR -type f | sort
80 81 $TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072/ef95c5376f34698742fe34f315fd82136f8f68c0
81 82 $TESTTMP/hgcache/master/95/cb0bfd2977c761298d9624e4b4d4c72a39974a/076f5e2225b3ff0400b98c92aa6cdf403ee24cca
82 83 $TESTTMP/hgcache/master/af/f024fe4ab0fece4091de044c58c9ae4233383a/bb6ccd5dceaa5e9dc220e0dad65e051b94f69a2c
83 84 $TESTTMP/hgcache/repos
84 85
85 86 # background prefetch with repack on pull when configured
86 87
87 88 $ cat >> .hg/hgrc <<EOF
88 89 > [remotefilelog]
89 90 > backgroundrepack=True
90 91 > EOF
91 92 $ hg strip tip
92 93 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/6b4b6f66ef8c-b4b8bdaf-backup.hg (glob)
93 94
94 95 $ clearcache
95 96 $ hg pull
96 97 pulling from ssh://user@dummy/master
97 98 searching for changes
98 99 adding changesets
99 100 adding manifests
100 101 adding file changes
101 102 updating bookmark foo
102 103 added 1 changesets with 0 changes to 0 files
103 104 new changesets 6b4b6f66ef8c
104 105 (run 'hg update' to get a working copy)
105 106 prefetching file contents
106 107 $ find $CACHEDIR -type f | sort
107 108 $TESTTMP/hgcache/master/packs/6e8633deba6e544e5f8edbd7b996d6e31a2c42ae.histidx
108 109 $TESTTMP/hgcache/master/packs/6e8633deba6e544e5f8edbd7b996d6e31a2c42ae.histpack
109 110 $TESTTMP/hgcache/master/packs/8ce5ab3745465ab83bba30a7b9c295e0c8404652.dataidx
110 111 $TESTTMP/hgcache/master/packs/8ce5ab3745465ab83bba30a7b9c295e0c8404652.datapack
111 112 $TESTTMP/hgcache/repos
112 113
113 114 # background prefetch with repack on update when wcprevset configured
114 115
115 116 $ clearcache
116 117 $ hg up -r 0
117 118 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
118 119 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
119 120 $ find $CACHEDIR -type f | sort
120 121 $TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072/1406e74118627694268417491f018a4a883152f0
121 122 $TESTTMP/hgcache/master/39/5df8f7c51f007019cb30201c49e884b46b92fa/69a1b67522704ec122181c0890bd16e9d3e7516a
122 123 $TESTTMP/hgcache/repos
123 124
124 125 $ hg up -r 1
125 126 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
126 127 2 files fetched over 2 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
127 128
128 129 $ cat >> .hg/hgrc <<EOF
129 130 > [remotefilelog]
130 131 > bgprefetchrevs=.::
131 132 > EOF
132 133
133 134 $ clearcache
134 135 $ hg up -r 0
135 136 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
136 137 * files fetched over * fetches - (* misses, 0.00% hit ratio) over *s (glob)
137 138 $ find $CACHEDIR -type f | sort
138 139 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
139 140 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
140 141 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.dataidx
141 142 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.datapack
142 143 $TESTTMP/hgcache/repos
143 144
144 145 # Ensure that file 'w' was prefetched - it was not part of the update operation and therefore
145 146 # could only be downloaded by the background prefetch
146 147
147 148 $ hg debugdatapack `ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
148 149 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407:
149 150 w:
150 151 Node Delta Base Delta Length Blob Size
151 152 bb6ccd5dceaa 000000000000 2 2
152 153
153 154 Total: 2 2 (0.0% bigger)
154 155 x:
155 156 Node Delta Base Delta Length Blob Size
156 157 ef95c5376f34 000000000000 3 3
157 158 1406e7411862 ef95c5376f34 14 2
158 159
159 160 Total: 17 5 (240.0% bigger)
160 161 y:
161 162 Node Delta Base Delta Length Blob Size
162 163 076f5e2225b3 000000000000 2 2
163 164
164 165 Total: 2 2 (0.0% bigger)
165 166 z:
166 167 Node Delta Base Delta Length Blob Size
167 168 69a1b6752270 000000000000 2 2
168 169
169 170 Total: 2 2 (0.0% bigger)
170 171
171 172 # background prefetch with repack on commit when wcprevset configured
172 173
173 174 $ cat >> .hg/hgrc <<EOF
174 175 > [remotefilelog]
175 176 > bgprefetchrevs=0::
176 177 > EOF
177 178
178 179 $ clearcache
179 180 $ find $CACHEDIR -type f | sort
180 181 $ echo b > b
181 182 .. The following output line about file fetches is globbed because it is
182 183 .. flaky; the core of the test is verified when checking the cache dir, so
183 184 .. hopefully this flakiness is not hiding any actual bug.
184 185 $ hg commit -qAm b
185 186 * files fetched over 1 fetches - (* misses, 0.00% hit ratio) over *s (glob) (?)
186 187 $ hg bookmark temporary
187 188 $ find $CACHEDIR -type f | sort
188 189 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
189 190 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
190 191 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.dataidx
191 192 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.datapack
192 193 $TESTTMP/hgcache/repos
193 194
194 195 # Ensure that file 'w' was prefetched - it was not part of the commit operation and therefore
195 196 # could only be downloaded by the background prefetch
196 197
197 198 $ hg debugdatapack `ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
198 199 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407:
199 200 w:
200 201 Node Delta Base Delta Length Blob Size
201 202 bb6ccd5dceaa 000000000000 2 2
202 203
203 204 Total: 2 2 (0.0% bigger)
204 205 x:
205 206 Node Delta Base Delta Length Blob Size
206 207 ef95c5376f34 000000000000 3 3
207 208 1406e7411862 ef95c5376f34 14 2
208 209
209 210 Total: 17 5 (240.0% bigger)
210 211 y:
211 212 Node Delta Base Delta Length Blob Size
212 213 076f5e2225b3 000000000000 2 2
213 214
214 215 Total: 2 2 (0.0% bigger)
215 216 z:
216 217 Node Delta Base Delta Length Blob Size
217 218 69a1b6752270 000000000000 2 2
218 219
219 220 Total: 2 2 (0.0% bigger)
220 221
221 222 # background prefetch with repack on rebase when wcprevset configured
222 223
223 224 $ hg up -r 2
224 225 3 files updated, 0 files merged, 2 files removed, 0 files unresolved
225 226 (leaving bookmark temporary)
226 227 $ clearcache
227 228 $ find $CACHEDIR -type f | sort
228 229 .. The following output line about file fetches is globbed because it is
229 230 .. flaky; the core of the test is verified when checking the cache dir, so
230 231 .. hopefully this flakiness is not hiding any actual bug.
231 232 $ hg rebase -s temporary -d foo
232 233 rebasing 3:d9cf06e3b5b6 temporary tip "b"
233 234 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/d9cf06e3b5b6-e5c3dc63-rebase.hg
234 235 ? files fetched over ? fetches - (? misses, 0.00% hit ratio) over *s (glob)
235 236 $ find $CACHEDIR -type f | sort
236 237 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
237 238 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
238 239 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.dataidx
239 240 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.datapack
240 241 $TESTTMP/hgcache/repos
241 242
242 243 # Ensure that file 'y' was prefetched - it was not part of the rebase operation and therefore
243 244 # could only be downloaded by the background prefetch
244 245
245 246 $ hg debugdatapack `ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
246 247 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407:
247 248 w:
248 249 Node Delta Base Delta Length Blob Size
249 250 bb6ccd5dceaa 000000000000 2 2
250 251
251 252 Total: 2 2 (0.0% bigger)
252 253 x:
253 254 Node Delta Base Delta Length Blob Size
254 255 ef95c5376f34 000000000000 3 3
255 256 1406e7411862 ef95c5376f34 14 2
256 257
257 258 Total: 17 5 (240.0% bigger)
258 259 y:
259 260 Node Delta Base Delta Length Blob Size
260 261 076f5e2225b3 000000000000 2 2
261 262
262 263 Total: 2 2 (0.0% bigger)
263 264 z:
264 265 Node Delta Base Delta Length Blob Size
265 266 69a1b6752270 000000000000 2 2
266 267
267 268 Total: 2 2 (0.0% bigger)
268 269
269 270 # Check that foreground prefetch with no arguments blocks until background prefetches finish
270 271
271 272 $ hg up -r 3
272 273 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
273 274 $ clearcache
274 275 $ hg prefetch --repack --config ui.timeout.warn=-1
275 276 (running background incremental repack)
276 277 * files fetched over 1 fetches - (* misses, 0.00% hit ratio) over *s (glob) (?)
277 278
278 279 $ find $CACHEDIR -type f | sort
279 280 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
280 281 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
281 282 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.dataidx
282 283 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.datapack
283 284 $TESTTMP/hgcache/repos
284 285
285 286 # Ensure that files were prefetched
286 287 $ hg debugdatapack `ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
287 288 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407:
288 289 w:
289 290 Node Delta Base Delta Length Blob Size
290 291 bb6ccd5dceaa 000000000000 2 2
291 292
292 293 Total: 2 2 (0.0% bigger)
293 294 x:
294 295 Node Delta Base Delta Length Blob Size
295 296 ef95c5376f34 000000000000 3 3
296 297 1406e7411862 ef95c5376f34 14 2
297 298
298 299 Total: 17 5 (240.0% bigger)
299 300 y:
300 301 Node Delta Base Delta Length Blob Size
301 302 076f5e2225b3 000000000000 2 2
302 303
303 304 Total: 2 2 (0.0% bigger)
304 305 z:
305 306 Node Delta Base Delta Length Blob Size
306 307 69a1b6752270 000000000000 2 2
307 308
308 309 Total: 2 2 (0.0% bigger)
309 310
310 311 # Check that foreground prefetch fetches revs specified by '. + draft() + bgprefetchrevs + pullprefetch'
311 312
312 313 $ clearcache
313 314 $ hg prefetch --repack --config ui.timeout.warn=-1
314 315 (running background incremental repack)
315 316 * files fetched over 1 fetches - (* misses, 0.00% hit ratio) over *s (glob) (?)
316 317
317 318 $ find $CACHEDIR -type f | sort
318 319 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
319 320 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
320 321 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.dataidx
321 322 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.datapack
322 323 $TESTTMP/hgcache/repos
323 324
324 325 # Ensure that files were prefetched
325 326 $ hg debugdatapack `ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
326 327 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407:
327 328 w:
328 329 Node Delta Base Delta Length Blob Size
329 330 bb6ccd5dceaa 000000000000 2 2
330 331
331 332 Total: 2 2 (0.0% bigger)
332 333 x:
333 334 Node Delta Base Delta Length Blob Size
334 335 ef95c5376f34 000000000000 3 3
335 336 1406e7411862 ef95c5376f34 14 2
336 337
337 338 Total: 17 5 (240.0% bigger)
338 339 y:
339 340 Node Delta Base Delta Length Blob Size
340 341 076f5e2225b3 000000000000 2 2
341 342
342 343 Total: 2 2 (0.0% bigger)
343 344 z:
344 345 Node Delta Base Delta Length Blob Size
345 346 69a1b6752270 000000000000 2 2
346 347
347 348 Total: 2 2 (0.0% bigger)
348 349
349 350 # Test that if data was prefetched and repacked we don't need to prefetch it again
350 351 # This ensures that Mercurial looks not only at loose files but also in packs
351 352
352 353 $ hg prefetch --repack
353 354 (running background incremental repack)
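For reference, the client-side knobs this test exercises, gathered into one illustrative hgrc sketch (the values mirror the test setup above; real repositories would pick their own revsets):

    [remotefilelog]
    pullprefetch = bookmark()   # revset to prefetch after every pull
    backgroundprefetch = True   # run that prefetch in the background
    backgroundrepack = True     # repack pack files in the background
    bgprefetchrevs = 0::        # revset to prefetch on commit/update/rebase
    prefetchdays = 0            # prefetch commits regardless of age
    prefetchdelay = 0           # no delay between background prefetches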
@@ -1,75 +1,76 b''
1 1 #require no-windows
2 2
3 3 $ . "$TESTDIR/remotefilelog-library.sh"
4 4
5 5 $ hg init master
6 6 $ cd master
7 7 $ cat >> .hg/hgrc <<EOF
8 8 > [remotefilelog]
9 9 > server=True
10 10 > EOF
11 11 $ echo x > x
12 12 $ hg commit -qAm x
13 13 $ echo y >> x
14 14 $ hg commit -qAm y
15 15 $ echo z >> x
16 16 $ hg commit -qAm z
17 17
18 18 $ cd ..
19 19
20 20 $ hgcloneshallow ssh://user@dummy/master shallow -q
21 21 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
22 22 $ cd shallow
23 23
24 24 Unbundling a shallow bundle
25 25
26 26 $ hg strip -r 66ee28d0328c
27 27 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
28 28 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/66ee28d0328c-3d7aafd1-backup.hg (glob)
29 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
29 2 files fetched over 2 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
30 30 $ hg unbundle .hg/strip-backup/66ee28d0328c-3d7aafd1-backup.hg
31 31 adding changesets
32 32 adding manifests
33 33 adding file changes
34 added 2 changesets with 0 changes to 0 files
34 added 2 changesets with 2 changes to 1 files
35 35 new changesets 66ee28d0328c:16db62c5946f
36 36 (run 'hg update' to get a working copy)
37 37
38 38 Unbundling a full bundle
39 39
40 40 $ hg -R ../master bundle -r 66ee28d0328c:: --base "66ee28d0328c^" ../fullbundle.hg
41 41 2 changesets found
42 42 $ hg strip -r 66ee28d0328c
43 43 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/66ee28d0328c-3d7aafd1-backup.hg (glob)
44 44 $ hg unbundle ../fullbundle.hg
45 45 adding changesets
46 46 adding manifests
47 47 adding file changes
48 48 added 2 changesets with 2 changes to 1 files
49 49 new changesets 66ee28d0328c:16db62c5946f (2 drafts)
50 50 (run 'hg update' to get a working copy)
51 51
52 52 Pulling from a shallow bundle
53 53
54 $ hg strip -r 66ee28d0328c
54 $ hg strip -r 66ee28d0328c --config remotefilelog.strip.includefiles=none
55 55 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/66ee28d0328c-3d7aafd1-backup.hg (glob)
56 56 $ hg pull -r 66ee28d0328c .hg/strip-backup/66ee28d0328c-3d7aafd1-backup.hg
57 57 pulling from .hg/strip-backup/66ee28d0328c-3d7aafd1-backup.hg
58 58 searching for changes
59 59 adding changesets
60 60 adding manifests
61 61 adding file changes
62 62 added 1 changesets with 0 changes to 0 files
63 63 new changesets 66ee28d0328c (1 drafts)
64 64 (run 'hg update' to get a working copy)
65 65
66 Pulling from a full bundle
66 Pulling from a full bundle, also testing that strip produces a full bundle by
67 default.
67 68
68 69 $ hg strip -r 66ee28d0328c
69 70 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/66ee28d0328c-b6ee89e7-backup.hg (glob)
70 $ hg pull -r 66ee28d0328c ../fullbundle.hg
71 pulling from ../fullbundle.hg
71 $ hg pull -r 66ee28d0328c .hg/strip-backup/66ee28d0328c-b6ee89e7-backup.hg
72 pulling from .hg/strip-backup/66ee28d0328c-b6ee89e7-backup.hg
72 73 searching for changes
73 74 abort: cannot pull from full bundles
74 75 (use `hg unbundle` instead)
75 76 [255]
@@ -1,209 +1,210 b''
1 1 #require no-windows
2 2
3 3 $ . "$TESTDIR/remotefilelog-library.sh"
4 4
5 5 $ hg init master
6 6 $ cd master
7 7 $ cat >> .hg/hgrc <<EOF
8 8 > [remotefilelog]
9 9 > server=True
10 10 > EOF
11 11 $ echo x > x
12 12 $ echo y > y
13 13 $ echo z > z
14 14 $ hg commit -qAm xy
15 15
16 16 $ cd ..
17 17
18 18 $ hgcloneshallow ssh://user@dummy/master shallow -q
19 19 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over *s (glob)
20 20 $ cd shallow
21 21
22 22 # status
23 23
24 24 $ clearcache
25 25 $ echo xx > x
26 26 $ echo yy > y
27 27 $ touch a
28 28 $ hg status
29 29 M x
30 30 M y
31 31 ? a
32 32 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
33 33 $ hg add a
34 34 $ hg status
35 35 M x
36 36 M y
37 37 A a
38 38
39 39 # diff
40 40
41 41 $ hg debugrebuilddirstate # fixes dirstate non-determinism
42 42 $ hg add a
43 43 $ clearcache
44 44 $ hg diff
45 45 diff -r f3d0bb0d1e48 x
46 46 --- a/x* (glob)
47 47 +++ b/x* (glob)
48 48 @@ -1,1 +1,1 @@
49 49 -x
50 50 +xx
51 51 diff -r f3d0bb0d1e48 y
52 52 --- a/y* (glob)
53 53 +++ b/y* (glob)
54 54 @@ -1,1 +1,1 @@
55 55 -y
56 56 +yy
57 57 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over *s (glob)
58 58
59 59 # local commit
60 60
61 61 $ clearcache
62 62 $ echo a > a
63 63 $ echo xxx > x
64 64 $ echo yyy > y
65 65 $ hg commit -m a
66 66 ? files fetched over 1 fetches - (? misses, 0.00% hit ratio) over *s (glob)
67 67
68 68 # local commit where the dirstate is clean -- ensure that we do just one fetch
69 69 # (update to a commit on the server first)
70 70
71 71 $ hg --config debug.dirstate.delaywrite=1 up 0
72 72 2 files updated, 0 files merged, 1 files removed, 0 files unresolved
73 73 $ clearcache
74 74 $ hg debugdirstate
75 75 n 644 2 * x (glob)
76 76 n 644 2 * y (glob)
77 77 n 644 2 * z (glob)
78 78 $ echo xxxx > x
79 79 $ echo yyyy > y
80 80 $ hg commit -m x
81 81 created new head
82 82 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
83 83
84 84 # restore state for future tests
85 85
86 86 $ hg -q strip .
87 87 $ hg -q up tip
88 88
89 89 # rebase
90 90
91 91 $ clearcache
92 92 $ cd ../master
93 93 $ echo w > w
94 94 $ hg commit -qAm w
95 95
96 96 $ cd ../shallow
97 97 $ hg pull
98 98 pulling from ssh://user@dummy/master
99 99 searching for changes
100 100 adding changesets
101 101 adding manifests
102 102 adding file changes
103 103 added 1 changesets with 0 changes to 0 files (+1 heads)
104 104 new changesets fed61014d323
105 105 (run 'hg heads' to see heads, 'hg merge' to merge)
106 106
107 107 $ hg rebase -d tip
108 108 rebasing 1:9abfe7bca547 "a"
109 109 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/9abfe7bca547-8b11e5ff-rebase.hg (glob)
110 110 3 files fetched over 2 fetches - (3 misses, 0.00% hit ratio) over *s (glob)
111 111
112 112 # strip
113 113
114 114 $ clearcache
115 115 $ hg debugrebuilddirstate # fixes dirstate non-determinism
116 116 $ hg strip -r .
117 117 2 files updated, 0 files merged, 1 files removed, 0 files unresolved
118 118 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/19edf50f4de7-df3d0f74-backup.hg (glob)
119 4 files fetched over 2 fetches - (4 misses, 0.00% hit ratio) over *s (glob)
119 3 files fetched over 2 fetches - (3 misses, 0.00% hit ratio) over *s (glob)
120 120
121 121 # unbundle
122 122
123 123 $ clearcache
124 124 $ ls -A
125 125 .hg
126 126 w
127 127 x
128 128 y
129 129 z
130 130
131 131 $ hg debugrebuilddirstate # fixes dirstate non-determinism
132 132 $ hg unbundle .hg/strip-backup/19edf50f4de7-df3d0f74-backup.hg
133 133 adding changesets
134 134 adding manifests
135 135 adding file changes
136 added 1 changesets with 0 changes to 0 files
136 added 1 changesets with 3 changes to 3 files
137 137 new changesets 19edf50f4de7 (1 drafts)
138 138 (run 'hg update' to get a working copy)
139 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
139 140
140 141 $ hg up
141 142 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
142 4 files fetched over 1 fetches - (4 misses, 0.00% hit ratio) over *s (glob)
143 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
143 144 $ cat a
144 145 a
145 146
146 147 # revert
147 148
148 149 $ clearcache
149 150 $ hg revert -r .~2 y z
150 151 no changes needed to z
151 2 files fetched over 2 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
152 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
152 153 $ hg checkout -C -r . -q
153 154
154 155 # explicit bundle should produce full bundle file
155 156
156 157 $ hg bundle -r 2 --base 1 ../local.bundle
157 158 1 changesets found
158 159 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
159 160 $ cd ..
160 161
161 162 $ hgcloneshallow ssh://user@dummy/master shallow2 -q
162 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
163 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
163 164 $ cd shallow2
164 165 $ hg unbundle ../local.bundle
165 166 adding changesets
166 167 adding manifests
167 168 adding file changes
168 169 added 1 changesets with 3 changes to 3 files
169 170 new changesets 19edf50f4de7 (1 drafts)
170 171 (run 'hg update' to get a working copy)
171 172
172 173 $ hg log -r 2 --stat
173 174 changeset: 2:19edf50f4de7
174 175 tag: tip
175 176 user: test
176 177 date: Thu Jan 01 00:00:00 1970 +0000
177 178 summary: a
178 179
179 180 a | 1 +
180 181 x | 2 +-
181 182 y | 2 +-
182 183 3 files changed, 3 insertions(+), 2 deletions(-)
183 184
184 185 # Merge
185 186
186 187 $ echo merge >> w
187 188 $ hg commit -m w
188 189 created new head
189 190 $ hg merge 2
190 191 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
191 192 (branch merge, don't forget to commit)
192 193 $ hg commit -m merge
193 194 $ hg strip -q -r ".^"
194 195
195 196 # commit without producing new node
196 197
197 198 $ cd $TESTTMP
198 199 $ hgcloneshallow ssh://user@dummy/master shallow3 -q
199 200 $ cd shallow3
200 201 $ echo 1 > A
201 202 $ hg commit -m foo -A A
202 203 $ hg log -r . -T '{node}\n'
203 204 383ce605500277f879b7460a16ba620eb6930b7f
204 205 $ hg update -r '.^' -q
205 206 $ echo 1 > A
206 207 $ hg commit -m foo -A A
207 208 warning: commit already existed in the repository!
208 209 $ hg log -r . -T '{node}\n'
209 210 383ce605500277f879b7460a16ba620eb6930b7f
@@ -1,280 +1,281 b''
1 1 #require no-windows
2 2
3 3 $ . "$TESTDIR/remotefilelog-library.sh"
4 4
5 5 $ hg init master
6 6 $ cd master
7 7 $ cat >> .hg/hgrc <<EOF
8 8 > [remotefilelog]
9 9 > server=True
10 10 > EOF
11 11 $ echo x > x
12 12 $ echo z > z
13 13 $ hg commit -qAm x
14 14 $ echo x2 > x
15 15 $ echo y > y
16 16 $ hg commit -qAm y
17 17 $ hg bookmark foo
18 18
19 19 $ cd ..
20 20
21 21 # prefetch a revision
22 22
23 23 $ hgcloneshallow ssh://user@dummy/master shallow --noupdate
24 24 streaming all changes
25 25 2 files to transfer, 528 bytes of data
26 26 transferred 528 bytes in * seconds (*/sec) (glob)
27 27 searching for changes
28 28 no changes found
29 29 $ cd shallow
30 30
31 31 $ hg prefetch -r 0
32 32 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
33 33
34 34 $ hg cat -r 0 x
35 35 x
36 36
37 37 # prefetch with base
38 38
39 39 $ clearcache
40 40 $ hg prefetch -r 0::1 -b 0
41 41 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
42 42
43 43 $ hg cat -r 1 x
44 44 x2
45 45 $ hg cat -r 1 y
46 46 y
47 47
48 48 $ hg cat -r 0 x
49 49 x
50 50 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
51 51
52 52 $ hg cat -r 0 z
53 53 z
54 54 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
55 55
56 56 $ hg prefetch -r 0::1 --base 0
57 57 $ hg prefetch -r 0::1 -b 1
58 58 $ hg prefetch -r 0::1
59 59
60 60 # prefetch a range of revisions
61 61
62 62 $ clearcache
63 63 $ hg prefetch -r 0::1
64 64 4 files fetched over 1 fetches - (4 misses, 0.00% hit ratio) over *s (glob)
65 65
66 66 $ hg cat -r 0 x
67 67 x
68 68 $ hg cat -r 1 x
69 69 x2
70 70
71 71 # prefetch certain files
72 72
73 73 $ clearcache
74 74 $ hg prefetch -r 1 x
75 75 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
76 76
77 77 $ hg cat -r 1 x
78 78 x2
79 79
80 80 $ hg cat -r 1 y
81 81 y
82 82 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
83 83
84 84 # prefetch on pull when configured
85 85
86 86 $ printf "[remotefilelog]\npullprefetch=bookmark()\n" >> .hg/hgrc
87 87 $ hg strip tip
88 88 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/109c3a557a73-3f43405e-backup.hg (glob)
89 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
89 90
90 91 $ clearcache
91 92 $ hg pull
92 93 pulling from ssh://user@dummy/master
93 94 searching for changes
94 95 adding changesets
95 96 adding manifests
96 97 adding file changes
97 98 updating bookmark foo
98 99 added 1 changesets with 0 changes to 0 files
99 100 new changesets 109c3a557a73
100 101 (run 'hg update' to get a working copy)
101 102 prefetching file contents
102 103 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over *s (glob)
103 104
104 105 $ hg up tip
105 106 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
106 107
107 108 # prefetch only fetches changes not in working copy
108 109
109 110 $ hg strip tip
110 111 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
111 112 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/109c3a557a73-3f43405e-backup.hg (glob)
112 113 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
113 114 $ clearcache
114 115
115 116 $ hg pull
116 117 pulling from ssh://user@dummy/master
117 118 searching for changes
118 119 adding changesets
119 120 adding manifests
120 121 adding file changes
121 122 updating bookmark foo
122 123 added 1 changesets with 0 changes to 0 files
123 124 new changesets 109c3a557a73
124 125 (run 'hg update' to get a working copy)
125 126 prefetching file contents
126 127 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
127 128
128 129 # Make some local commits that produce the same file versions as are on the
129 130 # server, to simulate a situation where we have local commits that were
130 131 # somehow pushed, and that we will soon pull.
131 132
132 133 $ hg prefetch -r 'all()'
133 134 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
134 135 $ hg strip -q -r 0
135 136 $ echo x > x
136 137 $ echo z > z
137 138 $ hg commit -qAm x
138 139 $ echo x2 > x
139 140 $ echo y > y
140 141 $ hg commit -qAm y
141 142
142 143 # prefetch server versions, even if local versions are available
143 144
144 145 $ clearcache
145 146 $ hg strip -q tip
146 147 $ hg pull
147 148 pulling from ssh://user@dummy/master
148 149 searching for changes
149 150 adding changesets
150 151 adding manifests
151 152 adding file changes
152 153 updating bookmark foo
153 154 added 1 changesets with 0 changes to 0 files
154 155 new changesets 109c3a557a73
155 156 1 local changesets published (?)
156 157 (run 'hg update' to get a working copy)
157 158 prefetching file contents
158 159 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
159 160
160 161 $ cd ..
161 162
162 163 # Prefetch unknown files during checkout
163 164
164 165 $ hgcloneshallow ssh://user@dummy/master shallow2
165 166 streaming all changes
166 167 2 files to transfer, 528 bytes of data
167 168 transferred 528 bytes in * seconds * (glob)
168 169 searching for changes
169 170 no changes found
170 171 updating to branch default
171 172 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
172 173 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
173 174 $ cd shallow2
174 175 $ hg up -q null
175 176 $ echo x > x
176 177 $ echo y > y
177 178 $ echo z > z
178 179 $ clearcache
179 180 $ hg up tip
180 181 x: untracked file differs
181 182 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over * (glob)
182 183 abort: untracked files in working directory differ from files in requested revision
183 184 [20]
184 185 $ hg revert --all
185 186
186 187 # Test batch fetching of lookup files during hg status
187 188 $ hg up --clean tip
188 189 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
189 190 $ hg debugrebuilddirstate
190 191 $ clearcache
191 192 $ hg status
192 193 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over * (glob)
193 194
194 195 # Prefetch during addrename detection
195 196 $ hg up -q --clean tip
196 197 $ hg revert --all
197 198 $ mv x x2
198 199 $ mv y y2
199 200 $ mv z z2
200 201 $ echo a > a
201 202 $ hg add a
202 203 $ rm a
203 204 $ clearcache
204 205 $ hg addremove -s 50 > /dev/null
205 206 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over * (glob)
206 207 $ hg revert --all
207 208 forgetting x2
208 209 forgetting y2
209 210 forgetting z2
210 211 undeleting x
211 212 undeleting y
212 213 undeleting z
213 214
214 215
215 216 # Revert across double renames. Note: the scary "abort" error is because of
216 217 # https://bz.mercurial-scm.org/5419.
217 218
218 219 $ cd ../master
219 220 $ hg mv z z2
220 221 $ hg commit -m 'move z -> z2'
221 222 $ cd ../shallow2
222 223 $ hg pull -q
223 224 $ clearcache
224 225 $ hg mv y y2
225 226 y2: not overwriting - file exists
226 227 ('hg rename --after' to record the rename)
227 228 [1]
228 229 $ hg mv x x2
229 230 x2: not overwriting - file exists
230 231 ('hg rename --after' to record the rename)
231 232 [1]
232 233 $ hg mv z2 z3
233 234 z2: not copying - file is not managed
234 235 abort: no files to copy
235 236 [10]
236 237 $ find $CACHEDIR -type f | sort
237 238 .. The following output line about file fetches is globbed because it is
238 239 .. flaky; the core of the test is verified when checking the cache dir, so
239 240 .. hopefully this flakiness is not hiding any actual bug.
240 241 $ hg revert -a -r 1 || true
241 242 ? files fetched over 1 fetches - (? misses, 0.00% hit ratio) over * (glob)
242 243 abort: z2@109c3a557a73: not found in manifest (?)
243 244 $ find $CACHEDIR -type f | sort
244 245 $TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072/ef95c5376f34698742fe34f315fd82136f8f68c0
245 246 $TESTTMP/hgcache/master/39/5df8f7c51f007019cb30201c49e884b46b92fa/69a1b67522704ec122181c0890bd16e9d3e7516a
246 247 $TESTTMP/hgcache/master/95/cb0bfd2977c761298d9624e4b4d4c72a39974a/076f5e2225b3ff0400b98c92aa6cdf403ee24cca
247 248 $TESTTMP/hgcache/repos
248 249
249 250 # warning when we have excess remotefilelog fetching
250 251
251 252 $ cat > repeated_fetch.py << EOF
252 253 > import binascii
253 254 > from mercurial import extensions, registrar
254 255 > cmdtable = {}
255 256 > command = registrar.command(cmdtable)
256 257 > @command(b'repeated-fetch', [], b'', inferrepo=True)
257 258 > def repeated_fetch(ui, repo, *args, **opts):
258 259 > for i in range(20):
259 260 > try:
260 261 > hexid = (b'%02x' % (i + 1)) * 20
261 262 > repo.fileservice.prefetch([(b'somefile.txt', hexid)])
262 263 > except Exception:
263 264 > pass
264 265 > EOF
265 266
266 267 We should only output to the user once. We're ignoring most of the output
267 268 because we're not actually fetching anything real here; all the hashes are
268 269 bogus, so it's just going to be errors and a final summary of all the misses.
269 270 $ hg --config extensions.repeated_fetch=repeated_fetch.py \
270 271 > --config remotefilelog.fetchwarning="fetch warning!" \
271 272 > --config extensions.blackbox= \
272 273 > repeated-fetch 2>&1 | grep 'fetch warning'
273 274 fetch warning!
274 275
275 276 We should output to blackbox three times, with a stack trace on each (though
276 277 that isn't tested here).
277 278 $ grep 'excess remotefilelog fetching' .hg/blackbox.log
278 279 .* excess remotefilelog fetching: (re)
279 280 .* excess remotefilelog fetching: (re)
280 281 .* excess remotefilelog fetching: (re)
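Outside the test harness, the warning exercised above is enabled by pointing remotefilelog.fetchwarning at whatever message should be shown (once per command) when excessive one-off fetches are detected; the message text below is illustrative:

    [remotefilelog]
    fetchwarning = excessive file fetches detected; consider running 'hg prefetch'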
@@ -1,102 +1,103 b''
1 1 #require no-windows
2 2
3 3 $ . "$TESTDIR/remotefilelog-library.sh"
4 4
5 5 $ hg init master
6 6 $ cd master
7 7 $ cat >> .hg/hgrc <<EOF
8 8 > [remotefilelog]
9 9 > server=True
10 10 > EOF
11 11 $ echo x > x
12 12 $ echo z > z
13 13 $ hg commit -qAm x1
14 14 $ echo x2 > x
15 15 $ echo z2 > z
16 16 $ hg commit -qAm x2
17 17 $ hg bookmark foo
18 18
19 19 $ cd ..
20 20
21 21 # prefetch a revision w/ a sparse checkout
22 22
23 23 $ hgcloneshallow ssh://user@dummy/master shallow --noupdate
24 24 streaming all changes
25 25 2 files to transfer, 527 bytes of data
26 26 transferred 527 bytes in 0.* seconds (*/sec) (glob)
27 27 searching for changes
28 28 no changes found
29 29 $ cd shallow
30 30 $ printf "[extensions]\nsparse=\n" >> .hg/hgrc
31 31
32 32 $ hg debugsparse -I x
33 33 $ hg prefetch -r 0
34 34 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
35 35
36 36 $ hg cat -r 0 x
37 37 x
38 38
39 39 $ hg debugsparse -I z
40 40 $ hg prefetch -r 0
41 41 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
42 42
43 43 $ hg cat -r 0 z
44 44 z
45 45
46 46 # prefetch sparse only on pull when configured
47 47
48 48 $ printf "[remotefilelog]\npullprefetch=bookmark()\n" >> .hg/hgrc
49 49 $ hg strip tip
50 50 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/876b1317060d-b2e91d8d-backup.hg (glob)
51 2 files fetched over 2 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
51 52
52 53 $ hg debugsparse --delete z
53 54
54 55 $ clearcache
55 56 $ hg pull
56 57 pulling from ssh://user@dummy/master
57 58 searching for changes
58 59 adding changesets
59 60 adding manifests
60 61 adding file changes
61 62 updating bookmark foo
62 63 added 1 changesets with 0 changes to 0 files
63 64 new changesets 876b1317060d
64 65 (run 'hg update' to get a working copy)
65 66 prefetching file contents
66 67 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
67 68
68 69 # Don't consider filtered files when doing copy tracing
69 70
70 71 ## Push an unrelated commit
71 72 $ cd ../
72 73
73 74 $ hgcloneshallow ssh://user@dummy/master shallow2
74 75 streaming all changes
75 76 2 files to transfer, 527 bytes of data
76 77 transferred 527 bytes in 0.* seconds (*) (glob)
77 78 searching for changes
78 79 no changes found
79 80 updating to branch default
80 81 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
81 82 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
82 83 $ cd shallow2
83 84 $ printf "[extensions]\nsparse=\n" >> .hg/hgrc
84 85
85 86 $ hg up -q 0
86 87 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
87 88 $ touch a
88 89 $ hg ci -Aqm a
89 90 $ hg push -q -f
90 91
91 92 ## Pull the unrelated commit and rebase onto it - verify unrelated file was not
92 93 pulled
93 94
94 95 $ cd ../shallow
95 96 $ hg up -q 1
96 97 $ hg pull -q
97 98 $ hg debugsparse -I z
98 99 $ clearcache
99 100 $ hg prefetch -r '. + .^' -I x -I z
100 101 4 files fetched over 1 fetches - (4 misses, 0.00% hit ratio) over * (glob)
101 102 $ hg rebase -d 2 --keep
102 103 rebasing 1:876b1317060d foo "x2"
@@ -1,67 +1,68 b''
1 1 #require no-windows
2 2
3 3 $ . "$TESTDIR/remotefilelog-library.sh"
4 4
5 5 $ hg init master
6 6 $ cd master
7 7 $ cat >> .hg/hgrc <<EOF
8 8 > [remotefilelog]
9 9 > server=True
10 10 > EOF
11 11 $ echo x > x
12 12 $ hg commit -qAm x
13 13
14 14 $ cd ..
15 15
16 16 $ hgcloneshallow ssh://user@dummy/master shallow -q
17 17 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
18 18 $ cd shallow
19 19
20 20 $ cat >> $TESTTMP/get_file_linknode.py <<EOF
21 21 > from mercurial import node, registrar, scmutil
22 22 > cmdtable = {}
23 23 > command = registrar.command(cmdtable)
24 24 > @command(b'debug-file-linknode', [(b'r', b'rev', b'.', b'rev')], b'hg debug-file-linknode FILE')
25 25 > def debug_file_linknode(ui, repo, file, **opts):
26 26 > rflctx = scmutil.revsingle(repo.unfiltered(), opts['rev']).filectx(file)
27 27 > ui.status(b'%s\n' % node.hex(rflctx.ancestormap()[rflctx._filenode][2]))
28 28 > EOF
29 29
30 30 $ cat >> .hg/hgrc <<EOF
31 31 > [ui]
32 32 > interactive=1
33 33 > [extensions]
34 34 > strip=
35 35 > get_file_linknode=$TESTTMP/get_file_linknode.py
36 36 > [experimental]
37 37 > evolution=createmarkers,allowunstable
38 38 > EOF
39 39 $ echo a > a
40 40 $ hg commit -qAm msg1
41 41 $ hg commit --amend 're:^$' -m msg2
42 42 $ hg commit --amend 're:^$' -m msg3
43 43 $ hg --hidden log -G -T '{rev} {node|short}'
44 44 @ 3 df91f74b871e
45 45 |
46 46 | x 2 70494d7ec5ef
47 47 |/
48 48 | x 1 1e423846dde0
49 49 |/
50 50 o 0 b292c1e3311f
51 51
52 52 $ hg debug-file-linknode -r 70494d a
53 53 df91f74b871e064c89afa1fe9e2f66afa2c125df
54 54 $ hg --hidden strip -r 1 3
55 55 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
56 56 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/df91f74b871e-c94d67be-backup.hg
57 57
58 58 $ hg --hidden log -G -T '{rev} {node|short}'
59 59 o 1 70494d7ec5ef
60 60 |
61 61 @ 0 b292c1e3311f
62 62
63 FIXME: This should point to a commit that actually exists in the repo. Otherwise
64 remotefilelog has to search every commit in the repository looking for a valid
65 linkrev every time it's queried, such as during push.
63 Demonstrate that the linknode points to a commit that is actually in the repo
64 after the strip operation. Otherwise remotefilelog has to search every commit in
65 the repository looking for a valid linkrev every time it's queried, such as
66 during push.
66 67 $ hg debug-file-linknode -r 70494d a
67 df91f74b871e064c89afa1fe9e2f66afa2c125df
68 70494d7ec5ef6cd3cd6939a9fd2812f9956bf553
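The inline extension above recovers a file revision's linknode from remotefilelog's ancestormap; isolated as a sketch (names follow the test extension, and the (p1, p2, linknode, copyfrom) tuple layout is assumed from the [2] index used there):

    from mercurial import node, scmutil

    def file_linknode(repo, rev, path):
        # filectx for 'path' at revision 'rev'; index 2 of the
        # ancestormap entry is the linknode
        fctx = scmutil.revsingle(repo.unfiltered(), rev).filectx(path)
        return node.hex(fctx.ancestormap()[fctx._filenode][2])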