remotefilelog: fix crash on `hg addremove` of added-but-deleted file...
Martin von Zweigbergk
r42261:91cc8dc8 default
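This commit fixes a crash in remotefilelog's override of scmutil._findrenames (see the hunk around old line 478 below). The "removed" candidates passed to that hook include tracked files that are missing from disk, so a file that was `hg add`ed and then deleted from disk appears there even though it is not in the parent manifest. The old code unconditionally ran parentctx.filenode(f) for every removed file, which raises a LookupError for such a file; the fix looks nodes up in the parent manifest and skips paths that are absent.

A minimal sketch of the guarded lookup, assuming a local repository object `repo` and the `removed` list the hook receives (the helper name is hypothetical):

    from mercurial.node import hex

    def prefetchkeys_for_removed(repo, removed):
        pmf = repo['.'].manifest()
        # an added-but-deleted file appears in `removed` but is absent from
        # the parent manifest; pmf[f] would raise LookupError for it
        return [(f, hex(pmf[f])) for f in removed if f in pmf]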
@@ -1,1141 +1,1142 @@
1 1 # __init__.py - remotefilelog extension
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """remotefilelog causes Mercurial to lazilly fetch file contents (EXPERIMENTAL)
8 8
9 9 This extension is HIGHLY EXPERIMENTAL. There are NO BACKWARDS COMPATIBILITY
10 10 GUARANTEES. This means that repositories created with this extension may
11 11 only be usable with the exact version of this extension/Mercurial that was
12 12 used. The extension attempts to enforce this in order to prevent repository
13 13 corruption.
14 14
15 15 remotefilelog works by fetching file contents lazily and storing them
16 16 in a cache on the client rather than in revlogs. This allows enormous
17 17 histories to be transferred only partially, making them easier to
18 18 operate on.
19 19
20 20 Configs:
21 21
22 22 ``packs.maxchainlen`` specifies the maximum delta chain length in pack files
23 23
24 24 ``packs.maxpacksize`` specifies the maximum pack file size
25 25
26 26 ``packs.maxpackfilecount`` specifies the maximum number of packs in the
27 27 shared cache (trees only for now)
28 28
29 29 ``remotefilelog.backgroundprefetch`` runs prefetch in background when True
30 30
31 31 ``remotefilelog.bgprefetchrevs`` specifies revisions to fetch on commit and
32 32 update, and on other commands that use them. This is distinct from pullprefetch.
33 33
34 34 ``remotefilelog.gcrepack`` does garbage collection during repack when True
35 35
36 36 ``remotefilelog.nodettl`` specifies maximum TTL of a node in seconds before
37 37 it is garbage collected
38 38
39 39 ``remotefilelog.repackonhggc`` runs repack on hg gc when True
40 40
41 41 ``remotefilelog.prefetchdays`` specifies the maximum age of a commit in
42 42 days after which it is no longer prefetched.
43 43
44 44 ``remotefilelog.prefetchdelay`` specifies delay between background
45 45 prefetches in seconds after operations that change the working copy parent
46 46
47 47 ``remotefilelog.data.gencountlimit`` constrains the minimum number of data
48 48 pack files required to be considered part of a generation. In particular,
49 49 the minimum number of pack files is > gencountlimit.
50 50
51 51 ``remotefilelog.data.generations`` list for specifying the lower bound of
52 52 each generation of the data pack files. For example, the list
53 53 ['100MB', '1MB'] or ['1MB', '100MB'] will lead to three generations:
54 54 [0, 1MB), [1MB, 100MB) and [100MB, infinity).
55 55
56 56 ``remotefilelog.data.maxrepackpacks`` the maximum number of pack files to
57 57 include in an incremental data repack.
58 58
59 59 ``remotefilelog.data.repackmaxpacksize`` the maximum size of a pack file for
60 60 it to be considered for an incremental data repack.
61 61
62 62 ``remotefilelog.data.repacksizelimit`` the maximum total size of pack files
63 63 to include in an incremental data repack.
64 64
65 65 ``remotefilelog.history.gencountlimit`` constrains the minimum number of
66 66 history pack files required to be considered part of a generation. In
67 67 particular, the minimum number of pack files is > gencountlimit.
68 68
69 69 ``remotefilelog.history.generations`` list for specifying the lower bound of
70 70 each generation of the history pack files. For example, the list
71 71 ['100MB', '1MB'] or ['1MB', '100MB'] will lead to three generations:
72 72 [0, 1MB), [1MB, 100MB) and [100MB, infinity).
73 73
74 74 ``remotefilelog.history.maxrepackpacks`` the maximum number of pack files to
75 75 include in an incremental history repack.
76 76
77 77 ``remotefilelog.history.repackmaxpacksize`` the maximum size of a pack file
78 78 for it to be considered for an incremental history repack.
79 79
80 80 ``remotefilelog.history.repacksizelimit`` the maximum total size of pack
81 81 files to include in an incremental history repack.
82 82
83 83 ``remotefilelog.backgroundrepack`` automatically consolidate packs in the
84 84 background
85 85
86 86 ``remotefilelog.cachepath`` path to cache
87 87
88 88 ``remotefilelog.cachegroup`` if set, make cache directory sgid to this
89 89 group
90 90
91 91 ``remotefilelog.cacheprocess`` binary to invoke for fetching file data
92 92
93 93 ``remotefilelog.debug`` turn on remotefilelog-specific debug output
94 94
95 95 ``remotefilelog.excludepattern`` pattern of files to exclude from pulls
96 96
97 97 ``remotefilelog.includepattern`` pattern of files to include in pulls
98 98
99 99 ``remotefilelog.fetchwarning`` message to print when too many
100 100 single-file fetches occur
101 101
102 102 ``remotefilelog.getfilesstep`` number of files to request in a single RPC
103 103
104 104 ``remotefilelog.getfilestype`` if set to 'threaded' use threads to fetch
105 105 files, otherwise use optimistic fetching
106 106
107 107 ``remotefilelog.pullprefetch`` revset for selecting files that should be
108 108 eagerly downloaded rather than lazily
109 109
110 110 ``remotefilelog.reponame`` name of the repo. If set, used to partition
111 111 data from other repos in a shared store.
112 112
113 113 ``remotefilelog.server`` if true, enable server-side functionality
114 114
115 115 ``remotefilelog.servercachepath`` path for caching blobs on the server
116 116
117 117 ``remotefilelog.serverexpiration`` number of days to keep cached server
118 118 blobs
119 119
120 120 ``remotefilelog.validatecache`` if set, check cache entries for corruption
121 121 before returning blobs
122 122
123 123 ``remotefilelog.validatecachelog`` if set, check cache entries for
124 124 corruption before returning metadata
125 125
126 126 """
127 127 from __future__ import absolute_import
128 128
129 129 import os
130 130 import time
131 131 import traceback
132 132
133 133 from mercurial.node import hex
134 134 from mercurial.i18n import _
135 135 from mercurial import (
136 136 changegroup,
137 137 changelog,
138 138 cmdutil,
139 139 commands,
140 140 configitems,
141 141 context,
142 142 copies,
143 143 debugcommands as hgdebugcommands,
144 144 dispatch,
145 145 error,
146 146 exchange,
147 147 extensions,
148 148 hg,
149 149 localrepo,
150 150 match,
151 151 merge,
152 152 node as nodemod,
153 153 patch,
154 154 pycompat,
155 155 registrar,
156 156 repair,
157 157 repoview,
158 158 revset,
159 159 scmutil,
160 160 smartset,
161 161 streamclone,
162 162 util,
163 163 )
164 164 from . import (
165 165 constants,
166 166 debugcommands,
167 167 fileserverclient,
168 168 remotefilectx,
169 169 remotefilelog,
170 170 remotefilelogserver,
171 171 repack as repackmod,
172 172 shallowbundle,
173 173 shallowrepo,
174 174 shallowstore,
175 175 shallowutil,
176 176 shallowverifier,
177 177 )
178 178
179 179 # ensures debug commands are registered
180 180 hgdebugcommands.command
181 181
182 182 cmdtable = {}
183 183 command = registrar.command(cmdtable)
184 184
185 185 configtable = {}
186 186 configitem = registrar.configitem(configtable)
187 187
188 188 configitem('remotefilelog', 'debug', default=False)
189 189
190 190 configitem('remotefilelog', 'reponame', default='')
191 191 configitem('remotefilelog', 'cachepath', default=None)
192 192 configitem('remotefilelog', 'cachegroup', default=None)
193 193 configitem('remotefilelog', 'cacheprocess', default=None)
194 194 configitem('remotefilelog', 'cacheprocess.includepath', default=None)
195 195 configitem("remotefilelog", "cachelimit", default="1000 GB")
196 196
197 197 configitem('remotefilelog', 'fallbackpath', default=configitems.dynamicdefault,
198 198 alias=[('remotefilelog', 'fallbackrepo')])
199 199
200 200 configitem('remotefilelog', 'validatecachelog', default=None)
201 201 configitem('remotefilelog', 'validatecache', default='on')
202 202 configitem('remotefilelog', 'server', default=None)
203 203 configitem('remotefilelog', 'servercachepath', default=None)
204 204 configitem("remotefilelog", "serverexpiration", default=30)
205 205 configitem('remotefilelog', 'backgroundrepack', default=False)
206 206 configitem('remotefilelog', 'bgprefetchrevs', default=None)
207 207 configitem('remotefilelog', 'pullprefetch', default=None)
208 208 configitem('remotefilelog', 'backgroundprefetch', default=False)
209 209 configitem('remotefilelog', 'prefetchdelay', default=120)
210 210 configitem('remotefilelog', 'prefetchdays', default=14)
211 211
212 212 configitem('remotefilelog', 'getfilesstep', default=10000)
213 213 configitem('remotefilelog', 'getfilestype', default='optimistic')
214 214 configitem('remotefilelog', 'batchsize', configitems.dynamicdefault)
215 215 configitem('remotefilelog', 'fetchwarning', default='')
216 216
217 217 configitem('remotefilelog', 'includepattern', default=None)
218 218 configitem('remotefilelog', 'excludepattern', default=None)
219 219
220 220 configitem('remotefilelog', 'gcrepack', default=False)
221 221 configitem('remotefilelog', 'repackonhggc', default=False)
222 222 configitem('repack', 'chainorphansbysize', default=True)
223 223
224 224 configitem('packs', 'maxpacksize', default=0)
225 225 configitem('packs', 'maxchainlen', default=1000)
226 226
227 227 # default TTL limit is 30 days
228 228 _defaultlimit = 60 * 60 * 24 * 30
229 229 configitem('remotefilelog', 'nodettl', default=_defaultlimit)
230 230
231 231 configitem('remotefilelog', 'data.gencountlimit', default=2)
232 232 configitem('remotefilelog', 'data.generations',
233 233 default=['1GB', '100MB', '1MB'])
234 234 configitem('remotefilelog', 'data.maxrepackpacks', default=50)
235 235 configitem('remotefilelog', 'data.repackmaxpacksize', default='4GB')
236 236 configitem('remotefilelog', 'data.repacksizelimit', default='100MB')
237 237
238 238 configitem('remotefilelog', 'history.gencountlimit', default=2)
239 239 configitem('remotefilelog', 'history.generations', default=['100MB'])
240 240 configitem('remotefilelog', 'history.maxrepackpacks', default=50)
241 241 configitem('remotefilelog', 'history.repackmaxpacksize', default='400MB')
242 242 configitem('remotefilelog', 'history.repacksizelimit', default='100MB')
243 243
244 244 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
245 245 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
246 246 # be specifying the version(s) of Mercurial they are tested with, or
247 247 # leave the attribute unspecified.
248 248 testedwith = 'ships-with-hg-core'
249 249
250 250 repoclass = localrepo.localrepository
251 251 repoclass._basesupported.add(constants.SHALLOWREPO_REQUIREMENT)
252 252
253 253 isenabled = shallowutil.isenabled
254 254
255 255 def uisetup(ui):
256 256 """Wraps user facing Mercurial commands to swap them out with shallow
257 257 versions.
258 258 """
259 259 hg.wirepeersetupfuncs.append(fileserverclient.peersetup)
260 260
261 261 entry = extensions.wrapcommand(commands.table, 'clone', cloneshallow)
262 262 entry[1].append(('', 'shallow', None,
263 263 _("create a shallow clone which uses remote file "
264 264 "history")))
265 265
266 266 extensions.wrapcommand(commands.table, 'debugindex',
267 267 debugcommands.debugindex)
268 268 extensions.wrapcommand(commands.table, 'debugindexdot',
269 269 debugcommands.debugindexdot)
270 270 extensions.wrapcommand(commands.table, 'log', log)
271 271 extensions.wrapcommand(commands.table, 'pull', pull)
272 272
273 273 # Prevent 'hg manifest --all'
274 274 def _manifest(orig, ui, repo, *args, **opts):
275 275 if (isenabled(repo) and opts.get(r'all')):
276 276 raise error.Abort(_("--all is not supported in a shallow repo"))
277 277
278 278 return orig(ui, repo, *args, **opts)
279 279 extensions.wrapcommand(commands.table, "manifest", _manifest)
280 280
281 281 # Wrap remotefilelog with lfs code
282 282 def _lfsloaded(loaded=False):
283 283 lfsmod = None
284 284 try:
285 285 lfsmod = extensions.find('lfs')
286 286 except KeyError:
287 287 pass
288 288 if lfsmod:
289 289 lfsmod.wrapfilelog(remotefilelog.remotefilelog)
290 290 fileserverclient._lfsmod = lfsmod
291 291 extensions.afterloaded('lfs', _lfsloaded)
292 292
293 293 # debugdata needs remotefilelog.len to work
294 294 extensions.wrapcommand(commands.table, 'debugdata', debugdatashallow)
295 295
296 296 def cloneshallow(orig, ui, repo, *args, **opts):
297 297 if opts.get(r'shallow'):
298 298 repos = []
299 299 def pull_shallow(orig, self, *args, **kwargs):
300 300 if not isenabled(self):
301 301 repos.append(self.unfiltered())
302 302 # set up the client hooks so the post-clone update works
303 303 setupclient(self.ui, self.unfiltered())
304 304
305 305 # setupclient fixed the class on the repo itself
306 306 # but we also need to fix it on the repoview
307 307 if isinstance(self, repoview.repoview):
308 308 self.__class__.__bases__ = (self.__class__.__bases__[0],
309 309 self.unfiltered().__class__)
310 310 self.requirements.add(constants.SHALLOWREPO_REQUIREMENT)
311 311 self._writerequirements()
312 312
313 313 # Since setupclient hadn't been called, exchange.pull was not
314 314 # wrapped. So we need to manually invoke our version of it.
315 315 return exchangepull(orig, self, *args, **kwargs)
316 316 else:
317 317 return orig(self, *args, **kwargs)
318 318 extensions.wrapfunction(exchange, 'pull', pull_shallow)
319 319
320 320 # Wrap the stream logic to add requirements and to pass include/exclude
321 321 # patterns around.
322 322 def setup_streamout(repo, remote):
323 323 # Replace remote.stream_out with a version that sends file
324 324 # patterns.
325 325 def stream_out_shallow(orig):
326 326 caps = remote.capabilities()
327 327 if constants.NETWORK_CAP_LEGACY_SSH_GETFILES in caps:
328 328 opts = {}
329 329 if repo.includepattern:
330 330 opts[r'includepattern'] = '\0'.join(repo.includepattern)
331 331 if repo.excludepattern:
332 332 opts[r'excludepattern'] = '\0'.join(repo.excludepattern)
333 333 return remote._callstream('stream_out_shallow', **opts)
334 334 else:
335 335 return orig()
336 336 extensions.wrapfunction(remote, 'stream_out', stream_out_shallow)
337 337 def stream_wrap(orig, op):
338 338 setup_streamout(op.repo, op.remote)
339 339 return orig(op)
340 340 extensions.wrapfunction(
341 341 streamclone, 'maybeperformlegacystreamclone', stream_wrap)
342 342
343 343 def canperformstreamclone(orig, pullop, bundle2=False):
344 344 # remotefilelog is currently incompatible with the
345 345 # bundle2 flavor of streamclones, so force us to use
346 346 # v1 instead.
347 347 if 'v2' in pullop.remotebundle2caps.get('stream', []):
348 348 pullop.remotebundle2caps['stream'] = [
349 349 c for c in pullop.remotebundle2caps['stream']
350 350 if c != 'v2']
351 351 if bundle2:
352 352 return False, None
353 353 supported, requirements = orig(pullop, bundle2=bundle2)
354 354 if requirements is not None:
355 355 requirements.add(constants.SHALLOWREPO_REQUIREMENT)
356 356 return supported, requirements
357 357 extensions.wrapfunction(
358 358 streamclone, 'canperformstreamclone', canperformstreamclone)
359 359
360 360 try:
361 361 orig(ui, repo, *args, **opts)
362 362 finally:
363 363 if opts.get(r'shallow'):
364 364 for r in repos:
365 365 if util.safehasattr(r, 'fileservice'):
366 366 r.fileservice.close()
367 367
368 368 def debugdatashallow(orig, *args, **kwds):
369 369 oldlen = remotefilelog.remotefilelog.__len__
370 370 try:
371 371 remotefilelog.remotefilelog.__len__ = lambda x: 1
372 372 return orig(*args, **kwds)
373 373 finally:
374 374 remotefilelog.remotefilelog.__len__ = oldlen
375 375
376 376 def reposetup(ui, repo):
377 377 if not repo.local():
378 378 return
379 379
380 380 # put here intentionally because this doesn't work in uisetup
381 381 ui.setconfig('hooks', 'update.prefetch', wcpprefetch)
382 382 ui.setconfig('hooks', 'commit.prefetch', wcpprefetch)
383 383
384 384 isserverenabled = ui.configbool('remotefilelog', 'server')
385 385 isshallowclient = isenabled(repo)
386 386
387 387 if isserverenabled and isshallowclient:
388 388 raise RuntimeError("Cannot be both a server and shallow client.")
389 389
390 390 if isshallowclient:
391 391 setupclient(ui, repo)
392 392
393 393 if isserverenabled:
394 394 remotefilelogserver.setupserver(ui, repo)
395 395
396 396 def setupclient(ui, repo):
397 397 if not isinstance(repo, localrepo.localrepository):
398 398 return
399 399
400 400 # Even clients get the server setup since they need to have the
401 401 # wireprotocol endpoints registered.
402 402 remotefilelogserver.onetimesetup(ui)
403 403 onetimeclientsetup(ui)
404 404
405 405 shallowrepo.wraprepo(repo)
406 406 repo.store = shallowstore.wrapstore(repo.store)
407 407
408 408 clientonetime = False
409 409 def onetimeclientsetup(ui):
410 410 global clientonetime
411 411 if clientonetime:
412 412 return
413 413 clientonetime = True
414 414
415 415 changegroup.cgpacker = shallowbundle.shallowcg1packer
416 416
417 417 extensions.wrapfunction(changegroup, '_addchangegroupfiles',
418 418 shallowbundle.addchangegroupfiles)
419 419 extensions.wrapfunction(
420 420 changegroup, 'makechangegroup', shallowbundle.makechangegroup)
421 421
422 422 def storewrapper(orig, requirements, path, vfstype):
423 423 s = orig(requirements, path, vfstype)
424 424 if constants.SHALLOWREPO_REQUIREMENT in requirements:
425 425 s = shallowstore.wrapstore(s)
426 426
427 427 return s
428 428 extensions.wrapfunction(localrepo, 'makestore', storewrapper)
429 429
430 430 extensions.wrapfunction(exchange, 'pull', exchangepull)
431 431
432 432 # prefetch files before update
433 433 def applyupdates(orig, repo, actions, wctx, mctx, overwrite, labels=None):
434 434 if isenabled(repo):
435 435 manifest = mctx.manifest()
436 436 files = []
437 437 for f, args, msg in actions['g']:
438 438 files.append((f, hex(manifest[f])))
439 439 # batch fetch the needed files from the server
440 440 repo.fileservice.prefetch(files)
441 441 return orig(repo, actions, wctx, mctx, overwrite, labels=labels)
442 442 extensions.wrapfunction(merge, 'applyupdates', applyupdates)
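# Note: fileservice.prefetch(), used here and in the wrappers below, takes
# a list of (path, hex nodeid) pairs and fetches the ones missing from the
# local cache in one batch rather than one request per file.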
443 443
444 444 # Prefetch merge checkunknownfiles
445 445 def checkunknownfiles(orig, repo, wctx, mctx, force, actions,
446 446 *args, **kwargs):
447 447 if isenabled(repo):
448 448 files = []
449 449 sparsematch = repo.maybesparsematch(mctx.rev())
450 450 for f, (m, actionargs, msg) in actions.iteritems():
451 451 if sparsematch and not sparsematch(f):
452 452 continue
453 453 if m in ('c', 'dc', 'cm'):
454 454 files.append((f, hex(mctx.filenode(f))))
455 455 elif m == 'dg':
456 456 f2 = actionargs[0]
457 457 files.append((f2, hex(mctx.filenode(f2))))
458 458 # batch fetch the needed files from the server
459 459 repo.fileservice.prefetch(files)
460 460 return orig(repo, wctx, mctx, force, actions, *args, **kwargs)
461 461 extensions.wrapfunction(merge, '_checkunknownfiles', checkunknownfiles)
462 462
463 463 # Prefetch files before status attempts to look at their size and contents
464 464 def checklookup(orig, self, files):
465 465 repo = self._repo
466 466 if isenabled(repo):
467 467 prefetchfiles = []
468 468 for parent in self._parents:
469 469 for f in files:
470 470 if f in parent:
471 471 prefetchfiles.append((f, hex(parent.filenode(f))))
472 472 # batch fetch the needed files from the server
473 473 repo.fileservice.prefetch(prefetchfiles)
474 474 return orig(self, files)
475 475 extensions.wrapfunction(context.workingctx, '_checklookup', checklookup)
476 476
477 477 # Prefetch the logic that compares added and removed files for renames
478 478 def findrenames(orig, repo, matcher, added, removed, *args, **kwargs):
479 479 if isenabled(repo):
480 480 files = []
481 parentctx = repo['.']
481 pmf = repo['.'].manifest()
482 482 for f in removed:
483 files.append((f, hex(parentctx.filenode(f))))
483 if f in pmf:
484 files.append((f, hex(pmf[f])))
484 485 # batch fetch the needed files from the server
485 486 repo.fileservice.prefetch(files)
486 487 return orig(repo, matcher, added, removed, *args, **kwargs)
487 488 extensions.wrapfunction(scmutil, '_findrenames', findrenames)
488 489
489 490 # prefetch files before mergecopies check
490 491 def computenonoverlap(orig, repo, c1, c2, *args, **kwargs):
491 492 u1, u2 = orig(repo, c1, c2, *args, **kwargs)
492 493 if isenabled(repo):
493 494 m1 = c1.manifest()
494 495 m2 = c2.manifest()
495 496 files = []
496 497
497 498 sparsematch1 = repo.maybesparsematch(c1.rev())
498 499 if sparsematch1:
499 500 sparseu1 = []
500 501 for f in u1:
501 502 if sparsematch1(f):
502 503 files.append((f, hex(m1[f])))
503 504 sparseu1.append(f)
504 505 u1 = sparseu1
505 506
506 507 sparsematch2 = repo.maybesparsematch(c2.rev())
507 508 if sparsematch2:
508 509 sparseu2 = []
509 510 for f in u2:
510 511 if sparsematch2(f):
511 512 files.append((f, hex(m2[f])))
512 513 sparseu2.append(f)
513 514 u2 = sparseu2
514 515
515 516 # batch fetch the needed files from the server
516 517 repo.fileservice.prefetch(files)
517 518 return u1, u2
518 519 extensions.wrapfunction(copies, '_computenonoverlap', computenonoverlap)
519 520
520 521 # prefetch files before pathcopies check
521 522 def computeforwardmissing(orig, a, b, match=None):
522 523 missing = list(orig(a, b, match=match))
523 524 repo = a._repo
524 525 if isenabled(repo):
525 526 mb = b.manifest()
526 527
527 528 files = []
528 529 sparsematch = repo.maybesparsematch(b.rev())
529 530 if sparsematch:
530 531 sparsemissing = []
531 532 for f in missing:
532 533 if sparsematch(f):
533 534 files.append((f, hex(mb[f])))
534 535 sparsemissing.append(f)
535 536 missing = sparsemissing
536 537
537 538 # batch fetch the needed files from the server
538 539 repo.fileservice.prefetch(files)
539 540 return missing
540 541 extensions.wrapfunction(copies, '_computeforwardmissing',
541 542 computeforwardmissing)
542 543
543 544 # close cache miss server connection after the command has finished
544 545 def runcommand(orig, lui, repo, *args, **kwargs):
545 546 fileservice = None
546 547 # repo can be None when running in chg:
547 548 # - at startup, reposetup was called because serve is not norepo
548 549 # - a norepo command like "help" is called
549 550 if repo and isenabled(repo):
550 551 fileservice = repo.fileservice
551 552 try:
552 553 return orig(lui, repo, *args, **kwargs)
553 554 finally:
554 555 if fileservice:
555 556 fileservice.close()
556 557 extensions.wrapfunction(dispatch, 'runcommand', runcommand)
557 558
558 559 # disappointing hacks below
559 560 scmutil.getrenamedfn = getrenamedfn
560 561 extensions.wrapfunction(revset, 'filelog', filelogrevset)
561 562 revset.symbols['filelog'] = revset.filelog
562 563 extensions.wrapfunction(cmdutil, 'walkfilerevs', walkfilerevs)
563 564
564 565 # prevent strip from stripping remotefilelogs
565 566 def _collectbrokencsets(orig, repo, files, striprev):
566 567 if isenabled(repo):
567 568 files = [f for f in files if not repo.shallowmatch(f)]
568 569 return orig(repo, files, striprev)
569 570 extensions.wrapfunction(repair, '_collectbrokencsets', _collectbrokencsets)
570 571
571 572 # Don't commit filelogs until we know the commit hash, since the hash
572 573 # is present in the filelog blob.
573 574 # This violates Mercurial's filelog->manifest->changelog write order,
574 575 # but is generally fine for client repos.
575 576 pendingfilecommits = []
576 577 def addrawrevision(orig, self, rawtext, transaction, link, p1, p2, node,
577 578 flags, cachedelta=None, _metatuple=None):
578 579 if isinstance(link, int):
579 580 pendingfilecommits.append(
580 581 (self, rawtext, transaction, link, p1, p2, node, flags,
581 582 cachedelta, _metatuple))
582 583 return node
583 584 else:
584 585 return orig(self, rawtext, transaction, link, p1, p2, node, flags,
585 586 cachedelta, _metatuple=_metatuple)
586 587 extensions.wrapfunction(
587 588 remotefilelog.remotefilelog, 'addrawrevision', addrawrevision)
588 589
589 590 def changelogadd(orig, self, *args):
590 591 oldlen = len(self)
591 592 node = orig(self, *args)
592 593 newlen = len(self)
593 594 if oldlen != newlen:
594 595 for oldargs in pendingfilecommits:
595 596 log, rt, tr, link, p1, p2, n, fl, c, m = oldargs
596 597 linknode = self.node(link)
597 598 if linknode == node:
598 599 log.addrawrevision(rt, tr, linknode, p1, p2, n, fl, c, m)
599 600 else:
600 601 raise error.ProgrammingError(
601 602 'pending multiple integer revisions are not supported')
602 603 else:
603 604 # "link" is actually wrong here (it is set to len(changelog))
604 605 # if changelog remains unchanged, skip writing file revisions
605 606 # but still do a sanity check about pending multiple revisions
606 607 if len(set(x[3] for x in pendingfilecommits)) > 1:
607 608 raise error.ProgrammingError(
608 609 'pending multiple integer revisions are not supported')
609 610 del pendingfilecommits[:]
610 611 return node
611 612 extensions.wrapfunction(changelog.changelog, 'add', changelogadd)
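# To summarize the two wrappers above: addrawrevision() buffers any file
# revision whose linkrev is still an integer placeholder in
# pendingfilecommits, and the changelog.add() wrapper flushes that buffer
# once the real changelog node exists, so the commit hash can be embedded
# in each filelog blob despite the inverted write order.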
612 613
613 614 # changectx wrappers
614 615 def filectx(orig, self, path, fileid=None, filelog=None):
615 616 if fileid is None:
616 617 fileid = self.filenode(path)
617 618 if (isenabled(self._repo) and self._repo.shallowmatch(path)):
618 619 return remotefilectx.remotefilectx(self._repo, path,
619 620 fileid=fileid, changectx=self, filelog=filelog)
620 621 return orig(self, path, fileid=fileid, filelog=filelog)
621 622 extensions.wrapfunction(context.changectx, 'filectx', filectx)
622 623
623 624 def workingfilectx(orig, self, path, filelog=None):
624 625 if (isenabled(self._repo) and self._repo.shallowmatch(path)):
625 626 return remotefilectx.remoteworkingfilectx(self._repo,
626 627 path, workingctx=self, filelog=filelog)
627 628 return orig(self, path, filelog=filelog)
628 629 extensions.wrapfunction(context.workingctx, 'filectx', workingfilectx)
629 630
630 631 # prefetch required revisions before a diff
631 632 def trydiff(orig, repo, revs, ctx1, ctx2, modified, added, removed,
632 633 copy, getfilectx, *args, **kwargs):
633 634 if isenabled(repo):
634 635 prefetch = []
635 636 mf1 = ctx1.manifest()
636 637 for fname in modified + added + removed:
637 638 if fname in mf1:
638 639 fnode = getfilectx(fname, ctx1).filenode()
639 640 # fnode can be None if it's an edited working ctx file
640 641 if fnode:
641 642 prefetch.append((fname, hex(fnode)))
642 643 if fname not in removed:
643 644 fnode = getfilectx(fname, ctx2).filenode()
644 645 if fnode:
645 646 prefetch.append((fname, hex(fnode)))
646 647
647 648 repo.fileservice.prefetch(prefetch)
648 649
649 650 return orig(repo, revs, ctx1, ctx2, modified, added, removed,
650 651 copy, getfilectx, *args, **kwargs)
651 652 extensions.wrapfunction(patch, 'trydiff', trydiff)
652 653
653 654 # Prevent verify from processing files
654 655 # a stub for mercurial.hg.verify()
655 656 def _verify(orig, repo):
656 657 lock = repo.lock()
657 658 try:
658 659 return shallowverifier.shallowverifier(repo).verify()
659 660 finally:
660 661 lock.release()
661 662
662 663 extensions.wrapfunction(hg, 'verify', _verify)
663 664
664 665 scmutil.fileprefetchhooks.add('remotefilelog', _fileprefetchhook)
665 666
666 667 def getrenamedfn(repo, endrev=None):
667 668 rcache = {}
668 669
669 670 def getrenamed(fn, rev):
670 671 '''looks up all renames for a file (up to endrev) the first
671 672 time the file is given. It indexes on the changerev and only
672 673 parses the manifest if linkrev != changerev.
673 674 Returns rename info for fn at changerev rev.'''
674 675 if rev in rcache.setdefault(fn, {}):
675 676 return rcache[fn][rev]
676 677
677 678 try:
678 679 fctx = repo[rev].filectx(fn)
679 680 for ancestor in fctx.ancestors():
680 681 if ancestor.path() == fn:
681 682 renamed = ancestor.renamed()
682 683 rcache[fn][ancestor.rev()] = renamed and renamed[0]
683 684
684 685 renamed = fctx.renamed()
685 686 return renamed and renamed[0]
686 687 except error.LookupError:
687 688 return None
688 689
689 690 return getrenamed
690 691
691 692 def walkfilerevs(orig, repo, match, follow, revs, fncache):
692 693 if not isenabled(repo):
693 694 return orig(repo, match, follow, revs, fncache)
694 695
695 696 # remotefilelogs can't be walked in rev order, so throw.
696 697 # The caller will see the exception and walk the commit tree instead.
697 698 if not follow:
698 699 raise cmdutil.FileWalkError("Cannot walk via filelog")
699 700
700 701 wanted = set()
701 702 minrev, maxrev = min(revs), max(revs)
702 703
703 704 pctx = repo['.']
704 705 for filename in match.files():
705 706 if filename not in pctx:
706 707 raise error.Abort(_('cannot follow file not in parent '
707 708 'revision: "%s"') % filename)
708 709 fctx = pctx[filename]
709 710
710 711 linkrev = fctx.linkrev()
711 712 if linkrev >= minrev and linkrev <= maxrev:
712 713 fncache.setdefault(linkrev, []).append(filename)
713 714 wanted.add(linkrev)
714 715
715 716 for ancestor in fctx.ancestors():
716 717 linkrev = ancestor.linkrev()
717 718 if linkrev >= minrev and linkrev <= maxrev:
718 719 fncache.setdefault(linkrev, []).append(ancestor.path())
719 720 wanted.add(linkrev)
720 721
721 722 return wanted
722 723
723 724 def filelogrevset(orig, repo, subset, x):
724 725 """``filelog(pattern)``
725 726 Changesets connected to the specified filelog.
726 727
727 728 For performance reasons, ``filelog()`` does not show every changeset
728 729 that affects the requested file(s). See :hg:`help log` for details. For
729 730 a slower, more accurate result, use ``file()``.
730 731 """
731 732
732 733 if not isenabled(repo):
733 734 return orig(repo, subset, x)
734 735
735 736 # i18n: "filelog" is a keyword
736 737 pat = revset.getstring(x, _("filelog requires a pattern"))
737 738 m = match.match(repo.root, repo.getcwd(), [pat], default='relpath',
738 739 ctx=repo[None])
739 740 s = set()
740 741
741 742 if not match.patkind(pat):
742 743 # slow
743 744 for r in subset:
744 745 ctx = repo[r]
745 746 cfiles = ctx.files()
746 747 for f in m.files():
747 748 if f in cfiles:
748 749 s.add(ctx.rev())
749 750 break
750 751 else:
751 752 # partial
752 753 files = (f for f in repo[None] if m(f))
753 754 for f in files:
754 755 fctx = repo[None].filectx(f)
755 756 s.add(fctx.linkrev())
756 757 for actx in fctx.ancestors():
757 758 s.add(actx.linkrev())
758 759
759 760 return smartset.baseset([r for r in subset if r in s])
760 761
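# Example usage of the revset wrapped above (the path is hypothetical):
#   hg log -r 'filelog("mercurial/commands.py")'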
761 762 @command('gc', [], _('hg gc [REPO...]'), norepo=True)
762 763 def gc(ui, *args, **opts):
763 764 '''garbage collect the client and server filelog caches
764 765 '''
765 766 cachepaths = set()
766 767
767 768 # get the system client cache
768 769 systemcache = shallowutil.getcachepath(ui, allowempty=True)
769 770 if systemcache:
770 771 cachepaths.add(systemcache)
771 772
772 773 # get repo client and server cache
773 774 repopaths = []
774 775 pwd = ui.environ.get('PWD')
775 776 if pwd:
776 777 repopaths.append(pwd)
777 778
778 779 repopaths.extend(args)
779 780 repos = []
780 781 for repopath in repopaths:
781 782 try:
782 783 repo = hg.peer(ui, {}, repopath)
783 784 repos.append(repo)
784 785
785 786 repocache = shallowutil.getcachepath(repo.ui, allowempty=True)
786 787 if repocache:
787 788 cachepaths.add(repocache)
788 789 except error.RepoError:
789 790 pass
790 791
791 792 # gc client cache
792 793 for cachepath in cachepaths:
793 794 gcclient(ui, cachepath)
794 795
795 796 # gc server cache
796 797 for repo in repos:
797 798 remotefilelogserver.gcserver(ui, repo._repo)
798 799
799 800 def gcclient(ui, cachepath):
800 801 # get list of repos that use this cache
801 802 repospath = os.path.join(cachepath, 'repos')
802 803 if not os.path.exists(repospath):
803 804 ui.warn(_("no known cache at %s\n") % cachepath)
804 805 return
805 806
806 807 reposfile = open(repospath, 'rb')
807 808 repos = {r[:-1] for r in reposfile.readlines()}
808 809 reposfile.close()
809 810
810 811 # build list of useful files
811 812 validrepos = []
812 813 keepkeys = set()
813 814
814 815 sharedcache = None
815 816 filesrepacked = False
816 817
817 818 count = 0
818 819 progress = ui.makeprogress(_("analyzing repositories"), unit="repos",
819 820 total=len(repos))
820 821 for path in repos:
821 822 progress.update(count)
822 823 count += 1
823 824 try:
824 825 path = ui.expandpath(os.path.normpath(path))
825 826 except TypeError as e:
826 827 ui.warn(_("warning: malformed path: %r:%s\n") % (path, e))
827 828 traceback.print_exc()
828 829 continue
829 830 try:
830 831 peer = hg.peer(ui, {}, path)
831 832 repo = peer._repo
832 833 except error.RepoError:
833 834 continue
834 835
835 836 validrepos.append(path)
836 837
837 838 # Protect against any repo or config changes that have happened since
838 839 # this repo was added to the repos file. We'd rather this loop succeed
839 840 # and too much be deleted, than the loop fail and nothing gets deleted.
840 841 if not isenabled(repo):
841 842 continue
842 843
843 844 if not util.safehasattr(repo, 'name'):
844 845 ui.warn(_("repo %s is a misconfigured remotefilelog repo\n") % path)
845 846 continue
846 847
847 848 # If garbage collection on repack and repack on hg gc are enabled
848 849 # then loose files are repacked and garbage collected.
849 850 # Otherwise regular garbage collection is performed.
850 851 repackonhggc = repo.ui.configbool('remotefilelog', 'repackonhggc')
851 852 gcrepack = repo.ui.configbool('remotefilelog', 'gcrepack')
852 853 if repackonhggc and gcrepack:
853 854 try:
854 855 repackmod.incrementalrepack(repo)
855 856 filesrepacked = True
856 857 continue
857 858 except (IOError, repackmod.RepackAlreadyRunning):
858 859 # If repack cannot be performed due to not enough disk space
859 860 # continue doing garbage collection of loose files w/o repack
860 861 pass
861 862
862 863 reponame = repo.name
863 864 if not sharedcache:
864 865 sharedcache = repo.sharedstore
865 866
866 867 # Compute a keepset which is not garbage collected
867 868 def keyfn(fname, fnode):
868 869 return fileserverclient.getcachekey(reponame, fname, hex(fnode))
869 870 keepkeys = repackmod.keepset(repo, keyfn=keyfn, lastkeepkeys=keepkeys)
870 871
871 872 progress.complete()
872 873
873 874 # write list of valid repos back
874 875 oldumask = os.umask(0o002)
875 876 try:
876 877 reposfile = open(repospath, 'wb')
877 878 reposfile.writelines([("%s\n" % r) for r in validrepos])
878 879 reposfile.close()
879 880 finally:
880 881 os.umask(oldumask)
881 882
882 883 # prune cache
883 884 if sharedcache is not None:
884 885 sharedcache.gc(keepkeys)
885 886 elif not filesrepacked:
886 887 ui.warn(_("warning: no valid repos in repofile\n"))
887 888
888 889 def log(orig, ui, repo, *pats, **opts):
889 890 if not isenabled(repo):
890 891 return orig(ui, repo, *pats, **opts)
891 892
892 893 follow = opts.get(r'follow')
893 894 revs = opts.get(r'rev')
894 895 if pats:
895 896 # Force slowpath for non-follow patterns and follows that start from
896 897 # non-working-copy-parent revs.
897 898 if not follow or revs:
898 899 # This forces the slowpath
899 900 opts[r'removed'] = True
900 901
901 902 # If this is a non-follow log without any revs specified, recommend that
902 903 # the user add -f to speed it up.
903 904 if not follow and not revs:
904 905 match = scmutil.match(repo['.'], pats, pycompat.byteskwargs(opts))
905 906 isfile = not match.anypats()
906 907 if isfile:
907 908 for file in match.files():
908 909 if not os.path.isfile(repo.wjoin(file)):
909 910 isfile = False
910 911 break
911 912
912 913 if isfile:
913 914 ui.warn(_("warning: file log can be slow on large repos - " +
914 915 "use -f to speed it up\n"))
915 916
916 917 return orig(ui, repo, *pats, **opts)
917 918
918 919 def revdatelimit(ui, revset):
919 920 """Update revset so that only changesets no older than 'prefetchdays' days
920 921 are included. The default value is set to 14 days. If 'prefetchdays' is set
921 922 to zero or a negative value, the date restriction is not applied.
922 923 """
923 924 days = ui.configint('remotefilelog', 'prefetchdays')
924 925 if days > 0:
925 926 revset = '(%s) & date(-%s)' % (revset, days)
926 927 return revset
927 928
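# For example, with the default prefetchdays of 14,
# revdatelimit(ui, 'bookmark()') yields '(bookmark()) & date(-14)'.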
928 929 def readytofetch(repo):
929 930 """Check that enough time has passed since the last background prefetch.
930 931 This only relates to prefetches after operations that change the working
931 932 copy parent. Default delay between background prefetches is 2 minutes.
932 933 """
933 934 timeout = repo.ui.configint('remotefilelog', 'prefetchdelay')
934 935 fname = repo.vfs.join('lastprefetch')
935 936
936 937 ready = False
937 938 with open(fname, 'a'):
938 939 # the with construct above is used to avoid race conditions
939 940 modtime = os.path.getmtime(fname)
940 941 if (time.time() - modtime) > timeout:
941 942 os.utime(fname, None)
942 943 ready = True
943 944
944 945 return ready
945 946
946 947 def wcpprefetch(ui, repo, **kwargs):
947 948 """Prefetches in background revisions specified by bgprefetchrevs revset.
948 949 Does background repack if backgroundrepack flag is set in config.
949 950 """
950 951 shallow = isenabled(repo)
951 952 bgprefetchrevs = ui.config('remotefilelog', 'bgprefetchrevs')
952 953 isready = readytofetch(repo)
953 954
954 955 if not (shallow and bgprefetchrevs and isready):
955 956 return
956 957
957 958 bgrepack = repo.ui.configbool('remotefilelog', 'backgroundrepack')
958 959 # update a revset with a date limit
959 960 bgprefetchrevs = revdatelimit(ui, bgprefetchrevs)
960 961
961 962 def anon():
962 963 if util.safehasattr(repo, 'ranprefetch') and repo.ranprefetch:
963 964 return
964 965 repo.ranprefetch = True
965 966 repo.backgroundprefetch(bgprefetchrevs, repack=bgrepack)
966 967
967 968 repo._afterlock(anon)
968 969
969 970 def pull(orig, ui, repo, *pats, **opts):
970 971 result = orig(ui, repo, *pats, **opts)
971 972
972 973 if isenabled(repo):
973 974 # prefetch if it's configured
974 975 prefetchrevset = ui.config('remotefilelog', 'pullprefetch')
975 976 bgrepack = repo.ui.configbool('remotefilelog', 'backgroundrepack')
976 977 bgprefetch = repo.ui.configbool('remotefilelog', 'backgroundprefetch')
977 978
978 979 if prefetchrevset:
979 980 ui.status(_("prefetching file contents\n"))
980 981 revs = scmutil.revrange(repo, [prefetchrevset])
981 982 base = repo['.'].rev()
982 983 if bgprefetch:
983 984 repo.backgroundprefetch(prefetchrevset, repack=bgrepack)
984 985 else:
985 986 repo.prefetch(revs, base=base)
986 987 if bgrepack:
987 988 repackmod.backgroundrepack(repo, incremental=True)
988 989 elif bgrepack:
989 990 repackmod.backgroundrepack(repo, incremental=True)
990 991
991 992 return result
992 993
993 994 def exchangepull(orig, repo, remote, *args, **kwargs):
994 995 # Hook into the callstream/getbundle to insert bundle capabilities
995 996 # during a pull.
996 997 def localgetbundle(orig, source, heads=None, common=None, bundlecaps=None,
997 998 **kwargs):
998 999 if not bundlecaps:
999 1000 bundlecaps = set()
1000 1001 bundlecaps.add(constants.BUNDLE2_CAPABLITY)
1001 1002 return orig(source, heads=heads, common=common, bundlecaps=bundlecaps,
1002 1003 **kwargs)
1003 1004
1004 1005 if util.safehasattr(remote, '_callstream'):
1005 1006 remote._localrepo = repo
1006 1007 elif util.safehasattr(remote, 'getbundle'):
1007 1008 extensions.wrapfunction(remote, 'getbundle', localgetbundle)
1008 1009
1009 1010 return orig(repo, remote, *args, **kwargs)
1010 1011
1011 1012 def _fileprefetchhook(repo, revs, match):
1012 1013 if isenabled(repo):
1013 1014 allfiles = []
1014 1015 for rev in revs:
1015 1016 if rev == nodemod.wdirrev or rev is None:
1016 1017 continue
1017 1018 ctx = repo[rev]
1018 1019 mf = ctx.manifest()
1019 1020 sparsematch = repo.maybesparsematch(ctx.rev())
1020 1021 for path in ctx.walk(match):
1021 1022 if path.endswith('/'):
1022 1023 # Tree manifest that's being excluded as part of narrow
1023 1024 continue
1024 1025 if (not sparsematch or sparsematch(path)) and path in mf:
1025 1026 allfiles.append((path, hex(mf[path])))
1026 1027 repo.fileservice.prefetch(allfiles)
1027 1028
1028 1029 @command('debugremotefilelog', [
1029 1030 ('d', 'decompress', None, _('decompress the filelog first')),
1030 1031 ], _('hg debugremotefilelog <path>'), norepo=True)
1031 1032 def debugremotefilelog(ui, path, **opts):
1032 1033 return debugcommands.debugremotefilelog(ui, path, **opts)
1033 1034
1034 1035 @command('verifyremotefilelog', [
1035 1036 ('d', 'decompress', None, _('decompress the filelogs first')),
1036 1037 ], _('hg verifyremotefilelogs <directory>'), norepo=True)
1037 1038 def verifyremotefilelog(ui, path, **opts):
1038 1039 return debugcommands.verifyremotefilelog(ui, path, **opts)
1039 1040
1040 1041 @command('debugdatapack', [
1041 1042 ('', 'long', None, _('print the long hashes')),
1042 1043 ('', 'node', '', _('dump the contents of node'), 'NODE'),
1043 1044 ], _('hg debugdatapack <paths>'), norepo=True)
1044 1045 def debugdatapack(ui, *paths, **opts):
1045 1046 return debugcommands.debugdatapack(ui, *paths, **opts)
1046 1047
1047 1048 @command('debughistorypack', [
1048 1049 ], _('hg debughistorypack <path>'), norepo=True)
1049 1050 def debughistorypack(ui, path, **opts):
1050 1051 return debugcommands.debughistorypack(ui, path)
1051 1052
1052 1053 @command('debugkeepset', [
1053 1054 ], _('hg debugkeepset'))
1054 1055 def debugkeepset(ui, repo, **opts):
1055 1056 # The command is used to measure keepset computation time
1056 1057 def keyfn(fname, fnode):
1057 1058 return fileserverclient.getcachekey(repo.name, fname, hex(fnode))
1058 1059 repackmod.keepset(repo, keyfn)
1059 1060 return
1060 1061
1061 1062 @command('debugwaitonrepack', [
1062 1063 ], _('hg debugwaitonrepack'))
1063 1064 def debugwaitonrepack(ui, repo, **opts):
1064 1065 return debugcommands.debugwaitonrepack(repo)
1065 1066
1066 1067 @command('debugwaitonprefetch', [
1067 1068 ], _('hg debugwaitonprefetch'))
1068 1069 def debugwaitonprefetch(ui, repo, **opts):
1069 1070 return debugcommands.debugwaitonprefetch(repo)
1070 1071
1071 1072 def resolveprefetchopts(ui, opts):
1072 1073 if not opts.get('rev'):
1073 1074 revset = ['.', 'draft()']
1074 1075
1075 1076 prefetchrevset = ui.config('remotefilelog', 'pullprefetch', None)
1076 1077 if prefetchrevset:
1077 1078 revset.append('(%s)' % prefetchrevset)
1078 1079 bgprefetchrevs = ui.config('remotefilelog', 'bgprefetchrevs', None)
1079 1080 if bgprefetchrevs:
1080 1081 revset.append('(%s)' % bgprefetchrevs)
1081 1082 revset = '+'.join(revset)
1082 1083
1083 1084 # update a revset with a date limit
1084 1085 revset = revdatelimit(ui, revset)
1085 1086
1086 1087 opts['rev'] = [revset]
1087 1088
1088 1089 if not opts.get('base'):
1089 1090 opts['base'] = None
1090 1091
1091 1092 return opts
1092 1093
1093 1094 @command('prefetch', [
1094 1095 ('r', 'rev', [], _('prefetch the specified revisions'), _('REV')),
1095 1096 ('', 'repack', False, _('run repack after prefetch')),
1096 1097 ('b', 'base', '', _("rev that is assumed to already be local")),
1097 1098 ] + commands.walkopts, _('hg prefetch [OPTIONS] [FILE...]'))
1098 1099 def prefetch(ui, repo, *pats, **opts):
1099 1100 """prefetch file revisions from the server
1100 1101
1101 1102 Prefetches file revisions for the specified revs and stores them in the
1102 1103 local remotefilelog cache. If no rev is specified, the default rev is
1103 1104 used, which is the union of dot, draft, pullprefetch and bgprefetchrevs.
1104 1105 File names or patterns can be used to limit which files are downloaded.
1105 1106
1106 1107 Return 0 on success.
1107 1108 """
1108 1109 opts = pycompat.byteskwargs(opts)
1109 1110 if not isenabled(repo):
1110 1111 raise error.Abort(_("repo is not shallow"))
1111 1112
1112 1113 opts = resolveprefetchopts(ui, opts)
1113 1114 revs = scmutil.revrange(repo, opts.get('rev'))
1114 1115 repo.prefetch(revs, opts.get('base'), pats, opts)
1115 1116
1116 1117 # Run repack in background
1117 1118 if opts.get('repack'):
1118 1119 repackmod.backgroundrepack(repo, incremental=True)
1119 1120
1120 1121 @command('repack', [
1121 1122 ('', 'background', None, _('run in a background process'), None),
1122 1123 ('', 'incremental', None, _('do an incremental repack'), None),
1123 1124 ('', 'packsonly', None, _('only repack packs (skip loose objects)'), None),
1124 1125 ], _('hg repack [OPTIONS]'))
1125 1126 def repack_(ui, repo, *pats, **opts):
1126 1127 if opts.get(r'background'):
1127 1128 repackmod.backgroundrepack(repo, incremental=opts.get(r'incremental'),
1128 1129 packsonly=opts.get(r'packsonly', False))
1129 1130 return
1130 1131
1131 1132 options = {'packsonly': opts.get(r'packsonly')}
1132 1133
1133 1134 try:
1134 1135 if opts.get(r'incremental'):
1135 1136 repackmod.incrementalrepack(repo, options=options)
1136 1137 else:
1137 1138 repackmod.fullrepack(repo, options=options)
1138 1139 except repackmod.RepackAlreadyRunning as ex:
1139 1140 # Don't propagate the exception if the repack is already in
1140 1141 # progress, since we want the command to exit 0.
1141 1142 repo.ui.warn('%s\n' % ex)
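The accompanying test change below covers the fixed code path: a file is created and `hg add`ed, then deleted from disk before `hg addremove` runs, a sequence that used to crash in the rename-detection prefetch above.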
@@ -1,235 +1,238 @@
1 1 #require no-windows
2 2
3 3 $ . "$TESTDIR/remotefilelog-library.sh"
4 4
5 5 $ hg init master
6 6 $ cd master
7 7 $ cat >> .hg/hgrc <<EOF
8 8 > [remotefilelog]
9 9 > server=True
10 10 > EOF
11 11 $ echo x > x
12 12 $ echo z > z
13 13 $ hg commit -qAm x
14 14 $ echo x2 > x
15 15 $ echo y > y
16 16 $ hg commit -qAm y
17 17 $ hg bookmark foo
18 18
19 19 $ cd ..
20 20
21 21 # prefetch a revision
22 22
23 23 $ hgcloneshallow ssh://user@dummy/master shallow --noupdate
24 24 streaming all changes
25 25 2 files to transfer, 528 bytes of data
26 26 transferred 528 bytes in * seconds (*/sec) (glob)
27 27 searching for changes
28 28 no changes found
29 29 $ cd shallow
30 30
31 31 $ hg prefetch -r 0
32 32 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
33 33
34 34 $ hg cat -r 0 x
35 35 x
36 36
37 37 # prefetch with base
38 38
39 39 $ clearcache
40 40 $ hg prefetch -r 0::1 -b 0
41 41 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
42 42
43 43 $ hg cat -r 1 x
44 44 x2
45 45 $ hg cat -r 1 y
46 46 y
47 47
48 48 $ hg cat -r 0 x
49 49 x
50 50 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
51 51
52 52 $ hg cat -r 0 z
53 53 z
54 54 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
55 55
56 56 $ hg prefetch -r 0::1 --base 0
57 57 $ hg prefetch -r 0::1 -b 1
58 58 $ hg prefetch -r 0::1
59 59
60 60 # prefetch a range of revisions
61 61
62 62 $ clearcache
63 63 $ hg prefetch -r 0::1
64 64 4 files fetched over 1 fetches - (4 misses, 0.00% hit ratio) over *s (glob)
65 65
66 66 $ hg cat -r 0 x
67 67 x
68 68 $ hg cat -r 1 x
69 69 x2
70 70
71 71 # prefetch certain files
72 72
73 73 $ clearcache
74 74 $ hg prefetch -r 1 x
75 75 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
76 76
77 77 $ hg cat -r 1 x
78 78 x2
79 79
80 80 $ hg cat -r 1 y
81 81 y
82 82 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
83 83
84 84 # prefetch on pull when configured
85 85
86 86 $ printf "[remotefilelog]\npullprefetch=bookmark()\n" >> .hg/hgrc
87 87 $ hg strip tip
88 88 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/109c3a557a73-3f43405e-backup.hg (glob)
89 89
90 90 $ clearcache
91 91 $ hg pull
92 92 pulling from ssh://user@dummy/master
93 93 searching for changes
94 94 adding changesets
95 95 adding manifests
96 96 adding file changes
97 97 added 1 changesets with 0 changes to 0 files
98 98 updating bookmark foo
99 99 new changesets 109c3a557a73
100 100 (run 'hg update' to get a working copy)
101 101 prefetching file contents
102 102 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over *s (glob)
103 103
104 104 $ hg up tip
105 105 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
106 106
107 107 # prefetch only fetches changes not in working copy
108 108
109 109 $ hg strip tip
110 110 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
111 111 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/109c3a557a73-3f43405e-backup.hg (glob)
112 112 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
113 113 $ clearcache
114 114
115 115 $ hg pull
116 116 pulling from ssh://user@dummy/master
117 117 searching for changes
118 118 adding changesets
119 119 adding manifests
120 120 adding file changes
121 121 added 1 changesets with 0 changes to 0 files
122 122 updating bookmark foo
123 123 new changesets 109c3a557a73
124 124 (run 'hg update' to get a working copy)
125 125 prefetching file contents
126 126 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
127 127
128 128 # Make some local commits that produce the same file versions as are on the
129 129 # server. To simulate a situation where we have local commits that were somehow
130 130 # pushed, and we will soon pull.
131 131
132 132 $ hg prefetch -r 'all()'
133 133 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
134 134 $ hg strip -q -r 0
135 135 $ echo x > x
136 136 $ echo z > z
137 137 $ hg commit -qAm x
138 138 $ echo x2 > x
139 139 $ echo y > y
140 140 $ hg commit -qAm y
141 141
142 142 # prefetch server versions, even if local versions are available
143 143
144 144 $ clearcache
145 145 $ hg strip -q tip
146 146 $ hg pull
147 147 pulling from ssh://user@dummy/master
148 148 searching for changes
149 149 adding changesets
150 150 adding manifests
151 151 adding file changes
152 152 added 1 changesets with 0 changes to 0 files
153 153 updating bookmark foo
154 154 new changesets 109c3a557a73
155 155 1 local changesets published (?)
156 156 (run 'hg update' to get a working copy)
157 157 prefetching file contents
158 158 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
159 159
160 160 $ cd ..
161 161
162 162 # Prefetch unknown files during checkout
163 163
164 164 $ hgcloneshallow ssh://user@dummy/master shallow2
165 165 streaming all changes
166 166 2 files to transfer, 528 bytes of data
167 167 transferred 528 bytes in * seconds * (glob)
168 168 searching for changes
169 169 no changes found
170 170 updating to branch default
171 171 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
172 172 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
173 173 $ cd shallow2
174 174 $ hg up -q null
175 175 $ echo x > x
176 176 $ echo y > y
177 177 $ echo z > z
178 178 $ clearcache
179 179 $ hg up tip
180 180 x: untracked file differs
181 181 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over * (glob)
182 182 abort: untracked files in working directory differ from files in requested revision
183 183 [255]
184 184 $ hg revert --all
185 185
186 186 # Test batch fetching of lookup files during hg status
187 187 $ hg up --clean tip
188 188 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
189 189 $ hg debugrebuilddirstate
190 190 $ clearcache
191 191 $ hg status
192 192 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over * (glob)
193 193
194 194 # Prefetch during addremove rename detection
195 195 $ hg up -q --clean tip
196 196 $ hg revert --all
197 197 $ mv x x2
198 198 $ mv y y2
199 199 $ mv z z2
200 $ echo a > a
201 $ hg add a
202 $ rm a
200 203 $ clearcache
201 204 $ hg addremove -s 50 > /dev/null
202 205 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over * (glob)
203 206 $ hg revert --all
204 207 forgetting x2
205 208 forgetting y2
206 209 forgetting z2
207 210 undeleting x
208 211 undeleting y
209 212 undeleting z
210 213
211 214
212 215 # Revert across double renames. Note: the scary "abort" error is because
213 216 # of https://bz.mercurial-scm.org/5419.
214 217
215 218 $ cd ../master
216 219 $ hg mv z z2
217 220 $ hg commit -m 'move z -> z2'
218 221 $ cd ../shallow2
219 222 $ hg pull -q
220 223 $ clearcache
221 224 $ hg mv y y2
222 225 y2: not overwriting - file exists
223 226 ('hg rename --after' to record the rename)
224 227 [1]
225 228 $ hg mv x x2
226 229 x2: not overwriting - file exists
227 230 ('hg rename --after' to record the rename)
228 231 [1]
229 232 $ hg mv z2 z3
230 233 z2: not copying - file is not managed
231 234 abort: no files to copy
232 235 [255]
233 236 $ hg revert -a -r 1 || true
234 237 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over * (glob)
235 238 abort: z2@109c3a557a73: not found in manifest! (?)