remotefilelog: avoid accessing repo instance after dispatch...
Martin von Zweigbergk
r40611:157f0e29 default
@@ -1,1136 +1,1139 @@
1 1 # __init__.py - remotefilelog extension
2 2 #
3 3 # Copyright 2013 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7 """remotefilelog causes Mercurial to lazilly fetch file contents (EXPERIMENTAL)
8 8
9 9 This extension is HIGHLY EXPERIMENTAL. There are NO BACKWARDS COMPATIBILITY
10 10 GUARANTEES. This means that repositories created with this extension may
11 11 only be usable with the exact version of this extension/Mercurial that was
12 12 used. The extension attempts to enforce this in order to prevent repository
13 13 corruption.
14 14
15 15 remotefilelog works by fetching file contents lazily and storing them
16 16 in a cache on the client rather than in revlogs. This allows enormous
17 17 histories to be transferred only partially, making them easier to
18 18 operate on.
19 19
20 20 Configs:
21 21
22 22 ``packs.maxchainlen`` specifies the maximum delta chain length in pack files
23 23
24 24 ``packs.maxpacksize`` specifies the maximum pack file size
25 25
26 26 ``packs.maxpackfilecount`` specifies the maximum number of packs in the
27 27 shared cache (trees only for now)
28 28
29 29 ``remotefilelog.backgroundprefetch`` runs prefetch in background when True
30 30
31 31 ``remotefilelog.bgprefetchrevs`` specifies revisions to fetch on commit and
32 32 update, and on other commands that change the working copy parent. Different from pullprefetch.
33 33
34 34 ``remotefilelog.gcrepack`` does garbage collection during repack when True
35 35
36 36 ``remotefilelog.nodettl`` specifies maximum TTL of a node in seconds before
37 37 it is garbage collected
38 38
39 39 ``remotefilelog.repackonhggc`` runs repack on hg gc when True
40 40
41 41 ``remotefilelog.prefetchdays`` specifies the maximum age of a commit in
42 42 days after which it is no longer prefetched.
43 43
44 44 ``remotefilelog.prefetchdelay`` specifies delay between background
45 45 prefetches in seconds after operations that change the working copy parent
46 46
47 47 ``remotefilelog.data.gencountlimit`` constrains the minimum number of data
48 48 pack files required to be considered part of a generation. In particular,
49 49 minimum number of pack files > gencountlimit.
50 50
51 51 ``remotefilelog.data.generations`` list for specifying the lower bound of
52 52 each generation of the data pack files. For example, list ['100MB','1MB']
53 53 or ['1MB', '100MB'] will lead to three generations: [0, 1MB), [
54 54 1MB, 100MB) and [100MB, infinity).
55 55
56 56 ``remotefilelog.data.maxrepackpacks`` the maximum number of pack files to
57 57 include in an incremental data repack.
58 58
59 59 ``remotefilelog.data.repackmaxpacksize`` the maximum size of a pack file for
60 60 it to be considered for an incremental data repack.
61 61
62 62 ``remotefilelog.data.repacksizelimit`` the maximum total size of pack files
63 63 to include in an incremental data repack.
64 64
65 65 ``remotefilelog.history.gencountlimit`` constrains the minimum number of
66 66 history pack files required to be considered part of a generation. In
67 67 particular, minimum number of pack files > gencountlimit.
68 68
69 69 ``remotefilelog.history.generations`` list for specifying the lower bound of
70 70 each generation of the history pack files. For example, list [
71 71 '100MB', '1MB'] or ['1MB', '100MB'] will lead to three generations: [
72 72 0, 1MB), [1MB, 100MB) and [100MB, infinity).
73 73
74 74 ``remotefilelog.history.maxrepackpacks`` the maximum number of pack files to
75 75 include in an incremental history repack.
76 76
77 77 ``remotefilelog.history.repackmaxpacksize`` the maximum size of a pack file
78 78 for it to be considered for an incremental history repack.
79 79
80 80 ``remotefilelog.history.repacksizelimit`` the maximum total size of pack
81 81 files to include in an incremental history repack.
82 82
83 83 ``remotefilelog.backgroundrepack`` automatically consolidate packs in the
84 84 background
85 85
86 86 ``remotefilelog.cachepath`` path to cache
87 87
88 88 ``remotefilelog.cachegroup`` if set, make cache directory sgid to this
89 89 group
90 90
91 91 ``remotefilelog.cacheprocess`` binary to invoke for fetching file data
92 92
93 93 ``remotefilelog.debug`` turn on remotefilelog-specific debug output
94 94
95 95 ``remotefilelog.excludepattern`` pattern of files to exclude from pulls
96 96
97 97 ``remotefilelog.includepattern`` pattern of files to include in pulls
98 98
99 99 ``remotefilelog.fetchwarning`` message to print when too many
100 100 single-file fetches occur
101 101
102 102 ``remotefilelog.getfilesstep`` number of files to request in a single RPC
103 103
104 104 ``remotefilelog.getfilestype`` if set to 'threaded' use threads to fetch
105 105 files, otherwise use optimistic fetching
106 106
107 107 ``remotefilelog.pullprefetch`` revset for selecting files that should be
108 108 eagerly downloaded rather than lazily
109 109
110 110 ``remotefilelog.reponame`` name of the repo. If set, used to partition
111 111 data from other repos in a shared store.
112 112
113 113 ``remotefilelog.server`` if true, enable server-side functionality
114 114
115 115 ``remotefilelog.servercachepath`` path for caching blobs on the server
116 116
117 117 ``remotefilelog.serverexpiration`` number of days to keep cached server
118 118 blobs
119 119
120 120 ``remotefilelog.validatecache`` if set, check cache entries for corruption
121 121 before returning blobs
122 122
123 123 ``remotefilelog.validatecachelog`` if set, check cache entries for
124 124 corruption before returning metadata
125 125
126 126 """
127 127 from __future__ import absolute_import
128 128
129 129 import os
130 130 import time
131 131 import traceback
132 132
133 133 from mercurial.node import hex
134 134 from mercurial.i18n import _
135 135 from mercurial import (
136 136 changegroup,
137 137 changelog,
138 138 cmdutil,
139 139 commands,
140 140 configitems,
141 141 context,
142 142 copies,
143 143 debugcommands as hgdebugcommands,
144 144 dispatch,
145 145 error,
146 146 exchange,
147 147 extensions,
148 148 hg,
149 149 localrepo,
150 150 match,
151 151 merge,
152 152 node as nodemod,
153 153 patch,
154 154 registrar,
155 155 repair,
156 156 repoview,
157 157 revset,
158 158 scmutil,
159 159 smartset,
160 160 streamclone,
161 161 templatekw,
162 162 util,
163 163 )
164 164 from . import (
165 165 constants,
166 166 debugcommands,
167 167 fileserverclient,
168 168 remotefilectx,
169 169 remotefilelog,
170 170 remotefilelogserver,
171 171 repack as repackmod,
172 172 shallowbundle,
173 173 shallowrepo,
174 174 shallowstore,
175 175 shallowutil,
176 176 shallowverifier,
177 177 )
178 178
179 179 # ensures debug commands are registered
180 180 hgdebugcommands.command
181 181
182 182 cmdtable = {}
183 183 command = registrar.command(cmdtable)
184 184
185 185 configtable = {}
186 186 configitem = registrar.configitem(configtable)
187 187
188 188 configitem('remotefilelog', 'debug', default=False)
189 189
190 190 configitem('remotefilelog', 'reponame', default='')
191 191 configitem('remotefilelog', 'cachepath', default=None)
192 192 configitem('remotefilelog', 'cachegroup', default=None)
193 193 configitem('remotefilelog', 'cacheprocess', default=None)
194 194 configitem('remotefilelog', 'cacheprocess.includepath', default=None)
195 195 configitem("remotefilelog", "cachelimit", default="1000 GB")
196 196
197 197 configitem('remotefilelog', 'fallbackpath', default=configitems.dynamicdefault,
198 198 alias=[('remotefilelog', 'fallbackrepo')])
199 199
200 200 configitem('remotefilelog', 'validatecachelog', default=None)
201 201 configitem('remotefilelog', 'validatecache', default='on')
202 202 configitem('remotefilelog', 'server', default=None)
203 203 configitem('remotefilelog', 'servercachepath', default=None)
204 204 configitem("remotefilelog", "serverexpiration", default=30)
205 205 configitem('remotefilelog', 'backgroundrepack', default=False)
206 206 configitem('remotefilelog', 'bgprefetchrevs', default=None)
207 207 configitem('remotefilelog', 'pullprefetch', default=None)
208 208 configitem('remotefilelog', 'backgroundprefetch', default=False)
209 209 configitem('remotefilelog', 'prefetchdelay', default=120)
210 210 configitem('remotefilelog', 'prefetchdays', default=14)
211 211
212 212 configitem('remotefilelog', 'getfilesstep', default=10000)
213 213 configitem('remotefilelog', 'getfilestype', default='optimistic')
214 214 configitem('remotefilelog', 'batchsize', configitems.dynamicdefault)
215 215 configitem('remotefilelog', 'fetchwarning', default='')
216 216
217 217 configitem('remotefilelog', 'includepattern', default=None)
218 218 configitem('remotefilelog', 'excludepattern', default=None)
219 219
220 220 configitem('remotefilelog', 'gcrepack', default=False)
221 221 configitem('remotefilelog', 'repackonhggc', default=False)
222 222 configitem('repack', 'chainorphansbysize', default=True)
223 223
224 224 configitem('packs', 'maxpacksize', default=0)
225 225 configitem('packs', 'maxchainlen', default=1000)
226 226
227 227 # default TTL limit is 30 days
228 228 _defaultlimit = 60 * 60 * 24 * 30
229 229 configitem('remotefilelog', 'nodettl', default=_defaultlimit)
230 230
231 231 configitem('remotefilelog', 'data.gencountlimit', default=2)
232 232 configitem('remotefilelog', 'data.generations',
233 233 default=['1GB', '100MB', '1MB'])
234 234 configitem('remotefilelog', 'data.maxrepackpacks', default=50)
235 235 configitem('remotefilelog', 'data.repackmaxpacksize', default='4GB')
236 236 configitem('remotefilelog', 'data.repacksizelimit', default='100MB')
237 237
238 238 configitem('remotefilelog', 'history.gencountlimit', default=2)
239 239 configitem('remotefilelog', 'history.generations', default=['100MB'])
240 240 configitem('remotefilelog', 'history.maxrepackpacks', default=50)
241 241 configitem('remotefilelog', 'history.repackmaxpacksize', default='400MB')
242 242 configitem('remotefilelog', 'history.repacksizelimit', default='100MB')
243 243
244 244 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
245 245 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
246 246 # be specifying the version(s) of Mercurial they are tested with, or
247 247 # leave the attribute unspecified.
248 248 testedwith = 'ships-with-hg-core'
249 249
250 250 repoclass = localrepo.localrepository
251 251 repoclass._basesupported.add(constants.SHALLOWREPO_REQUIREMENT)
252 252
253 253 isenabled = shallowutil.isenabled
254 254
255 255 def uisetup(ui):
256 256 """Wraps user facing Mercurial commands to swap them out with shallow
257 257 versions.
258 258 """
259 259 hg.wirepeersetupfuncs.append(fileserverclient.peersetup)
260 260
261 261 entry = extensions.wrapcommand(commands.table, 'clone', cloneshallow)
262 262 entry[1].append(('', 'shallow', None,
263 263 _("create a shallow clone which uses remote file "
264 264 "history")))
265 265
266 266 extensions.wrapcommand(commands.table, 'debugindex',
267 267 debugcommands.debugindex)
268 268 extensions.wrapcommand(commands.table, 'debugindexdot',
269 269 debugcommands.debugindexdot)
270 270 extensions.wrapcommand(commands.table, 'log', log)
271 271 extensions.wrapcommand(commands.table, 'pull', pull)
272 272
273 273 # Prevent 'hg manifest --all'
274 274 def _manifest(orig, ui, repo, *args, **opts):
275 275 if (isenabled(repo) and opts.get('all')):
276 276 raise error.Abort(_("--all is not supported in a shallow repo"))
277 277
278 278 return orig(ui, repo, *args, **opts)
279 279 extensions.wrapcommand(commands.table, "manifest", _manifest)
280 280
281 281 # Wrap remotefilelog with lfs code
282 282 def _lfsloaded(loaded=False):
283 283 lfsmod = None
284 284 try:
285 285 lfsmod = extensions.find('lfs')
286 286 except KeyError:
287 287 pass
288 288 if lfsmod:
289 289 lfsmod.wrapfilelog(remotefilelog.remotefilelog)
290 290 fileserverclient._lfsmod = lfsmod
291 291 extensions.afterloaded('lfs', _lfsloaded)
292 292
293 293 # debugdata needs remotefilelog.len to work
294 294 extensions.wrapcommand(commands.table, 'debugdata', debugdatashallow)
295 295
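Nearly everything in this file relies on Mercurial's standard wrapping APIs: `extensions.wrapcommand` intercepts a command-table entry and `extensions.wrapfunction` intercepts an arbitrary function, each passing the original callable as the wrapper's first argument. A minimal sketch of the pattern (the wrapper name and debug message are made up for illustration):

```python
from mercurial import exchange, extensions

def pullwrapper(orig, repo, remote, *args, **kwargs):
    # run extension-specific work, then delegate to the wrapped original
    repo.ui.debug('remotefilelog: pulling\n')
    return orig(repo, remote, *args, **kwargs)

extensions.wrapfunction(exchange, 'pull', pullwrapper)
```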
296 296 def cloneshallow(orig, ui, repo, *args, **opts):
297 297 if opts.get('shallow'):
298 298 repos = []
299 299 def pull_shallow(orig, self, *args, **kwargs):
300 300 if not isenabled(self):
301 301 repos.append(self.unfiltered())
302 302 # set up the client hooks so the post-clone update works
303 303 setupclient(self.ui, self.unfiltered())
304 304
305 305 # setupclient fixed the class on the repo itself
306 306 # but we also need to fix it on the repoview
307 307 if isinstance(self, repoview.repoview):
308 308 self.__class__.__bases__ = (self.__class__.__bases__[0],
309 309 self.unfiltered().__class__)
310 310 self.requirements.add(constants.SHALLOWREPO_REQUIREMENT)
311 311 self._writerequirements()
312 312
313 313 # Since setupclient hadn't been called, exchange.pull was not
314 314 # wrapped. So we need to manually invoke our version of it.
315 315 return exchangepull(orig, self, *args, **kwargs)
316 316 else:
317 317 return orig(self, *args, **kwargs)
318 318 extensions.wrapfunction(exchange, 'pull', pull_shallow)
319 319
320 320 # Wrap the stream logic to add requirements and to pass include/exclude
321 321 # patterns around.
322 322 def setup_streamout(repo, remote):
323 323 # Replace remote.stream_out with a version that sends file
324 324 # patterns.
325 325 def stream_out_shallow(orig):
326 326 caps = remote.capabilities()
327 327 if constants.NETWORK_CAP_LEGACY_SSH_GETFILES in caps:
328 328 opts = {}
329 329 if repo.includepattern:
330 330 opts['includepattern'] = '\0'.join(repo.includepattern)
331 331 if repo.excludepattern:
332 332 opts['excludepattern'] = '\0'.join(repo.excludepattern)
333 333 return remote._callstream('stream_out_shallow', **opts)
334 334 else:
335 335 return orig()
336 336 extensions.wrapfunction(remote, 'stream_out', stream_out_shallow)
337 337 def stream_wrap(orig, op):
338 338 setup_streamout(op.repo, op.remote)
339 339 return orig(op)
340 340 extensions.wrapfunction(
341 341 streamclone, 'maybeperformlegacystreamclone', stream_wrap)
342 342
343 343 def canperformstreamclone(orig, pullop, bundle2=False):
344 344 # remotefilelog is currently incompatible with the
345 345 # bundle2 flavor of streamclones, so force us to use
346 346 # v1 instead.
347 347 if 'v2' in pullop.remotebundle2caps.get('stream', []):
348 348 pullop.remotebundle2caps['stream'] = [
349 349 c for c in pullop.remotebundle2caps['stream']
350 350 if c != 'v2']
351 351 if bundle2:
352 352 return False, None
353 353 supported, requirements = orig(pullop, bundle2=bundle2)
354 354 if requirements is not None:
355 355 requirements.add(constants.SHALLOWREPO_REQUIREMENT)
356 356 return supported, requirements
357 357 extensions.wrapfunction(
358 358 streamclone, 'canperformstreamclone', canperformstreamclone)
359 359
360 360 try:
361 361 orig(ui, repo, *args, **opts)
362 362 finally:
363 363 if opts.get('shallow'):
364 364 for r in repos:
365 365 if util.safehasattr(r, 'fileservice'):
366 366 r.fileservice.close()
367 367
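With the `--shallow` flag registered on `clone` above and the extension enabled, a shallow clone is requested from the command line like this (server URL illustrative):

```
$ hg clone --shallow ssh://user@server/repo shallow-copy
```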
368 368 def debugdatashallow(orig, *args, **kwds):
369 369 oldlen = remotefilelog.remotefilelog.__len__
370 370 try:
371 371 remotefilelog.remotefilelog.__len__ = lambda x: 1
372 372 return orig(*args, **kwds)
373 373 finally:
374 374 remotefilelog.remotefilelog.__len__ = oldlen
375 375
376 376 def reposetup(ui, repo):
377 377 if not isinstance(repo, localrepo.localrepository):
378 378 return
379 379
380 380 # put here intentionally because it doesn't work in uisetup
381 381 ui.setconfig('hooks', 'update.prefetch', wcpprefetch)
382 382 ui.setconfig('hooks', 'commit.prefetch', wcpprefetch)
383 383
384 384 isserverenabled = ui.configbool('remotefilelog', 'server')
385 385 isshallowclient = isenabled(repo)
386 386
387 387 if isserverenabled and isshallowclient:
388 388 raise RuntimeError("Cannot be both a server and shallow client.")
389 389
390 390 if isshallowclient:
391 391 setupclient(ui, repo)
392 392
393 393 if isserverenabled:
394 394 remotefilelogserver.setupserver(ui, repo)
395 395
396 396 def setupclient(ui, repo):
397 397 if not isinstance(repo, localrepo.localrepository):
398 398 return
399 399
400 400 # Even clients get the server setup since they need to have the
401 401 # wireprotocol endpoints registered.
402 402 remotefilelogserver.onetimesetup(ui)
403 403 onetimeclientsetup(ui)
404 404
405 405 shallowrepo.wraprepo(repo)
406 406 repo.store = shallowstore.wrapstore(repo.store)
407 407
408 408 clientonetime = False
409 409 def onetimeclientsetup(ui):
410 410 global clientonetime
411 411 if clientonetime:
412 412 return
413 413 clientonetime = True
414 414
415 415 changegroup.cgpacker = shallowbundle.shallowcg1packer
416 416
417 417 extensions.wrapfunction(changegroup, '_addchangegroupfiles',
418 418 shallowbundle.addchangegroupfiles)
419 419 extensions.wrapfunction(
420 420 changegroup, 'makechangegroup', shallowbundle.makechangegroup)
421 421
422 422 def storewrapper(orig, requirements, path, vfstype):
423 423 s = orig(requirements, path, vfstype)
424 424 if constants.SHALLOWREPO_REQUIREMENT in requirements:
425 425 s = shallowstore.wrapstore(s)
426 426
427 427 return s
428 428 extensions.wrapfunction(localrepo, 'makestore', storewrapper)
429 429
430 430 extensions.wrapfunction(exchange, 'pull', exchangepull)
431 431
432 432 # prefetch files before update
433 433 def applyupdates(orig, repo, actions, wctx, mctx, overwrite, labels=None):
434 434 if isenabled(repo):
435 435 manifest = mctx.manifest()
436 436 files = []
437 437 for f, args, msg in actions['g']:
438 438 files.append((f, hex(manifest[f])))
439 439 # batch fetch the needed files from the server
440 440 repo.fileservice.prefetch(files)
441 441 return orig(repo, actions, wctx, mctx, overwrite, labels=labels)
442 442 extensions.wrapfunction(merge, 'applyupdates', applyupdates)
443 443
444 444 # Prefetch merge checkunknownfiles
445 445 def checkunknownfiles(orig, repo, wctx, mctx, force, actions,
446 446 *args, **kwargs):
447 447 if isenabled(repo):
448 448 files = []
449 449 sparsematch = repo.maybesparsematch(mctx.rev())
450 450 for f, (m, actionargs, msg) in actions.iteritems():
451 451 if sparsematch and not sparsematch(f):
452 452 continue
453 453 if m in ('c', 'dc', 'cm'):
454 454 files.append((f, hex(mctx.filenode(f))))
455 455 elif m == 'dg':
456 456 f2 = actionargs[0]
457 457 files.append((f2, hex(mctx.filenode(f2))))
458 458 # batch fetch the needed files from the server
459 459 repo.fileservice.prefetch(files)
460 460 return orig(repo, wctx, mctx, force, actions, *args, **kwargs)
461 461 extensions.wrapfunction(merge, '_checkunknownfiles', checkunknownfiles)
462 462
463 463 # Prefetch files before status attempts to look at their size and contents
464 464 def checklookup(orig, self, files):
465 465 repo = self._repo
466 466 if isenabled(repo):
467 467 prefetchfiles = []
468 468 for parent in self._parents:
469 469 for f in files:
470 470 if f in parent:
471 471 prefetchfiles.append((f, hex(parent.filenode(f))))
472 472 # batch fetch the needed files from the server
473 473 repo.fileservice.prefetch(prefetchfiles)
474 474 return orig(self, files)
475 475 extensions.wrapfunction(context.workingctx, '_checklookup', checklookup)
476 476
477 477 # Prefetch the logic that compares added and removed files for renames
478 478 def findrenames(orig, repo, matcher, added, removed, *args, **kwargs):
479 479 if isenabled(repo):
480 480 files = []
481 481 parentctx = repo['.']
482 482 for f in removed:
483 483 files.append((f, hex(parentctx.filenode(f))))
484 484 # batch fetch the needed files from the server
485 485 repo.fileservice.prefetch(files)
486 486 return orig(repo, matcher, added, removed, *args, **kwargs)
487 487 extensions.wrapfunction(scmutil, '_findrenames', findrenames)
488 488
489 489 # prefetch files before mergecopies check
490 490 def computenonoverlap(orig, repo, c1, c2, *args, **kwargs):
491 491 u1, u2 = orig(repo, c1, c2, *args, **kwargs)
492 492 if isenabled(repo):
493 493 m1 = c1.manifest()
494 494 m2 = c2.manifest()
495 495 files = []
496 496
497 497 sparsematch1 = repo.maybesparsematch(c1.rev())
498 498 if sparsematch1:
499 499 sparseu1 = []
500 500 for f in u1:
501 501 if sparsematch1(f):
502 502 files.append((f, hex(m1[f])))
503 503 sparseu1.append(f)
504 504 u1 = sparseu1
505 505
506 506 sparsematch2 = repo.maybesparsematch(c2.rev())
507 507 if sparsematch2:
508 508 sparseu2 = []
509 509 for f in u2:
510 510 if sparsematch2(f):
511 511 files.append((f, hex(m2[f])))
512 512 sparseu2.append(f)
513 513 u2 = sparseu2
514 514
515 515 # batch fetch the needed files from the server
516 516 repo.fileservice.prefetch(files)
517 517 return u1, u2
518 518 extensions.wrapfunction(copies, '_computenonoverlap', computenonoverlap)
519 519
520 520 # prefetch files before pathcopies check
521 521 def computeforwardmissing(orig, a, b, match=None):
522 522 missing = list(orig(a, b, match=match))
523 523 repo = a._repo
524 524 if isenabled(repo):
525 525 mb = b.manifest()
526 526
527 527 files = []
528 528 sparsematch = repo.maybesparsematch(b.rev())
529 529 if sparsematch:
530 530 sparsemissing = []
531 531 for f in missing:
532 532 if sparsematch(f):
533 533 files.append((f, hex(mb[f])))
534 534 sparsemissing.append(f)
535 535 missing = sparsemissing
536 536
537 537 # batch fetch the needed files from the server
538 538 repo.fileservice.prefetch(files)
539 539 return missing
540 540 extensions.wrapfunction(copies, '_computeforwardmissing',
541 541 computeforwardmissing)
542 542
543 543 # close cache miss server connection after the command has finished
544 544 def runcommand(orig, lui, repo, *args, **kwargs):
- 545 try:
- 546 return orig(lui, repo, *args, **kwargs)
- 547 finally:
+ 545 fileservice = None
548 546 # repo can be None when running in chg:
549 547 # - at startup, reposetup was called because serve is not norepo
550 548 # - a norepo command like "help" is called
551 549 if repo and isenabled(repo):
- 552 repo.fileservice.close()
+ 550 fileservice = repo.fileservice
+ 551 try:
+ 552 return orig(lui, repo, *args, **kwargs)
+ 553 finally:
+ 554 if fileservice:
+ 555 fileservice.close()
553 556 extensions.wrapfunction(dispatch, 'runcommand', runcommand)
554 557
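The hunk above is the substance of this commit: the old code reached into `repo.fileservice` inside the `finally` block, i.e. after the command had already run, which is unsafe once the repo object has been invalidated (for example under chg). The new code captures the reference before dispatching. As a standalone sketch of the idiom, in plain Python:

```python
def run_then_close(run, repo):
    # Resolve the resource up front; never touch 'repo' after run().
    fileservice = getattr(repo, 'fileservice', None) if repo else None
    try:
        return run()
    finally:
        if fileservice is not None:
            fileservice.close()
```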
555 558 # disappointing hacks below
556 559 templatekw.getrenamedfn = getrenamedfn
557 560 extensions.wrapfunction(revset, 'filelog', filelogrevset)
558 561 revset.symbols['filelog'] = revset.filelog
559 562 extensions.wrapfunction(cmdutil, 'walkfilerevs', walkfilerevs)
560 563
561 564 # prevent strip from stripping remotefilelogs
562 565 def _collectbrokencsets(orig, repo, files, striprev):
563 566 if isenabled(repo):
564 567 files = [f for f in files if not repo.shallowmatch(f)]
565 568 return orig(repo, files, striprev)
566 569 extensions.wrapfunction(repair, '_collectbrokencsets', _collectbrokencsets)
567 570
568 571 # Don't commit filelogs until we know the commit hash, since the hash
569 572 # is present in the filelog blob.
570 573 # This violates Mercurial's filelog->manifest->changelog write order,
571 574 # but is generally fine for client repos.
572 575 pendingfilecommits = []
573 576 def addrawrevision(orig, self, rawtext, transaction, link, p1, p2, node,
574 577 flags, cachedelta=None, _metatuple=None):
575 578 if isinstance(link, int):
576 579 pendingfilecommits.append(
577 580 (self, rawtext, transaction, link, p1, p2, node, flags,
578 581 cachedelta, _metatuple))
579 582 return node
580 583 else:
581 584 return orig(self, rawtext, transaction, link, p1, p2, node, flags,
582 585 cachedelta, _metatuple=_metatuple)
583 586 extensions.wrapfunction(
584 587 remotefilelog.remotefilelog, 'addrawrevision', addrawrevision)
585 588
586 589 def changelogadd(orig, self, *args):
587 590 oldlen = len(self)
588 591 node = orig(self, *args)
589 592 newlen = len(self)
590 593 if oldlen != newlen:
591 594 for oldargs in pendingfilecommits:
592 595 log, rt, tr, link, p1, p2, n, fl, c, m = oldargs
593 596 linknode = self.node(link)
594 597 if linknode == node:
595 598 log.addrawrevision(rt, tr, linknode, p1, p2, n, fl, c, m)
596 599 else:
597 600 raise error.ProgrammingError(
598 601 'pending multiple integer revisions are not supported')
599 602 else:
600 603 # "link" is actually wrong here (it is set to len(changelog))
601 604 # if changelog remains unchanged, skip writing file revisions
602 605 # but still do a sanity check about pending multiple revisions
603 606 if len(set(x[3] for x in pendingfilecommits)) > 1:
604 607 raise error.ProgrammingError(
605 608 'pending multiple integer revisions are not supported')
606 609 del pendingfilecommits[:]
607 610 return node
608 611 extensions.wrapfunction(changelog.changelog, 'add', changelogadd)
609 612
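A toy model of the defer-until-commit pattern implemented by `pendingfilecommits` above: file writes that arrive with an integer (not-yet-committed) linkrev are queued, and the changelog wrapper flushes them once a real node has been assigned. The `write` method here is hypothetical; the real code re-invokes `addrawrevision`.

```python
pending = []  # queued (filelog, text, pending_linkrev) tuples

def add_file_revision(filelog, text, linkrev):
    if isinstance(linkrev, int):
        pending.append((filelog, text, linkrev))  # changelog entry not written yet
    else:
        filelog.write(text, linkrev)

def on_changelog_add(changelog):
    # the changelog entry exists now; resolve pending indexes to real nodes
    for filelog, text, linkrev in pending:
        filelog.write(text, changelog.node(linkrev))
    del pending[:]
```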
610 613 # changectx wrappers
611 614 def filectx(orig, self, path, fileid=None, filelog=None):
612 615 if fileid is None:
613 616 fileid = self.filenode(path)
614 617 if (isenabled(self._repo) and self._repo.shallowmatch(path)):
615 618 return remotefilectx.remotefilectx(self._repo, path,
616 619 fileid=fileid, changectx=self, filelog=filelog)
617 620 return orig(self, path, fileid=fileid, filelog=filelog)
618 621 extensions.wrapfunction(context.changectx, 'filectx', filectx)
619 622
620 623 def workingfilectx(orig, self, path, filelog=None):
621 624 if (isenabled(self._repo) and self._repo.shallowmatch(path)):
622 625 return remotefilectx.remoteworkingfilectx(self._repo,
623 626 path, workingctx=self, filelog=filelog)
624 627 return orig(self, path, filelog=filelog)
625 628 extensions.wrapfunction(context.workingctx, 'filectx', workingfilectx)
626 629
627 630 # prefetch required revisions before a diff
628 631 def trydiff(orig, repo, revs, ctx1, ctx2, modified, added, removed,
629 632 copy, getfilectx, *args, **kwargs):
630 633 if isenabled(repo):
631 634 prefetch = []
632 635 mf1 = ctx1.manifest()
633 636 for fname in modified + added + removed:
634 637 if fname in mf1:
635 638 fnode = getfilectx(fname, ctx1).filenode()
636 639 # fnode can be None if it's an edited working ctx file
637 640 if fnode:
638 641 prefetch.append((fname, hex(fnode)))
639 642 if fname not in removed:
640 643 fnode = getfilectx(fname, ctx2).filenode()
641 644 if fnode:
642 645 prefetch.append((fname, hex(fnode)))
643 646
644 647 repo.fileservice.prefetch(prefetch)
645 648
646 649 return orig(repo, revs, ctx1, ctx2, modified, added, removed,
647 650 copy, getfilectx, *args, **kwargs)
648 651 extensions.wrapfunction(patch, 'trydiff', trydiff)
649 652
650 653 # Prevent verify from processing files
651 654 # a stub for mercurial.hg.verify()
652 655 def _verify(orig, repo):
653 656 lock = repo.lock()
654 657 try:
655 658 return shallowverifier.shallowverifier(repo).verify()
656 659 finally:
657 660 lock.release()
658 661
659 662 extensions.wrapfunction(hg, 'verify', _verify)
660 663
661 664 scmutil.fileprefetchhooks.add('remotefilelog', _fileprefetchhook)
662 665
663 666 def getrenamedfn(repo, endrev=None):
664 667 rcache = {}
665 668
666 669 def getrenamed(fn, rev):
667 670 '''looks up all renames for a file (up to endrev) the first
668 671 time the file is given. It indexes on the changerev and only
669 672 parses the manifest if linkrev != changerev.
670 673 Returns rename info for fn at changerev rev.'''
671 674 if rev in rcache.setdefault(fn, {}):
672 675 return rcache[fn][rev]
673 676
674 677 try:
675 678 fctx = repo[rev].filectx(fn)
676 679 for ancestor in fctx.ancestors():
677 680 if ancestor.path() == fn:
678 681 renamed = ancestor.renamed()
679 682 rcache[fn][ancestor.rev()] = renamed
680 683
681 684 return fctx.renamed()
682 685 except error.LookupError:
683 686 return None
684 687
685 688 return getrenamed
686 689
687 690 def walkfilerevs(orig, repo, match, follow, revs, fncache):
688 691 if not isenabled(repo):
689 692 return orig(repo, match, follow, revs, fncache)
690 693
691 694 # remotefilelogs can't be walked in rev order, so throw.
692 695 # The caller will see the exception and walk the commit tree instead.
693 696 if not follow:
694 697 raise cmdutil.FileWalkError("Cannot walk via filelog")
695 698
696 699 wanted = set()
697 700 minrev, maxrev = min(revs), max(revs)
698 701
699 702 pctx = repo['.']
700 703 for filename in match.files():
701 704 if filename not in pctx:
702 705 raise error.Abort(_('cannot follow file not in parent '
703 706 'revision: "%s"') % filename)
704 707 fctx = pctx[filename]
705 708
706 709 linkrev = fctx.linkrev()
707 710 if linkrev >= minrev and linkrev <= maxrev:
708 711 fncache.setdefault(linkrev, []).append(filename)
709 712 wanted.add(linkrev)
710 713
711 714 for ancestor in fctx.ancestors():
712 715 linkrev = ancestor.linkrev()
713 716 if linkrev >= minrev and linkrev <= maxrev:
714 717 fncache.setdefault(linkrev, []).append(ancestor.path())
715 718 wanted.add(linkrev)
716 719
717 720 return wanted
718 721
719 722 def filelogrevset(orig, repo, subset, x):
720 723 """``filelog(pattern)``
721 724 Changesets connected to the specified filelog.
722 725
723 726 For performance reasons, ``filelog()`` does not show every changeset
724 727 that affects the requested file(s). See :hg:`help log` for details. For
725 728 a slower, more accurate result, use ``file()``.
726 729 """
727 730
728 731 if not isenabled(repo):
729 732 return orig(repo, subset, x)
730 733
731 734 # i18n: "filelog" is a keyword
732 735 pat = revset.getstring(x, _("filelog requires a pattern"))
733 736 m = match.match(repo.root, repo.getcwd(), [pat], default='relpath',
734 737 ctx=repo[None])
735 738 s = set()
736 739
737 740 if not match.patkind(pat):
738 741 # slow
739 742 for r in subset:
740 743 ctx = repo[r]
741 744 cfiles = ctx.files()
742 745 for f in m.files():
743 746 if f in cfiles:
744 747 s.add(ctx.rev())
745 748 break
746 749 else:
747 750 # partial
748 751 files = (f for f in repo[None] if m(f))
749 752 for f in files:
750 753 fctx = repo[None].filectx(f)
751 754 s.add(fctx.linkrev())
752 755 for actx in fctx.ancestors():
753 756 s.add(actx.linkrev())
754 757
755 758 return smartset.baseset([r for r in subset if r in s])
756 759
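Once registered, the predicate is used like the stock revset; for example (file path illustrative):

```
$ hg log -r 'filelog("mercurial/commands.py")'
```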
757 760 @command('gc', [], _('hg gc [REPO...]'), norepo=True)
758 761 def gc(ui, *args, **opts):
759 762 '''garbage collect the client and server filelog caches
760 763 '''
761 764 cachepaths = set()
762 765
763 766 # get the system client cache
764 767 systemcache = shallowutil.getcachepath(ui, allowempty=True)
765 768 if systemcache:
766 769 cachepaths.add(systemcache)
767 770
768 771 # get repo client and server cache
769 772 repopaths = []
770 773 pwd = ui.environ.get('PWD')
771 774 if pwd:
772 775 repopaths.append(pwd)
773 776
774 777 repopaths.extend(args)
775 778 repos = []
776 779 for repopath in repopaths:
777 780 try:
778 781 repo = hg.peer(ui, {}, repopath)
779 782 repos.append(repo)
780 783
781 784 repocache = shallowutil.getcachepath(repo.ui, allowempty=True)
782 785 if repocache:
783 786 cachepaths.add(repocache)
784 787 except error.RepoError:
785 788 pass
786 789
787 790 # gc client cache
788 791 for cachepath in cachepaths:
789 792 gcclient(ui, cachepath)
790 793
791 794 # gc server cache
792 795 for repo in repos:
793 796 remotefilelogserver.gcserver(ui, repo._repo)
794 797
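Typical invocations of the `gc` command defined above; the extra repository paths are optional and illustrative:

```
$ hg gc
$ hg gc ~/src/repo-a ~/src/repo-b
```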
795 798 def gcclient(ui, cachepath):
796 799 # get list of repos that use this cache
797 800 repospath = os.path.join(cachepath, 'repos')
798 801 if not os.path.exists(repospath):
799 802 ui.warn(_("no known cache at %s\n") % cachepath)
800 803 return
801 804
802 805 reposfile = open(repospath, 'r')
803 806 repos = set([r[:-1] for r in reposfile.readlines()])
804 807 reposfile.close()
805 808
806 809 # build list of useful files
807 810 validrepos = []
808 811 keepkeys = set()
809 812
810 813 _analyzing = _("analyzing repositories")
811 814
812 815 sharedcache = None
813 816 filesrepacked = False
814 817
815 818 count = 0
816 819 for path in repos:
817 820 ui.progress(_analyzing, count, unit="repos", total=len(repos))
818 821 count += 1
819 822 try:
820 823 path = ui.expandpath(os.path.normpath(path))
821 824 except TypeError as e:
822 825 ui.warn(_("warning: malformed path: %r:%s\n") % (path, e))
823 826 traceback.print_exc()
824 827 continue
825 828 try:
826 829 peer = hg.peer(ui, {}, path)
827 830 repo = peer._repo
828 831 except error.RepoError:
829 832 continue
830 833
831 834 validrepos.append(path)
832 835
833 836 # Protect against any repo or config changes that have happened since
834 837 # this repo was added to the repos file. We'd rather this loop succeed
835 838 # and too much be deleted, than the loop fail and nothing gets deleted.
836 839 if not isenabled(repo):
837 840 continue
838 841
839 842 if not util.safehasattr(repo, 'name'):
840 843 ui.warn(_("repo %s is a misconfigured remotefilelog repo\n") % path)
841 844 continue
842 845
843 846 # If garbage collection on repack and repack on hg gc are enabled
844 847 # then loose files are repacked and garbage collected.
845 848 # Otherwise regular garbage collection is performed.
846 849 repackonhggc = repo.ui.configbool('remotefilelog', 'repackonhggc')
847 850 gcrepack = repo.ui.configbool('remotefilelog', 'gcrepack')
848 851 if repackonhggc and gcrepack:
849 852 try:
850 853 repackmod.incrementalrepack(repo)
851 854 filesrepacked = True
852 855 continue
853 856 except (IOError, repackmod.RepackAlreadyRunning):
854 857 # If repack cannot be performed due to not enough disk space
855 858 # continue doing garbage collection of loose files w/o repack
856 859 pass
857 860
858 861 reponame = repo.name
859 862 if not sharedcache:
860 863 sharedcache = repo.sharedstore
861 864
862 865 # Compute a keepset which is not garbage collected
863 866 def keyfn(fname, fnode):
864 867 return fileserverclient.getcachekey(reponame, fname, hex(fnode))
865 868 keepkeys = repackmod.keepset(repo, keyfn=keyfn, lastkeepkeys=keepkeys)
866 869
867 870 ui.progress(_analyzing, None)
868 871
869 872 # write list of valid repos back
870 873 oldumask = os.umask(0o002)
871 874 try:
872 875 reposfile = open(repospath, 'w')
873 876 reposfile.writelines([("%s\n" % r) for r in validrepos])
874 877 reposfile.close()
875 878 finally:
876 879 os.umask(oldumask)
877 880
878 881 # prune cache
879 882 if sharedcache is not None:
880 883 sharedcache.gc(keepkeys)
881 884 elif not filesrepacked:
882 885 ui.warn(_("warning: no valid repos in repofile\n"))
883 886
884 887 def log(orig, ui, repo, *pats, **opts):
885 888 if not isenabled(repo):
886 889 return orig(ui, repo, *pats, **opts)
887 890
888 891 follow = opts.get('follow')
889 892 revs = opts.get('rev')
890 893 if pats:
891 894 # Force slowpath for non-follow patterns and follows that start from
892 895 # non-working-copy-parent revs.
893 896 if not follow or revs:
894 897 # This forces the slowpath
895 898 opts['removed'] = True
896 899
897 900 # If this is a non-follow log without any revs specified, recommend that
898 901 # the user add -f to speed it up.
899 902 if not follow and not revs:
900 903 match, pats = scmutil.matchandpats(repo['.'], pats, opts)
901 904 isfile = not match.anypats()
902 905 if isfile:
903 906 for file in match.files():
904 907 if not os.path.isfile(repo.wjoin(file)):
905 908 isfile = False
906 909 break
907 910
908 911 if isfile:
909 912 ui.warn(_("warning: file log can be slow on large repos - " +
910 913 "use -f to speed it up\n"))
911 914
912 915 return orig(ui, repo, *pats, **opts)
913 916
914 917 def revdatelimit(ui, revset):
915 918 """Update revset so that only changesets no older than 'prefetchdays' days
916 919 are included. The default value is set to 14 days. If 'prefetchdays' is set
917 920 to zero or a negative value, the date restriction is not applied.
918 921 """
919 922 days = ui.configint('remotefilelog', 'prefetchdays')
920 923 if days > 0:
921 924 revset = '(%s) & date(-%s)' % (revset, days)
922 925 return revset
923 926
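For instance, with the default `prefetchdays` of 14, the helper rewrites a revset like this (a sketch, assuming the config default):

```python
revdatelimit(ui, 'draft()')   # -> '(draft()) & date(-14)'
```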
924 927 def readytofetch(repo):
925 928 """Check that enough time has passed since the last background prefetch.
926 929 This only relates to prefetches after operations that change the working
927 930 copy parent. Default delay between background prefetches is 2 minutes.
928 931 """
929 932 timeout = repo.ui.configint('remotefilelog', 'prefetchdelay')
930 933 fname = repo.vfs.join('lastprefetch')
931 934
932 935 ready = False
933 936 with open(fname, 'a'):
934 937 # the with construct above is used to avoid race conditions
935 938 modtime = os.path.getmtime(fname)
936 939 if (time.time() - modtime) > timeout:
937 940 os.utime(fname, None)
938 941 ready = True
939 942
940 943 return ready
941 944
942 945 def wcpprefetch(ui, repo, **kwargs):
943 946 """Prefetches in background revisions specified by bgprefetchrevs revset.
944 947 Does background repack if backgroundrepack flag is set in config.
945 948 """
946 949 shallow = isenabled(repo)
947 950 bgprefetchrevs = ui.config('remotefilelog', 'bgprefetchrevs')
948 951 isready = readytofetch(repo)
949 952
950 953 if not (shallow and bgprefetchrevs and isready):
951 954 return
952 955
953 956 bgrepack = repo.ui.configbool('remotefilelog', 'backgroundrepack')
954 957 # update a revset with a date limit
955 958 bgprefetchrevs = revdatelimit(ui, bgprefetchrevs)
956 959
957 960 def anon():
958 961 if util.safehasattr(repo, 'ranprefetch') and repo.ranprefetch:
959 962 return
960 963 repo.ranprefetch = True
961 964 repo.backgroundprefetch(bgprefetchrevs, repack=bgrepack)
962 965
963 966 repo._afterlock(anon)
964 967
965 968 def pull(orig, ui, repo, *pats, **opts):
966 969 result = orig(ui, repo, *pats, **opts)
967 970
968 971 if isenabled(repo):
969 972 # prefetch if it's configured
970 973 prefetchrevset = ui.config('remotefilelog', 'pullprefetch')
971 974 bgrepack = repo.ui.configbool('remotefilelog', 'backgroundrepack')
972 975 bgprefetch = repo.ui.configbool('remotefilelog', 'backgroundprefetch')
973 976
974 977 if prefetchrevset:
975 978 ui.status(_("prefetching file contents\n"))
976 979 revs = scmutil.revrange(repo, [prefetchrevset])
977 980 base = repo['.'].rev()
978 981 if bgprefetch:
979 982 repo.backgroundprefetch(prefetchrevset, repack=bgrepack)
980 983 else:
981 984 repo.prefetch(revs, base=base)
982 985 if bgrepack:
983 986 repackmod.backgroundrepack(repo, incremental=True)
984 987 elif bgrepack:
985 988 repackmod.backgroundrepack(repo, incremental=True)
986 989
987 990 return result
988 991
989 992 def exchangepull(orig, repo, remote, *args, **kwargs):
990 993 # Hook into the callstream/getbundle to insert bundle capabilities
991 994 # during a pull.
992 995 def localgetbundle(orig, source, heads=None, common=None, bundlecaps=None,
993 996 **kwargs):
994 997 if not bundlecaps:
995 998 bundlecaps = set()
996 999 bundlecaps.add(constants.BUNDLE2_CAPABLITY)
997 1000 return orig(source, heads=heads, common=common, bundlecaps=bundlecaps,
998 1001 **kwargs)
999 1002
1000 1003 if util.safehasattr(remote, '_callstream'):
1001 1004 remote._localrepo = repo
1002 1005 elif util.safehasattr(remote, 'getbundle'):
1003 1006 extensions.wrapfunction(remote, 'getbundle', localgetbundle)
1004 1007
1005 1008 return orig(repo, remote, *args, **kwargs)
1006 1009
1007 1010 def _fileprefetchhook(repo, revs, match):
1008 1011 if isenabled(repo):
1009 1012 allfiles = []
1010 1013 for rev in revs:
1011 1014 if rev == nodemod.wdirrev or rev is None:
1012 1015 continue
1013 1016 ctx = repo[rev]
1014 1017 mf = ctx.manifest()
1015 1018 sparsematch = repo.maybesparsematch(ctx.rev())
1016 1019 for path in ctx.walk(match):
1017 1020 if path.endswith('/'):
1018 1021 # Tree manifest that's being excluded as part of narrow
1019 1022 continue
1020 1023 if (not sparsematch or sparsematch(path)) and path in mf:
1021 1024 allfiles.append((path, hex(mf[path])))
1022 1025 repo.fileservice.prefetch(allfiles)
1023 1026
1024 1027 @command('debugremotefilelog', [
1025 1028 ('d', 'decompress', None, _('decompress the filelog first')),
1026 1029 ], _('hg debugremotefilelog <path>'), norepo=True)
1027 1030 def debugremotefilelog(ui, path, **opts):
1028 1031 return debugcommands.debugremotefilelog(ui, path, **opts)
1029 1032
1030 1033 @command('verifyremotefilelog', [
1031 1034 ('d', 'decompress', None, _('decompress the filelogs first')),
1032 1035 ], _('hg verifyremotefilelog <directory>'), norepo=True)
1033 1036 def verifyremotefilelog(ui, path, **opts):
1034 1037 return debugcommands.verifyremotefilelog(ui, path, **opts)
1035 1038
1036 1039 @command('debugdatapack', [
1037 1040 ('', 'long', None, _('print the long hashes')),
1038 1041 ('', 'node', '', _('dump the contents of node'), 'NODE'),
1039 1042 ], _('hg debugdatapack <paths>'), norepo=True)
1040 1043 def debugdatapack(ui, *paths, **opts):
1041 1044 return debugcommands.debugdatapack(ui, *paths, **opts)
1042 1045
1043 1046 @command('debughistorypack', [
1044 1047 ], _('hg debughistorypack <path>'), norepo=True)
1045 1048 def debughistorypack(ui, path, **opts):
1046 1049 return debugcommands.debughistorypack(ui, path)
1047 1050
1048 1051 @command('debugkeepset', [
1049 1052 ], _('hg debugkeepset'))
1050 1053 def debugkeepset(ui, repo, **opts):
1051 1054 # The command is used to measure keepset computation time
1052 1055 def keyfn(fname, fnode):
1053 1056 return fileserverclient.getcachekey(repo.name, fname, hex(fnode))
1054 1057 repackmod.keepset(repo, keyfn)
1055 1058 return
1056 1059
1057 1060 @command('debugwaitonrepack', [
1058 1061 ], _('hg debugwaitonrepack'))
1059 1062 def debugwaitonrepack(ui, repo, **opts):
1060 1063 return debugcommands.debugwaitonrepack(repo)
1061 1064
1062 1065 @command('debugwaitonprefetch', [
1063 1066 ], _('hg debugwaitonprefetch'))
1064 1067 def debugwaitonprefetch(ui, repo, **opts):
1065 1068 return debugcommands.debugwaitonprefetch(repo)
1066 1069
1067 1070 def resolveprefetchopts(ui, opts):
1068 1071 if not opts.get('rev'):
1069 1072 revset = ['.', 'draft()']
1070 1073
1071 1074 prefetchrevset = ui.config('remotefilelog', 'pullprefetch', None)
1072 1075 if prefetchrevset:
1073 1076 revset.append('(%s)' % prefetchrevset)
1074 1077 bgprefetchrevs = ui.config('remotefilelog', 'bgprefetchrevs', None)
1075 1078 if bgprefetchrevs:
1076 1079 revset.append('(%s)' % bgprefetchrevs)
1077 1080 revset = '+'.join(revset)
1078 1081
1079 1082 # update a revset with a date limit
1080 1083 revset = revdatelimit(ui, revset)
1081 1084
1082 1085 opts['rev'] = [revset]
1083 1086
1084 1087 if not opts.get('base'):
1085 1088 opts['base'] = None
1086 1089
1087 1090 return opts
1088 1091
1089 1092 @command('prefetch', [
1090 1093 ('r', 'rev', [], _('prefetch the specified revisions'), _('REV')),
1091 1094 ('', 'repack', False, _('run repack after prefetch')),
1092 1095 ('b', 'base', '', _("rev that is assumed to already be local")),
1093 1096 ] + commands.walkopts, _('hg prefetch [OPTIONS] [FILE...]'))
1094 1097 def prefetch(ui, repo, *pats, **opts):
1095 1098 """prefetch file revisions from the server
1096 1099
1097 1100 Prefetches file revisions for the specified revs and stores them in the
1098 1101 local remotefilelog cache. If no rev is specified, a default revset is
1099 1102 used: the union of '.', draft(), pullprefetch and bgprefetchrevs.
1100 1103 File names or patterns can be used to limit which files are downloaded.
1101 1104
1102 1105 Return 0 on success.
1103 1106 """
1104 1107 if not isenabled(repo):
1105 1108 raise error.Abort(_("repo is not shallow"))
1106 1109
1107 1110 opts = resolveprefetchopts(ui, opts)
1108 1111 revs = scmutil.revrange(repo, opts.get('rev'))
1109 1112 repo.prefetch(revs, opts.get('base'), pats, opts)
1110 1113
1111 1114 # Run repack in background
1112 1115 if opts.get('repack'):
1113 1116 repackmod.backgroundrepack(repo, incremental=True)
1114 1117
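Example invocations of the `prefetch` command defined above (revsets illustrative):

```
$ hg prefetch -r 'draft()'
$ hg prefetch -r tip --repack    # also kick off a background repack
```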
1115 1118 @command('repack', [
1116 1119 ('', 'background', None, _('run in a background process'), None),
1117 1120 ('', 'incremental', None, _('do an incremental repack'), None),
1118 1121 ('', 'packsonly', None, _('only repack packs (skip loose objects)'), None),
1119 1122 ], _('hg repack [OPTIONS]'))
1120 1123 def repack_(ui, repo, *pats, **opts):
1121 1124 if opts.get('background'):
1122 1125 repackmod.backgroundrepack(repo, incremental=opts.get('incremental'),
1123 1126 packsonly=opts.get('packsonly', False))
1124 1127 return
1125 1128
1126 1129 options = {'packsonly': opts.get('packsonly')}
1127 1130
1128 1131 try:
1129 1132 if opts.get('incremental'):
1130 1133 repackmod.incrementalrepack(repo, options=options)
1131 1134 else:
1132 1135 repackmod.fullrepack(repo, options=options)
1133 1136 except repackmod.RepackAlreadyRunning as ex:
1134 1137 # Don't propagate the exception if the repack is already in
1135 1138 # progress, since we want the command to exit 0.
1136 1139 repo.ui.warn('%s\n' % ex)
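And the corresponding `repack` invocations; the flags map directly to the options registered above:

```
$ hg repack --incremental
$ hg repack --background --packsonly
```

The second hunk below extends the shared-store test with an `hg unshare` call.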
@@ -1,24 +1,25 @@
1 1 $ . "$TESTDIR/remotefilelog-library.sh"
2 2
3 3 $ cat >> $HGRCPATH <<EOF
4 4 > [extensions]
5 5 > remotefilelog=
6 6 > share=
7 7 > EOF
8 8
9 9 $ hg init master
10 10 $ cd master
11 11 $ cat >> .hg/hgrc <<EOF
12 12 > [remotefilelog]
13 13 > server=True
14 14 > EOF
15 15 $ echo x > x
16 16 $ hg commit -qAm x
17 17
18 18 $ cd ..
19 19
20 20
21 21 $ hgcloneshallow ssh://user@dummy/master source --noupdate -q
22 22 $ hg share source dest
23 23 updating working directory
24 24 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
25 $ hg -R dest unshare