remotefilelog: avoid accessing repo instance after dispatch...
Martin von Zweigbergk, r40611:157f0e29 (default)
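The gist of the fix: look up repo.fileservice before dispatching the command, so the wrapper never dereferences the repo instance after orig() has returned (under chg the repo object may already be invalid by then). Below is a minimal, self-contained sketch of that pattern; _FakeRepo and _FakeFileService are illustrative stand-ins, not Mercurial APIs, and the isenabled() guard from the real hunk is omitted. The actual change is in the runcommand wrapper inside onetimeclientsetup(), shown in the hunk below.

    # Sketch only: stand-in classes, not Mercurial APIs.
    class _FakeFileService(object):
        def close(self):
            print('fileservice closed')

    class _FakeRepo(object):
        fileservice = _FakeFileService()

    def runcommand_wrapper(orig, lui, repo, *args, **kwargs):
        # Capture the reference *before* dispatching, so the repo object
        # is never touched after orig() has returned.
        fileservice = None
        if repo is not None:
            fileservice = repo.fileservice
        try:
            return orig(lui, repo, *args, **kwargs)
        finally:
            if fileservice:
                fileservice.close()

    if __name__ == '__main__':
        runcommand_wrapper(lambda lui, repo: None, None, _FakeRepo())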
@@ -1,1136 +1,1139 b''
# __init__.py - remotefilelog extension
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""remotefilelog causes Mercurial to lazily fetch file contents (EXPERIMENTAL)

This extension is HIGHLY EXPERIMENTAL. There are NO BACKWARDS COMPATIBILITY
GUARANTEES. This means that repositories created with this extension may
only be usable with the exact version of this extension/Mercurial that was
used. The extension attempts to enforce this in order to prevent repository
corruption.

remotefilelog works by fetching file contents lazily and storing them
in a cache on the client rather than in revlogs. This allows enormous
histories to be transferred only partially, making them easier to
operate on.

Configs:

``packs.maxchainlen`` specifies the maximum delta chain length in pack files

``packs.maxpacksize`` specifies the maximum pack file size

``packs.maxpackfilecount`` specifies the maximum number of packs in the
shared cache (trees only for now)

``remotefilelog.backgroundprefetch`` runs prefetch in background when True

``remotefilelog.bgprefetchrevs`` specifies revisions to fetch on commit and
update, and on other commands that use them. Different from pullprefetch.

``remotefilelog.gcrepack`` does garbage collection during repack when True

``remotefilelog.nodettl`` specifies maximum TTL of a node in seconds before
it is garbage collected

``remotefilelog.repackonhggc`` runs repack on hg gc when True

``remotefilelog.prefetchdays`` specifies the maximum age of a commit in
days after which it is no longer prefetched.

``remotefilelog.prefetchdelay`` specifies delay between background
prefetches in seconds after operations that change the working copy parent

``remotefilelog.data.gencountlimit`` constrains the minimum number of data
pack files required to be considered part of a generation. In particular,
minimum number of pack files > gencountlimit.

``remotefilelog.data.generations`` list for specifying the lower bound of
each generation of the data pack files. For example, list ['100MB','1MB']
or ['1MB', '100MB'] will lead to three generations: [0, 1MB),
[1MB, 100MB) and [100MB, infinity).

``remotefilelog.data.maxrepackpacks`` the maximum number of pack files to
include in an incremental data repack.

``remotefilelog.data.repackmaxpacksize`` the maximum size of a pack file for
it to be considered for an incremental data repack.

``remotefilelog.data.repacksizelimit`` the maximum total size of pack files
to include in an incremental data repack.

``remotefilelog.history.gencountlimit`` constrains the minimum number of
history pack files required to be considered part of a generation. In
particular, minimum number of pack files > gencountlimit.

``remotefilelog.history.generations`` list for specifying the lower bound
of each generation of the history pack files. For example, list
['100MB', '1MB'] or ['1MB', '100MB'] will lead to three generations:
[0, 1MB), [1MB, 100MB) and [100MB, infinity).

``remotefilelog.history.maxrepackpacks`` the maximum number of pack files to
include in an incremental history repack.

``remotefilelog.history.repackmaxpacksize`` the maximum size of a pack file
for it to be considered for an incremental history repack.

``remotefilelog.history.repacksizelimit`` the maximum total size of pack
files to include in an incremental history repack.

``remotefilelog.backgroundrepack`` automatically consolidate packs in the
background

``remotefilelog.cachepath`` path to cache

``remotefilelog.cachegroup`` if set, make cache directory sgid to this
group

``remotefilelog.cacheprocess`` binary to invoke for fetching file data

``remotefilelog.debug`` turn on remotefilelog-specific debug output

``remotefilelog.excludepattern`` pattern of files to exclude from pulls

``remotefilelog.includepattern`` pattern of files to include in pulls

``remotefilelog.fetchwarning``: message to print when too many
single-file fetches occur

``remotefilelog.getfilesstep`` number of files to request in a single RPC

``remotefilelog.getfilestype`` if set to 'threaded' use threads to fetch
files, otherwise use optimistic fetching

``remotefilelog.pullprefetch`` revset for selecting files that should be
eagerly downloaded rather than lazily

``remotefilelog.reponame`` name of the repo. If set, used to partition
data from other repos in a shared store.

``remotefilelog.server`` if true, enable server-side functionality

``remotefilelog.servercachepath`` path for caching blobs on the server

``remotefilelog.serverexpiration`` number of days to keep cached server
blobs

``remotefilelog.validatecache`` if set, check cache entries for corruption
before returning blobs

``remotefilelog.validatecachelog`` if set, check cache entries for
corruption before returning metadata

"""
from __future__ import absolute_import

import os
import time
import traceback

from mercurial.node import hex
from mercurial.i18n import _
from mercurial import (
    changegroup,
    changelog,
    cmdutil,
    commands,
    configitems,
    context,
    copies,
    debugcommands as hgdebugcommands,
    dispatch,
    error,
    exchange,
    extensions,
    hg,
    localrepo,
    match,
    merge,
    node as nodemod,
    patch,
    registrar,
    repair,
    repoview,
    revset,
    scmutil,
    smartset,
    streamclone,
    templatekw,
    util,
)
from . import (
    constants,
    debugcommands,
    fileserverclient,
    remotefilectx,
    remotefilelog,
    remotefilelogserver,
    repack as repackmod,
    shallowbundle,
    shallowrepo,
    shallowstore,
    shallowutil,
    shallowverifier,
)

# ensures debug commands are registered
hgdebugcommands.command

cmdtable = {}
command = registrar.command(cmdtable)

configtable = {}
configitem = registrar.configitem(configtable)

configitem('remotefilelog', 'debug', default=False)

configitem('remotefilelog', 'reponame', default='')
configitem('remotefilelog', 'cachepath', default=None)
configitem('remotefilelog', 'cachegroup', default=None)
configitem('remotefilelog', 'cacheprocess', default=None)
configitem('remotefilelog', 'cacheprocess.includepath', default=None)
configitem("remotefilelog", "cachelimit", default="1000 GB")

configitem('remotefilelog', 'fallbackpath', default=configitems.dynamicdefault,
           alias=[('remotefilelog', 'fallbackrepo')])

configitem('remotefilelog', 'validatecachelog', default=None)
configitem('remotefilelog', 'validatecache', default='on')
configitem('remotefilelog', 'server', default=None)
configitem('remotefilelog', 'servercachepath', default=None)
configitem("remotefilelog", "serverexpiration", default=30)
configitem('remotefilelog', 'backgroundrepack', default=False)
configitem('remotefilelog', 'bgprefetchrevs', default=None)
configitem('remotefilelog', 'pullprefetch', default=None)
configitem('remotefilelog', 'backgroundprefetch', default=False)
configitem('remotefilelog', 'prefetchdelay', default=120)
configitem('remotefilelog', 'prefetchdays', default=14)

configitem('remotefilelog', 'getfilesstep', default=10000)
configitem('remotefilelog', 'getfilestype', default='optimistic')
configitem('remotefilelog', 'batchsize', configitems.dynamicdefault)
configitem('remotefilelog', 'fetchwarning', default='')

configitem('remotefilelog', 'includepattern', default=None)
configitem('remotefilelog', 'excludepattern', default=None)

configitem('remotefilelog', 'gcrepack', default=False)
configitem('remotefilelog', 'repackonhggc', default=False)
configitem('repack', 'chainorphansbysize', default=True)

configitem('packs', 'maxpacksize', default=0)
configitem('packs', 'maxchainlen', default=1000)

# default TTL limit is 30 days
_defaultlimit = 60 * 60 * 24 * 30
configitem('remotefilelog', 'nodettl', default=_defaultlimit)

configitem('remotefilelog', 'data.gencountlimit', default=2),
configitem('remotefilelog', 'data.generations',
           default=['1GB', '100MB', '1MB'])
configitem('remotefilelog', 'data.maxrepackpacks', default=50)
configitem('remotefilelog', 'data.repackmaxpacksize', default='4GB')
configitem('remotefilelog', 'data.repacksizelimit', default='100MB')

configitem('remotefilelog', 'history.gencountlimit', default=2),
configitem('remotefilelog', 'history.generations', default=['100MB'])
configitem('remotefilelog', 'history.maxrepackpacks', default=50)
configitem('remotefilelog', 'history.repackmaxpacksize', default='400MB')
configitem('remotefilelog', 'history.repacksizelimit', default='100MB')

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

repoclass = localrepo.localrepository
repoclass._basesupported.add(constants.SHALLOWREPO_REQUIREMENT)

isenabled = shallowutil.isenabled

def uisetup(ui):
    """Wraps user facing Mercurial commands to swap them out with shallow
    versions.
    """
    hg.wirepeersetupfuncs.append(fileserverclient.peersetup)

    entry = extensions.wrapcommand(commands.table, 'clone', cloneshallow)
    entry[1].append(('', 'shallow', None,
                     _("create a shallow clone which uses remote file "
                       "history")))

    extensions.wrapcommand(commands.table, 'debugindex',
        debugcommands.debugindex)
    extensions.wrapcommand(commands.table, 'debugindexdot',
        debugcommands.debugindexdot)
    extensions.wrapcommand(commands.table, 'log', log)
    extensions.wrapcommand(commands.table, 'pull', pull)

    # Prevent 'hg manifest --all'
    def _manifest(orig, ui, repo, *args, **opts):
        if (isenabled(repo) and opts.get('all')):
            raise error.Abort(_("--all is not supported in a shallow repo"))

        return orig(ui, repo, *args, **opts)
    extensions.wrapcommand(commands.table, "manifest", _manifest)

    # Wrap remotefilelog with lfs code
    def _lfsloaded(loaded=False):
        lfsmod = None
        try:
            lfsmod = extensions.find('lfs')
        except KeyError:
            pass
        if lfsmod:
            lfsmod.wrapfilelog(remotefilelog.remotefilelog)
            fileserverclient._lfsmod = lfsmod
    extensions.afterloaded('lfs', _lfsloaded)

    # debugdata needs remotefilelog.len to work
    extensions.wrapcommand(commands.table, 'debugdata', debugdatashallow)

def cloneshallow(orig, ui, repo, *args, **opts):
    if opts.get('shallow'):
        repos = []
        def pull_shallow(orig, self, *args, **kwargs):
            if not isenabled(self):
                repos.append(self.unfiltered())
                # set up the client hooks so the post-clone update works
                setupclient(self.ui, self.unfiltered())

                # setupclient fixed the class on the repo itself
                # but we also need to fix it on the repoview
                if isinstance(self, repoview.repoview):
                    self.__class__.__bases__ = (self.__class__.__bases__[0],
                                                self.unfiltered().__class__)
                self.requirements.add(constants.SHALLOWREPO_REQUIREMENT)
                self._writerequirements()

                # Since setupclient hadn't been called, exchange.pull was not
                # wrapped. So we need to manually invoke our version of it.
                return exchangepull(orig, self, *args, **kwargs)
            else:
                return orig(self, *args, **kwargs)
        extensions.wrapfunction(exchange, 'pull', pull_shallow)

        # Wrap the stream logic to add requirements and to pass include/exclude
        # patterns around.
        def setup_streamout(repo, remote):
            # Replace remote.stream_out with a version that sends file
            # patterns.
            def stream_out_shallow(orig):
                caps = remote.capabilities()
                if constants.NETWORK_CAP_LEGACY_SSH_GETFILES in caps:
                    opts = {}
                    if repo.includepattern:
                        opts['includepattern'] = '\0'.join(repo.includepattern)
                    if repo.excludepattern:
                        opts['excludepattern'] = '\0'.join(repo.excludepattern)
                    return remote._callstream('stream_out_shallow', **opts)
                else:
                    return orig()
            extensions.wrapfunction(remote, 'stream_out', stream_out_shallow)
        def stream_wrap(orig, op):
            setup_streamout(op.repo, op.remote)
            return orig(op)
        extensions.wrapfunction(
            streamclone, 'maybeperformlegacystreamclone', stream_wrap)

        def canperformstreamclone(orig, pullop, bundle2=False):
            # remotefilelog is currently incompatible with the
            # bundle2 flavor of streamclones, so force us to use
            # v1 instead.
            if 'v2' in pullop.remotebundle2caps.get('stream', []):
                pullop.remotebundle2caps['stream'] = [
                    c for c in pullop.remotebundle2caps['stream']
                    if c != 'v2']
            if bundle2:
                return False, None
            supported, requirements = orig(pullop, bundle2=bundle2)
            if requirements is not None:
                requirements.add(constants.SHALLOWREPO_REQUIREMENT)
            return supported, requirements
        extensions.wrapfunction(
            streamclone, 'canperformstreamclone', canperformstreamclone)

    try:
        orig(ui, repo, *args, **opts)
    finally:
        if opts.get('shallow'):
            for r in repos:
                if util.safehasattr(r, 'fileservice'):
                    r.fileservice.close()

def debugdatashallow(orig, *args, **kwds):
    oldlen = remotefilelog.remotefilelog.__len__
    try:
        remotefilelog.remotefilelog.__len__ = lambda x: 1
        return orig(*args, **kwds)
    finally:
        remotefilelog.remotefilelog.__len__ = oldlen

def reposetup(ui, repo):
    if not isinstance(repo, localrepo.localrepository):
        return

    # put here intentionally because it doesn't work in uisetup
    ui.setconfig('hooks', 'update.prefetch', wcpprefetch)
    ui.setconfig('hooks', 'commit.prefetch', wcpprefetch)

    isserverenabled = ui.configbool('remotefilelog', 'server')
    isshallowclient = isenabled(repo)

    if isserverenabled and isshallowclient:
        raise RuntimeError("Cannot be both a server and shallow client.")

    if isshallowclient:
        setupclient(ui, repo)

    if isserverenabled:
        remotefilelogserver.setupserver(ui, repo)

def setupclient(ui, repo):
    if not isinstance(repo, localrepo.localrepository):
        return

    # Even clients get the server setup since they need to have the
    # wireprotocol endpoints registered.
    remotefilelogserver.onetimesetup(ui)
    onetimeclientsetup(ui)

    shallowrepo.wraprepo(repo)
    repo.store = shallowstore.wrapstore(repo.store)

clientonetime = False
def onetimeclientsetup(ui):
    global clientonetime
    if clientonetime:
        return
    clientonetime = True

    changegroup.cgpacker = shallowbundle.shallowcg1packer

    extensions.wrapfunction(changegroup, '_addchangegroupfiles',
                            shallowbundle.addchangegroupfiles)
    extensions.wrapfunction(
        changegroup, 'makechangegroup', shallowbundle.makechangegroup)

    def storewrapper(orig, requirements, path, vfstype):
        s = orig(requirements, path, vfstype)
        if constants.SHALLOWREPO_REQUIREMENT in requirements:
            s = shallowstore.wrapstore(s)

        return s
    extensions.wrapfunction(localrepo, 'makestore', storewrapper)

    extensions.wrapfunction(exchange, 'pull', exchangepull)

    # prefetch files before update
    def applyupdates(orig, repo, actions, wctx, mctx, overwrite, labels=None):
        if isenabled(repo):
            manifest = mctx.manifest()
            files = []
            for f, args, msg in actions['g']:
                files.append((f, hex(manifest[f])))
            # batch fetch the needed files from the server
            repo.fileservice.prefetch(files)
        return orig(repo, actions, wctx, mctx, overwrite, labels=labels)
    extensions.wrapfunction(merge, 'applyupdates', applyupdates)

    # Prefetch merge checkunknownfiles
    def checkunknownfiles(orig, repo, wctx, mctx, force, actions,
                          *args, **kwargs):
        if isenabled(repo):
            files = []
            sparsematch = repo.maybesparsematch(mctx.rev())
            for f, (m, actionargs, msg) in actions.iteritems():
                if sparsematch and not sparsematch(f):
                    continue
                if m in ('c', 'dc', 'cm'):
                    files.append((f, hex(mctx.filenode(f))))
                elif m == 'dg':
                    f2 = actionargs[0]
                    files.append((f2, hex(mctx.filenode(f2))))
            # batch fetch the needed files from the server
            repo.fileservice.prefetch(files)
        return orig(repo, wctx, mctx, force, actions, *args, **kwargs)
    extensions.wrapfunction(merge, '_checkunknownfiles', checkunknownfiles)

    # Prefetch files before status attempts to look at their size and contents
    def checklookup(orig, self, files):
        repo = self._repo
        if isenabled(repo):
            prefetchfiles = []
            for parent in self._parents:
                for f in files:
                    if f in parent:
                        prefetchfiles.append((f, hex(parent.filenode(f))))
            # batch fetch the needed files from the server
            repo.fileservice.prefetch(prefetchfiles)
        return orig(self, files)
    extensions.wrapfunction(context.workingctx, '_checklookup', checklookup)

    # Prefetch the logic that compares added and removed files for renames
    def findrenames(orig, repo, matcher, added, removed, *args, **kwargs):
        if isenabled(repo):
            files = []
            parentctx = repo['.']
            for f in removed:
                files.append((f, hex(parentctx.filenode(f))))
            # batch fetch the needed files from the server
            repo.fileservice.prefetch(files)
        return orig(repo, matcher, added, removed, *args, **kwargs)
    extensions.wrapfunction(scmutil, '_findrenames', findrenames)

    # prefetch files before mergecopies check
    def computenonoverlap(orig, repo, c1, c2, *args, **kwargs):
        u1, u2 = orig(repo, c1, c2, *args, **kwargs)
        if isenabled(repo):
            m1 = c1.manifest()
            m2 = c2.manifest()
            files = []

            sparsematch1 = repo.maybesparsematch(c1.rev())
            if sparsematch1:
                sparseu1 = []
                for f in u1:
                    if sparsematch1(f):
                        files.append((f, hex(m1[f])))
                        sparseu1.append(f)
                u1 = sparseu1

            sparsematch2 = repo.maybesparsematch(c2.rev())
            if sparsematch2:
                sparseu2 = []
                for f in u2:
                    if sparsematch2(f):
                        files.append((f, hex(m2[f])))
                        sparseu2.append(f)
                u2 = sparseu2

            # batch fetch the needed files from the server
            repo.fileservice.prefetch(files)
        return u1, u2
    extensions.wrapfunction(copies, '_computenonoverlap', computenonoverlap)

    # prefetch files before pathcopies check
    def computeforwardmissing(orig, a, b, match=None):
        missing = list(orig(a, b, match=match))
        repo = a._repo
        if isenabled(repo):
            mb = b.manifest()

            files = []
            sparsematch = repo.maybesparsematch(b.rev())
            if sparsematch:
                sparsemissing = []
                for f in missing:
                    if sparsematch(f):
                        files.append((f, hex(mb[f])))
                        sparsemissing.append(f)
                missing = sparsemissing

            # batch fetch the needed files from the server
            repo.fileservice.prefetch(files)
        return missing
    extensions.wrapfunction(copies, '_computeforwardmissing',
                            computeforwardmissing)

    # close cache miss server connection after the command has finished
    def runcommand(orig, lui, repo, *args, **kwargs):
-        try:
-            return orig(lui, repo, *args, **kwargs)
-        finally:
-            # repo can be None when running in chg:
-            # - at startup, reposetup was called because serve is not norepo
-            # - a norepo command like "help" is called
-            if repo and isenabled(repo):
-                repo.fileservice.close()
+        fileservice = None
+        # repo can be None when running in chg:
+        # - at startup, reposetup was called because serve is not norepo
+        # - a norepo command like "help" is called
+        if repo and isenabled(repo):
+            fileservice = repo.fileservice
+        try:
+            return orig(lui, repo, *args, **kwargs)
+        finally:
+            if fileservice:
+                fileservice.close()
    extensions.wrapfunction(dispatch, 'runcommand', runcommand)

    # disappointing hacks below
    templatekw.getrenamedfn = getrenamedfn
    extensions.wrapfunction(revset, 'filelog', filelogrevset)
    revset.symbols['filelog'] = revset.filelog
    extensions.wrapfunction(cmdutil, 'walkfilerevs', walkfilerevs)

    # prevent strip from stripping remotefilelogs
    def _collectbrokencsets(orig, repo, files, striprev):
        if isenabled(repo):
            files = list([f for f in files if not repo.shallowmatch(f)])
        return orig(repo, files, striprev)
    extensions.wrapfunction(repair, '_collectbrokencsets', _collectbrokencsets)

    # Don't commit filelogs until we know the commit hash, since the hash
    # is present in the filelog blob.
    # This violates Mercurial's filelog->manifest->changelog write order,
    # but is generally fine for client repos.
    pendingfilecommits = []
    def addrawrevision(orig, self, rawtext, transaction, link, p1, p2, node,
                       flags, cachedelta=None, _metatuple=None):
        if isinstance(link, int):
            pendingfilecommits.append(
                (self, rawtext, transaction, link, p1, p2, node, flags,
                 cachedelta, _metatuple))
            return node
        else:
            return orig(self, rawtext, transaction, link, p1, p2, node, flags,
                        cachedelta, _metatuple=_metatuple)
    extensions.wrapfunction(
        remotefilelog.remotefilelog, 'addrawrevision', addrawrevision)

    def changelogadd(orig, self, *args):
        oldlen = len(self)
        node = orig(self, *args)
        newlen = len(self)
        if oldlen != newlen:
            for oldargs in pendingfilecommits:
                log, rt, tr, link, p1, p2, n, fl, c, m = oldargs
                linknode = self.node(link)
                if linknode == node:
                    log.addrawrevision(rt, tr, linknode, p1, p2, n, fl, c, m)
                else:
                    raise error.ProgrammingError(
                        'pending multiple integer revisions are not supported')
        else:
            # "link" is actually wrong here (it is set to len(changelog))
            # if changelog remains unchanged, skip writing file revisions
            # but still do a sanity check about pending multiple revisions
            if len(set(x[3] for x in pendingfilecommits)) > 1:
                raise error.ProgrammingError(
                    'pending multiple integer revisions are not supported')
        del pendingfilecommits[:]
        return node
    extensions.wrapfunction(changelog.changelog, 'add', changelogadd)

    # changectx wrappers
    def filectx(orig, self, path, fileid=None, filelog=None):
        if fileid is None:
            fileid = self.filenode(path)
        if (isenabled(self._repo) and self._repo.shallowmatch(path)):
            return remotefilectx.remotefilectx(self._repo, path,
                fileid=fileid, changectx=self, filelog=filelog)
        return orig(self, path, fileid=fileid, filelog=filelog)
    extensions.wrapfunction(context.changectx, 'filectx', filectx)

    def workingfilectx(orig, self, path, filelog=None):
        if (isenabled(self._repo) and self._repo.shallowmatch(path)):
            return remotefilectx.remoteworkingfilectx(self._repo,
                path, workingctx=self, filelog=filelog)
        return orig(self, path, filelog=filelog)
    extensions.wrapfunction(context.workingctx, 'filectx', workingfilectx)

    # prefetch required revisions before a diff
    def trydiff(orig, repo, revs, ctx1, ctx2, modified, added, removed,
                copy, getfilectx, *args, **kwargs):
        if isenabled(repo):
            prefetch = []
            mf1 = ctx1.manifest()
            for fname in modified + added + removed:
                if fname in mf1:
                    fnode = getfilectx(fname, ctx1).filenode()
                    # fnode can be None if it's an edited working ctx file
                    if fnode:
                        prefetch.append((fname, hex(fnode)))
                if fname not in removed:
                    fnode = getfilectx(fname, ctx2).filenode()
                    if fnode:
                        prefetch.append((fname, hex(fnode)))

            repo.fileservice.prefetch(prefetch)

        return orig(repo, revs, ctx1, ctx2, modified, added, removed,
                    copy, getfilectx, *args, **kwargs)
    extensions.wrapfunction(patch, 'trydiff', trydiff)

650 # Prevent verify from processing files
653 # Prevent verify from processing files
651 # a stub for mercurial.hg.verify()
654 # a stub for mercurial.hg.verify()
652 def _verify(orig, repo):
655 def _verify(orig, repo):
653 lock = repo.lock()
656 lock = repo.lock()
654 try:
657 try:
655 return shallowverifier.shallowverifier(repo).verify()
658 return shallowverifier.shallowverifier(repo).verify()
656 finally:
659 finally:
657 lock.release()
660 lock.release()
658
661
659 extensions.wrapfunction(hg, 'verify', _verify)
662 extensions.wrapfunction(hg, 'verify', _verify)
660
663
661 scmutil.fileprefetchhooks.add('remotefilelog', _fileprefetchhook)
664 scmutil.fileprefetchhooks.add('remotefilelog', _fileprefetchhook)
662
665
663 def getrenamedfn(repo, endrev=None):
666 def getrenamedfn(repo, endrev=None):
664 rcache = {}
667 rcache = {}
665
668
666 def getrenamed(fn, rev):
669 def getrenamed(fn, rev):
667 '''looks up all renames for a file (up to endrev) the first
670 '''looks up all renames for a file (up to endrev) the first
668 time the file is given. It indexes on the changerev and only
671 time the file is given. It indexes on the changerev and only
669 parses the manifest if linkrev != changerev.
672 parses the manifest if linkrev != changerev.
670 Returns rename info for fn at changerev rev.'''
673 Returns rename info for fn at changerev rev.'''
671 if rev in rcache.setdefault(fn, {}):
674 if rev in rcache.setdefault(fn, {}):
672 return rcache[fn][rev]
675 return rcache[fn][rev]
673
676
674 try:
677 try:
675 fctx = repo[rev].filectx(fn)
678 fctx = repo[rev].filectx(fn)
676 for ancestor in fctx.ancestors():
679 for ancestor in fctx.ancestors():
677 if ancestor.path() == fn:
680 if ancestor.path() == fn:
678 renamed = ancestor.renamed()
681 renamed = ancestor.renamed()
679 rcache[fn][ancestor.rev()] = renamed
682 rcache[fn][ancestor.rev()] = renamed
680
683
681 return fctx.renamed()
684 return fctx.renamed()
682 except error.LookupError:
685 except error.LookupError:
683 return None
686 return None
684
687
685 return getrenamed
688 return getrenamed
686
689
687 def walkfilerevs(orig, repo, match, follow, revs, fncache):
690 def walkfilerevs(orig, repo, match, follow, revs, fncache):
688 if not isenabled(repo):
691 if not isenabled(repo):
689 return orig(repo, match, follow, revs, fncache)
692 return orig(repo, match, follow, revs, fncache)
690
693
691 # remotefilelog's can't be walked in rev order, so throw.
694 # remotefilelog's can't be walked in rev order, so throw.
692 # The caller will see the exception and walk the commit tree instead.
695 # The caller will see the exception and walk the commit tree instead.
693 if not follow:
696 if not follow:
694 raise cmdutil.FileWalkError("Cannot walk via filelog")
697 raise cmdutil.FileWalkError("Cannot walk via filelog")
695
698
696 wanted = set()
699 wanted = set()
697 minrev, maxrev = min(revs), max(revs)
700 minrev, maxrev = min(revs), max(revs)
698
701
699 pctx = repo['.']
702 pctx = repo['.']
700 for filename in match.files():
703 for filename in match.files():
701 if filename not in pctx:
704 if filename not in pctx:
702 raise error.Abort(_('cannot follow file not in parent '
705 raise error.Abort(_('cannot follow file not in parent '
703 'revision: "%s"') % filename)
706 'revision: "%s"') % filename)
704 fctx = pctx[filename]
707 fctx = pctx[filename]
705
708
706 linkrev = fctx.linkrev()
709 linkrev = fctx.linkrev()
707 if linkrev >= minrev and linkrev <= maxrev:
710 if linkrev >= minrev and linkrev <= maxrev:
708 fncache.setdefault(linkrev, []).append(filename)
711 fncache.setdefault(linkrev, []).append(filename)
709 wanted.add(linkrev)
712 wanted.add(linkrev)
710
713
711 for ancestor in fctx.ancestors():
714 for ancestor in fctx.ancestors():
712 linkrev = ancestor.linkrev()
715 linkrev = ancestor.linkrev()
713 if linkrev >= minrev and linkrev <= maxrev:
716 if linkrev >= minrev and linkrev <= maxrev:
714 fncache.setdefault(linkrev, []).append(ancestor.path())
717 fncache.setdefault(linkrev, []).append(ancestor.path())
715 wanted.add(linkrev)
718 wanted.add(linkrev)
716
719
717 return wanted
720 return wanted
718
721
719 def filelogrevset(orig, repo, subset, x):
722 def filelogrevset(orig, repo, subset, x):
720 """``filelog(pattern)``
723 """``filelog(pattern)``
721 Changesets connected to the specified filelog.
724 Changesets connected to the specified filelog.
722
725
723 For performance reasons, ``filelog()`` does not show every changeset
726 For performance reasons, ``filelog()`` does not show every changeset
724 that affects the requested file(s). See :hg:`help log` for details. For
727 that affects the requested file(s). See :hg:`help log` for details. For
725 a slower, more accurate result, use ``file()``.
728 a slower, more accurate result, use ``file()``.
726 """
729 """
727
730
728 if not isenabled(repo):
731 if not isenabled(repo):
729 return orig(repo, subset, x)
732 return orig(repo, subset, x)
730
733
731 # i18n: "filelog" is a keyword
734 # i18n: "filelog" is a keyword
732 pat = revset.getstring(x, _("filelog requires a pattern"))
735 pat = revset.getstring(x, _("filelog requires a pattern"))
733 m = match.match(repo.root, repo.getcwd(), [pat], default='relpath',
736 m = match.match(repo.root, repo.getcwd(), [pat], default='relpath',
734 ctx=repo[None])
737 ctx=repo[None])
735 s = set()
738 s = set()
736
739
737 if not match.patkind(pat):
740 if not match.patkind(pat):
738 # slow
741 # slow
739 for r in subset:
742 for r in subset:
740 ctx = repo[r]
743 ctx = repo[r]
741 cfiles = ctx.files()
744 cfiles = ctx.files()
742 for f in m.files():
745 for f in m.files():
743 if f in cfiles:
746 if f in cfiles:
744 s.add(ctx.rev())
747 s.add(ctx.rev())
745 break
748 break
746 else:
749 else:
747 # partial
750 # partial
748 files = (f for f in repo[None] if m(f))
751 files = (f for f in repo[None] if m(f))
749 for f in files:
752 for f in files:
750 fctx = repo[None].filectx(f)
753 fctx = repo[None].filectx(f)
751 s.add(fctx.linkrev())
754 s.add(fctx.linkrev())
752 for actx in fctx.ancestors():
755 for actx in fctx.ancestors():
753 s.add(actx.linkrev())
756 s.add(actx.linkrev())
754
757
755 return smartset.baseset([r for r in subset if r in s])
758 return smartset.baseset([r for r in subset if r in s])
756
759
757 @command('gc', [], _('hg gc [REPO...]'), norepo=True)
760 @command('gc', [], _('hg gc [REPO...]'), norepo=True)
758 def gc(ui, *args, **opts):
761 def gc(ui, *args, **opts):
759 '''garbage collect the client and server filelog caches
762 '''garbage collect the client and server filelog caches
760 '''
763 '''
761 cachepaths = set()
764 cachepaths = set()
762
765
763 # get the system client cache
766 # get the system client cache
764 systemcache = shallowutil.getcachepath(ui, allowempty=True)
767 systemcache = shallowutil.getcachepath(ui, allowempty=True)
765 if systemcache:
768 if systemcache:
766 cachepaths.add(systemcache)
769 cachepaths.add(systemcache)
767
770
768 # get repo client and server cache
771 # get repo client and server cache
769 repopaths = []
772 repopaths = []
770 pwd = ui.environ.get('PWD')
773 pwd = ui.environ.get('PWD')
771 if pwd:
774 if pwd:
772 repopaths.append(pwd)
775 repopaths.append(pwd)
773
776
774 repopaths.extend(args)
777 repopaths.extend(args)
775 repos = []
778 repos = []
776 for repopath in repopaths:
779 for repopath in repopaths:
777 try:
780 try:
778 repo = hg.peer(ui, {}, repopath)
781 repo = hg.peer(ui, {}, repopath)
779 repos.append(repo)
782 repos.append(repo)
780
783
781 repocache = shallowutil.getcachepath(repo.ui, allowempty=True)
784 repocache = shallowutil.getcachepath(repo.ui, allowempty=True)
782 if repocache:
785 if repocache:
783 cachepaths.add(repocache)
786 cachepaths.add(repocache)
784 except error.RepoError:
787 except error.RepoError:
785 pass
788 pass
786
789
787 # gc client cache
790 # gc client cache
788 for cachepath in cachepaths:
791 for cachepath in cachepaths:
789 gcclient(ui, cachepath)
792 gcclient(ui, cachepath)
790
793
791 # gc server cache
794 # gc server cache
792 for repo in repos:
795 for repo in repos:
793 remotefilelogserver.gcserver(ui, repo._repo)
796 remotefilelogserver.gcserver(ui, repo._repo)
794
797
def gcclient(ui, cachepath):
    # get list of repos that use this cache
    repospath = os.path.join(cachepath, 'repos')
    if not os.path.exists(repospath):
        ui.warn(_("no known cache at %s\n") % cachepath)
        return

    reposfile = open(repospath, 'r')
    repos = set([r[:-1] for r in reposfile.readlines()])
    reposfile.close()

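    # Illustration (assumed contents): the 'repos' file holds one repository
    # path per line, so the set built above might look like
    #
    #     {'/home/alice/src/repo-a', '/home/alice/src/repo-b'}
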
    # build list of useful files
    validrepos = []
    keepkeys = set()

    _analyzing = _("analyzing repositories")

    sharedcache = None
    filesrepacked = False

    count = 0
    for path in repos:
        ui.progress(_analyzing, count, unit="repos", total=len(repos))
        count += 1
        try:
            path = ui.expandpath(os.path.normpath(path))
        except TypeError as e:
            ui.warn(_("warning: malformed path: %r:%s\n") % (path, e))
            traceback.print_exc()
            continue
        try:
            peer = hg.peer(ui, {}, path)
            repo = peer._repo
        except error.RepoError:
            continue

        validrepos.append(path)

        # Protect against any repo or config changes that have happened since
        # this repo was added to the repos file. We'd rather this loop succeed
        # and too much be deleted, than the loop fail and nothing gets deleted.
        if not isenabled(repo):
            continue

        if not util.safehasattr(repo, 'name'):
            ui.warn(_("repo %s is a misconfigured remotefilelog repo\n") % path)
            continue

        # If garbage collection on repack and repack on hg gc are enabled
        # then loose files are repacked and garbage collected.
        # Otherwise regular garbage collection is performed.
        repackonhggc = repo.ui.configbool('remotefilelog', 'repackonhggc')
        gcrepack = repo.ui.configbool('remotefilelog', 'gcrepack')
        if repackonhggc and gcrepack:
            try:
                repackmod.incrementalrepack(repo)
                filesrepacked = True
                continue
            except (IOError, repackmod.RepackAlreadyRunning):
                # If repack cannot be performed due to not enough disk space
                # continue doing garbage collection of loose files w/o repack
                pass

        reponame = repo.name
        if not sharedcache:
            sharedcache = repo.sharedstore

        # Compute a keepset which is not garbage collected
        def keyfn(fname, fnode):
            return fileserverclient.getcachekey(reponame, fname, hex(fnode))
        keepkeys = repackmod.keepset(repo, keyfn=keyfn, lastkeepkeys=keepkeys)

    ui.progress(_analyzing, None)

    # write list of valid repos back
    oldumask = os.umask(0o002)
    try:
        reposfile = open(repospath, 'w')
        reposfile.writelines([("%s\n" % r) for r in validrepos])
        reposfile.close()
    finally:
        os.umask(oldumask)

    # prune cache
    if sharedcache is not None:
        sharedcache.gc(keepkeys)
    elif not filesrepacked:
        ui.warn(_("warning: no valid repos in repofile\n"))

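# Config sketch (example values): with both settings below enabled, 'hg gc'
# repacks loose files via incrementalrepack() instead of only pruning them.
#
#     [remotefilelog]
#     repackonhggc = True
#     gcrepack = True
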
def log(orig, ui, repo, *pats, **opts):
    if not isenabled(repo):
        return orig(ui, repo, *pats, **opts)

    follow = opts.get('follow')
    revs = opts.get('rev')
    if pats:
        # Force slowpath for non-follow patterns and follows that start from
        # non-working-copy-parent revs.
        if not follow or revs:
            # This forces the slowpath
            opts['removed'] = True

        # If this is a non-follow log without any revs specified, recommend
        # that the user add -f to speed it up.
        if not follow and not revs:
            match, pats = scmutil.matchandpats(repo['.'], pats, opts)
            isfile = not match.anypats()
            if isfile:
                for file in match.files():
                    if not os.path.isfile(repo.wjoin(file)):
                        isfile = False
                        break

            if isfile:
                ui.warn(_("warning: file log can be slow on large repos - " +
                          "use -f to speed it up\n"))

    return orig(ui, repo, *pats, **opts)

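# Usage sketch (hypothetical file name), matching the warning above:
#
#     hg log foo.c       # no -f: may take the slow path and emit the warning
#     hg log -f foo.c    # follows the filelog; typically much faster
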
def revdatelimit(ui, revset):
    """Update revset so that only changesets no older than 'prefetchdays' days
    are included. The default value is set to 14 days. If 'prefetchdays' is
    set to zero or a negative value, the date restriction is not applied.
    """
    days = ui.configint('remotefilelog', 'prefetchdays')
    if days > 0:
        revset = '(%s) & date(-%s)' % (revset, days)
    return revset

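# Worked example, derived from the code above with the 14-day default:
#
#     revdatelimit(ui, 'draft()')  ->  '(draft()) & date(-14)'
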
def readytofetch(repo):
    """Check that enough time has passed since the last background prefetch.
    This only relates to prefetches after operations that change the working
    copy parent. Default delay between background prefetches is 2 minutes.
    """
    timeout = repo.ui.configint('remotefilelog', 'prefetchdelay')
    fname = repo.vfs.join('lastprefetch')

    ready = False
    with open(fname, 'a'):
        # the with construct above is used to avoid race conditions
        modtime = os.path.getmtime(fname)
        if (time.time() - modtime) > timeout:
            os.utime(fname, None)
            ready = True

    return ready

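# Config sketch (assumes the delay is read in seconds, as suggested by the
# time.time() comparison above; the value shown matches the 2-minute default):
#
#     [remotefilelog]
#     prefetchdelay = 120
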
def wcpprefetch(ui, repo, **kwargs):
    """Prefetches in background revisions specified by bgprefetchrevs revset.
    Does background repack if backgroundrepack flag is set in config.
    """
    shallow = isenabled(repo)
    bgprefetchrevs = ui.config('remotefilelog', 'bgprefetchrevs')
    isready = readytofetch(repo)

    if not (shallow and bgprefetchrevs and isready):
        return

    bgrepack = repo.ui.configbool('remotefilelog', 'backgroundrepack')
    # update a revset with a date limit
    bgprefetchrevs = revdatelimit(ui, bgprefetchrevs)

    def anon():
        if util.safehasattr(repo, 'ranprefetch') and repo.ranprefetch:
            return
        repo.ranprefetch = True
        repo.backgroundprefetch(bgprefetchrevs, repack=bgrepack)

    repo._afterlock(anon)

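# Config sketch (the revset is an example, not a recommended default):
# prefetch draft ancestors of the working copy after each working-copy-parent
# change, and repack in the background afterwards.
#
#     [remotefilelog]
#     bgprefetchrevs = draft() & ::.
#     backgroundrepack = True
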
def pull(orig, ui, repo, *pats, **opts):
    result = orig(ui, repo, *pats, **opts)

    if isenabled(repo):
        # prefetch if it's configured
        prefetchrevset = ui.config('remotefilelog', 'pullprefetch')
        bgrepack = repo.ui.configbool('remotefilelog', 'backgroundrepack')
        bgprefetch = repo.ui.configbool('remotefilelog', 'backgroundprefetch')

        if prefetchrevset:
            ui.status(_("prefetching file contents\n"))
            revs = scmutil.revrange(repo, [prefetchrevset])
            base = repo['.'].rev()
            if bgprefetch:
                repo.backgroundprefetch(prefetchrevset, repack=bgrepack)
            else:
                repo.prefetch(revs, base=base)
                if bgrepack:
                    repackmod.backgroundrepack(repo, incremental=True)
        elif bgrepack:
            repackmod.backgroundrepack(repo, incremental=True)

    return result

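# Config sketch (the revset is an example): fetch file contents for recent
# heads immediately after every 'hg pull'.
#
#     [remotefilelog]
#     pullprefetch = head() & date(-7)
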
def exchangepull(orig, repo, remote, *args, **kwargs):
    # Hook into the callstream/getbundle to insert bundle capabilities
    # during a pull.
    def localgetbundle(orig, source, heads=None, common=None, bundlecaps=None,
                       **kwargs):
        if not bundlecaps:
            bundlecaps = set()
        bundlecaps.add(constants.BUNDLE2_CAPABLITY)
        return orig(source, heads=heads, common=common, bundlecaps=bundlecaps,
                    **kwargs)

    if util.safehasattr(remote, '_callstream'):
        remote._localrepo = repo
    elif util.safehasattr(remote, 'getbundle'):
        extensions.wrapfunction(remote, 'getbundle', localgetbundle)

    return orig(repo, remote, *args, **kwargs)

def _fileprefetchhook(repo, revs, match):
    if isenabled(repo):
        allfiles = []
        for rev in revs:
            if rev == nodemod.wdirrev or rev is None:
                continue
            ctx = repo[rev]
            mf = ctx.manifest()
            sparsematch = repo.maybesparsematch(ctx.rev())
            for path in ctx.walk(match):
                if path.endswith('/'):
                    # Tree manifest that's being excluded as part of narrow
                    continue
                if (not sparsematch or sparsematch(path)) and path in mf:
                    allfiles.append((path, hex(mf[path])))
        repo.fileservice.prefetch(allfiles)

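# Illustration (hypothetical values): the prefetch request built above is a
# list of (path, hex filenode) pairs, e.g.
#
#     [('foo/bar.c', 'a5f6c1...'), ('foo/baz.h', '09e2d4...')]
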
@command('debugremotefilelog', [
    ('d', 'decompress', None, _('decompress the filelog first')),
    ], _('hg debugremotefilelog <path>'), norepo=True)
def debugremotefilelog(ui, path, **opts):
    return debugcommands.debugremotefilelog(ui, path, **opts)

@command('verifyremotefilelog', [
    ('d', 'decompress', None, _('decompress the filelogs first')),
    ], _('hg verifyremotefilelogs <directory>'), norepo=True)
def verifyremotefilelog(ui, path, **opts):
    return debugcommands.verifyremotefilelog(ui, path, **opts)

@command('debugdatapack', [
    ('', 'long', None, _('print the long hashes')),
    ('', 'node', '', _('dump the contents of node'), 'NODE'),
    ], _('hg debugdatapack <paths>'), norepo=True)
def debugdatapack(ui, *paths, **opts):
    return debugcommands.debugdatapack(ui, *paths, **opts)

@command('debughistorypack', [
    ], _('hg debughistorypack <path>'), norepo=True)
def debughistorypack(ui, path, **opts):
    return debugcommands.debughistorypack(ui, path)

@command('debugkeepset', [
    ], _('hg debugkeepset'))
def debugkeepset(ui, repo, **opts):
    # The command is used to measure keepset computation time
    def keyfn(fname, fnode):
        return fileserverclient.getcachekey(repo.name, fname, hex(fnode))
    repackmod.keepset(repo, keyfn)
    return

@command('debugwaitonrepack', [
    ], _('hg debugwaitonrepack'))
def debugwaitonrepack(ui, repo, **opts):
    return debugcommands.debugwaitonrepack(repo)

@command('debugwaitonprefetch', [
    ], _('hg debugwaitonprefetch'))
def debugwaitonprefetch(ui, repo, **opts):
    return debugcommands.debugwaitonprefetch(repo)

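# Usage sketch (pack file paths are hypothetical), following the synopses
# above:
#
#     hg debugdatapack --long $CACHEPATH/master/packs/4ea7e3.datapack
#     hg debughistorypack $CACHEPATH/master/packs/4ea7e3.histpack
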
def resolveprefetchopts(ui, opts):
    if not opts.get('rev'):
        revset = ['.', 'draft()']

        prefetchrevset = ui.config('remotefilelog', 'pullprefetch', None)
        if prefetchrevset:
            revset.append('(%s)' % prefetchrevset)
        bgprefetchrevs = ui.config('remotefilelog', 'bgprefetchrevs', None)
        if bgprefetchrevs:
            revset.append('(%s)' % bgprefetchrevs)
        revset = '+'.join(revset)

        # update a revset with a date limit
        revset = revdatelimit(ui, revset)

        opts['rev'] = [revset]

    if not opts.get('base'):
        opts['base'] = None

    return opts

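# Worked example (assuming pullprefetch = head(), no bgprefetchrevs, and the
# 14-day prefetchdays default): the revset stored in opts['rev'] becomes
#
#     '(.+draft()+(head())) & date(-14)'
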
@command('prefetch', [
    ('r', 'rev', [], _('prefetch the specified revisions'), _('REV')),
    ('', 'repack', False, _('run repack after prefetch')),
    ('b', 'base', '', _("rev that is assumed to already be local")),
    ] + commands.walkopts, _('hg prefetch [OPTIONS] [FILE...]'))
def prefetch(ui, repo, *pats, **opts):
    """prefetch file revisions from the server

    Prefetches file revisions for the specified revs and stores them in the
    local remotefilelog cache. If no rev is specified, the default rev is
    used, which is the union of dot, draft, pullprefetch and bgprefetchrevs.
    File names or patterns can be used to limit which files are downloaded.

    Return 0 on success.
    """
    if not isenabled(repo):
        raise error.Abort(_("repo is not shallow"))

    opts = resolveprefetchopts(ui, opts)
    revs = scmutil.revrange(repo, opts.get('rev'))
    repo.prefetch(revs, opts.get('base'), pats, opts)

    # Run repack in background
    if opts.get('repack'):
        repackmod.backgroundrepack(repo, incremental=True)

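# Usage sketch (revset and pattern are examples):
#
#     hg prefetch -r 'draft()' --repack 'glob:src/**.c'
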
@command('repack', [
    ('', 'background', None, _('run in a background process'), None),
    ('', 'incremental', None, _('do an incremental repack'), None),
    ('', 'packsonly', None, _('only repack packs (skip loose objects)'), None),
    ], _('hg repack [OPTIONS]'))
def repack_(ui, repo, *pats, **opts):
    if opts.get('background'):
        repackmod.backgroundrepack(repo, incremental=opts.get('incremental'),
                                   packsonly=opts.get('packsonly', False))
        return

    options = {'packsonly': opts.get('packsonly')}

    try:
        if opts.get('incremental'):
            repackmod.incrementalrepack(repo, options=options)
        else:
            repackmod.fullrepack(repo, options=options)
    except repackmod.RepackAlreadyRunning as ex:
        # Don't propagate the exception if the repack is already in
        # progress, since we want the command to exit 0.
        repo.ui.warn('%s\n' % ex)
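
# Usage sketch, combining the flags defined above:
#
#     hg repack --incremental --background
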
@@ -1,24 +1,25 @@
  $ . "$TESTDIR/remotefilelog-library.sh"

  $ cat >> $HGRCPATH <<EOF
  > [extensions]
  > remotefilelog=
  > share=
  > EOF

  $ hg init master
  $ cd master
  $ cat >> .hg/hgrc <<EOF
  > [remotefilelog]
  > server=True
  > EOF
  $ echo x > x
  $ hg commit -qAm x

  $ cd ..


  $ hgcloneshallow ssh://user@dummy/master source --noupdate -q
  $ hg share source dest
  updating working directory
  1 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg -R dest unshare