remotefilelog: fix {file_copies} template keyword...
Martin von Zweigbergk
r41228:2338eab5 default
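The 1142-line file below is almost entirely context; the functional change sits in getrenamedfn() near the bottom of the diff. fctx.renamed() returns either False or a (source path, source filenode) pair, and as the diff shows, the old override cached and returned that whole pair, while the {file_copies} template machinery wants only the source path. The following is a minimal, hypothetical sketch of that contract; FakeFilectx is an illustration stand-in, not Mercurial API:

    class FakeFilectx(object):
        """Stand-in for a Mercurial file context (illustration only)."""
        def __init__(self, renamed):
            self._renamed = renamed

        def renamed(self):
            # Mercurial returns False for a non-copy, otherwise a
            # (source_path, source_filenode) pair.
            return self._renamed

    def getrenamed_before(fctx):
        return fctx.renamed()              # bug: leaks the whole pair

    def getrenamed_after(fctx):
        renamed = fctx.renamed()
        return renamed and renamed[0]      # fix: unwrap to the source path

    copied = FakeFilectx(('a.txt', b'\x00' * 20))
    assert getrenamed_before(copied) == ('a.txt', b'\x00' * 20)
    assert getrenamed_after(copied) == 'a.txt'   # what {file_copies} expects
    assert getrenamed_after(FakeFilectx(False)) is False

With the fix applied, something like `hg log --template '{file_copies}\n'` in a remotefilelog repository should again render copy sources as paths rather than raw tuples.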
@@ -1,1142 +1,1143 @@
 # __init__.py - remotefilelog extension
 #
 # Copyright 2013 Facebook, Inc.
 #
 # This software may be used and distributed according to the terms of the
 # GNU General Public License version 2 or any later version.
 """remotefilelog causes Mercurial to lazilly fetch file contents (EXPERIMENTAL)

 This extension is HIGHLY EXPERIMENTAL. There are NO BACKWARDS COMPATIBILITY
 GUARANTEES. This means that repositories created with this extension may
 only be usable with the exact version of this extension/Mercurial that was
 used. The extension attempts to enforce this in order to prevent repository
 corruption.

 remotefilelog works by fetching file contents lazily and storing them
 in a cache on the client rather than in revlogs. This allows enormous
 histories to be transferred only partially, making them easier to
 operate on.

 Configs:

 ``packs.maxchainlen`` specifies the maximum delta chain length in pack files

 ``packs.maxpacksize`` specifies the maximum pack file size

 ``packs.maxpackfilecount`` specifies the maximum number of packs in the
 shared cache (trees only for now)

 ``remotefilelog.backgroundprefetch`` runs prefetch in background when True

 ``remotefilelog.bgprefetchrevs`` specifies revisions to fetch on commit and
 update, and on other commands that use them. Different from pullprefetch.

 ``remotefilelog.gcrepack`` does garbage collection during repack when True

 ``remotefilelog.nodettl`` specifies maximum TTL of a node in seconds before
 it is garbage collected

 ``remotefilelog.repackonhggc`` runs repack on hg gc when True

 ``remotefilelog.prefetchdays`` specifies the maximum age of a commit in
 days after which it is no longer prefetched.

 ``remotefilelog.prefetchdelay`` specifies delay between background
 prefetches in seconds after operations that change the working copy parent

 ``remotefilelog.data.gencountlimit`` constraints the minimum number of data
 pack files required to be considered part of a generation. In particular,
 minimum number of packs files > gencountlimit.

 ``remotefilelog.data.generations`` list for specifying the lower bound of
 each generation of the data pack files. For example, list ['100MB','1MB']
 or ['1MB', '100MB'] will lead to three generations: [0, 1MB), [
 1MB, 100MB) and [100MB, infinity).

 ``remotefilelog.data.maxrepackpacks`` the maximum number of pack files to
 include in an incremental data repack.

 ``remotefilelog.data.repackmaxpacksize`` the maximum size of a pack file for
 it to be considered for an incremental data repack.

 ``remotefilelog.data.repacksizelimit`` the maximum total size of pack files
 to include in an incremental data repack.

 ``remotefilelog.history.gencountlimit`` constraints the minimum number of
 history pack files required to be considered part of a generation. In
 particular, minimum number of packs files > gencountlimit.

 ``remotefilelog.history.generations`` list for specifying the lower bound of
 each generation of the history pack files. For example, list [
 '100MB', '1MB'] or ['1MB', '100MB'] will lead to three generations: [
 0, 1MB), [1MB, 100MB) and [100MB, infinity).

 ``remotefilelog.history.maxrepackpacks`` the maximum number of pack files to
 include in an incremental history repack.

 ``remotefilelog.history.repackmaxpacksize`` the maximum size of a pack file
 for it to be considered for an incremental history repack.

 ``remotefilelog.history.repacksizelimit`` the maximum total size of pack
 files to include in an incremental history repack.

 ``remotefilelog.backgroundrepack`` automatically consolidate packs in the
 background

 ``remotefilelog.cachepath`` path to cache

 ``remotefilelog.cachegroup`` if set, make cache directory sgid to this
 group

 ``remotefilelog.cacheprocess`` binary to invoke for fetching file data

 ``remotefilelog.debug`` turn on remotefilelog-specific debug output

 ``remotefilelog.excludepattern`` pattern of files to exclude from pulls

 ``remotefilelog.includepattern`` pattern of files to include in pulls

 ``remotefilelog.fetchwarning``: message to print when too many
 single-file fetches occur

 ``remotefilelog.getfilesstep`` number of files to request in a single RPC

 ``remotefilelog.getfilestype`` if set to 'threaded' use threads to fetch
 files, otherwise use optimistic fetching

 ``remotefilelog.pullprefetch`` revset for selecting files that should be
 eagerly downloaded rather than lazily

 ``remotefilelog.reponame`` name of the repo. If set, used to partition
 data from other repos in a shared store.

 ``remotefilelog.server`` if true, enable server-side functionality

 ``remotefilelog.servercachepath`` path for caching blobs on the server

 ``remotefilelog.serverexpiration`` number of days to keep cached server
 blobs

 ``remotefilelog.validatecache`` if set, check cache entries for corruption
 before returning blobs

 ``remotefilelog.validatecachelog`` if set, check cache entries for
 corruption before returning metadata

 """
 from __future__ import absolute_import

 import os
 import time
 import traceback

 from mercurial.node import hex
 from mercurial.i18n import _
 from mercurial import (
     changegroup,
     changelog,
     cmdutil,
     commands,
     configitems,
     context,
     copies,
     debugcommands as hgdebugcommands,
     dispatch,
     error,
     exchange,
     extensions,
     hg,
     localrepo,
     match,
     merge,
     node as nodemod,
     patch,
     pycompat,
     registrar,
     repair,
     repoview,
     revset,
     scmutil,
     smartset,
     streamclone,
     templatekw,
     util,
 )
 from . import (
     constants,
     debugcommands,
     fileserverclient,
     remotefilectx,
     remotefilelog,
     remotefilelogserver,
     repack as repackmod,
     shallowbundle,
     shallowrepo,
     shallowstore,
     shallowutil,
     shallowverifier,
 )

 # ensures debug commands are registered
 hgdebugcommands.command

 cmdtable = {}
 command = registrar.command(cmdtable)

 configtable = {}
 configitem = registrar.configitem(configtable)

 configitem('remotefilelog', 'debug', default=False)

 configitem('remotefilelog', 'reponame', default='')
 configitem('remotefilelog', 'cachepath', default=None)
 configitem('remotefilelog', 'cachegroup', default=None)
 configitem('remotefilelog', 'cacheprocess', default=None)
 configitem('remotefilelog', 'cacheprocess.includepath', default=None)
 configitem("remotefilelog", "cachelimit", default="1000 GB")

 configitem('remotefilelog', 'fallbackpath', default=configitems.dynamicdefault,
            alias=[('remotefilelog', 'fallbackrepo')])

 configitem('remotefilelog', 'validatecachelog', default=None)
 configitem('remotefilelog', 'validatecache', default='on')
 configitem('remotefilelog', 'server', default=None)
 configitem('remotefilelog', 'servercachepath', default=None)
 configitem("remotefilelog", "serverexpiration", default=30)
 configitem('remotefilelog', 'backgroundrepack', default=False)
 configitem('remotefilelog', 'bgprefetchrevs', default=None)
 configitem('remotefilelog', 'pullprefetch', default=None)
 configitem('remotefilelog', 'backgroundprefetch', default=False)
 configitem('remotefilelog', 'prefetchdelay', default=120)
 configitem('remotefilelog', 'prefetchdays', default=14)

 configitem('remotefilelog', 'getfilesstep', default=10000)
 configitem('remotefilelog', 'getfilestype', default='optimistic')
 configitem('remotefilelog', 'batchsize', configitems.dynamicdefault)
 configitem('remotefilelog', 'fetchwarning', default='')

 configitem('remotefilelog', 'includepattern', default=None)
 configitem('remotefilelog', 'excludepattern', default=None)

 configitem('remotefilelog', 'gcrepack', default=False)
 configitem('remotefilelog', 'repackonhggc', default=False)
 configitem('repack', 'chainorphansbysize', default=True)

 configitem('packs', 'maxpacksize', default=0)
 configitem('packs', 'maxchainlen', default=1000)

 # default TTL limit is 30 days
 _defaultlimit = 60 * 60 * 24 * 30
 configitem('remotefilelog', 'nodettl', default=_defaultlimit)

 configitem('remotefilelog', 'data.gencountlimit', default=2),
 configitem('remotefilelog', 'data.generations',
            default=['1GB', '100MB', '1MB'])
 configitem('remotefilelog', 'data.maxrepackpacks', default=50)
 configitem('remotefilelog', 'data.repackmaxpacksize', default='4GB')
 configitem('remotefilelog', 'data.repacksizelimit', default='100MB')

 configitem('remotefilelog', 'history.gencountlimit', default=2),
 configitem('remotefilelog', 'history.generations', default=['100MB'])
 configitem('remotefilelog', 'history.maxrepackpacks', default=50)
 configitem('remotefilelog', 'history.repackmaxpacksize', default='400MB')
 configitem('remotefilelog', 'history.repacksizelimit', default='100MB')

 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
 testedwith = 'ships-with-hg-core'

 repoclass = localrepo.localrepository
 repoclass._basesupported.add(constants.SHALLOWREPO_REQUIREMENT)

 isenabled = shallowutil.isenabled

 def uisetup(ui):
     """Wraps user facing Mercurial commands to swap them out with shallow
     versions.
     """
     hg.wirepeersetupfuncs.append(fileserverclient.peersetup)

     entry = extensions.wrapcommand(commands.table, 'clone', cloneshallow)
     entry[1].append(('', 'shallow', None,
                      _("create a shallow clone which uses remote file "
                        "history")))

     extensions.wrapcommand(commands.table, 'debugindex',
                            debugcommands.debugindex)
     extensions.wrapcommand(commands.table, 'debugindexdot',
                            debugcommands.debugindexdot)
     extensions.wrapcommand(commands.table, 'log', log)
     extensions.wrapcommand(commands.table, 'pull', pull)

     # Prevent 'hg manifest --all'
     def _manifest(orig, ui, repo, *args, **opts):
         if (isenabled(repo) and opts.get(r'all')):
             raise error.Abort(_("--all is not supported in a shallow repo"))

         return orig(ui, repo, *args, **opts)
     extensions.wrapcommand(commands.table, "manifest", _manifest)

     # Wrap remotefilelog with lfs code
     def _lfsloaded(loaded=False):
         lfsmod = None
         try:
             lfsmod = extensions.find('lfs')
         except KeyError:
             pass
         if lfsmod:
             lfsmod.wrapfilelog(remotefilelog.remotefilelog)
             fileserverclient._lfsmod = lfsmod
     extensions.afterloaded('lfs', _lfsloaded)

     # debugdata needs remotefilelog.len to work
     extensions.wrapcommand(commands.table, 'debugdata', debugdatashallow)

 def cloneshallow(orig, ui, repo, *args, **opts):
     if opts.get(r'shallow'):
         repos = []
         def pull_shallow(orig, self, *args, **kwargs):
             if not isenabled(self):
                 repos.append(self.unfiltered())
                 # set up the client hooks so the post-clone update works
                 setupclient(self.ui, self.unfiltered())

                 # setupclient fixed the class on the repo itself
                 # but we also need to fix it on the repoview
                 if isinstance(self, repoview.repoview):
                     self.__class__.__bases__ = (self.__class__.__bases__[0],
                                                 self.unfiltered().__class__)
                 self.requirements.add(constants.SHALLOWREPO_REQUIREMENT)
                 self._writerequirements()

                 # Since setupclient hadn't been called, exchange.pull was not
                 # wrapped. So we need to manually invoke our version of it.
                 return exchangepull(orig, self, *args, **kwargs)
             else:
                 return orig(self, *args, **kwargs)
         extensions.wrapfunction(exchange, 'pull', pull_shallow)

         # Wrap the stream logic to add requirements and to pass include/exclude
         # patterns around.
         def setup_streamout(repo, remote):
             # Replace remote.stream_out with a version that sends file
             # patterns.
             def stream_out_shallow(orig):
                 caps = remote.capabilities()
                 if constants.NETWORK_CAP_LEGACY_SSH_GETFILES in caps:
                     opts = {}
                     if repo.includepattern:
                         opts[r'includepattern'] = '\0'.join(repo.includepattern)
                     if repo.excludepattern:
                         opts[r'excludepattern'] = '\0'.join(repo.excludepattern)
                     return remote._callstream('stream_out_shallow', **opts)
                 else:
                     return orig()
             extensions.wrapfunction(remote, 'stream_out', stream_out_shallow)
         def stream_wrap(orig, op):
             setup_streamout(op.repo, op.remote)
             return orig(op)
         extensions.wrapfunction(
             streamclone, 'maybeperformlegacystreamclone', stream_wrap)

         def canperformstreamclone(orig, pullop, bundle2=False):
             # remotefilelog is currently incompatible with the
             # bundle2 flavor of streamclones, so force us to use
             # v1 instead.
             if 'v2' in pullop.remotebundle2caps.get('stream', []):
                 pullop.remotebundle2caps['stream'] = [
                     c for c in pullop.remotebundle2caps['stream']
                     if c != 'v2']
             if bundle2:
                 return False, None
             supported, requirements = orig(pullop, bundle2=bundle2)
             if requirements is not None:
                 requirements.add(constants.SHALLOWREPO_REQUIREMENT)
             return supported, requirements
         extensions.wrapfunction(
             streamclone, 'canperformstreamclone', canperformstreamclone)

     try:
         orig(ui, repo, *args, **opts)
     finally:
         if opts.get(r'shallow'):
             for r in repos:
                 if util.safehasattr(r, 'fileservice'):
                     r.fileservice.close()

 def debugdatashallow(orig, *args, **kwds):
     oldlen = remotefilelog.remotefilelog.__len__
     try:
         remotefilelog.remotefilelog.__len__ = lambda x: 1
         return orig(*args, **kwds)
     finally:
         remotefilelog.remotefilelog.__len__ = oldlen

 def reposetup(ui, repo):
     if not repo.local():
         return

     # put here intentionally bc doesnt work in uisetup
     ui.setconfig('hooks', 'update.prefetch', wcpprefetch)
     ui.setconfig('hooks', 'commit.prefetch', wcpprefetch)

     isserverenabled = ui.configbool('remotefilelog', 'server')
     isshallowclient = isenabled(repo)

     if isserverenabled and isshallowclient:
         raise RuntimeError("Cannot be both a server and shallow client.")

     if isshallowclient:
         setupclient(ui, repo)

     if isserverenabled:
         remotefilelogserver.setupserver(ui, repo)

 def setupclient(ui, repo):
     if not isinstance(repo, localrepo.localrepository):
         return

     # Even clients get the server setup since they need to have the
     # wireprotocol endpoints registered.
     remotefilelogserver.onetimesetup(ui)
     onetimeclientsetup(ui)

     shallowrepo.wraprepo(repo)
     repo.store = shallowstore.wrapstore(repo.store)

 clientonetime = False
 def onetimeclientsetup(ui):
     global clientonetime
     if clientonetime:
         return
     clientonetime = True

     changegroup.cgpacker = shallowbundle.shallowcg1packer

     extensions.wrapfunction(changegroup, '_addchangegroupfiles',
                             shallowbundle.addchangegroupfiles)
     extensions.wrapfunction(
         changegroup, 'makechangegroup', shallowbundle.makechangegroup)

     def storewrapper(orig, requirements, path, vfstype):
         s = orig(requirements, path, vfstype)
         if constants.SHALLOWREPO_REQUIREMENT in requirements:
             s = shallowstore.wrapstore(s)

         return s
     extensions.wrapfunction(localrepo, 'makestore', storewrapper)

     extensions.wrapfunction(exchange, 'pull', exchangepull)

     # prefetch files before update
     def applyupdates(orig, repo, actions, wctx, mctx, overwrite, labels=None):
         if isenabled(repo):
             manifest = mctx.manifest()
             files = []
             for f, args, msg in actions['g']:
                 files.append((f, hex(manifest[f])))
             # batch fetch the needed files from the server
             repo.fileservice.prefetch(files)
         return orig(repo, actions, wctx, mctx, overwrite, labels=labels)
     extensions.wrapfunction(merge, 'applyupdates', applyupdates)

     # Prefetch merge checkunknownfiles
     def checkunknownfiles(orig, repo, wctx, mctx, force, actions,
                           *args, **kwargs):
         if isenabled(repo):
             files = []
             sparsematch = repo.maybesparsematch(mctx.rev())
             for f, (m, actionargs, msg) in actions.iteritems():
                 if sparsematch and not sparsematch(f):
                     continue
                 if m in ('c', 'dc', 'cm'):
                     files.append((f, hex(mctx.filenode(f))))
                 elif m == 'dg':
                     f2 = actionargs[0]
                     files.append((f2, hex(mctx.filenode(f2))))
             # batch fetch the needed files from the server
             repo.fileservice.prefetch(files)
         return orig(repo, wctx, mctx, force, actions, *args, **kwargs)
     extensions.wrapfunction(merge, '_checkunknownfiles', checkunknownfiles)

     # Prefetch files before status attempts to look at their size and contents
     def checklookup(orig, self, files):
         repo = self._repo
         if isenabled(repo):
             prefetchfiles = []
             for parent in self._parents:
                 for f in files:
                     if f in parent:
                         prefetchfiles.append((f, hex(parent.filenode(f))))
             # batch fetch the needed files from the server
             repo.fileservice.prefetch(prefetchfiles)
         return orig(self, files)
     extensions.wrapfunction(context.workingctx, '_checklookup', checklookup)

     # Prefetch the logic that compares added and removed files for renames
     def findrenames(orig, repo, matcher, added, removed, *args, **kwargs):
         if isenabled(repo):
             files = []
             parentctx = repo['.']
             for f in removed:
                 files.append((f, hex(parentctx.filenode(f))))
             # batch fetch the needed files from the server
             repo.fileservice.prefetch(files)
         return orig(repo, matcher, added, removed, *args, **kwargs)
     extensions.wrapfunction(scmutil, '_findrenames', findrenames)

     # prefetch files before mergecopies check
     def computenonoverlap(orig, repo, c1, c2, *args, **kwargs):
         u1, u2 = orig(repo, c1, c2, *args, **kwargs)
         if isenabled(repo):
             m1 = c1.manifest()
             m2 = c2.manifest()
             files = []

             sparsematch1 = repo.maybesparsematch(c1.rev())
             if sparsematch1:
                 sparseu1 = []
                 for f in u1:
                     if sparsematch1(f):
                         files.append((f, hex(m1[f])))
                         sparseu1.append(f)
                 u1 = sparseu1

             sparsematch2 = repo.maybesparsematch(c2.rev())
             if sparsematch2:
                 sparseu2 = []
                 for f in u2:
                     if sparsematch2(f):
                         files.append((f, hex(m2[f])))
                         sparseu2.append(f)
                 u2 = sparseu2

             # batch fetch the needed files from the server
             repo.fileservice.prefetch(files)
         return u1, u2
     extensions.wrapfunction(copies, '_computenonoverlap', computenonoverlap)

     # prefetch files before pathcopies check
     def computeforwardmissing(orig, a, b, match=None):
         missing = list(orig(a, b, match=match))
         repo = a._repo
         if isenabled(repo):
             mb = b.manifest()

             files = []
             sparsematch = repo.maybesparsematch(b.rev())
             if sparsematch:
                 sparsemissing = []
                 for f in missing:
                     if sparsematch(f):
                         files.append((f, hex(mb[f])))
                         sparsemissing.append(f)
                 missing = sparsemissing

             # batch fetch the needed files from the server
             repo.fileservice.prefetch(files)
         return missing
     extensions.wrapfunction(copies, '_computeforwardmissing',
                             computeforwardmissing)

     # close cache miss server connection after the command has finished
     def runcommand(orig, lui, repo, *args, **kwargs):
         fileservice = None
         # repo can be None when running in chg:
         # - at startup, reposetup was called because serve is not norepo
         # - a norepo command like "help" is called
         if repo and isenabled(repo):
             fileservice = repo.fileservice
         try:
             return orig(lui, repo, *args, **kwargs)
         finally:
             if fileservice:
                 fileservice.close()
     extensions.wrapfunction(dispatch, 'runcommand', runcommand)

     # disappointing hacks below
     templatekw.getrenamedfn = getrenamedfn
     extensions.wrapfunction(revset, 'filelog', filelogrevset)
     revset.symbols['filelog'] = revset.filelog
     extensions.wrapfunction(cmdutil, 'walkfilerevs', walkfilerevs)

     # prevent strip from stripping remotefilelogs
     def _collectbrokencsets(orig, repo, files, striprev):
         if isenabled(repo):
             files = list([f for f in files if not repo.shallowmatch(f)])
         return orig(repo, files, striprev)
     extensions.wrapfunction(repair, '_collectbrokencsets', _collectbrokencsets)

     # Don't commit filelogs until we know the commit hash, since the hash
     # is present in the filelog blob.
     # This violates Mercurial's filelog->manifest->changelog write order,
     # but is generally fine for client repos.
     pendingfilecommits = []
     def addrawrevision(orig, self, rawtext, transaction, link, p1, p2, node,
                        flags, cachedelta=None, _metatuple=None):
         if isinstance(link, int):
             pendingfilecommits.append(
                 (self, rawtext, transaction, link, p1, p2, node, flags,
                  cachedelta, _metatuple))
             return node
         else:
             return orig(self, rawtext, transaction, link, p1, p2, node, flags,
                         cachedelta, _metatuple=_metatuple)
     extensions.wrapfunction(
         remotefilelog.remotefilelog, 'addrawrevision', addrawrevision)

     def changelogadd(orig, self, *args):
         oldlen = len(self)
         node = orig(self, *args)
         newlen = len(self)
         if oldlen != newlen:
             for oldargs in pendingfilecommits:
                 log, rt, tr, link, p1, p2, n, fl, c, m = oldargs
                 linknode = self.node(link)
                 if linknode == node:
                     log.addrawrevision(rt, tr, linknode, p1, p2, n, fl, c, m)
                 else:
                     raise error.ProgrammingError(
                         'pending multiple integer revisions are not supported')
         else:
             # "link" is actually wrong here (it is set to len(changelog))
             # if changelog remains unchanged, skip writing file revisions
             # but still do a sanity check about pending multiple revisions
             if len(set(x[3] for x in pendingfilecommits)) > 1:
                 raise error.ProgrammingError(
                     'pending multiple integer revisions are not supported')
         del pendingfilecommits[:]
         return node
     extensions.wrapfunction(changelog.changelog, 'add', changelogadd)

     # changectx wrappers
     def filectx(orig, self, path, fileid=None, filelog=None):
         if fileid is None:
             fileid = self.filenode(path)
         if (isenabled(self._repo) and self._repo.shallowmatch(path)):
             return remotefilectx.remotefilectx(self._repo, path,
                 fileid=fileid, changectx=self, filelog=filelog)
         return orig(self, path, fileid=fileid, filelog=filelog)
     extensions.wrapfunction(context.changectx, 'filectx', filectx)

     def workingfilectx(orig, self, path, filelog=None):
         if (isenabled(self._repo) and self._repo.shallowmatch(path)):
             return remotefilectx.remoteworkingfilectx(self._repo,
                 path, workingctx=self, filelog=filelog)
         return orig(self, path, filelog=filelog)
     extensions.wrapfunction(context.workingctx, 'filectx', workingfilectx)

     # prefetch required revisions before a diff
     def trydiff(orig, repo, revs, ctx1, ctx2, modified, added, removed,
                 copy, getfilectx, *args, **kwargs):
         if isenabled(repo):
             prefetch = []
             mf1 = ctx1.manifest()
             for fname in modified + added + removed:
                 if fname in mf1:
                     fnode = getfilectx(fname, ctx1).filenode()
                     # fnode can be None if it's a edited working ctx file
                     if fnode:
                         prefetch.append((fname, hex(fnode)))
                 if fname not in removed:
                     fnode = getfilectx(fname, ctx2).filenode()
                     if fnode:
                         prefetch.append((fname, hex(fnode)))

             repo.fileservice.prefetch(prefetch)

         return orig(repo, revs, ctx1, ctx2, modified, added, removed,
                     copy, getfilectx, *args, **kwargs)
     extensions.wrapfunction(patch, 'trydiff', trydiff)

     # Prevent verify from processing files
     # a stub for mercurial.hg.verify()
     def _verify(orig, repo):
         lock = repo.lock()
         try:
             return shallowverifier.shallowverifier(repo).verify()
         finally:
             lock.release()

     extensions.wrapfunction(hg, 'verify', _verify)

     scmutil.fileprefetchhooks.add('remotefilelog', _fileprefetchhook)

 def getrenamedfn(repo, endrev=None):
     rcache = {}

     def getrenamed(fn, rev):
         '''looks up all renames for a file (up to endrev) the first
         time the file is given. It indexes on the changerev and only
         parses the manifest if linkrev != changerev.
         Returns rename info for fn at changerev rev.'''
         if rev in rcache.setdefault(fn, {}):
             return rcache[fn][rev]

         try:
             fctx = repo[rev].filectx(fn)
             for ancestor in fctx.ancestors():
                 if ancestor.path() == fn:
                     renamed = ancestor.renamed()
-                    rcache[fn][ancestor.rev()] = renamed
+                    rcache[fn][ancestor.rev()] = renamed and renamed[0]

-            return fctx.renamed()
+            renamed = fctx.renamed()
+            return renamed and renamed[0]
         except error.LookupError:
             return None

     return getrenamed

 def walkfilerevs(orig, repo, match, follow, revs, fncache):
     if not isenabled(repo):
         return orig(repo, match, follow, revs, fncache)

     # remotefilelog's can't be walked in rev order, so throw.
     # The caller will see the exception and walk the commit tree instead.
     if not follow:
         raise cmdutil.FileWalkError("Cannot walk via filelog")

     wanted = set()
     minrev, maxrev = min(revs), max(revs)

     pctx = repo['.']
     for filename in match.files():
         if filename not in pctx:
             raise error.Abort(_('cannot follow file not in parent '
                                 'revision: "%s"') % filename)
         fctx = pctx[filename]

         linkrev = fctx.linkrev()
         if linkrev >= minrev and linkrev <= maxrev:
             fncache.setdefault(linkrev, []).append(filename)
             wanted.add(linkrev)

         for ancestor in fctx.ancestors():
             linkrev = ancestor.linkrev()
             if linkrev >= minrev and linkrev <= maxrev:
                 fncache.setdefault(linkrev, []).append(ancestor.path())
                 wanted.add(linkrev)

     return wanted

 def filelogrevset(orig, repo, subset, x):
     """``filelog(pattern)``
     Changesets connected to the specified filelog.

     For performance reasons, ``filelog()`` does not show every changeset
     that affects the requested file(s). See :hg:`help log` for details. For
     a slower, more accurate result, use ``file()``.
     """

     if not isenabled(repo):
         return orig(repo, subset, x)

     # i18n: "filelog" is a keyword
     pat = revset.getstring(x, _("filelog requires a pattern"))
     m = match.match(repo.root, repo.getcwd(), [pat], default='relpath',
                     ctx=repo[None])
     s = set()

     if not match.patkind(pat):
         # slow
         for r in subset:
             ctx = repo[r]
             cfiles = ctx.files()
             for f in m.files():
                 if f in cfiles:
                     s.add(ctx.rev())
                     break
     else:
         # partial
         files = (f for f in repo[None] if m(f))
         for f in files:
             fctx = repo[None].filectx(f)
             s.add(fctx.linkrev())
             for actx in fctx.ancestors():
                 s.add(actx.linkrev())

     return smartset.baseset([r for r in subset if r in s])

 @command('gc', [], _('hg gc [REPO...]'), norepo=True)
 def gc(ui, *args, **opts):
     '''garbage collect the client and server filelog caches
     '''
     cachepaths = set()

     # get the system client cache
     systemcache = shallowutil.getcachepath(ui, allowempty=True)
     if systemcache:
         cachepaths.add(systemcache)

     # get repo client and server cache
     repopaths = []
     pwd = ui.environ.get('PWD')
     if pwd:
         repopaths.append(pwd)

     repopaths.extend(args)
     repos = []
     for repopath in repopaths:
         try:
             repo = hg.peer(ui, {}, repopath)
             repos.append(repo)

             repocache = shallowutil.getcachepath(repo.ui, allowempty=True)
             if repocache:
                 cachepaths.add(repocache)
         except error.RepoError:
             pass

     # gc client cache
     for cachepath in cachepaths:
         gcclient(ui, cachepath)

     # gc server cache
     for repo in repos:
         remotefilelogserver.gcserver(ui, repo._repo)

def gcclient(ui, cachepath):
    # get list of repos that use this cache
    repospath = os.path.join(cachepath, 'repos')
    if not os.path.exists(repospath):
        ui.warn(_("no known cache at %s\n") % cachepath)
        return

    reposfile = open(repospath, 'r')
    repos = set([r[:-1] for r in reposfile.readlines()])
    reposfile.close()

    # build list of useful files
    validrepos = []
    keepkeys = set()

    sharedcache = None
    filesrepacked = False

    count = 0
    progress = ui.makeprogress(_("analyzing repositories"), unit="repos",
                               total=len(repos))
    for path in repos:
        progress.update(count)
        count += 1
        try:
            path = ui.expandpath(os.path.normpath(path))
        except TypeError as e:
            ui.warn(_("warning: malformed path: %r:%s\n") % (path, e))
            traceback.print_exc()
            continue
        try:
            peer = hg.peer(ui, {}, path)
            repo = peer._repo
        except error.RepoError:
            continue

        validrepos.append(path)

        # Protect against any repo or config changes that have happened since
        # this repo was added to the repos file. We'd rather this loop succeed
        # and too much be deleted, than the loop fail and nothing gets deleted.
        if not isenabled(repo):
            continue

        if not util.safehasattr(repo, 'name'):
            ui.warn(_("repo %s is a misconfigured remotefilelog repo\n") % path)
            continue

        # If garbage collection on repack and repack on hg gc are enabled
        # then loose files are repacked and garbage collected.
        # Otherwise regular garbage collection is performed.
        repackonhggc = repo.ui.configbool('remotefilelog', 'repackonhggc')
        gcrepack = repo.ui.configbool('remotefilelog', 'gcrepack')
        if repackonhggc and gcrepack:
            try:
                repackmod.incrementalrepack(repo)
                filesrepacked = True
                continue
            except (IOError, repackmod.RepackAlreadyRunning):
                # If repack cannot be performed due to not enough disk space
                # continue doing garbage collection of loose files w/o repack
                pass

        reponame = repo.name
        if not sharedcache:
            sharedcache = repo.sharedstore

        # Compute a keepset which is not garbage collected
        def keyfn(fname, fnode):
            return fileserverclient.getcachekey(reponame, fname, hex(fnode))
        keepkeys = repackmod.keepset(repo, keyfn=keyfn, lastkeepkeys=keepkeys)

    progress.complete()

    # write list of valid repos back
    oldumask = os.umask(0o002)
    try:
        reposfile = open(repospath, 'w')
        reposfile.writelines([("%s\n" % r) for r in validrepos])
        reposfile.close()
    finally:
        os.umask(oldumask)

    # prune cache
    if sharedcache is not None:
        sharedcache.gc(keepkeys)
    elif not filesrepacked:
        ui.warn(_("warning: no valid repos in repofile\n"))

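Whether gcclient() repacks loose files or merely prunes them hinges on the two config knobs read in the loop above; a minimal hgrc sketch enabling the repack path during hg gc:

  $ cat >> .hg/hgrc <<EOF
  > [remotefilelog]
  > repackonhggc=True
  > gcrepack=True
  > EOF
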
def log(orig, ui, repo, *pats, **opts):
    if not isenabled(repo):
        return orig(ui, repo, *pats, **opts)

    follow = opts.get(r'follow')
    revs = opts.get(r'rev')
    if pats:
        # Force slowpath for non-follow patterns and follows that start from
        # non-working-copy-parent revs.
        if not follow or revs:
            # This forces the slowpath
            opts[r'removed'] = True

        # If this is a non-follow log without any revs specified, recommend that
        # the user add -f to speed it up.
        if not follow and not revs:
            match, pats = scmutil.matchandpats(repo['.'], pats,
                                               pycompat.byteskwargs(opts))
            isfile = not match.anypats()
            if isfile:
                for file in match.files():
                    if not os.path.isfile(repo.wjoin(file)):
                        isfile = False
                        break

            if isfile:
                ui.warn(_("warning: file log can be slow on large repos - " +
                          "use -f to speed it up\n"))

    return orig(ui, repo, *pats, **opts)

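The warning above only fires for plain file arguments given without -f or -r; a hedged transcript sketch (the file name is illustrative):

  $ hg log somefile
  warning: file log can be slow on large repos - use -f to speed it up
  $ hg log -f somefile
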
def revdatelimit(ui, revset):
    """Update revset so that only changesets no older than 'prefetchdays' days
    are included. The default value is set to 14 days. If 'prefetchdays' is set
    to zero or a negative value, the date restriction is not applied.
    """
    days = ui.configint('remotefilelog', 'prefetchdays')
    if days > 0:
        revset = '(%s) & date(-%s)' % (revset, days)
    return revset

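For example, with the config below (the value is illustrative), revdatelimit() rewrites a revset such as 'draft()' into '(draft()) & date(-7)':

  $ cat >> .hg/hgrc <<EOF
  > [remotefilelog]
  > prefetchdays=7
  > EOF
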
def readytofetch(repo):
    """Check that enough time has passed since the last background prefetch.
    This only relates to prefetches after operations that change the working
    copy parent. Default delay between background prefetches is 2 minutes.
    """
    timeout = repo.ui.configint('remotefilelog', 'prefetchdelay')
    fname = repo.vfs.join('lastprefetch')

    ready = False
    with open(fname, 'a'):
        # the with construct above is used to avoid race conditions
        modtime = os.path.getmtime(fname)
        if (time.time() - modtime) > timeout:
            os.utime(fname, None)
            ready = True

    return ready

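The timeout is compared against the mtime of the repository's .hg/lastprefetch file in seconds; a hedged hgrc sketch raising the delay to five minutes:

  $ cat >> .hg/hgrc <<EOF
  > [remotefilelog]
  > prefetchdelay=300
  > EOF
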
def wcpprefetch(ui, repo, **kwargs):
    """Prefetches, in the background, the revisions specified by the
    bgprefetchrevs revset. Does a background repack if the backgroundrepack
    flag is set in the config.
    """
    shallow = isenabled(repo)
    bgprefetchrevs = ui.config('remotefilelog', 'bgprefetchrevs')
    isready = readytofetch(repo)

    if not (shallow and bgprefetchrevs and isready):
        return

    bgrepack = repo.ui.configbool('remotefilelog', 'backgroundrepack')
    # update a revset with a date limit
    bgprefetchrevs = revdatelimit(ui, bgprefetchrevs)

    def anon():
        if util.safehasattr(repo, 'ranprefetch') and repo.ranprefetch:
            return
        repo.ranprefetch = True
        repo.backgroundprefetch(bgprefetchrevs, repack=bgrepack)

    repo._afterlock(anon)

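A minimal hgrc sketch that would activate this hook after working-copy-parent changes (the revset value is illustrative):

  $ cat >> .hg/hgrc <<EOF
  > [remotefilelog]
  > bgprefetchrevs=.::
  > backgroundrepack=True
  > EOF
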
def pull(orig, ui, repo, *pats, **opts):
    result = orig(ui, repo, *pats, **opts)

    if isenabled(repo):
        # prefetch if it's configured
        prefetchrevset = ui.config('remotefilelog', 'pullprefetch')
        bgrepack = repo.ui.configbool('remotefilelog', 'backgroundrepack')
        bgprefetch = repo.ui.configbool('remotefilelog', 'backgroundprefetch')

        if prefetchrevset:
            ui.status(_("prefetching file contents\n"))
            revs = scmutil.revrange(repo, [prefetchrevset])
            base = repo['.'].rev()
            if bgprefetch:
                repo.backgroundprefetch(prefetchrevset, repack=bgrepack)
            else:
                repo.prefetch(revs, base=base)
                if bgrepack:
                    repackmod.backgroundrepack(repo, incremental=True)
        elif bgrepack:
            repackmod.backgroundrepack(repo, incremental=True)

    return result

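A hedged hgrc sketch wiring up the post-pull prefetch logic above (the revset is illustrative):

  $ cat >> .hg/hgrc <<EOF
  > [remotefilelog]
  > pullprefetch=parents() + bookmark()
  > backgroundprefetch=True
  > backgroundrepack=True
  > EOF
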
def exchangepull(orig, repo, remote, *args, **kwargs):
    # Hook into the callstream/getbundle to insert bundle capabilities
    # during a pull.
    def localgetbundle(orig, source, heads=None, common=None, bundlecaps=None,
                       **kwargs):
        if not bundlecaps:
            bundlecaps = set()
        bundlecaps.add(constants.BUNDLE2_CAPABLITY)
        return orig(source, heads=heads, common=common, bundlecaps=bundlecaps,
                    **kwargs)

    if util.safehasattr(remote, '_callstream'):
        remote._localrepo = repo
    elif util.safehasattr(remote, 'getbundle'):
        extensions.wrapfunction(remote, 'getbundle', localgetbundle)

    return orig(repo, remote, *args, **kwargs)

def _fileprefetchhook(repo, revs, match):
    if isenabled(repo):
        allfiles = []
        for rev in revs:
            if rev == nodemod.wdirrev or rev is None:
                continue
            ctx = repo[rev]
            mf = ctx.manifest()
            sparsematch = repo.maybesparsematch(ctx.rev())
            for path in ctx.walk(match):
                if path.endswith('/'):
                    # Tree manifest that's being excluded as part of narrow
                    continue
                if (not sparsematch or sparsematch(path)) and path in mf:
                    allfiles.append((path, hex(mf[path])))
        repo.fileservice.prefetch(allfiles)

@command('debugremotefilelog', [
    ('d', 'decompress', None, _('decompress the filelog first')),
    ], _('hg debugremotefilelog <path>'), norepo=True)
def debugremotefilelog(ui, path, **opts):
    return debugcommands.debugremotefilelog(ui, path, **opts)

@command('verifyremotefilelog', [
    ('d', 'decompress', None, _('decompress the filelogs first')),
    ], _('hg verifyremotefilelogs <directory>'), norepo=True)
def verifyremotefilelog(ui, path, **opts):
    return debugcommands.verifyremotefilelog(ui, path, **opts)

@command('debugdatapack', [
    ('', 'long', None, _('print the long hashes')),
    ('', 'node', '', _('dump the contents of node'), 'NODE'),
    ], _('hg debugdatapack <paths>'), norepo=True)
def debugdatapack(ui, *paths, **opts):
    return debugcommands.debugdatapack(ui, *paths, **opts)

@command('debughistorypack', [
    ], _('hg debughistorypack <path>'), norepo=True)
def debughistorypack(ui, path, **opts):
    return debugcommands.debughistorypack(ui, path)

@command('debugkeepset', [
    ], _('hg debugkeepset'))
def debugkeepset(ui, repo, **opts):
    # The command is used to measure keepset computation time
    def keyfn(fname, fnode):
        return fileserverclient.getcachekey(repo.name, fname, hex(fnode))
    repackmod.keepset(repo, keyfn)
    return

@command('debugwaitonrepack', [
    ], _('hg debugwaitonrepack'))
def debugwaitonrepack(ui, repo, **opts):
    return debugcommands.debugwaitonrepack(repo)

@command('debugwaitonprefetch', [
    ], _('hg debugwaitonprefetch'))
def debugwaitonprefetch(ui, repo, **opts):
    return debugcommands.debugwaitonprefetch(repo)

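Hedged invocation sketches for the debug commands above (the pack file paths are illustrative):

  $ hg debugdatapack --long somedir/somepack.datapack
  $ hg debughistorypack somedir/somepack.histpack
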
def resolveprefetchopts(ui, opts):
    if not opts.get('rev'):
        revset = ['.', 'draft()']

        prefetchrevset = ui.config('remotefilelog', 'pullprefetch', None)
        if prefetchrevset:
            revset.append('(%s)' % prefetchrevset)
        bgprefetchrevs = ui.config('remotefilelog', 'bgprefetchrevs', None)
        if bgprefetchrevs:
            revset.append('(%s)' % bgprefetchrevs)
        revset = '+'.join(revset)

        # update a revset with a date limit
        revset = revdatelimit(ui, revset)

        opts['rev'] = [revset]

    if not opts.get('base'):
        opts['base'] = None

    return opts

@command('prefetch', [
    ('r', 'rev', [], _('prefetch the specified revisions'), _('REV')),
    ('', 'repack', False, _('run repack after prefetch')),
    ('b', 'base', '', _("rev that is assumed to already be local")),
    ] + commands.walkopts, _('hg prefetch [OPTIONS] [FILE...]'))
def prefetch(ui, repo, *pats, **opts):
    """prefetch file revisions from the server

    Prefetches file revisions for the specified revs and stores them in the
    local remotefilelog cache. If no rev is specified, the default rev is
    used, which is the union of dot, draft, pullprefetch and bgprefetchrevs.
    File names or patterns can be used to limit which files are downloaded.

    Return 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    if not isenabled(repo):
        raise error.Abort(_("repo is not shallow"))

    opts = resolveprefetchopts(ui, opts)
    revs = scmutil.revrange(repo, opts.get('rev'))
    repo.prefetch(revs, opts.get('base'), pats, opts)

    # Run repack in background
    if opts.get('repack'):
        repackmod.backgroundrepack(repo, incremental=True)

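Hedged usage sketches for prefetch (the revset and pattern are illustrative):

  $ hg prefetch -r 'draft()'
  $ hg prefetch -r tip --repack 'glob:**.py'
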
@command('repack', [
    ('', 'background', None, _('run in a background process'), None),
    ('', 'incremental', None, _('do an incremental repack'), None),
    ('', 'packsonly', None, _('only repack packs (skip loose objects)'), None),
    ], _('hg repack [OPTIONS]'))
def repack_(ui, repo, *pats, **opts):
    if opts.get(r'background'):
        repackmod.backgroundrepack(repo, incremental=opts.get(r'incremental'),
                                   packsonly=opts.get(r'packsonly', False))
        return

    options = {'packsonly': opts.get(r'packsonly')}

    try:
        if opts.get(r'incremental'):
            repackmod.incrementalrepack(repo, options=options)
        else:
            repackmod.fullrepack(repo, options=options)
    except repackmod.RepackAlreadyRunning as ex:
        # Don't propagate the exception if the repack is already in
        # progress, since we want the command to exit 0.
        repo.ui.warn('%s\n' % ex)
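
Hedged usage sketches for repack, combining the flags defined above:

  $ hg repack --incremental
  $ hg repack --background --packsonly
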
@@ -1,118 +1,118 @@
#require no-windows

  $ . "$TESTDIR/remotefilelog-library.sh"

  $ hg init master
  $ cd master
  $ cat >> .hg/hgrc <<EOF
  > [remotefilelog]
  > server=True
  > EOF
  $ echo x > x
  $ hg commit -qAm x
  $ mkdir dir
  $ echo y > dir/y
  $ hg commit -qAm y

  $ cd ..

Shallow clone from full

  $ hgcloneshallow ssh://user@dummy/master shallow --noupdate
  streaming all changes
  2 files to transfer, 473 bytes of data
  transferred 473 bytes in * seconds (*/sec) (glob)
  searching for changes
  no changes found
  $ cd shallow
  $ cat .hg/requires
  dotencode
  exp-remotefilelog-repo-req-1
  fncache
  generaldelta
  revlogv1
  sparserevlog
  store

  $ hg update
  2 files updated, 0 files merged, 0 files removed, 0 files unresolved
  2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)

Log on a file without -f

  $ hg log dir/y
  warning: file log can be slow on large repos - use -f to speed it up
  changeset:   1:2e73264fab97
  tag:         tip
  user:        test
  date:        Thu Jan 01 00:00:00 1970 +0000
  summary:     y

Log on a file with -f

  $ hg log -f dir/y
  changeset:   1:2e73264fab97
  tag:         tip
  user:        test
  date:        Thu Jan 01 00:00:00 1970 +0000
  summary:     y

Log on a file with kind in path
  $ hg log -r "filelog('path:dir/y')"
  changeset:   1:2e73264fab97
  tag:         tip
  user:        test
  date:        Thu Jan 01 00:00:00 1970 +0000
  summary:     y

Log on multiple files with -f

  $ hg log -f dir/y x
  changeset:   1:2e73264fab97
  tag:         tip
  user:        test
  date:        Thu Jan 01 00:00:00 1970 +0000
  summary:     y

  changeset:   0:b292c1e3311f
  user:        test
  date:        Thu Jan 01 00:00:00 1970 +0000
  summary:     x

Log on a directory

  $ hg log dir
  changeset:   1:2e73264fab97
  tag:         tip
  user:        test
  date:        Thu Jan 01 00:00:00 1970 +0000
  summary:     y

Log on a file from inside a directory

  $ cd dir
  $ hg log y
  warning: file log can be slow on large repos - use -f to speed it up
  changeset:   1:2e73264fab97
  tag:         tip
  user:        test
  date:        Thu Jan 01 00:00:00 1970 +0000
  summary:     y

Log on a file via -fr
  $ cd ..
  $ hg log -fr tip dir/ --template '{rev}\n'
  1

Trace renames (the change in this commit: {file_copies} now renders just the
copy source instead of leaking the raw rename metadata)
  $ hg mv x z
  $ hg commit -m move
  $ hg log -f z -T '{desc} {file_copies}\n' -G
- @  move z (x\x14\x06\xe7A\x18bv\x94&\x84\x17I\x1f\x01\x8aJ\x881R\xf0) (esc)
+ @  move z (x)
  :
  o  x


Verify remotefilelog handles rename metadata stripping when comparing file sizes
  $ hg debugrebuilddirstate
  $ hg status