remotefilelog: fix crash on `hg addremove` of added-but-deleted file...
Martin von Zweigbergk
r42261:91cc8dc8 default
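The fix in miniature (a hedged sketch, not the extension's actual code: a plain dict stands in for the parent manifest, and the file names are made up). During `hg addremove`, a file that was added and then deleted is reported among the removed files even though it has no filenode in the parent manifest, so the old unconditional lookup crashed; the new code only prefetches files the parent manifest actually knows about:

    # Stand-in for the parent manifest (path -> filenode).
    pmf = {'tracked.txt': 'f1e2d3'}
    # An added-but-deleted file shows up in `removed` without being in pmf.
    removed = ['tracked.txt', 'added-then-deleted.txt']

    # Old behavior: looking up every f in removed raised for the second
    # entry (a KeyError on this dict stand-in). New behavior: guard the
    # lookup with `f in pmf`, mirroring the hunk below.
    files = [(f, pmf[f]) for f in removed if f in pmf]
    assert files == [('tracked.txt', 'f1e2d3')]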
@@ -1,1141 +1,1142 @@
# __init__.py - remotefilelog extension
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""remotefilelog causes Mercurial to lazily fetch file contents (EXPERIMENTAL)

This extension is HIGHLY EXPERIMENTAL. There are NO BACKWARDS COMPATIBILITY
GUARANTEES. This means that repositories created with this extension may
only be usable with the exact version of this extension/Mercurial that was
used. The extension attempts to enforce this in order to prevent repository
corruption.

remotefilelog works by fetching file contents lazily and storing them
in a cache on the client rather than in revlogs. This allows enormous
histories to be transferred only partially, making them easier to
operate on.

Configs:

    ``packs.maxchainlen`` specifies the maximum delta chain length in pack files

    ``packs.maxpacksize`` specifies the maximum pack file size

    ``packs.maxpackfilecount`` specifies the maximum number of packs in the
    shared cache (trees only for now)

    ``remotefilelog.backgroundprefetch`` runs prefetch in background when True

    ``remotefilelog.bgprefetchrevs`` specifies revisions to fetch on commit and
    update, and on other commands that use them. Different from pullprefetch.

    ``remotefilelog.gcrepack`` does garbage collection during repack when True

    ``remotefilelog.nodettl`` specifies maximum TTL of a node in seconds before
    it is garbage collected

    ``remotefilelog.repackonhggc`` runs repack on hg gc when True

    ``remotefilelog.prefetchdays`` specifies the maximum age of a commit in
    days after which it is no longer prefetched.

    ``remotefilelog.prefetchdelay`` specifies delay between background
    prefetches in seconds after operations that change the working copy parent

    ``remotefilelog.data.gencountlimit`` constrains the minimum number of data
    pack files required to be considered part of a generation. In particular,
    minimum number of pack files > gencountlimit.

    ``remotefilelog.data.generations`` list for specifying the lower bound of
    each generation of the data pack files. For example, list ['100MB','1MB']
    or ['1MB', '100MB'] will lead to three generations: [0, 1MB),
    [1MB, 100MB) and [100MB, infinity).

    ``remotefilelog.data.maxrepackpacks`` the maximum number of pack files to
    include in an incremental data repack.

    ``remotefilelog.data.repackmaxpacksize`` the maximum size of a pack file for
    it to be considered for an incremental data repack.

    ``remotefilelog.data.repacksizelimit`` the maximum total size of pack files
    to include in an incremental data repack.

    ``remotefilelog.history.gencountlimit`` constrains the minimum number of
    history pack files required to be considered part of a generation. In
    particular, minimum number of pack files > gencountlimit.

    ``remotefilelog.history.generations`` list for specifying the lower bound of
    each generation of the history pack files. For example, list
    ['100MB', '1MB'] or ['1MB', '100MB'] will lead to three generations:
    [0, 1MB), [1MB, 100MB) and [100MB, infinity).

    ``remotefilelog.history.maxrepackpacks`` the maximum number of pack files to
    include in an incremental history repack.

    ``remotefilelog.history.repackmaxpacksize`` the maximum size of a pack file
    for it to be considered for an incremental history repack.

    ``remotefilelog.history.repacksizelimit`` the maximum total size of pack
    files to include in an incremental history repack.

    ``remotefilelog.backgroundrepack`` automatically consolidate packs in the
    background

    ``remotefilelog.cachepath`` path to cache

    ``remotefilelog.cachegroup`` if set, make cache directory sgid to this
    group

    ``remotefilelog.cacheprocess`` binary to invoke for fetching file data

    ``remotefilelog.debug`` turn on remotefilelog-specific debug output

    ``remotefilelog.excludepattern`` pattern of files to exclude from pulls

    ``remotefilelog.includepattern`` pattern of files to include in pulls

    ``remotefilelog.fetchwarning`` message to print when too many
    single-file fetches occur

    ``remotefilelog.getfilesstep`` number of files to request in a single RPC

    ``remotefilelog.getfilestype`` if set to 'threaded' use threads to fetch
    files, otherwise use optimistic fetching

    ``remotefilelog.pullprefetch`` revset for selecting files that should be
    eagerly downloaded rather than lazily

    ``remotefilelog.reponame`` name of the repo. If set, used to partition
    data from other repos in a shared store.

    ``remotefilelog.server`` if true, enable server-side functionality

    ``remotefilelog.servercachepath`` path for caching blobs on the server

    ``remotefilelog.serverexpiration`` number of days to keep cached server
    blobs

    ``remotefilelog.validatecache`` if set, check cache entries for corruption
    before returning blobs

    ``remotefilelog.validatecachelog`` if set, check cache entries for
    corruption before returning metadata
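
For example, a client might enable lazy fetching with a configuration along
these lines (illustrative values only; the cache path, repo name, and
prefetch revset are placeholders, not recommendations)::

    [extensions]
    remotefilelog =

    [remotefilelog]
    cachepath = /var/cache/hgcache
    reponame = myrepo
    pullprefetch = bookmark() + parents()
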
"""
from __future__ import absolute_import

import os
import time
import traceback

from mercurial.node import hex
from mercurial.i18n import _
from mercurial import (
    changegroup,
    changelog,
    cmdutil,
    commands,
    configitems,
    context,
    copies,
    debugcommands as hgdebugcommands,
    dispatch,
    error,
    exchange,
    extensions,
    hg,
    localrepo,
    match,
    merge,
    node as nodemod,
    patch,
    pycompat,
    registrar,
    repair,
    repoview,
    revset,
    scmutil,
    smartset,
    streamclone,
    util,
)
from . import (
    constants,
    debugcommands,
    fileserverclient,
    remotefilectx,
    remotefilelog,
    remotefilelogserver,
    repack as repackmod,
    shallowbundle,
    shallowrepo,
    shallowstore,
    shallowutil,
    shallowverifier,
)

# ensures debug commands are registered
hgdebugcommands.command

cmdtable = {}
command = registrar.command(cmdtable)

configtable = {}
configitem = registrar.configitem(configtable)

configitem('remotefilelog', 'debug', default=False)

configitem('remotefilelog', 'reponame', default='')
configitem('remotefilelog', 'cachepath', default=None)
configitem('remotefilelog', 'cachegroup', default=None)
configitem('remotefilelog', 'cacheprocess', default=None)
configitem('remotefilelog', 'cacheprocess.includepath', default=None)
configitem("remotefilelog", "cachelimit", default="1000 GB")

configitem('remotefilelog', 'fallbackpath', default=configitems.dynamicdefault,
           alias=[('remotefilelog', 'fallbackrepo')])

configitem('remotefilelog', 'validatecachelog', default=None)
configitem('remotefilelog', 'validatecache', default='on')
configitem('remotefilelog', 'server', default=None)
configitem('remotefilelog', 'servercachepath', default=None)
configitem("remotefilelog", "serverexpiration", default=30)
configitem('remotefilelog', 'backgroundrepack', default=False)
configitem('remotefilelog', 'bgprefetchrevs', default=None)
configitem('remotefilelog', 'pullprefetch', default=None)
configitem('remotefilelog', 'backgroundprefetch', default=False)
configitem('remotefilelog', 'prefetchdelay', default=120)
configitem('remotefilelog', 'prefetchdays', default=14)

configitem('remotefilelog', 'getfilesstep', default=10000)
configitem('remotefilelog', 'getfilestype', default='optimistic')
configitem('remotefilelog', 'batchsize', configitems.dynamicdefault)
configitem('remotefilelog', 'fetchwarning', default='')

configitem('remotefilelog', 'includepattern', default=None)
configitem('remotefilelog', 'excludepattern', default=None)

configitem('remotefilelog', 'gcrepack', default=False)
configitem('remotefilelog', 'repackonhggc', default=False)
configitem('repack', 'chainorphansbysize', default=True)

configitem('packs', 'maxpacksize', default=0)
configitem('packs', 'maxchainlen', default=1000)

# default TTL limit is 30 days
_defaultlimit = 60 * 60 * 24 * 30
configitem('remotefilelog', 'nodettl', default=_defaultlimit)

configitem('remotefilelog', 'data.gencountlimit', default=2),
configitem('remotefilelog', 'data.generations',
           default=['1GB', '100MB', '1MB'])
configitem('remotefilelog', 'data.maxrepackpacks', default=50)
configitem('remotefilelog', 'data.repackmaxpacksize', default='4GB')
configitem('remotefilelog', 'data.repacksizelimit', default='100MB')

configitem('remotefilelog', 'history.gencountlimit', default=2),
configitem('remotefilelog', 'history.generations', default=['100MB'])
configitem('remotefilelog', 'history.maxrepackpacks', default=50)
configitem('remotefilelog', 'history.repackmaxpacksize', default='400MB')
configitem('remotefilelog', 'history.repacksizelimit', default='100MB')

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

repoclass = localrepo.localrepository
repoclass._basesupported.add(constants.SHALLOWREPO_REQUIREMENT)

isenabled = shallowutil.isenabled

def uisetup(ui):
256 """Wraps user facing Mercurial commands to swap them out with shallow
256 """Wraps user facing Mercurial commands to swap them out with shallow
257 versions.
257 versions.
258 """
258 """
259 hg.wirepeersetupfuncs.append(fileserverclient.peersetup)
259 hg.wirepeersetupfuncs.append(fileserverclient.peersetup)
260
260
261 entry = extensions.wrapcommand(commands.table, 'clone', cloneshallow)
261 entry = extensions.wrapcommand(commands.table, 'clone', cloneshallow)
262 entry[1].append(('', 'shallow', None,
262 entry[1].append(('', 'shallow', None,
263 _("create a shallow clone which uses remote file "
263 _("create a shallow clone which uses remote file "
264 "history")))
264 "history")))
265
265
266 extensions.wrapcommand(commands.table, 'debugindex',
266 extensions.wrapcommand(commands.table, 'debugindex',
267 debugcommands.debugindex)
267 debugcommands.debugindex)
268 extensions.wrapcommand(commands.table, 'debugindexdot',
268 extensions.wrapcommand(commands.table, 'debugindexdot',
269 debugcommands.debugindexdot)
269 debugcommands.debugindexdot)
270 extensions.wrapcommand(commands.table, 'log', log)
270 extensions.wrapcommand(commands.table, 'log', log)
271 extensions.wrapcommand(commands.table, 'pull', pull)
271 extensions.wrapcommand(commands.table, 'pull', pull)
272
272
273 # Prevent 'hg manifest --all'
273 # Prevent 'hg manifest --all'
274 def _manifest(orig, ui, repo, *args, **opts):
274 def _manifest(orig, ui, repo, *args, **opts):
275 if (isenabled(repo) and opts.get(r'all')):
275 if (isenabled(repo) and opts.get(r'all')):
276 raise error.Abort(_("--all is not supported in a shallow repo"))
276 raise error.Abort(_("--all is not supported in a shallow repo"))
277
277
278 return orig(ui, repo, *args, **opts)
278 return orig(ui, repo, *args, **opts)
279 extensions.wrapcommand(commands.table, "manifest", _manifest)
279 extensions.wrapcommand(commands.table, "manifest", _manifest)
280
280
281 # Wrap remotefilelog with lfs code
281 # Wrap remotefilelog with lfs code
282 def _lfsloaded(loaded=False):
282 def _lfsloaded(loaded=False):
283 lfsmod = None
283 lfsmod = None
284 try:
284 try:
285 lfsmod = extensions.find('lfs')
285 lfsmod = extensions.find('lfs')
286 except KeyError:
286 except KeyError:
287 pass
287 pass
288 if lfsmod:
288 if lfsmod:
289 lfsmod.wrapfilelog(remotefilelog.remotefilelog)
289 lfsmod.wrapfilelog(remotefilelog.remotefilelog)
290 fileserverclient._lfsmod = lfsmod
290 fileserverclient._lfsmod = lfsmod
291 extensions.afterloaded('lfs', _lfsloaded)
291 extensions.afterloaded('lfs', _lfsloaded)
292
292
293 # debugdata needs remotefilelog.len to work
293 # debugdata needs remotefilelog.len to work
294 extensions.wrapcommand(commands.table, 'debugdata', debugdatashallow)
294 extensions.wrapcommand(commands.table, 'debugdata', debugdatashallow)
295
295
296 def cloneshallow(orig, ui, repo, *args, **opts):
296 def cloneshallow(orig, ui, repo, *args, **opts):
297 if opts.get(r'shallow'):
297 if opts.get(r'shallow'):
298 repos = []
298 repos = []
299 def pull_shallow(orig, self, *args, **kwargs):
299 def pull_shallow(orig, self, *args, **kwargs):
300 if not isenabled(self):
300 if not isenabled(self):
301 repos.append(self.unfiltered())
301 repos.append(self.unfiltered())
302 # set up the client hooks so the post-clone update works
302 # set up the client hooks so the post-clone update works
303 setupclient(self.ui, self.unfiltered())
303 setupclient(self.ui, self.unfiltered())
304
304
305 # setupclient fixed the class on the repo itself
305 # setupclient fixed the class on the repo itself
306 # but we also need to fix it on the repoview
306 # but we also need to fix it on the repoview
307 if isinstance(self, repoview.repoview):
307 if isinstance(self, repoview.repoview):
308 self.__class__.__bases__ = (self.__class__.__bases__[0],
308 self.__class__.__bases__ = (self.__class__.__bases__[0],
309 self.unfiltered().__class__)
309 self.unfiltered().__class__)
310 self.requirements.add(constants.SHALLOWREPO_REQUIREMENT)
310 self.requirements.add(constants.SHALLOWREPO_REQUIREMENT)
311 self._writerequirements()
311 self._writerequirements()
312
312
313 # Since setupclient hadn't been called, exchange.pull was not
313 # Since setupclient hadn't been called, exchange.pull was not
314 # wrapped. So we need to manually invoke our version of it.
314 # wrapped. So we need to manually invoke our version of it.
315 return exchangepull(orig, self, *args, **kwargs)
315 return exchangepull(orig, self, *args, **kwargs)
316 else:
316 else:
317 return orig(self, *args, **kwargs)
317 return orig(self, *args, **kwargs)
318 extensions.wrapfunction(exchange, 'pull', pull_shallow)
318 extensions.wrapfunction(exchange, 'pull', pull_shallow)
319
319
320 # Wrap the stream logic to add requirements and to pass include/exclude
320 # Wrap the stream logic to add requirements and to pass include/exclude
321 # patterns around.
321 # patterns around.
322 def setup_streamout(repo, remote):
322 def setup_streamout(repo, remote):
323 # Replace remote.stream_out with a version that sends file
323 # Replace remote.stream_out with a version that sends file
324 # patterns.
324 # patterns.
325 def stream_out_shallow(orig):
325 def stream_out_shallow(orig):
326 caps = remote.capabilities()
326 caps = remote.capabilities()
327 if constants.NETWORK_CAP_LEGACY_SSH_GETFILES in caps:
327 if constants.NETWORK_CAP_LEGACY_SSH_GETFILES in caps:
328 opts = {}
328 opts = {}
329 if repo.includepattern:
329 if repo.includepattern:
330 opts[r'includepattern'] = '\0'.join(repo.includepattern)
330 opts[r'includepattern'] = '\0'.join(repo.includepattern)
331 if repo.excludepattern:
331 if repo.excludepattern:
332 opts[r'excludepattern'] = '\0'.join(repo.excludepattern)
332 opts[r'excludepattern'] = '\0'.join(repo.excludepattern)
333 return remote._callstream('stream_out_shallow', **opts)
333 return remote._callstream('stream_out_shallow', **opts)
334 else:
334 else:
335 return orig()
335 return orig()
336 extensions.wrapfunction(remote, 'stream_out', stream_out_shallow)
336 extensions.wrapfunction(remote, 'stream_out', stream_out_shallow)
337 def stream_wrap(orig, op):
337 def stream_wrap(orig, op):
338 setup_streamout(op.repo, op.remote)
338 setup_streamout(op.repo, op.remote)
339 return orig(op)
339 return orig(op)
340 extensions.wrapfunction(
340 extensions.wrapfunction(
341 streamclone, 'maybeperformlegacystreamclone', stream_wrap)
341 streamclone, 'maybeperformlegacystreamclone', stream_wrap)
342
342
343 def canperformstreamclone(orig, pullop, bundle2=False):
343 def canperformstreamclone(orig, pullop, bundle2=False):
344 # remotefilelog is currently incompatible with the
344 # remotefilelog is currently incompatible with the
345 # bundle2 flavor of streamclones, so force us to use
345 # bundle2 flavor of streamclones, so force us to use
346 # v1 instead.
346 # v1 instead.
347 if 'v2' in pullop.remotebundle2caps.get('stream', []):
347 if 'v2' in pullop.remotebundle2caps.get('stream', []):
348 pullop.remotebundle2caps['stream'] = [
348 pullop.remotebundle2caps['stream'] = [
349 c for c in pullop.remotebundle2caps['stream']
349 c for c in pullop.remotebundle2caps['stream']
350 if c != 'v2']
350 if c != 'v2']
351 if bundle2:
351 if bundle2:
352 return False, None
352 return False, None
353 supported, requirements = orig(pullop, bundle2=bundle2)
353 supported, requirements = orig(pullop, bundle2=bundle2)
354 if requirements is not None:
354 if requirements is not None:
355 requirements.add(constants.SHALLOWREPO_REQUIREMENT)
355 requirements.add(constants.SHALLOWREPO_REQUIREMENT)
356 return supported, requirements
356 return supported, requirements
357 extensions.wrapfunction(
357 extensions.wrapfunction(
358 streamclone, 'canperformstreamclone', canperformstreamclone)
358 streamclone, 'canperformstreamclone', canperformstreamclone)
359
359
360 try:
360 try:
361 orig(ui, repo, *args, **opts)
361 orig(ui, repo, *args, **opts)
362 finally:
362 finally:
363 if opts.get(r'shallow'):
363 if opts.get(r'shallow'):
364 for r in repos:
364 for r in repos:
365 if util.safehasattr(r, 'fileservice'):
365 if util.safehasattr(r, 'fileservice'):
366 r.fileservice.close()
366 r.fileservice.close()
367
367
368 def debugdatashallow(orig, *args, **kwds):
368 def debugdatashallow(orig, *args, **kwds):
    oldlen = remotefilelog.remotefilelog.__len__
    try:
        remotefilelog.remotefilelog.__len__ = lambda x: 1
        return orig(*args, **kwds)
    finally:
        remotefilelog.remotefilelog.__len__ = oldlen

def reposetup(ui, repo):
    if not repo.local():
        return

    # put here intentionally because it doesn't work in uisetup
    ui.setconfig('hooks', 'update.prefetch', wcpprefetch)
    ui.setconfig('hooks', 'commit.prefetch', wcpprefetch)

    isserverenabled = ui.configbool('remotefilelog', 'server')
    isshallowclient = isenabled(repo)

    if isserverenabled and isshallowclient:
        raise RuntimeError("Cannot be both a server and shallow client.")

    if isshallowclient:
        setupclient(ui, repo)

    if isserverenabled:
        remotefilelogserver.setupserver(ui, repo)

def setupclient(ui, repo):
    if not isinstance(repo, localrepo.localrepository):
        return

    # Even clients get the server setup since they need to have the
    # wireprotocol endpoints registered.
    remotefilelogserver.onetimesetup(ui)
    onetimeclientsetup(ui)

    shallowrepo.wraprepo(repo)
    repo.store = shallowstore.wrapstore(repo.store)

clientonetime = False
def onetimeclientsetup(ui):
    global clientonetime
    if clientonetime:
        return
    clientonetime = True

    changegroup.cgpacker = shallowbundle.shallowcg1packer

    extensions.wrapfunction(changegroup, '_addchangegroupfiles',
                            shallowbundle.addchangegroupfiles)
    extensions.wrapfunction(
        changegroup, 'makechangegroup', shallowbundle.makechangegroup)

    def storewrapper(orig, requirements, path, vfstype):
        s = orig(requirements, path, vfstype)
        if constants.SHALLOWREPO_REQUIREMENT in requirements:
            s = shallowstore.wrapstore(s)

        return s
    extensions.wrapfunction(localrepo, 'makestore', storewrapper)

    extensions.wrapfunction(exchange, 'pull', exchangepull)

    # prefetch files before update
    def applyupdates(orig, repo, actions, wctx, mctx, overwrite, labels=None):
        if isenabled(repo):
            manifest = mctx.manifest()
            files = []
            for f, args, msg in actions['g']:
                files.append((f, hex(manifest[f])))
            # batch fetch the needed files from the server
            repo.fileservice.prefetch(files)
        return orig(repo, actions, wctx, mctx, overwrite, labels=labels)
    extensions.wrapfunction(merge, 'applyupdates', applyupdates)

    # Prefetch merge checkunknownfiles
    def checkunknownfiles(orig, repo, wctx, mctx, force, actions,
                          *args, **kwargs):
        if isenabled(repo):
            files = []
            sparsematch = repo.maybesparsematch(mctx.rev())
            for f, (m, actionargs, msg) in actions.iteritems():
                if sparsematch and not sparsematch(f):
                    continue
                if m in ('c', 'dc', 'cm'):
                    files.append((f, hex(mctx.filenode(f))))
                elif m == 'dg':
                    f2 = actionargs[0]
                    files.append((f2, hex(mctx.filenode(f2))))
            # batch fetch the needed files from the server
            repo.fileservice.prefetch(files)
        return orig(repo, wctx, mctx, force, actions, *args, **kwargs)
    extensions.wrapfunction(merge, '_checkunknownfiles', checkunknownfiles)

    # Prefetch files before status attempts to look at their size and contents
    def checklookup(orig, self, files):
        repo = self._repo
        if isenabled(repo):
            prefetchfiles = []
            for parent in self._parents:
                for f in files:
                    if f in parent:
                        prefetchfiles.append((f, hex(parent.filenode(f))))
            # batch fetch the needed files from the server
            repo.fileservice.prefetch(prefetchfiles)
        return orig(self, files)
    extensions.wrapfunction(context.workingctx, '_checklookup', checklookup)

    # Prefetch files for the logic that compares added and removed files
    # looking for renames
    def findrenames(orig, repo, matcher, added, removed, *args, **kwargs):
        if isenabled(repo):
            files = []
-           parentctx = repo['.']
+           pmf = repo['.'].manifest()
            for f in removed:
-               files.append((f, hex(parentctx.filenode(f))))
+               if f in pmf:
+                   files.append((f, hex(pmf[f])))
            # batch fetch the needed files from the server
            repo.fileservice.prefetch(files)
        return orig(repo, matcher, added, removed, *args, **kwargs)
    extensions.wrapfunction(scmutil, '_findrenames', findrenames)

    # prefetch files before mergecopies check
    def computenonoverlap(orig, repo, c1, c2, *args, **kwargs):
        u1, u2 = orig(repo, c1, c2, *args, **kwargs)
        if isenabled(repo):
            m1 = c1.manifest()
            m2 = c2.manifest()
            files = []

            sparsematch1 = repo.maybesparsematch(c1.rev())
            if sparsematch1:
                sparseu1 = []
                for f in u1:
                    if sparsematch1(f):
                        files.append((f, hex(m1[f])))
                        sparseu1.append(f)
                u1 = sparseu1

            sparsematch2 = repo.maybesparsematch(c2.rev())
            if sparsematch2:
                sparseu2 = []
                for f in u2:
                    if sparsematch2(f):
                        files.append((f, hex(m2[f])))
                        sparseu2.append(f)
                u2 = sparseu2

            # batch fetch the needed files from the server
            repo.fileservice.prefetch(files)
        return u1, u2
    extensions.wrapfunction(copies, '_computenonoverlap', computenonoverlap)

    # prefetch files before pathcopies check
    def computeforwardmissing(orig, a, b, match=None):
        missing = list(orig(a, b, match=match))
        repo = a._repo
        if isenabled(repo):
            mb = b.manifest()

            files = []
            sparsematch = repo.maybesparsematch(b.rev())
            if sparsematch:
                sparsemissing = []
                for f in missing:
                    if sparsematch(f):
                        files.append((f, hex(mb[f])))
                        sparsemissing.append(f)
                missing = sparsemissing

            # batch fetch the needed files from the server
            repo.fileservice.prefetch(files)
        return missing
    extensions.wrapfunction(copies, '_computeforwardmissing',
                            computeforwardmissing)

    # close cache miss server connection after the command has finished
    def runcommand(orig, lui, repo, *args, **kwargs):
        fileservice = None
        # repo can be None when running in chg:
        # - at startup, reposetup was called because serve is not norepo
        # - a norepo command like "help" is called
        if repo and isenabled(repo):
            fileservice = repo.fileservice
        try:
            return orig(lui, repo, *args, **kwargs)
        finally:
            if fileservice:
                fileservice.close()
    extensions.wrapfunction(dispatch, 'runcommand', runcommand)

    # disappointing hacks below
    scmutil.getrenamedfn = getrenamedfn
    extensions.wrapfunction(revset, 'filelog', filelogrevset)
    revset.symbols['filelog'] = revset.filelog
    extensions.wrapfunction(cmdutil, 'walkfilerevs', walkfilerevs)

    # prevent strip from stripping remotefilelogs
    def _collectbrokencsets(orig, repo, files, striprev):
        if isenabled(repo):
            files = list([f for f in files if not repo.shallowmatch(f)])
        return orig(repo, files, striprev)
    extensions.wrapfunction(repair, '_collectbrokencsets', _collectbrokencsets)

    # Don't commit filelogs until we know the commit hash, since the hash
    # is present in the filelog blob.
    # This violates Mercurial's filelog->manifest->changelog write order,
    # but is generally fine for client repos.
    pendingfilecommits = []
    def addrawrevision(orig, self, rawtext, transaction, link, p1, p2, node,
                       flags, cachedelta=None, _metatuple=None):
        if isinstance(link, int):
            pendingfilecommits.append(
                (self, rawtext, transaction, link, p1, p2, node, flags,
                 cachedelta, _metatuple))
            return node
        else:
            return orig(self, rawtext, transaction, link, p1, p2, node, flags,
                        cachedelta, _metatuple=_metatuple)
    extensions.wrapfunction(
        remotefilelog.remotefilelog, 'addrawrevision', addrawrevision)

    def changelogadd(orig, self, *args):
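        # A new changelog revision was actually committed iff the length
        # changed; in that case, flush the queued file revisions, replacing
        # the integer linkrev each was queued with in addrawrevision above
        # by the real changelog node.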
        oldlen = len(self)
        node = orig(self, *args)
        newlen = len(self)
        if oldlen != newlen:
            for oldargs in pendingfilecommits:
                log, rt, tr, link, p1, p2, n, fl, c, m = oldargs
                linknode = self.node(link)
                if linknode == node:
                    log.addrawrevision(rt, tr, linknode, p1, p2, n, fl, c, m)
                else:
                    raise error.ProgrammingError(
                        'pending multiple integer revisions are not supported')
        else:
            # "link" is actually wrong here (it is set to len(changelog))
            # if changelog remains unchanged, skip writing file revisions
            # but still do a sanity check about pending multiple revisions
            if len(set(x[3] for x in pendingfilecommits)) > 1:
                raise error.ProgrammingError(
                    'pending multiple integer revisions are not supported')
        del pendingfilecommits[:]
        return node
    extensions.wrapfunction(changelog.changelog, 'add', changelogadd)

    # changectx wrappers
    def filectx(orig, self, path, fileid=None, filelog=None):
        if fileid is None:
            fileid = self.filenode(path)
        if (isenabled(self._repo) and self._repo.shallowmatch(path)):
            return remotefilectx.remotefilectx(self._repo, path,
                fileid=fileid, changectx=self, filelog=filelog)
        return orig(self, path, fileid=fileid, filelog=filelog)
    extensions.wrapfunction(context.changectx, 'filectx', filectx)

    def workingfilectx(orig, self, path, filelog=None):
        if (isenabled(self._repo) and self._repo.shallowmatch(path)):
            return remotefilectx.remoteworkingfilectx(self._repo,
                path, workingctx=self, filelog=filelog)
        return orig(self, path, filelog=filelog)
    extensions.wrapfunction(context.workingctx, 'filectx', workingfilectx)

    # prefetch required revisions before a diff
    def trydiff(orig, repo, revs, ctx1, ctx2, modified, added, removed,
                copy, getfilectx, *args, **kwargs):
        if isenabled(repo):
            prefetch = []
            mf1 = ctx1.manifest()
            for fname in modified + added + removed:
                if fname in mf1:
                    fnode = getfilectx(fname, ctx1).filenode()
                    # fnode can be None if it's an edited working ctx file
                    if fnode:
                        prefetch.append((fname, hex(fnode)))
                if fname not in removed:
                    fnode = getfilectx(fname, ctx2).filenode()
                    if fnode:
                        prefetch.append((fname, hex(fnode)))

            repo.fileservice.prefetch(prefetch)

        return orig(repo, revs, ctx1, ctx2, modified, added, removed,
                    copy, getfilectx, *args, **kwargs)
    extensions.wrapfunction(patch, 'trydiff', trydiff)

    # Prevent verify from processing files
    # a stub for mercurial.hg.verify()
    def _verify(orig, repo):
        lock = repo.lock()
        try:
            return shallowverifier.shallowverifier(repo).verify()
        finally:
            lock.release()

    extensions.wrapfunction(hg, 'verify', _verify)

    scmutil.fileprefetchhooks.add('remotefilelog', _fileprefetchhook)

def getrenamedfn(repo, endrev=None):
    rcache = {}

    def getrenamed(fn, rev):
        '''looks up all renames for a file (up to endrev) the first
        time the file is given. It indexes on the changerev and only
        parses the manifest if linkrev != changerev.
        Returns rename info for fn at changerev rev.'''
        if rev in rcache.setdefault(fn, {}):
            return rcache[fn][rev]

        try:
            fctx = repo[rev].filectx(fn)
            for ancestor in fctx.ancestors():
                if ancestor.path() == fn:
                    renamed = ancestor.renamed()
                    rcache[fn][ancestor.rev()] = renamed and renamed[0]

            renamed = fctx.renamed()
            return renamed and renamed[0]
        except error.LookupError:
            return None

    return getrenamed

def walkfilerevs(orig, repo, match, follow, revs, fncache):
    if not isenabled(repo):
        return orig(repo, match, follow, revs, fncache)

    # remotefilelogs can't be walked in rev order, so throw.
    # The caller will see the exception and walk the commit tree instead.
    if not follow:
        raise cmdutil.FileWalkError("Cannot walk via filelog")

    wanted = set()
    minrev, maxrev = min(revs), max(revs)

    pctx = repo['.']
    for filename in match.files():
        if filename not in pctx:
            raise error.Abort(_('cannot follow file not in parent '
                                'revision: "%s"') % filename)
        fctx = pctx[filename]

        linkrev = fctx.linkrev()
        if linkrev >= minrev and linkrev <= maxrev:
            fncache.setdefault(linkrev, []).append(filename)
            wanted.add(linkrev)

        for ancestor in fctx.ancestors():
            linkrev = ancestor.linkrev()
            if linkrev >= minrev and linkrev <= maxrev:
                fncache.setdefault(linkrev, []).append(ancestor.path())
                wanted.add(linkrev)

    return wanted

def filelogrevset(orig, repo, subset, x):
    """``filelog(pattern)``
    Changesets connected to the specified filelog.

    For performance reasons, ``filelog()`` does not show every changeset
    that affects the requested file(s). See :hg:`help log` for details. For
    a slower, more accurate result, use ``file()``.
    """

    if not isenabled(repo):
        return orig(repo, subset, x)

    # i18n: "filelog" is a keyword
    pat = revset.getstring(x, _("filelog requires a pattern"))
    m = match.match(repo.root, repo.getcwd(), [pat], default='relpath',
                    ctx=repo[None])
    s = set()

    if not match.patkind(pat):
        # slow
        for r in subset:
            ctx = repo[r]
            cfiles = ctx.files()
            for f in m.files():
                if f in cfiles:
                    s.add(ctx.rev())
                    break
    else:
        # partial
        files = (f for f in repo[None] if m(f))
        for f in files:
            fctx = repo[None].filectx(f)
            s.add(fctx.linkrev())
            for actx in fctx.ancestors():
                s.add(actx.linkrev())

    return smartset.baseset([r for r in subset if r in s])

@command('gc', [], _('hg gc [REPO...]'), norepo=True)
def gc(ui, *args, **opts):
    '''garbage collect the client and server filelog caches
    '''
    cachepaths = set()

    # get the system client cache
    systemcache = shallowutil.getcachepath(ui, allowempty=True)
    if systemcache:
        cachepaths.add(systemcache)

    # get repo client and server cache
    repopaths = []
    pwd = ui.environ.get('PWD')
    if pwd:
        repopaths.append(pwd)

    repopaths.extend(args)
    repos = []
    for repopath in repopaths:
        try:
            repo = hg.peer(ui, {}, repopath)
            repos.append(repo)

            repocache = shallowutil.getcachepath(repo.ui, allowempty=True)
            if repocache:
                cachepaths.add(repocache)
        except error.RepoError:
            pass

    # gc client cache
    for cachepath in cachepaths:
        gcclient(ui, cachepath)

    # gc server cache
    for repo in repos:
        remotefilelogserver.gcserver(ui, repo._repo)

def gcclient(ui, cachepath):
    # get list of repos that use this cache
    repospath = os.path.join(cachepath, 'repos')
    if not os.path.exists(repospath):
        ui.warn(_("no known cache at %s\n") % cachepath)
        return

    reposfile = open(repospath, 'rb')
    repos = {r[:-1] for r in reposfile.readlines()}
    reposfile.close()

    # build list of useful files
    validrepos = []
    keepkeys = set()

    sharedcache = None
    filesrepacked = False

    count = 0
    progress = ui.makeprogress(_("analyzing repositories"), unit="repos",
                               total=len(repos))
    for path in repos:
        progress.update(count)
        count += 1
        try:
            path = ui.expandpath(os.path.normpath(path))
        except TypeError as e:
            ui.warn(_("warning: malformed path: %r:%s\n") % (path, e))
            traceback.print_exc()
            continue
        try:
            peer = hg.peer(ui, {}, path)
            repo = peer._repo
        except error.RepoError:
            continue

        validrepos.append(path)

        # Protect against any repo or config changes that have happened since
        # this repo was added to the repos file. We'd rather this loop succeed
        # and too much be deleted, than the loop fail and nothing gets deleted.
        if not isenabled(repo):
            continue

        if not util.safehasattr(repo, 'name'):
            ui.warn(_("repo %s is a misconfigured remotefilelog repo\n") % path)
            continue

        # If garbage collection on repack and repack on hg gc are enabled
        # then loose files are repacked and garbage collected.
        # Otherwise regular garbage collection is performed.
        repackonhggc = repo.ui.configbool('remotefilelog', 'repackonhggc')
        gcrepack = repo.ui.configbool('remotefilelog', 'gcrepack')
        if repackonhggc and gcrepack:
            try:
                repackmod.incrementalrepack(repo)
                filesrepacked = True
                continue
            except (IOError, repackmod.RepackAlreadyRunning):
                # If repack cannot be performed due to not enough disk space
                # continue doing garbage collection of loose files w/o repack
                pass

        reponame = repo.name
        if not sharedcache:
864 if not sharedcache:
864 sharedcache = repo.sharedstore
865 sharedcache = repo.sharedstore
865
866
866 # Compute a keepset which is not garbage collected
867 # Compute a keepset which is not garbage collected
867 def keyfn(fname, fnode):
868 def keyfn(fname, fnode):
868 return fileserverclient.getcachekey(reponame, fname, hex(fnode))
869 return fileserverclient.getcachekey(reponame, fname, hex(fnode))
869 keepkeys = repackmod.keepset(repo, keyfn=keyfn, lastkeepkeys=keepkeys)
870 keepkeys = repackmod.keepset(repo, keyfn=keyfn, lastkeepkeys=keepkeys)
870
871
871 progress.complete()
872 progress.complete()
872
873
873 # write list of valid repos back
874 # write list of valid repos back
874 oldumask = os.umask(0o002)
875 oldumask = os.umask(0o002)
875 try:
876 try:
876 reposfile = open(repospath, 'wb')
877 reposfile = open(repospath, 'wb')
877 reposfile.writelines([("%s\n" % r) for r in validrepos])
878 reposfile.writelines([("%s\n" % r) for r in validrepos])
878 reposfile.close()
879 reposfile.close()
879 finally:
880 finally:
880 os.umask(oldumask)
881 os.umask(oldumask)
881
882
882 # prune cache
883 # prune cache
883 if sharedcache is not None:
884 if sharedcache is not None:
884 sharedcache.gc(keepkeys)
885 sharedcache.gc(keepkeys)
885 elif not filesrepacked:
886 elif not filesrepacked:
886 ui.warn(_("warning: no valid repos in repofile\n"))
887 ui.warn(_("warning: no valid repos in repofile\n"))
887
888
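# For illustration (not part of the upstream source): the 'repos' file read
# by gcclient() above is assumed to be plain text with one repository path
# per line, e.g.
#
#     /home/alice/repo1
#     /home/alice/repo2
#
# which is why each entry is stripped of its trailing newline (r[:-1])
# before being validated and written back.
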
def log(orig, ui, repo, *pats, **opts):
    if not isenabled(repo):
        return orig(ui, repo, *pats, **opts)

    follow = opts.get(r'follow')
    revs = opts.get(r'rev')
    if pats:
        # Force slowpath for non-follow patterns and follows that start from
        # non-working-copy-parent revs.
        if not follow or revs:
            # This forces the slowpath
            opts[r'removed'] = True

        # If this is a non-follow log without any revs specified, recommend that
        # the user add -f to speed it up.
        if not follow and not revs:
            match = scmutil.match(repo['.'], pats, pycompat.byteskwargs(opts))
            isfile = not match.anypats()
            if isfile:
                for file in match.files():
                    if not os.path.isfile(repo.wjoin(file)):
                        isfile = False
                        break

            if isfile:
                ui.warn(_("warning: file log can be slow on large repos - " +
                          "use -f to speed it up\n"))

    return orig(ui, repo, *pats, **opts)

def revdatelimit(ui, revset):
    """Update revset so that only changesets no older than 'prefetchdays' days
    are included. The default value is 14 days. If 'prefetchdays' is set to
    zero or a negative value, no date restriction is applied.
    """
    days = ui.configint('remotefilelog', 'prefetchdays')
    if days > 0:
        revset = '(%s) & date(-%s)' % (revset, days)
    return revset

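# A minimal sketch of revdatelimit() in action, assuming the 14-day default
# for remotefilelog.prefetchdays:
#
#     revdatelimit(ui, 'bookmark()')  ==  '(bookmark()) & date(-14)'
#
# With prefetchdays set to zero or a negative value the revset is returned
# unchanged.
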
def readytofetch(repo):
    """Check that enough time has passed since the last background prefetch.
    This only relates to prefetches after operations that change the working
    copy parent. The default delay between background prefetches is 2 minutes.
    """
    timeout = repo.ui.configint('remotefilelog', 'prefetchdelay')
    fname = repo.vfs.join('lastprefetch')

    ready = False
    with open(fname, 'a'):
        # the with construct above is used to avoid race conditions
        modtime = os.path.getmtime(fname)
        if (time.time() - modtime) > timeout:
            os.utime(fname, None)
            ready = True

    return ready

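# Example configuration (hypothetical value) for the throttle implemented by
# readytofetch(); 'prefetchdelay' is expressed in seconds:
#
#     [remotefilelog]
#     prefetchdelay = 120
#
# The mtime of the 'lastprefetch' file records the last background prefetch;
# opening it in append mode creates it on first use without truncating it.
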
def wcpprefetch(ui, repo, **kwargs):
    """Prefetches, in the background, revisions specified by the
    bgprefetchrevs revset. Also does a background repack if the
    backgroundrepack config flag is set.
    """
    shallow = isenabled(repo)
    bgprefetchrevs = ui.config('remotefilelog', 'bgprefetchrevs')
    isready = readytofetch(repo)

    if not (shallow and bgprefetchrevs and isready):
        return

    bgrepack = repo.ui.configbool('remotefilelog', 'backgroundrepack')
    # update a revset with a date limit
    bgprefetchrevs = revdatelimit(ui, bgprefetchrevs)

    def anon():
        if util.safehasattr(repo, 'ranprefetch') and repo.ranprefetch:
            return
        repo.ranprefetch = True
        repo.backgroundprefetch(bgprefetchrevs, repack=bgrepack)

    repo._afterlock(anon)

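# Example configuration (the revset value is only a plausible choice, not
# one mandated by the extension) that would make wcpprefetch() fire after
# working-copy-parent changes:
#
#     [remotefilelog]
#     bgprefetchrevs = .::
#     backgroundrepack = True
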
def pull(orig, ui, repo, *pats, **opts):
    result = orig(ui, repo, *pats, **opts)

    if isenabled(repo):
        # prefetch if it's configured
        prefetchrevset = ui.config('remotefilelog', 'pullprefetch')
        bgrepack = repo.ui.configbool('remotefilelog', 'backgroundrepack')
        bgprefetch = repo.ui.configbool('remotefilelog', 'backgroundprefetch')

        if prefetchrevset:
            ui.status(_("prefetching file contents\n"))
            revs = scmutil.revrange(repo, [prefetchrevset])
            base = repo['.'].rev()
            if bgprefetch:
                repo.backgroundprefetch(prefetchrevset, repack=bgrepack)
            else:
                repo.prefetch(revs, base=base)
                if bgrepack:
                    repackmod.backgroundrepack(repo, incremental=True)
        elif bgrepack:
            repackmod.backgroundrepack(repo, incremental=True)

    return result

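# Example configuration (illustrative) exercising the pull hook above; the
# 'pullprefetch = bookmark()' value matches the revset used in the test
# further down:
#
#     [remotefilelog]
#     pullprefetch = bookmark()
#     backgroundprefetch = True
#     backgroundrepack = True
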
def exchangepull(orig, repo, remote, *args, **kwargs):
    # Hook into the callstream/getbundle to insert bundle capabilities
    # during a pull.
    def localgetbundle(orig, source, heads=None, common=None, bundlecaps=None,
                       **kwargs):
        if not bundlecaps:
            bundlecaps = set()
        bundlecaps.add(constants.BUNDLE2_CAPABLITY)
        return orig(source, heads=heads, common=common, bundlecaps=bundlecaps,
                    **kwargs)

    if util.safehasattr(remote, '_callstream'):
        remote._localrepo = repo
    elif util.safehasattr(remote, 'getbundle'):
        extensions.wrapfunction(remote, 'getbundle', localgetbundle)

    return orig(repo, remote, *args, **kwargs)

def _fileprefetchhook(repo, revs, match):
    if isenabled(repo):
        allfiles = []
        for rev in revs:
            if rev == nodemod.wdirrev or rev is None:
                continue
            ctx = repo[rev]
            mf = ctx.manifest()
            sparsematch = repo.maybesparsematch(ctx.rev())
            for path in ctx.walk(match):
                if path.endswith('/'):
                    # Tree manifest that's being excluded as part of narrow
                    continue
                if (not sparsematch or sparsematch(path)) and path in mf:
                    allfiles.append((path, hex(mf[path])))
        repo.fileservice.prefetch(allfiles)

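# For illustration (not part of the upstream source): the list handed to
# repo.fileservice.prefetch() above is built as (path, hex filenode) pairs,
# roughly of the shape
#
#     [('dir/file1', '<40-char hex filenode>'),
#      ('dir/file2', '<40-char hex filenode>')]
#
# gathered from each revision's manifest after sparse/narrow filtering.
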
@command('debugremotefilelog', [
    ('d', 'decompress', None, _('decompress the filelog first')),
    ], _('hg debugremotefilelog <path>'), norepo=True)
def debugremotefilelog(ui, path, **opts):
    return debugcommands.debugremotefilelog(ui, path, **opts)

@command('verifyremotefilelog', [
    ('d', 'decompress', None, _('decompress the filelogs first')),
    ], _('hg verifyremotefilelogs <directory>'), norepo=True)
def verifyremotefilelog(ui, path, **opts):
    return debugcommands.verifyremotefilelog(ui, path, **opts)

@command('debugdatapack', [
    ('', 'long', None, _('print the long hashes')),
    ('', 'node', '', _('dump the contents of node'), 'NODE'),
    ], _('hg debugdatapack <paths>'), norepo=True)
def debugdatapack(ui, *paths, **opts):
    return debugcommands.debugdatapack(ui, *paths, **opts)

@command('debughistorypack', [
    ], _('hg debughistorypack <path>'), norepo=True)
def debughistorypack(ui, path, **opts):
    return debugcommands.debughistorypack(ui, path)

@command('debugkeepset', [
    ], _('hg debugkeepset'))
def debugkeepset(ui, repo, **opts):
    # The command is used to measure keepset computation time
    def keyfn(fname, fnode):
        return fileserverclient.getcachekey(repo.name, fname, hex(fnode))
    repackmod.keepset(repo, keyfn)
    return

@command('debugwaitonrepack', [
    ], _('hg debugwaitonrepack'))
def debugwaitonrepack(ui, repo, **opts):
    return debugcommands.debugwaitonrepack(repo)

@command('debugwaitonprefetch', [
    ], _('hg debugwaitonprefetch'))
def debugwaitonprefetch(ui, repo, **opts):
    return debugcommands.debugwaitonprefetch(repo)

def resolveprefetchopts(ui, opts):
    if not opts.get('rev'):
        revset = ['.', 'draft()']

        prefetchrevset = ui.config('remotefilelog', 'pullprefetch', None)
        if prefetchrevset:
            revset.append('(%s)' % prefetchrevset)
        bgprefetchrevs = ui.config('remotefilelog', 'bgprefetchrevs', None)
        if bgprefetchrevs:
            revset.append('(%s)' % bgprefetchrevs)
        revset = '+'.join(revset)

        # update a revset with a date limit
        revset = revdatelimit(ui, revset)

        opts['rev'] = [revset]

    if not opts.get('base'):
        opts['base'] = None

    return opts

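# A sketch of the default revset assembled by resolveprefetchopts() when no
# --rev is given and both pullprefetch and bgprefetchrevs are configured:
#
#     '.+draft()+(<pullprefetch>)+(<bgprefetchrevs>)'
#
# revdatelimit() then wraps it, with the default 14-day window, as:
#
#     '(.+draft()+(<pullprefetch>)+(<bgprefetchrevs>)) & date(-14)'
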
@command('prefetch', [
    ('r', 'rev', [], _('prefetch the specified revisions'), _('REV')),
    ('', 'repack', False, _('run repack after prefetch')),
    ('b', 'base', '', _("rev that is assumed to already be local")),
    ] + commands.walkopts, _('hg prefetch [OPTIONS] [FILE...]'))
def prefetch(ui, repo, *pats, **opts):
    """prefetch file revisions from the server

    Prefetches file revisions for the specified revs and stores them in the
    local remotefilelog cache. If no rev is specified, the default rev is
    used, which is the union of dot, draft, pullprefetch and bgprefetchrevs.
    File names or patterns can be used to limit which files are downloaded.

    Return 0 on success.
    """
    opts = pycompat.byteskwargs(opts)
    if not isenabled(repo):
        raise error.Abort(_("repo is not shallow"))

    opts = resolveprefetchopts(ui, opts)
    revs = scmutil.revrange(repo, opts.get('rev'))
    repo.prefetch(revs, opts.get('base'), pats, opts)

    # Run repack in background
    if opts.get('repack'):
        repackmod.backgroundrepack(repo, incremental=True)

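# Example invocations (illustrative), mirroring the test cases below:
#
#     hg prefetch -r 0             # a single revision
#     hg prefetch -r 0::1 -b 0     # a range, with rev 0 assumed local
#     hg prefetch -r 1 x           # only file 'x' at rev 1
#     hg prefetch --repack         # default revs, then a background repack
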
@command('repack', [
    ('', 'background', None, _('run in a background process'), None),
    ('', 'incremental', None, _('do an incremental repack'), None),
    ('', 'packsonly', None, _('only repack packs (skip loose objects)'), None),
    ], _('hg repack [OPTIONS]'))
def repack_(ui, repo, *pats, **opts):
    if opts.get(r'background'):
        repackmod.backgroundrepack(repo, incremental=opts.get(r'incremental'),
                                   packsonly=opts.get(r'packsonly', False))
        return

    options = {'packsonly': opts.get(r'packsonly')}

    try:
        if opts.get(r'incremental'):
            repackmod.incrementalrepack(repo, options=options)
        else:
            repackmod.fullrepack(repo, options=options)
    except repackmod.RepackAlreadyRunning as ex:
        # Don't propagate the exception if the repack is already in
        # progress, since we want the command to exit 0.
        repo.ui.warn('%s\n' % ex)
@@ -1,235 +1,238 @@
#require no-windows

  $ . "$TESTDIR/remotefilelog-library.sh"

  $ hg init master
  $ cd master
  $ cat >> .hg/hgrc <<EOF
  > [remotefilelog]
  > server=True
  > EOF
  $ echo x > x
  $ echo z > z
  $ hg commit -qAm x
  $ echo x2 > x
  $ echo y > y
  $ hg commit -qAm y
  $ hg bookmark foo

  $ cd ..

# prefetch a revision

  $ hgcloneshallow ssh://user@dummy/master shallow --noupdate
  streaming all changes
  2 files to transfer, 528 bytes of data
  transferred 528 bytes in * seconds (*/sec) (glob)
  searching for changes
  no changes found
  $ cd shallow

  $ hg prefetch -r 0
  2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)

  $ hg cat -r 0 x
  x

# prefetch with base

  $ clearcache
  $ hg prefetch -r 0::1 -b 0
  2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)

  $ hg cat -r 1 x
  x2
  $ hg cat -r 1 y
  y

  $ hg cat -r 0 x
  x
  1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)

  $ hg cat -r 0 z
  z
  1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)

  $ hg prefetch -r 0::1 --base 0
  $ hg prefetch -r 0::1 -b 1
  $ hg prefetch -r 0::1

# prefetch a range of revisions

  $ clearcache
  $ hg prefetch -r 0::1
  4 files fetched over 1 fetches - (4 misses, 0.00% hit ratio) over *s (glob)

  $ hg cat -r 0 x
  x
  $ hg cat -r 1 x
  x2

# prefetch certain files

  $ clearcache
  $ hg prefetch -r 1 x
  1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)

  $ hg cat -r 1 x
  x2

  $ hg cat -r 1 y
  y
  1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)

# prefetch on pull when configured

  $ printf "[remotefilelog]\npullprefetch=bookmark()\n" >> .hg/hgrc
  $ hg strip tip
  saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/109c3a557a73-3f43405e-backup.hg (glob)

  $ clearcache
  $ hg pull
  pulling from ssh://user@dummy/master
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 0 changes to 0 files
  updating bookmark foo
  new changesets 109c3a557a73
  (run 'hg update' to get a working copy)
  prefetching file contents
  3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over *s (glob)

  $ hg up tip
  3 files updated, 0 files merged, 0 files removed, 0 files unresolved

# prefetch only fetches changes not in working copy

  $ hg strip tip
  1 files updated, 0 files merged, 1 files removed, 0 files unresolved
  saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/109c3a557a73-3f43405e-backup.hg (glob)
  1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
  $ clearcache

  $ hg pull
  pulling from ssh://user@dummy/master
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 0 changes to 0 files
  updating bookmark foo
  new changesets 109c3a557a73
  (run 'hg update' to get a working copy)
  prefetching file contents
  2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)

# Make some local commits that produce the same file versions as are on the
# server. To simulate a situation where we have local commits that were somehow
# pushed, and we will soon pull.

  $ hg prefetch -r 'all()'
  2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
  $ hg strip -q -r 0
  $ echo x > x
  $ echo z > z
  $ hg commit -qAm x
  $ echo x2 > x
  $ echo y > y
  $ hg commit -qAm y

# prefetch server versions, even if local versions are available

  $ clearcache
  $ hg strip -q tip
  $ hg pull
  pulling from ssh://user@dummy/master
  searching for changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 0 changes to 0 files
  updating bookmark foo
  new changesets 109c3a557a73
  1 local changesets published (?)
  (run 'hg update' to get a working copy)
  prefetching file contents
  2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)

  $ cd ..

# Prefetch unknown files during checkout

  $ hgcloneshallow ssh://user@dummy/master shallow2
  streaming all changes
  2 files to transfer, 528 bytes of data
  transferred 528 bytes in * seconds * (glob)
  searching for changes
  no changes found
  updating to branch default
  3 files updated, 0 files merged, 0 files removed, 0 files unresolved
  1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
  $ cd shallow2
  $ hg up -q null
  $ echo x > x
  $ echo y > y
  $ echo z > z
  $ clearcache
  $ hg up tip
  x: untracked file differs
  3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over * (glob)
  abort: untracked files in working directory differ from files in requested revision
  [255]
  $ hg revert --all

# Test batch fetching of lookup files during hg status
  $ hg up --clean tip
  3 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ hg debugrebuilddirstate
  $ clearcache
  $ hg status
  3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over * (glob)

# Prefetch during addrename detection
  $ hg up -q --clean tip
  $ hg revert --all
  $ mv x x2
  $ mv y y2
  $ mv z z2
  $ echo a > a
  $ hg add a
  $ rm a
  $ clearcache
  $ hg addremove -s 50 > /dev/null
  3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over * (glob)
  $ hg revert --all
  forgetting x2
  forgetting y2
  forgetting z2
  undeleting x
  undeleting y
  undeleting z


# Revert across double renames. Note: the scary "abort" error is because of
# https://bz.mercurial-scm.org/5419 .

  $ cd ../master
  $ hg mv z z2
  $ hg commit -m 'move z -> z2'
  $ cd ../shallow2
  $ hg pull -q
  $ clearcache
  $ hg mv y y2
  y2: not overwriting - file exists
  ('hg rename --after' to record the rename)
  [1]
  $ hg mv x x2
  x2: not overwriting - file exists
  ('hg rename --after' to record the rename)
  [1]
  $ hg mv z2 z3
  z2: not copying - file is not managed
  abort: no files to copy
  [255]
  $ hg revert -a -r 1 || true
  3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over * (glob)
  abort: z2@109c3a557a73: not found in manifest! (?)