remotefilelog: include file contents in bundles produced during strip...
Kyle Lippincott
r47606:47a95277 default
@@ -1,1260 +1,1262 @@
# __init__.py - remotefilelog extension
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""remotefilelog causes Mercurial to lazily fetch file contents (EXPERIMENTAL)

This extension is HIGHLY EXPERIMENTAL. There are NO BACKWARDS COMPATIBILITY
GUARANTEES. This means that repositories created with this extension may
only be usable with the exact version of this extension/Mercurial that was
used. The extension attempts to enforce this in order to prevent repository
corruption.

remotefilelog works by fetching file contents lazily and storing them
in a cache on the client rather than in revlogs. This allows enormous
histories to be transferred only partially, making them easier to
operate on.

Configs:

``packs.maxchainlen`` specifies the maximum delta chain length in pack files

``packs.maxpacksize`` specifies the maximum pack file size

``packs.maxpackfilecount`` specifies the maximum number of packs in the
shared cache (trees only for now)

``remotefilelog.backgroundprefetch`` runs prefetch in background when True

``remotefilelog.bgprefetchrevs`` specifies revisions to fetch on commit and
update, and on other commands that use them. Different from pullprefetch.

``remotefilelog.gcrepack`` does garbage collection during repack when True

``remotefilelog.nodettl`` specifies maximum TTL of a node in seconds before
it is garbage collected

``remotefilelog.repackonhggc`` runs repack on hg gc when True

``remotefilelog.prefetchdays`` specifies the maximum age of a commit in
days after which it is no longer prefetched.

``remotefilelog.prefetchdelay`` specifies delay between background
prefetches in seconds after operations that change the working copy parent

``remotefilelog.data.gencountlimit`` constrains the minimum number of data
pack files required to be considered part of a generation. In particular,
minimum number of pack files > gencountlimit.

``remotefilelog.data.generations`` list for specifying the lower bound of
each generation of the data pack files. For example, list ['100MB','1MB']
or ['1MB', '100MB'] will lead to three generations: [0, 1MB),
[1MB, 100MB) and [100MB, infinity).

``remotefilelog.data.maxrepackpacks`` the maximum number of pack files to
include in an incremental data repack.

``remotefilelog.data.repackmaxpacksize`` the maximum size of a pack file for
it to be considered for an incremental data repack.

``remotefilelog.data.repacksizelimit`` the maximum total size of pack files
to include in an incremental data repack.

``remotefilelog.history.gencountlimit`` constrains the minimum number of
history pack files required to be considered part of a generation. In
particular, minimum number of pack files > gencountlimit.

``remotefilelog.history.generations`` list for specifying the lower bound of
each generation of the history pack files. For example, list
['100MB', '1MB'] or ['1MB', '100MB'] will lead to three generations:
[0, 1MB), [1MB, 100MB) and [100MB, infinity).

``remotefilelog.history.maxrepackpacks`` the maximum number of pack files to
include in an incremental history repack.

``remotefilelog.history.repackmaxpacksize`` the maximum size of a pack file
for it to be considered for an incremental history repack.

``remotefilelog.history.repacksizelimit`` the maximum total size of pack
files to include in an incremental history repack.

``remotefilelog.backgroundrepack`` automatically consolidate packs in the
background

``remotefilelog.cachepath`` path to cache

``remotefilelog.cachegroup`` if set, make cache directory sgid to this
group

``remotefilelog.cacheprocess`` binary to invoke for fetching file data

``remotefilelog.debug`` turn on remotefilelog-specific debug output

``remotefilelog.excludepattern`` pattern of files to exclude from pulls

``remotefilelog.includepattern`` pattern of files to include in pulls

``remotefilelog.fetchwarning`` message to print when too many
single-file fetches occur

``remotefilelog.getfilesstep`` number of files to request in a single RPC

``remotefilelog.getfilestype`` if set to 'threaded' use threads to fetch
files, otherwise use optimistic fetching

``remotefilelog.pullprefetch`` revset for selecting files that should be
eagerly downloaded rather than lazily

``remotefilelog.reponame`` name of the repo. If set, used to partition
data from other repos in a shared store.

``remotefilelog.server`` if true, enable server-side functionality

``remotefilelog.servercachepath`` path for caching blobs on the server

``remotefilelog.serverexpiration`` number of days to keep cached server
blobs

``remotefilelog.validatecache`` if set, check cache entries for corruption
before returning blobs

``remotefilelog.validatecachelog`` if set, check cache entries for
corruption before returning metadata

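For example, a shallow client could enable the extension with a minimal
configuration such as the following (an illustrative sketch; the reponame
and cache path are placeholders, not required values)::

  [extensions]
  remotefilelog =

  [remotefilelog]
  reponame = myrepo
  cachepath = /path/to/shared/cache
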
"""
from __future__ import absolute_import

import os
import time
import traceback

from mercurial.node import (
    hex,
    wdirrev,
)
from mercurial.i18n import _
from mercurial.pycompat import open
from mercurial import (
    changegroup,
    changelog,
    commands,
    configitems,
    context,
    copies,
    debugcommands as hgdebugcommands,
    dispatch,
    error,
    exchange,
    extensions,
    hg,
    localrepo,
    match as matchmod,
    merge,
    mergestate as mergestatemod,
    patch,
    pycompat,
    registrar,
    repair,
    repoview,
    revset,
    scmutil,
    smartset,
    streamclone,
    util,
)
from . import (
    constants,
    debugcommands,
    fileserverclient,
    remotefilectx,
    remotefilelog,
    remotefilelogserver,
    repack as repackmod,
    shallowbundle,
    shallowrepo,
    shallowstore,
    shallowutil,
    shallowverifier,
)

# ensures debug commands are registered
hgdebugcommands.command

cmdtable = {}
command = registrar.command(cmdtable)

configtable = {}
configitem = registrar.configitem(configtable)

configitem(b'remotefilelog', b'debug', default=False)

configitem(b'remotefilelog', b'reponame', default=b'')
configitem(b'remotefilelog', b'cachepath', default=None)
configitem(b'remotefilelog', b'cachegroup', default=None)
configitem(b'remotefilelog', b'cacheprocess', default=None)
configitem(b'remotefilelog', b'cacheprocess.includepath', default=None)
configitem(b"remotefilelog", b"cachelimit", default=b"1000 GB")

configitem(
    b'remotefilelog',
    b'fallbackpath',
    default=configitems.dynamicdefault,
    alias=[(b'remotefilelog', b'fallbackrepo')],
)

configitem(b'remotefilelog', b'validatecachelog', default=None)
configitem(b'remotefilelog', b'validatecache', default=b'on')
configitem(b'remotefilelog', b'server', default=None)
configitem(b'remotefilelog', b'servercachepath', default=None)
configitem(b"remotefilelog", b"serverexpiration", default=30)
configitem(b'remotefilelog', b'backgroundrepack', default=False)
configitem(b'remotefilelog', b'bgprefetchrevs', default=None)
configitem(b'remotefilelog', b'pullprefetch', default=None)
configitem(b'remotefilelog', b'backgroundprefetch', default=False)
configitem(b'remotefilelog', b'prefetchdelay', default=120)
configitem(b'remotefilelog', b'prefetchdays', default=14)
# Other values include 'local' or 'none'. Any unrecognized value is 'all'.
configitem(b'remotefilelog', b'strip.includefiles', default=b'all')
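# Illustrative hgrc snippet: setting this to 'none' should restore the old
# behavior of omitting file contents from strip backup bundles ('all' is
# the default):
#   [remotefilelog]
#   strip.includefiles = none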

configitem(b'remotefilelog', b'getfilesstep', default=10000)
configitem(b'remotefilelog', b'getfilestype', default=b'optimistic')
configitem(b'remotefilelog', b'batchsize', configitems.dynamicdefault)
configitem(b'remotefilelog', b'fetchwarning', default=b'')

configitem(b'remotefilelog', b'includepattern', default=None)
configitem(b'remotefilelog', b'excludepattern', default=None)

configitem(b'remotefilelog', b'gcrepack', default=False)
configitem(b'remotefilelog', b'repackonhggc', default=False)
configitem(b'repack', b'chainorphansbysize', default=True, experimental=True)

configitem(b'packs', b'maxpacksize', default=0)
configitem(b'packs', b'maxchainlen', default=1000)

configitem(b'devel', b'remotefilelog.bg-wait', default=False)

# default TTL limit is 30 days
_defaultlimit = 60 * 60 * 24 * 30
configitem(b'remotefilelog', b'nodettl', default=_defaultlimit)

configitem(b'remotefilelog', b'data.gencountlimit', default=2)
configitem(
    b'remotefilelog', b'data.generations', default=[b'1GB', b'100MB', b'1MB']
)
configitem(b'remotefilelog', b'data.maxrepackpacks', default=50)
configitem(b'remotefilelog', b'data.repackmaxpacksize', default=b'4GB')
configitem(b'remotefilelog', b'data.repacksizelimit', default=b'100MB')

configitem(b'remotefilelog', b'history.gencountlimit', default=2)
configitem(b'remotefilelog', b'history.generations', default=[b'100MB'])
configitem(b'remotefilelog', b'history.maxrepackpacks', default=50)
configitem(b'remotefilelog', b'history.repackmaxpacksize', default=b'400MB')
configitem(b'remotefilelog', b'history.repacksizelimit', default=b'100MB')

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = b'ships-with-hg-core'

repoclass = localrepo.localrepository
repoclass._basesupported.add(constants.SHALLOWREPO_REQUIREMENT)

isenabled = shallowutil.isenabled


def uisetup(ui):
    """Wraps user facing Mercurial commands to swap them out with shallow
    versions.
    """
    hg.wirepeersetupfuncs.append(fileserverclient.peersetup)

    entry = extensions.wrapcommand(commands.table, b'clone', cloneshallow)
    entry[1].append(
        (
            b'',
            b'shallow',
            None,
            _(b"create a shallow clone which uses remote file history"),
        )
    )

    extensions.wrapcommand(
        commands.table, b'debugindex', debugcommands.debugindex
    )
    extensions.wrapcommand(
        commands.table, b'debugindexdot', debugcommands.debugindexdot
    )
    extensions.wrapcommand(commands.table, b'log', log)
    extensions.wrapcommand(commands.table, b'pull', pull)

    # Prevent 'hg manifest --all'
    def _manifest(orig, ui, repo, *args, **opts):
        if isenabled(repo) and opts.get('all'):
            raise error.Abort(_(b"--all is not supported in a shallow repo"))

        return orig(ui, repo, *args, **opts)

    extensions.wrapcommand(commands.table, b"manifest", _manifest)

    # Wrap remotefilelog with lfs code
    def _lfsloaded(loaded=False):
        lfsmod = None
        try:
            lfsmod = extensions.find(b'lfs')
        except KeyError:
            pass
        if lfsmod:
            lfsmod.wrapfilelog(remotefilelog.remotefilelog)
            fileserverclient._lfsmod = lfsmod

    extensions.afterloaded(b'lfs', _lfsloaded)

    # debugdata needs remotefilelog.len to work
    extensions.wrapcommand(commands.table, b'debugdata', debugdatashallow)

    changegroup.cgpacker = shallowbundle.shallowcg1packer

    extensions.wrapfunction(
        changegroup, b'_addchangegroupfiles', shallowbundle.addchangegroupfiles
    )
    extensions.wrapfunction(
        changegroup, b'makechangegroup', shallowbundle.makechangegroup
    )
    extensions.wrapfunction(localrepo, b'makestore', storewrapper)
    extensions.wrapfunction(exchange, b'pull', exchangepull)
    extensions.wrapfunction(merge, b'applyupdates', applyupdates)
    extensions.wrapfunction(merge, b'_checkunknownfiles', checkunknownfiles)
    extensions.wrapfunction(context.workingctx, b'_checklookup', checklookup)
    extensions.wrapfunction(scmutil, b'_findrenames', findrenames)
    extensions.wrapfunction(
        copies, b'_computeforwardmissing', computeforwardmissing
    )
    extensions.wrapfunction(dispatch, b'runcommand', runcommand)
    extensions.wrapfunction(repair, b'_collectbrokencsets', _collectbrokencsets)
    extensions.wrapfunction(context.changectx, b'filectx', filectx)
    extensions.wrapfunction(context.workingctx, b'filectx', workingfilectx)
    extensions.wrapfunction(patch, b'trydiff', trydiff)
    extensions.wrapfunction(hg, b'verify', _verify)
    scmutil.fileprefetchhooks.add(b'remotefilelog', _fileprefetchhook)

    # disappointing hacks below
    extensions.wrapfunction(scmutil, b'getrenamedfn', getrenamedfn)
    extensions.wrapfunction(revset, b'filelog', filelogrevset)
    revset.symbols[b'filelog'] = revset.filelog


def cloneshallow(orig, ui, repo, *args, **opts):
    if opts.get('shallow'):
        repos = []

        def pull_shallow(orig, self, *args, **kwargs):
            if not isenabled(self):
                repos.append(self.unfiltered())
                # set up the client hooks so the post-clone update works
                setupclient(self.ui, self.unfiltered())

                # setupclient fixed the class on the repo itself
                # but we also need to fix it on the repoview
                if isinstance(self, repoview.repoview):
                    self.__class__.__bases__ = (
                        self.__class__.__bases__[0],
                        self.unfiltered().__class__,
                    )
                self.requirements.add(constants.SHALLOWREPO_REQUIREMENT)
                with self.lock():
                    # acquire store lock before writing requirements as some
                    # requirements might be written to .hg/store/requires
                    scmutil.writereporequirements(self)

                # Since setupclient hadn't been called, exchange.pull was not
                # wrapped. So we need to manually invoke our version of it.
                return exchangepull(orig, self, *args, **kwargs)
            else:
                return orig(self, *args, **kwargs)

        extensions.wrapfunction(exchange, b'pull', pull_shallow)

        # Wrap the stream logic to add requirements and to pass include/exclude
        # patterns around.
        def setup_streamout(repo, remote):
            # Replace remote.stream_out with a version that sends file
            # patterns.
            def stream_out_shallow(orig):
                caps = remote.capabilities()
                if constants.NETWORK_CAP_LEGACY_SSH_GETFILES in caps:
                    opts = {}
                    if repo.includepattern:
                        opts['includepattern'] = b'\0'.join(repo.includepattern)
                    if repo.excludepattern:
                        opts['excludepattern'] = b'\0'.join(repo.excludepattern)
                    return remote._callstream(b'stream_out_shallow', **opts)
                else:
                    return orig()

            extensions.wrapfunction(remote, b'stream_out', stream_out_shallow)

        def stream_wrap(orig, op):
            setup_streamout(op.repo, op.remote)
            return orig(op)

        extensions.wrapfunction(
            streamclone, b'maybeperformlegacystreamclone', stream_wrap
        )

        def canperformstreamclone(orig, pullop, bundle2=False):
            # remotefilelog is currently incompatible with the
            # bundle2 flavor of streamclones, so force us to use
            # v1 instead.
            if b'v2' in pullop.remotebundle2caps.get(b'stream', []):
                pullop.remotebundle2caps[b'stream'] = [
                    c for c in pullop.remotebundle2caps[b'stream'] if c != b'v2'
                ]
            if bundle2:
                return False, None
            supported, requirements = orig(pullop, bundle2=bundle2)
            if requirements is not None:
                requirements.add(constants.SHALLOWREPO_REQUIREMENT)
            return supported, requirements

        extensions.wrapfunction(
            streamclone, b'canperformstreamclone', canperformstreamclone
        )

    try:
        orig(ui, repo, *args, **opts)
    finally:
        if opts.get('shallow'):
            for r in repos:
                if util.safehasattr(r, b'fileservice'):
                    r.fileservice.close()


def debugdatashallow(orig, *args, **kwds):
    oldlen = remotefilelog.remotefilelog.__len__
    try:
        remotefilelog.remotefilelog.__len__ = lambda x: 1
        return orig(*args, **kwds)
    finally:
        remotefilelog.remotefilelog.__len__ = oldlen


def reposetup(ui, repo):
    if not repo.local():
        return

    # put here intentionally because it doesn't work in uisetup
    ui.setconfig(b'hooks', b'update.prefetch', wcpprefetch)
    ui.setconfig(b'hooks', b'commit.prefetch', wcpprefetch)

    isserverenabled = ui.configbool(b'remotefilelog', b'server')
    isshallowclient = isenabled(repo)

    if isserverenabled and isshallowclient:
        raise RuntimeError(b"Cannot be both a server and shallow client.")

    if isshallowclient:
        setupclient(ui, repo)

    if isserverenabled:
        remotefilelogserver.setupserver(ui, repo)


def setupclient(ui, repo):
    if not isinstance(repo, localrepo.localrepository):
        return

    # Even clients get the server setup since they need to have the
    # wireprotocol endpoints registered.
    remotefilelogserver.onetimesetup(ui)
    onetimeclientsetup(ui)

    shallowrepo.wraprepo(repo)
    repo.store = shallowstore.wrapstore(repo.store)


def storewrapper(orig, requirements, path, vfstype):
    s = orig(requirements, path, vfstype)
    if constants.SHALLOWREPO_REQUIREMENT in requirements:
        s = shallowstore.wrapstore(s)

    return s


# prefetch files before update
def applyupdates(
    orig, repo, mresult, wctx, mctx, overwrite, wantfiledata, **opts
):
    if isenabled(repo):
        manifest = mctx.manifest()
        files = []
        for f, args, msg in mresult.getactions([mergestatemod.ACTION_GET]):
            files.append((f, hex(manifest[f])))
        # batch fetch the needed files from the server
        repo.fileservice.prefetch(files)
    return orig(repo, mresult, wctx, mctx, overwrite, wantfiledata, **opts)


# Prefetch merge checkunknownfiles
def checkunknownfiles(orig, repo, wctx, mctx, force, mresult, *args, **kwargs):
    if isenabled(repo):
        files = []
        sparsematch = repo.maybesparsematch(mctx.rev())
        for f, (m, actionargs, msg) in mresult.filemap():
            if sparsematch and not sparsematch(f):
                continue
            if m in (
                mergestatemod.ACTION_CREATED,
                mergestatemod.ACTION_DELETED_CHANGED,
                mergestatemod.ACTION_CREATED_MERGE,
            ):
                files.append((f, hex(mctx.filenode(f))))
            elif m == mergestatemod.ACTION_LOCAL_DIR_RENAME_GET:
                f2 = actionargs[0]
                files.append((f2, hex(mctx.filenode(f2))))
        # batch fetch the needed files from the server
        repo.fileservice.prefetch(files)
    return orig(repo, wctx, mctx, force, mresult, *args, **kwargs)


# Prefetch files before status attempts to look at their size and contents
def checklookup(orig, self, files):
    repo = self._repo
    if isenabled(repo):
        prefetchfiles = []
        for parent in self._parents:
            for f in files:
                if f in parent:
                    prefetchfiles.append((f, hex(parent.filenode(f))))
        # batch fetch the needed files from the server
        repo.fileservice.prefetch(prefetchfiles)
    return orig(self, files)


# Prefetch files for the logic that compares added and removed files for
# renames
def findrenames(orig, repo, matcher, added, removed, *args, **kwargs):
    if isenabled(repo):
        files = []
        pmf = repo[b'.'].manifest()
        for f in removed:
            if f in pmf:
                files.append((f, hex(pmf[f])))
        # batch fetch the needed files from the server
        repo.fileservice.prefetch(files)
    return orig(repo, matcher, added, removed, *args, **kwargs)


# prefetch files before pathcopies check
def computeforwardmissing(orig, a, b, match=None):
    missing = orig(a, b, match=match)
    repo = a._repo
    if isenabled(repo):
        mb = b.manifest()

        files = []
        sparsematch = repo.maybesparsematch(b.rev())
        if sparsematch:
            sparsemissing = set()
            for f in missing:
                if sparsematch(f):
                    files.append((f, hex(mb[f])))
                    sparsemissing.add(f)
            missing = sparsemissing

        # batch fetch the needed files from the server
        repo.fileservice.prefetch(files)
    return missing


# close cache miss server connection after the command has finished
def runcommand(orig, lui, repo, *args, **kwargs):
    fileservice = None
    # repo can be None when running in chg:
    # - at startup, reposetup was called because serve is not norepo
    # - a norepo command like "help" is called
    if repo and isenabled(repo):
        fileservice = repo.fileservice
    try:
        return orig(lui, repo, *args, **kwargs)
    finally:
        if fileservice:
            fileservice.close()


# prevent strip from stripping remotefilelogs
def _collectbrokencsets(orig, repo, files, striprev):
    if isenabled(repo):
        files = [f for f in files if not repo.shallowmatch(f)]
    return orig(repo, files, striprev)


# changectx wrappers
def filectx(orig, self, path, fileid=None, filelog=None):
    if fileid is None:
        fileid = self.filenode(path)
    if isenabled(self._repo) and self._repo.shallowmatch(path):
        return remotefilectx.remotefilectx(
            self._repo, path, fileid=fileid, changectx=self, filelog=filelog
        )
    return orig(self, path, fileid=fileid, filelog=filelog)


def workingfilectx(orig, self, path, filelog=None):
    if isenabled(self._repo) and self._repo.shallowmatch(path):
        return remotefilectx.remoteworkingfilectx(
            self._repo, path, workingctx=self, filelog=filelog
        )
    return orig(self, path, filelog=filelog)


# prefetch required revisions before a diff
def trydiff(
    orig,
    repo,
    revs,
    ctx1,
    ctx2,
    modified,
    added,
    removed,
    copy,
    getfilectx,
    *args,
    **kwargs
):
    if isenabled(repo):
        prefetch = []
        mf1 = ctx1.manifest()
        for fname in modified + added + removed:
            if fname in mf1:
                fnode = getfilectx(fname, ctx1).filenode()
                # fnode can be None if it's an edited working ctx file
                if fnode:
                    prefetch.append((fname, hex(fnode)))
            if fname not in removed:
                fnode = getfilectx(fname, ctx2).filenode()
                if fnode:
                    prefetch.append((fname, hex(fnode)))

        repo.fileservice.prefetch(prefetch)

    return orig(
        repo,
        revs,
        ctx1,
        ctx2,
        modified,
        added,
        removed,
        copy,
        getfilectx,
        *args,
        **kwargs
    )


# Prevent verify from processing files
# a stub for mercurial.hg.verify()
def _verify(orig, repo, level=None):
    lock = repo.lock()
    try:
        return shallowverifier.shallowverifier(repo).verify()
    finally:
        lock.release()


clientonetime = False


def onetimeclientsetup(ui):
    global clientonetime
    if clientonetime:
        return
    clientonetime = True

    # Don't commit filelogs until we know the commit hash, since the hash
    # is present in the filelog blob.
    # This violates Mercurial's filelog->manifest->changelog write order,
    # but is generally fine for client repos.
    pendingfilecommits = []
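    # Illustrative flow: a filelog write arriving with an integer linkrev
    # (the changelog length at the time of the write) is queued here and
    # flushed by the changelogadd wrapper below, once the real changelog
    # node for that linkrev is known.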

    def addrawrevision(
        orig,
        self,
        rawtext,
        transaction,
        link,
        p1,
        p2,
        node,
        flags,
        cachedelta=None,
        _metatuple=None,
    ):
        if isinstance(link, int):
            pendingfilecommits.append(
                (
                    self,
                    rawtext,
                    transaction,
                    link,
                    p1,
                    p2,
                    node,
                    flags,
                    cachedelta,
                    _metatuple,
                )
            )
            return node
        else:
            return orig(
                self,
                rawtext,
                transaction,
                link,
                p1,
                p2,
                node,
                flags,
                cachedelta,
                _metatuple=_metatuple,
            )

    extensions.wrapfunction(
        remotefilelog.remotefilelog, b'addrawrevision', addrawrevision
    )

    def changelogadd(orig, self, *args, **kwargs):
        oldlen = len(self)
        node = orig(self, *args, **kwargs)
        newlen = len(self)
        if oldlen != newlen:
            for oldargs in pendingfilecommits:
                log, rt, tr, link, p1, p2, n, fl, c, m = oldargs
                linknode = self.node(link)
                if linknode == node:
                    log.addrawrevision(rt, tr, linknode, p1, p2, n, fl, c, m)
                else:
                    raise error.ProgrammingError(
                        b'pending multiple integer revisions are not supported'
                    )
        else:
            # "link" is actually wrong here (it is set to len(changelog))
            # if changelog remains unchanged, skip writing file revisions
            # but still do a sanity check about pending multiple revisions
            if len({x[3] for x in pendingfilecommits}) > 1:
                raise error.ProgrammingError(
                    b'pending multiple integer revisions are not supported'
                )
        del pendingfilecommits[:]
        return node

    extensions.wrapfunction(changelog.changelog, b'add', changelogadd)


def getrenamedfn(orig, repo, endrev=None):
    if not isenabled(repo) or copies.usechangesetcentricalgo(repo):
        return orig(repo, endrev)

    rcache = {}

    def getrenamed(fn, rev):
        """looks up all renames for a file (up to endrev) the first
        time the file is given. It indexes on the changerev and only
        parses the manifest if linkrev != changerev.
        Returns rename info for fn at changerev rev."""
        if rev in rcache.setdefault(fn, {}):
            return rcache[fn][rev]

        try:
            fctx = repo[rev].filectx(fn)
            for ancestor in fctx.ancestors():
                if ancestor.path() == fn:
                    renamed = ancestor.renamed()
                    rcache[fn][ancestor.rev()] = renamed and renamed[0]

            renamed = fctx.renamed()
            return renamed and renamed[0]
        except error.LookupError:
            return None

    return getrenamed


def filelogrevset(orig, repo, subset, x):
    """``filelog(pattern)``
    Changesets connected to the specified filelog.

    For performance reasons, ``filelog()`` does not show every changeset
    that affects the requested file(s). See :hg:`help log` for details. For
    a slower, more accurate result, use ``file()``.
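
    For example (an illustrative query)::

        hg log -r "filelog('path/to/file')"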
792 """
794 """
793
795
794 if not isenabled(repo):
796 if not isenabled(repo):
795 return orig(repo, subset, x)
797 return orig(repo, subset, x)
796
798
797 # i18n: "filelog" is a keyword
799 # i18n: "filelog" is a keyword
798 pat = revset.getstring(x, _(b"filelog requires a pattern"))
800 pat = revset.getstring(x, _(b"filelog requires a pattern"))
799 m = matchmod.match(
801 m = matchmod.match(
800 repo.root, repo.getcwd(), [pat], default=b'relpath', ctx=repo[None]
802 repo.root, repo.getcwd(), [pat], default=b'relpath', ctx=repo[None]
801 )
803 )
802 s = set()
804 s = set()
803
805
804 if not matchmod.patkind(pat):
806 if not matchmod.patkind(pat):
805 # slow
807 # slow
806 for r in subset:
808 for r in subset:
807 ctx = repo[r]
809 ctx = repo[r]
808 cfiles = ctx.files()
810 cfiles = ctx.files()
809 for f in m.files():
811 for f in m.files():
810 if f in cfiles:
812 if f in cfiles:
811 s.add(ctx.rev())
813 s.add(ctx.rev())
812 break
814 break
813 else:
815 else:
814 # partial
816 # partial
815 files = (f for f in repo[None] if m(f))
817 files = (f for f in repo[None] if m(f))
816 for f in files:
818 for f in files:
817 fctx = repo[None].filectx(f)
819 fctx = repo[None].filectx(f)
818 s.add(fctx.linkrev())
820 s.add(fctx.linkrev())
819 for actx in fctx.ancestors():
821 for actx in fctx.ancestors():
820 s.add(actx.linkrev())
822 s.add(actx.linkrev())
821
823
822 return smartset.baseset([r for r in subset if r in s])
824 return smartset.baseset([r for r in subset if r in s])
823
825
824
826
@command(b'gc', [], _(b'hg gc [REPO...]'), norepo=True)
def gc(ui, *args, **opts):
    """garbage collect the client and server filelog caches"""
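    # Illustrative usage: running ``hg gc`` with no arguments collects the
    # cache for the repository in $PWD; explicit repository paths may also
    # be given, e.g. ``hg gc REPO1 REPO2``.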
    cachepaths = set()

    # get the system client cache
    systemcache = shallowutil.getcachepath(ui, allowempty=True)
    if systemcache:
        cachepaths.add(systemcache)

    # get repo client and server cache
    repopaths = []
    pwd = ui.environ.get(b'PWD')
    if pwd:
        repopaths.append(pwd)

    repopaths.extend(args)
    repos = []
    for repopath in repopaths:
        try:
            repo = hg.peer(ui, {}, repopath)
            repos.append(repo)

            repocache = shallowutil.getcachepath(repo.ui, allowempty=True)
            if repocache:
                cachepaths.add(repocache)
        except error.RepoError:
            pass

    # gc client cache
    for cachepath in cachepaths:
        gcclient(ui, cachepath)

    # gc server cache
    for repo in repos:
        remotefilelogserver.gcserver(ui, repo._repo)


863 def gcclient(ui, cachepath):
865 def gcclient(ui, cachepath):
864 # get list of repos that use this cache
866 # get list of repos that use this cache
865 repospath = os.path.join(cachepath, b'repos')
867 repospath = os.path.join(cachepath, b'repos')
866 if not os.path.exists(repospath):
868 if not os.path.exists(repospath):
867 ui.warn(_(b"no known cache at %s\n") % cachepath)
869 ui.warn(_(b"no known cache at %s\n") % cachepath)
868 return
870 return
869
871
870 reposfile = open(repospath, b'rb')
872 reposfile = open(repospath, b'rb')
871 repos = {r[:-1] for r in reposfile.readlines()}
873 repos = {r[:-1] for r in reposfile.readlines()}
872 reposfile.close()
874 reposfile.close()
873
875
874 # build list of useful files
876 # build list of useful files
875 validrepos = []
877 validrepos = []
876 keepkeys = set()
878 keepkeys = set()
877
879
878 sharedcache = None
880 sharedcache = None
879 filesrepacked = False
881 filesrepacked = False
880
882
881 count = 0
883 count = 0
882 progress = ui.makeprogress(
884 progress = ui.makeprogress(
883 _(b"analyzing repositories"), unit=b"repos", total=len(repos)
885 _(b"analyzing repositories"), unit=b"repos", total=len(repos)
884 )
886 )
885 for path in repos:
887 for path in repos:
886 progress.update(count)
888 progress.update(count)
887 count += 1
889 count += 1
888 try:
890 try:
889 path = ui.expandpath(os.path.normpath(path))
891 path = ui.expandpath(os.path.normpath(path))
890 except TypeError as e:
892 except TypeError as e:
891 ui.warn(_(b"warning: malformed path: %r:%s\n") % (path, e))
893 ui.warn(_(b"warning: malformed path: %r:%s\n") % (path, e))
892 traceback.print_exc()
894 traceback.print_exc()
893 continue
895 continue
894 try:
896 try:
895 peer = hg.peer(ui, {}, path)
897 peer = hg.peer(ui, {}, path)
896 repo = peer._repo
898 repo = peer._repo
897 except error.RepoError:
899 except error.RepoError:
898 continue
900 continue
899
901
900 validrepos.append(path)
902 validrepos.append(path)
901
903
902 # Protect against any repo or config changes that have happened since
904 # Protect against any repo or config changes that have happened since
903 # this repo was added to the repos file. We'd rather have this loop
905 # this repo was added to the repos file. We'd rather have this loop
904 # succeed and delete too much than have it fail and delete nothing.
906 # succeed and delete too much than have it fail and delete nothing.
905 if not isenabled(repo):
907 if not isenabled(repo):
906 continue
908 continue
907
909
908 if not util.safehasattr(repo, b'name'):
910 if not util.safehasattr(repo, b'name'):
909 ui.warn(
911 ui.warn(
910 _(b"repo %s is a misconfigured remotefilelog repo\n") % path
912 _(b"repo %s is a misconfigured remotefilelog repo\n") % path
911 )
913 )
912 continue
914 continue
913
915
914 # If garbage collection on repack and repack on hg gc are enabled
916 # If garbage collection on repack and repack on hg gc are enabled
915 # then loose files are repacked and garbage collected.
917 # then loose files are repacked and garbage collected.
916 # Otherwise regular garbage collection is performed.
918 # Otherwise regular garbage collection is performed.
917 repackonhggc = repo.ui.configbool(b'remotefilelog', b'repackonhggc')
919 repackonhggc = repo.ui.configbool(b'remotefilelog', b'repackonhggc')
918 gcrepack = repo.ui.configbool(b'remotefilelog', b'gcrepack')
920 gcrepack = repo.ui.configbool(b'remotefilelog', b'gcrepack')
919 if repackonhggc and gcrepack:
921 if repackonhggc and gcrepack:
920 try:
922 try:
921 repackmod.incrementalrepack(repo)
923 repackmod.incrementalrepack(repo)
922 filesrepacked = True
924 filesrepacked = True
923 continue
925 continue
924 except (IOError, repackmod.RepackAlreadyRunning):
926 except (IOError, repackmod.RepackAlreadyRunning):
925 # If repack cannot be performed due to not enough disk space
927 # If repack cannot be performed due to not enough disk space
926 # continue doing garbage collection of loose files w/o repack
928 # continue doing garbage collection of loose files w/o repack
927 pass
929 pass
928
930
929 reponame = repo.name
931 reponame = repo.name
930 if not sharedcache:
932 if not sharedcache:
931 sharedcache = repo.sharedstore
933 sharedcache = repo.sharedstore
932
934
933 # Compute a keepset which is not garbage collected
935 # Compute a keepset which is not garbage collected
934 def keyfn(fname, fnode):
936 def keyfn(fname, fnode):
935 return fileserverclient.getcachekey(reponame, fname, hex(fnode))
937 return fileserverclient.getcachekey(reponame, fname, hex(fnode))
936
938
937 keepkeys = repackmod.keepset(repo, keyfn=keyfn, lastkeepkeys=keepkeys)
939 keepkeys = repackmod.keepset(repo, keyfn=keyfn, lastkeepkeys=keepkeys)
938
940
939 progress.complete()
941 progress.complete()
940
942
941 # write list of valid repos back
943 # write list of valid repos back
942 oldumask = os.umask(0o002)
944 oldumask = os.umask(0o002)
943 try:
945 try:
944 reposfile = open(repospath, b'wb')
946 reposfile = open(repospath, b'wb')
945 reposfile.writelines([(b"%s\n" % r) for r in validrepos])
947 reposfile.writelines([(b"%s\n" % r) for r in validrepos])
946 reposfile.close()
948 reposfile.close()
947 finally:
949 finally:
948 os.umask(oldumask)
950 os.umask(oldumask)
949
951
950 # prune cache
952 # prune cache
951 if sharedcache is not None:
953 if sharedcache is not None:
952 sharedcache.gc(keepkeys)
954 sharedcache.gc(keepkeys)
953 elif not filesrepacked:
955 elif not filesrepacked:
954 ui.warn(_(b"warning: no valid repos in repofile\n"))
956 ui.warn(_(b"warning: no valid repos in repofile\n"))
955
957
956
958
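For orientation, the keys produced by getcachekey map directly onto on-disk
cache paths. A rough reconstruction, inferred from the cache paths in the
test transcript further below rather than from the actual
fileserverclient.getcachekey implementation:

    import hashlib

    def cachekey(reponame, path, hexnode):
        # hypothetical sketch: entries live under
        # <reponame>/<first 2 sha1 hex chars of path>/<remaining 38>/<hexnode>
        pathhash = hashlib.sha1(path).hexdigest().encode('ascii')
        return b'/'.join([reponame, pathhash[:2], pathhash[2:], hexnode])

    # cachekey(b'master', b'x', b'ef95c5376f34698742fe34f315fd82136f8f68c0')
    # matches the master/11/f6ad8e.../ef95c5... path seen in the test output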
957 def log(orig, ui, repo, *pats, **opts):
959 def log(orig, ui, repo, *pats, **opts):
958 if not isenabled(repo):
960 if not isenabled(repo):
959 return orig(ui, repo, *pats, **opts)
961 return orig(ui, repo, *pats, **opts)
960
962
961 follow = opts.get('follow')
963 follow = opts.get('follow')
962 revs = opts.get('rev')
964 revs = opts.get('rev')
963 if pats:
965 if pats:
964 # Force slowpath for non-follow patterns and follows that start from
966 # Force slowpath for non-follow patterns and follows that start from
965 # non-working-copy-parent revs.
967 # non-working-copy-parent revs.
966 if not follow or revs:
968 if not follow or revs:
967 # This forces the slowpath
969 # This forces the slowpath
968 opts['removed'] = True
970 opts['removed'] = True
969
971
970 # If this is a non-follow log without any revs specified, recommend that
972 # If this is a non-follow log without any revs specified, recommend that
971 # the user add -f to speed it up.
973 # the user add -f to speed it up.
972 if not follow and not revs:
974 if not follow and not revs:
973 match = scmutil.match(repo[b'.'], pats, pycompat.byteskwargs(opts))
975 match = scmutil.match(repo[b'.'], pats, pycompat.byteskwargs(opts))
974 isfile = not match.anypats()
976 isfile = not match.anypats()
975 if isfile:
977 if isfile:
976 for file in match.files():
978 for file in match.files():
977 if not os.path.isfile(repo.wjoin(file)):
979 if not os.path.isfile(repo.wjoin(file)):
978 isfile = False
980 isfile = False
979 break
981 break
980
982
981 if isfile:
983 if isfile:
982 ui.warn(
984 ui.warn(
983 _(
985 _(
984 b"warning: file log can be slow on large repos - "
986 b"warning: file log can be slow on large repos - "
985 + b"use -f to speed it up\n"
987 + b"use -f to speed it up\n"
986 )
988 )
987 )
989 )
988
990
989 return orig(ui, repo, *pats, **opts)
991 return orig(ui, repo, *pats, **opts)
990
992
991
993
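Per the warning above, a plain `hg log FILE` without -f forces the slow path;
adding -f keeps remotefilelog on the fast path (file name hypothetical):

    $ hg log -f README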
992 def revdatelimit(ui, revset):
994 def revdatelimit(ui, revset):
993 """Update revset so that only changesets no older than 'prefetchdays' days
995 """Update revset so that only changesets no older than 'prefetchdays' days
994 are included. The default value is set to 14 days. If 'prefetchdays' is set
996 are included. The default value is set to 14 days. If 'prefetchdays' is set
995 to zero or a negative value, the date restriction is not applied.
997 to zero or a negative value, the date restriction is not applied.
996 """
998 """
997 days = ui.configint(b'remotefilelog', b'prefetchdays')
999 days = ui.configint(b'remotefilelog', b'prefetchdays')
998 if days > 0:
1000 if days > 0:
999 revset = b'(%s) & date(-%s)' % (revset, days)
1001 revset = b'(%s) & date(-%s)' % (revset, days)
1000 return revset
1002 return revset
1001
1003
1002
1004
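For instance (values hypothetical), with prefetchdays=14 a revset of
b'bookmark()' becomes b'(bookmark()) & date(-14)', restricting prefetching
to changesets from the last two weeks.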
1003 def readytofetch(repo):
1005 def readytofetch(repo):
1004 """Check that enough time has passed since the last background prefetch.
1006 """Check that enough time has passed since the last background prefetch.
1005 This only relates to prefetches after operations that change the working
1007 This only relates to prefetches after operations that change the working
1006 copy parent. Default delay between background prefetches is 2 minutes.
1008 copy parent. Default delay between background prefetches is 2 minutes.
1007 """
1009 """
1008 timeout = repo.ui.configint(b'remotefilelog', b'prefetchdelay')
1010 timeout = repo.ui.configint(b'remotefilelog', b'prefetchdelay')
1009 fname = repo.vfs.join(b'lastprefetch')
1011 fname = repo.vfs.join(b'lastprefetch')
1010
1012
1011 ready = False
1013 ready = False
1012 with open(fname, b'a'):
1014 with open(fname, b'a'):
1013 # the with construct above is used to avoid race conditions
1015 # the with construct above is used to avoid race conditions
1014 modtime = os.path.getmtime(fname)
1016 modtime = os.path.getmtime(fname)
1015 if (time.time() - modtime) > timeout:
1017 if (time.time() - modtime) > timeout:
1016 os.utime(fname, None)
1018 os.utime(fname, None)
1017 ready = True
1019 ready = True
1018
1020
1019 return ready
1021 return ready
1020
1022
1021
1023
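The open-for-append-then-stat trick above avoids a race between checking for
the marker file and reading its mtime. The same pattern in isolation, as a
standalone sketch rather than extension code:

    import os
    import time

    def ready(markerpath, timeout):
        # 'a' mode creates the marker if missing without truncating it
        with open(markerpath, 'a'):
            if time.time() - os.path.getmtime(markerpath) > timeout:
                os.utime(markerpath, None)  # restart the countdown
                return True
        return False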
1022 def wcpprefetch(ui, repo, **kwargs):
1024 def wcpprefetch(ui, repo, **kwargs):
1023 """Prefetches in background revisions specified by bgprefetchrevs revset.
1025 """Prefetches in background revisions specified by bgprefetchrevs revset.
1024 Does background repack if backgroundrepack flag is set in config.
1026 Does background repack if backgroundrepack flag is set in config.
1025 """
1027 """
1026 shallow = isenabled(repo)
1028 shallow = isenabled(repo)
1027 bgprefetchrevs = ui.config(b'remotefilelog', b'bgprefetchrevs')
1029 bgprefetchrevs = ui.config(b'remotefilelog', b'bgprefetchrevs')
1028 isready = readytofetch(repo)
1030 isready = readytofetch(repo)
1029
1031
1030 if not (shallow and bgprefetchrevs and isready):
1032 if not (shallow and bgprefetchrevs and isready):
1031 return
1033 return
1032
1034
1033 bgrepack = repo.ui.configbool(b'remotefilelog', b'backgroundrepack')
1035 bgrepack = repo.ui.configbool(b'remotefilelog', b'backgroundrepack')
1034 # update a revset with a date limit
1036 # update a revset with a date limit
1035 bgprefetchrevs = revdatelimit(ui, bgprefetchrevs)
1037 bgprefetchrevs = revdatelimit(ui, bgprefetchrevs)
1036
1038
1037 def anon(unused_success):
1039 def anon(unused_success):
1038 if util.safehasattr(repo, b'ranprefetch') and repo.ranprefetch:
1040 if util.safehasattr(repo, b'ranprefetch') and repo.ranprefetch:
1039 return
1041 return
1040 repo.ranprefetch = True
1042 repo.ranprefetch = True
1041 repo.backgroundprefetch(bgprefetchrevs, repack=bgrepack)
1043 repo.backgroundprefetch(bgprefetchrevs, repack=bgrepack)
1042
1044
1043 repo._afterlock(anon)
1045 repo._afterlock(anon)
1044
1046
1045
1047
1046 def pull(orig, ui, repo, *pats, **opts):
1048 def pull(orig, ui, repo, *pats, **opts):
1047 result = orig(ui, repo, *pats, **opts)
1049 result = orig(ui, repo, *pats, **opts)
1048
1050
1049 if isenabled(repo):
1051 if isenabled(repo):
1050 # prefetch if it's configured
1052 # prefetch if it's configured
1051 prefetchrevset = ui.config(b'remotefilelog', b'pullprefetch')
1053 prefetchrevset = ui.config(b'remotefilelog', b'pullprefetch')
1052 bgrepack = repo.ui.configbool(b'remotefilelog', b'backgroundrepack')
1054 bgrepack = repo.ui.configbool(b'remotefilelog', b'backgroundrepack')
1053 bgprefetch = repo.ui.configbool(b'remotefilelog', b'backgroundprefetch')
1055 bgprefetch = repo.ui.configbool(b'remotefilelog', b'backgroundprefetch')
1054
1056
1055 if prefetchrevset:
1057 if prefetchrevset:
1056 ui.status(_(b"prefetching file contents\n"))
1058 ui.status(_(b"prefetching file contents\n"))
1057 revs = scmutil.revrange(repo, [prefetchrevset])
1059 revs = scmutil.revrange(repo, [prefetchrevset])
1058 base = repo[b'.'].rev()
1060 base = repo[b'.'].rev()
1059 if bgprefetch:
1061 if bgprefetch:
1060 repo.backgroundprefetch(prefetchrevset, repack=bgrepack)
1062 repo.backgroundprefetch(prefetchrevset, repack=bgrepack)
1061 else:
1063 else:
1062 repo.prefetch(revs, base=base)
1064 repo.prefetch(revs, base=base)
1063 if bgrepack:
1065 if bgrepack:
1064 repackmod.backgroundrepack(repo, incremental=True)
1066 repackmod.backgroundrepack(repo, incremental=True)
1065 elif bgrepack:
1067 elif bgrepack:
1066 repackmod.backgroundrepack(repo, incremental=True)
1068 repackmod.backgroundrepack(repo, incremental=True)
1067
1069
1068 return result
1070 return result
1069
1071
1070
1072
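The pull-time behavior above is driven entirely by configuration; the test
transcript later in this document enables it with, for example:

    [remotefilelog]
    pullprefetch = bookmark()
    backgroundprefetch = True
    backgroundrepack = True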
1071 def exchangepull(orig, repo, remote, *args, **kwargs):
1073 def exchangepull(orig, repo, remote, *args, **kwargs):
1072 # Hook into the callstream/getbundle to insert bundle capabilities
1074 # Hook into the callstream/getbundle to insert bundle capabilities
1073 # during a pull.
1075 # during a pull.
1074 def localgetbundle(
1076 def localgetbundle(
1075 orig, source, heads=None, common=None, bundlecaps=None, **kwargs
1077 orig, source, heads=None, common=None, bundlecaps=None, **kwargs
1076 ):
1078 ):
1077 if not bundlecaps:
1079 if not bundlecaps:
1078 bundlecaps = set()
1080 bundlecaps = set()
1079 bundlecaps.add(constants.BUNDLE2_CAPABLITY)
1081 bundlecaps.add(constants.BUNDLE2_CAPABLITY)
1080 return orig(
1082 return orig(
1081 source, heads=heads, common=common, bundlecaps=bundlecaps, **kwargs
1083 source, heads=heads, common=common, bundlecaps=bundlecaps, **kwargs
1082 )
1084 )
1083
1085
1084 if util.safehasattr(remote, b'_callstream'):
1086 if util.safehasattr(remote, b'_callstream'):
1085 remote._localrepo = repo
1087 remote._localrepo = repo
1086 elif util.safehasattr(remote, b'getbundle'):
1088 elif util.safehasattr(remote, b'getbundle'):
1087 extensions.wrapfunction(remote, b'getbundle', localgetbundle)
1089 extensions.wrapfunction(remote, b'getbundle', localgetbundle)
1088
1090
1089 return orig(repo, remote, *args, **kwargs)
1091 return orig(repo, remote, *args, **kwargs)
1090
1092
1091
1093
1092 def _fileprefetchhook(repo, revmatches):
1094 def _fileprefetchhook(repo, revmatches):
1093 if isenabled(repo):
1095 if isenabled(repo):
1094 allfiles = []
1096 allfiles = []
1095 for rev, match in revmatches:
1097 for rev, match in revmatches:
1096 if rev == wdirrev or rev is None:
1098 if rev == wdirrev or rev is None:
1097 continue
1099 continue
1098 ctx = repo[rev]
1100 ctx = repo[rev]
1099 mf = ctx.manifest()
1101 mf = ctx.manifest()
1100 sparsematch = repo.maybesparsematch(ctx.rev())
1102 sparsematch = repo.maybesparsematch(ctx.rev())
1101 for path in ctx.walk(match):
1103 for path in ctx.walk(match):
1102 if (not sparsematch or sparsematch(path)) and path in mf:
1104 if (not sparsematch or sparsematch(path)) and path in mf:
1103 allfiles.append((path, hex(mf[path])))
1105 allfiles.append((path, hex(mf[path])))
1104 repo.fileservice.prefetch(allfiles)
1106 repo.fileservice.prefetch(allfiles)
1105
1107
1106
1108
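The hook hands the file service a list of (path, hex filenode) pairs. A
hypothetical single-entry call, with the node value borrowed from the test
output below purely for illustration:

    repo.fileservice.prefetch([(b'x', b'ef95c5376f34698742fe34f315fd82136f8f68c0')])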
1107 @command(
1109 @command(
1108 b'debugremotefilelog',
1110 b'debugremotefilelog',
1109 [
1111 [
1110 (b'd', b'decompress', None, _(b'decompress the filelog first')),
1112 (b'd', b'decompress', None, _(b'decompress the filelog first')),
1111 ],
1113 ],
1112 _(b'hg debugremotefilelog <path>'),
1114 _(b'hg debugremotefilelog <path>'),
1113 norepo=True,
1115 norepo=True,
1114 )
1116 )
1115 def debugremotefilelog(ui, path, **opts):
1117 def debugremotefilelog(ui, path, **opts):
1116 return debugcommands.debugremotefilelog(ui, path, **opts)
1118 return debugcommands.debugremotefilelog(ui, path, **opts)
1117
1119
1118
1120
1119 @command(
1121 @command(
1120 b'verifyremotefilelog',
1122 b'verifyremotefilelog',
1121 [
1123 [
1122 (b'd', b'decompress', None, _(b'decompress the filelogs first')),
1124 (b'd', b'decompress', None, _(b'decompress the filelogs first')),
1123 ],
1125 ],
1124 _(b'hg verifyremotefilelog <directory>'),
1126 _(b'hg verifyremotefilelog <directory>'),
1125 norepo=True,
1127 norepo=True,
1126 )
1128 )
1127 def verifyremotefilelog(ui, path, **opts):
1129 def verifyremotefilelog(ui, path, **opts):
1128 return debugcommands.verifyremotefilelog(ui, path, **opts)
1130 return debugcommands.verifyremotefilelog(ui, path, **opts)
1129
1131
1130
1132
1131 @command(
1133 @command(
1132 b'debugdatapack',
1134 b'debugdatapack',
1133 [
1135 [
1134 (b'', b'long', None, _(b'print the long hashes')),
1136 (b'', b'long', None, _(b'print the long hashes')),
1135 (b'', b'node', b'', _(b'dump the contents of node'), b'NODE'),
1137 (b'', b'node', b'', _(b'dump the contents of node'), b'NODE'),
1136 ],
1138 ],
1137 _(b'hg debugdatapack <paths>'),
1139 _(b'hg debugdatapack <paths>'),
1138 norepo=True,
1140 norepo=True,
1139 )
1141 )
1140 def debugdatapack(ui, *paths, **opts):
1142 def debugdatapack(ui, *paths, **opts):
1141 return debugcommands.debugdatapack(ui, *paths, **opts)
1143 return debugcommands.debugdatapack(ui, *paths, **opts)
1142
1144
1143
1145
1144 @command(b'debughistorypack', [], _(b'hg debughistorypack <path>'), norepo=True)
1146 @command(b'debughistorypack', [], _(b'hg debughistorypack <path>'), norepo=True)
1145 def debughistorypack(ui, path, **opts):
1147 def debughistorypack(ui, path, **opts):
1146 return debugcommands.debughistorypack(ui, path)
1148 return debugcommands.debughistorypack(ui, path)
1147
1149
1148
1150
1149 @command(b'debugkeepset', [], _(b'hg debugkeepset'))
1151 @command(b'debugkeepset', [], _(b'hg debugkeepset'))
1150 def debugkeepset(ui, repo, **opts):
1152 def debugkeepset(ui, repo, **opts):
1151 # The command is used to measure keepset computation time
1153 # The command is used to measure keepset computation time
1152 def keyfn(fname, fnode):
1154 def keyfn(fname, fnode):
1153 return fileserverclient.getcachekey(repo.name, fname, hex(fnode))
1155 return fileserverclient.getcachekey(repo.name, fname, hex(fnode))
1154
1156
1155 repackmod.keepset(repo, keyfn)
1157 repackmod.keepset(repo, keyfn)
1156 return
1158 return
1157
1159
1158
1160
1159 @command(b'debugwaitonrepack', [], _(b'hg debugwaitonrepack'))
1161 @command(b'debugwaitonrepack', [], _(b'hg debugwaitonrepack'))
1160 def debugwaitonrepack(ui, repo, **opts):
1162 def debugwaitonrepack(ui, repo, **opts):
1161 return debugcommands.debugwaitonrepack(repo)
1163 return debugcommands.debugwaitonrepack(repo)
1162
1164
1163
1165
1164 @command(b'debugwaitonprefetch', [], _(b'hg debugwaitonprefetch'))
1166 @command(b'debugwaitonprefetch', [], _(b'hg debugwaitonprefetch'))
1165 def debugwaitonprefetch(ui, repo, **opts):
1167 def debugwaitonprefetch(ui, repo, **opts):
1166 return debugcommands.debugwaitonprefetch(repo)
1168 return debugcommands.debugwaitonprefetch(repo)
1167
1169
1168
1170
1169 def resolveprefetchopts(ui, opts):
1171 def resolveprefetchopts(ui, opts):
1170 if not opts.get(b'rev'):
1172 if not opts.get(b'rev'):
1171 revset = [b'.', b'draft()']
1173 revset = [b'.', b'draft()']
1172
1174
1173 prefetchrevset = ui.config(b'remotefilelog', b'pullprefetch', None)
1175 prefetchrevset = ui.config(b'remotefilelog', b'pullprefetch', None)
1174 if prefetchrevset:
1176 if prefetchrevset:
1175 revset.append(b'(%s)' % prefetchrevset)
1177 revset.append(b'(%s)' % prefetchrevset)
1176 bgprefetchrevs = ui.config(b'remotefilelog', b'bgprefetchrevs', None)
1178 bgprefetchrevs = ui.config(b'remotefilelog', b'bgprefetchrevs', None)
1177 if bgprefetchrevs:
1179 if bgprefetchrevs:
1178 revset.append(b'(%s)' % bgprefetchrevs)
1180 revset.append(b'(%s)' % bgprefetchrevs)
1179 revset = b'+'.join(revset)
1181 revset = b'+'.join(revset)
1180
1182
1181 # update a revset with a date limit
1183 # update a revset with a date limit
1182 revset = revdatelimit(ui, revset)
1184 revset = revdatelimit(ui, revset)
1183
1185
1184 opts[b'rev'] = [revset]
1186 opts[b'rev'] = [revset]
1185
1187
1186 if not opts.get(b'base'):
1188 if not opts.get(b'base'):
1187 opts[b'base'] = None
1189 opts[b'base'] = None
1188
1190
1189 return opts
1191 return opts
1190
1192
1191
1193
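As a worked example (config values hypothetical): with no --rev given,
pullprefetch=bookmark(), bgprefetchrevs=.:: and prefetchdays=14, the
resolved revset is b'(.+draft()+(bookmark())+(.::)) & date(-14)'.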
1192 @command(
1194 @command(
1193 b'prefetch',
1195 b'prefetch',
1194 [
1196 [
1195 (b'r', b'rev', [], _(b'prefetch the specified revisions'), _(b'REV')),
1197 (b'r', b'rev', [], _(b'prefetch the specified revisions'), _(b'REV')),
1196 (b'', b'repack', False, _(b'run repack after prefetch')),
1198 (b'', b'repack', False, _(b'run repack after prefetch')),
1197 (b'b', b'base', b'', _(b"rev that is assumed to already be local")),
1199 (b'b', b'base', b'', _(b"rev that is assumed to already be local")),
1198 ]
1200 ]
1199 + commands.walkopts,
1201 + commands.walkopts,
1200 _(b'hg prefetch [OPTIONS] [FILE...]'),
1202 _(b'hg prefetch [OPTIONS] [FILE...]'),
1201 helpcategory=command.CATEGORY_MAINTENANCE,
1203 helpcategory=command.CATEGORY_MAINTENANCE,
1202 )
1204 )
1203 def prefetch(ui, repo, *pats, **opts):
1205 def prefetch(ui, repo, *pats, **opts):
1204 """prefetch file revisions from the server
1206 """prefetch file revisions from the server
1205
1207
1206 Prefetches file revisions for the specified revs and stores them in the
1208 Prefetches file revisions for the specified revs and stores them in the
1207 local remotefilelog cache. If no rev is specified, the default rev is
1209 local remotefilelog cache. If no rev is specified, the default rev is
1208 used, which is the union of dot, draft, pullprefetch and bgprefetchrevs.
1210 used, which is the union of dot, draft, pullprefetch and bgprefetchrevs.
1209 File names or patterns can be used to limit which files are downloaded.
1211 File names or patterns can be used to limit which files are downloaded.
1210
1212
1211 Return 0 on success.
1213 Return 0 on success.
1212 """
1214 """
1213 opts = pycompat.byteskwargs(opts)
1215 opts = pycompat.byteskwargs(opts)
1214 if not isenabled(repo):
1216 if not isenabled(repo):
1215 raise error.Abort(_(b"repo is not shallow"))
1217 raise error.Abort(_(b"repo is not shallow"))
1216
1218
1217 opts = resolveprefetchopts(ui, opts)
1219 opts = resolveprefetchopts(ui, opts)
1218 revs = scmutil.revrange(repo, opts.get(b'rev'))
1220 revs = scmutil.revrange(repo, opts.get(b'rev'))
1219 repo.prefetch(revs, opts.get(b'base'), pats, opts)
1221 repo.prefetch(revs, opts.get(b'base'), pats, opts)
1220
1222
1221 # Run repack in background
1223 # Run repack in background
1222 if opts.get(b'repack'):
1224 if opts.get(b'repack'):
1223 repackmod.backgroundrepack(repo, incremental=True)
1225 repackmod.backgroundrepack(repo, incremental=True)
1224
1226
1225
1227
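A usage sketch (the file pattern is hypothetical):

    $ hg prefetch -r 'draft()' --repack lib/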
1226 @command(
1228 @command(
1227 b'repack',
1229 b'repack',
1228 [
1230 [
1229 (b'', b'background', None, _(b'run in a background process'), None),
1231 (b'', b'background', None, _(b'run in a background process'), None),
1230 (b'', b'incremental', None, _(b'do an incremental repack'), None),
1232 (b'', b'incremental', None, _(b'do an incremental repack'), None),
1231 (
1233 (
1232 b'',
1234 b'',
1233 b'packsonly',
1235 b'packsonly',
1234 None,
1236 None,
1235 _(b'only repack packs (skip loose objects)'),
1237 _(b'only repack packs (skip loose objects)'),
1236 None,
1238 None,
1237 ),
1239 ),
1238 ],
1240 ],
1239 _(b'hg repack [OPTIONS]'),
1241 _(b'hg repack [OPTIONS]'),
1240 )
1242 )
1241 def repack_(ui, repo, *pats, **opts):
1243 def repack_(ui, repo, *pats, **opts):
1242 if opts.get('background'):
1244 if opts.get('background'):
1243 repackmod.backgroundrepack(
1245 repackmod.backgroundrepack(
1244 repo,
1246 repo,
1245 incremental=opts.get('incremental'),
1247 incremental=opts.get('incremental'),
1246 packsonly=opts.get('packsonly', False),
1248 packsonly=opts.get('packsonly', False),
1247 )
1249 )
1248 return
1250 return
1249
1251
1250 options = {b'packsonly': opts.get('packsonly')}
1252 options = {b'packsonly': opts.get('packsonly')}
1251
1253
1252 try:
1254 try:
1253 if opts.get('incremental'):
1255 if opts.get('incremental'):
1254 repackmod.incrementalrepack(repo, options=options)
1256 repackmod.incrementalrepack(repo, options=options)
1255 else:
1257 else:
1256 repackmod.fullrepack(repo, options=options)
1258 repackmod.fullrepack(repo, options=options)
1257 except repackmod.RepackAlreadyRunning as ex:
1259 except repackmod.RepackAlreadyRunning as ex:
1258 # Don't propagate the exception if the repack is already in
1260 # Don't propagate the exception if the repack is already in
1259 # progress, since we want the command to exit 0.
1261 # progress, since we want the command to exit 0.
1260 repo.ui.warn(b'%s\n' % ex)
1262 repo.ui.warn(b'%s\n' % ex)
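Usage sketches: `hg repack --incremental` runs in the foreground and, per the
handler above, still exits 0 if another repack already holds the lock, while
`hg repack --incremental --background` detaches into a separate process.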
@@ -1,307 +1,319 b''
1 # shallowbundle.py - bundle10 implementation for use with shallow repositories
1 # shallowbundle.py - bundle10 implementation for use with shallow repositories
2 #
2 #
3 # Copyright 2013 Facebook, Inc.
3 # Copyright 2013 Facebook, Inc.
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7 from __future__ import absolute_import
7 from __future__ import absolute_import
8
8
9 from mercurial.i18n import _
9 from mercurial.i18n import _
10 from mercurial.node import bin, hex, nullid
10 from mercurial.node import bin, hex, nullid
11 from mercurial import (
11 from mercurial import (
12 bundlerepo,
12 bundlerepo,
13 changegroup,
13 changegroup,
14 error,
14 error,
15 match,
15 match,
16 mdiff,
16 mdiff,
17 pycompat,
17 pycompat,
18 )
18 )
19 from . import (
19 from . import (
20 constants,
20 constants,
21 remotefilelog,
21 remotefilelog,
22 shallowutil,
22 shallowutil,
23 )
23 )
24
24
25 NoFiles = 0
25 NoFiles = 0
26 LocalFiles = 1
26 LocalFiles = 1
27 AllFiles = 2
27 AllFiles = 2
28
28
29
29
30 def shallowgroup(cls, self, nodelist, rlog, lookup, units=None, reorder=None):
30 def shallowgroup(cls, self, nodelist, rlog, lookup, units=None, reorder=None):
31 if not isinstance(rlog, remotefilelog.remotefilelog):
31 if not isinstance(rlog, remotefilelog.remotefilelog):
32 for c in super(cls, self).group(nodelist, rlog, lookup, units=units):
32 for c in super(cls, self).group(nodelist, rlog, lookup, units=units):
33 yield c
33 yield c
34 return
34 return
35
35
36 if len(nodelist) == 0:
36 if len(nodelist) == 0:
37 yield self.close()
37 yield self.close()
38 return
38 return
39
39
40 nodelist = shallowutil.sortnodes(nodelist, rlog.parents)
40 nodelist = shallowutil.sortnodes(nodelist, rlog.parents)
41
41
42 # add the parent of the first rev
42 # add the parent of the first rev
43 p = rlog.parents(nodelist[0])[0]
43 p = rlog.parents(nodelist[0])[0]
44 nodelist.insert(0, p)
44 nodelist.insert(0, p)
45
45
46 # build deltas
46 # build deltas
47 for i in pycompat.xrange(len(nodelist) - 1):
47 for i in pycompat.xrange(len(nodelist) - 1):
48 prev, curr = nodelist[i], nodelist[i + 1]
48 prev, curr = nodelist[i], nodelist[i + 1]
49 linknode = lookup(curr)
49 linknode = lookup(curr)
50 for c in self.nodechunk(rlog, curr, prev, linknode):
50 for c in self.nodechunk(rlog, curr, prev, linknode):
51 yield c
51 yield c
52
52
53 yield self.close()
53 yield self.close()
54
54
55
55
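The nodelist must be laid down parents-first so that each delta's base
precedes it. A simplified standalone sketch of that ordering (the real
shallowutil.sortnodes may differ in its tie-breaking details):

    def toposort(nodes, parents):
        # emit a node only after any of its parents that are also in the set
        pending, seen, out = set(nodes), set(), []

        def visit(n):
            if n in seen or n not in pending:
                return
            seen.add(n)
            for p in parents(n):
                visit(p)
            out.append(n)

        for n in list(nodes):
            visit(n)
        return out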
56 class shallowcg1packer(changegroup.cgpacker):
56 class shallowcg1packer(changegroup.cgpacker):
57 def generate(self, commonrevs, clnodes, fastpathlinkrev, source, **kwargs):
57 def generate(self, commonrevs, clnodes, fastpathlinkrev, source, **kwargs):
58 if shallowutil.isenabled(self._repo):
58 if shallowutil.isenabled(self._repo):
59 fastpathlinkrev = False
59 fastpathlinkrev = False
60
60
61 return super(shallowcg1packer, self).generate(
61 return super(shallowcg1packer, self).generate(
62 commonrevs, clnodes, fastpathlinkrev, source, **kwargs
62 commonrevs, clnodes, fastpathlinkrev, source, **kwargs
63 )
63 )
64
64
65 def group(self, nodelist, rlog, lookup, units=None, reorder=None):
65 def group(self, nodelist, rlog, lookup, units=None, reorder=None):
66 return shallowgroup(
66 return shallowgroup(
67 shallowcg1packer, self, nodelist, rlog, lookup, units=units
67 shallowcg1packer, self, nodelist, rlog, lookup, units=units
68 )
68 )
69
69
70 def generatefiles(self, changedfiles, *args, **kwargs):
70 def generatefiles(self, changedfiles, *args, **kwargs):
71 try:
71 try:
72 linknodes, commonrevs, source = args
72 linknodes, commonrevs, source = args
73 except ValueError:
73 except ValueError:
74 commonrevs, source, mfdicts, fastpathlinkrev, fnodes, clrevs = args
74 commonrevs, source, mfdicts, fastpathlinkrev, fnodes, clrevs = args
75 if shallowutil.isenabled(self._repo):
75 if shallowutil.isenabled(self._repo):
76 repo = self._repo
76 repo = self._repo
77 if isinstance(repo, bundlerepo.bundlerepository):
77 if isinstance(repo, bundlerepo.bundlerepository):
78 # If the bundle contains filelogs, we can't pull from it, since
78 # If the bundle contains filelogs, we can't pull from it, since
79 # bundlerepo is heavily tied to revlogs. Instead, require that
79 # bundlerepo is heavily tied to revlogs. Instead, require that
80 # the user run `hg unbundle`.
80 # the user run `hg unbundle`.
81 # Force load the filelog data.
81 # Force load the filelog data.
82 bundlerepo.bundlerepository.file(repo, b'foo')
82 bundlerepo.bundlerepository.file(repo, b'foo')
83 if repo._cgfilespos:
83 if repo._cgfilespos:
84 raise error.Abort(
84 raise error.Abort(
85 b"cannot pull from full bundles",
85 b"cannot pull from full bundles",
86 hint=b"use `hg unbundle` instead",
86 hint=b"use `hg unbundle` instead",
87 )
87 )
88 return []
88 return []
89 filestosend = self.shouldaddfilegroups(source)
89 filestosend = self.shouldaddfilegroups(source)
90 if filestosend == NoFiles:
90 if filestosend == NoFiles:
91 changedfiles = list(
91 changedfiles = list(
92 [f for f in changedfiles if not repo.shallowmatch(f)]
92 [f for f in changedfiles if not repo.shallowmatch(f)]
93 )
93 )
94
94
95 return super(shallowcg1packer, self).generatefiles(
95 return super(shallowcg1packer, self).generatefiles(
96 changedfiles, *args, **kwargs
96 changedfiles, *args, **kwargs
97 )
97 )
98
98
99 def shouldaddfilegroups(self, source):
99 def shouldaddfilegroups(self, source):
100 repo = self._repo
100 repo = self._repo
101 if not shallowutil.isenabled(repo):
101 if not shallowutil.isenabled(repo):
102 return AllFiles
102 return AllFiles
103
103
104 if source == b"push" or source == b"bundle":
104 if source == b"push" or source == b"bundle":
105 return AllFiles
105 return AllFiles
106
106
107 # We won't actually strip the files, but we should put them in any
108 # backup bundle generated by strip (especially for cases like narrow's
109 # `hg tracked --removeinclude`, as failing to do so means that the
110 # "saved" changesets during a strip won't have their files reapplied and
111 # thus their linknode adjusted, if necessary).
112 if source == b"strip":
113 cfg = repo.ui.config(b'remotefilelog', b'strip.includefiles')
114 if cfg == b'local':
115 return LocalFiles
116 elif cfg != b'none':
117 return AllFiles
118
107 caps = self._bundlecaps or []
119 caps = self._bundlecaps or []
108 if source == b"serve" or source == b"pull":
120 if source == b"serve" or source == b"pull":
109 if constants.BUNDLE2_CAPABLITY in caps:
121 if constants.BUNDLE2_CAPABLITY in caps:
110 return LocalFiles
122 return LocalFiles
111 else:
123 else:
112 # Serving to a full repo requires us to serve everything
124 # Serving to a full repo requires us to serve everything
113 repo.ui.warn(_(b"pulling from a shallow repo\n"))
125 repo.ui.warn(_(b"pulling from a shallow repo\n"))
114 return AllFiles
126 return AllFiles
115
127
116 return NoFiles
128 return NoFiles
117
129
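The strip handling above is the behavior change this commit introduces, and
it is configurable. A repository that wants lean strip backups could opt back
out with, for example:

    [remotefilelog]
    strip.includefiles = none

Per the branch above, 'local' bundles only locally-stored files, 'none'
restores the old file-less backups, and any other value keeps the safer
default of bundling all files.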
118 def prune(self, rlog, missing, commonrevs):
130 def prune(self, rlog, missing, commonrevs):
119 if not isinstance(rlog, remotefilelog.remotefilelog):
131 if not isinstance(rlog, remotefilelog.remotefilelog):
120 return super(shallowcg1packer, self).prune(
132 return super(shallowcg1packer, self).prune(
121 rlog, missing, commonrevs
133 rlog, missing, commonrevs
122 )
134 )
123
135
124 repo = self._repo
136 repo = self._repo
125 results = []
137 results = []
126 for fnode in missing:
138 for fnode in missing:
127 fctx = repo.filectx(rlog.filename, fileid=fnode)
139 fctx = repo.filectx(rlog.filename, fileid=fnode)
128 if fctx.linkrev() not in commonrevs:
140 if fctx.linkrev() not in commonrevs:
129 results.append(fnode)
141 results.append(fnode)
130 return results
142 return results
131
143
132 def nodechunk(self, revlog, node, prevnode, linknode):
144 def nodechunk(self, revlog, node, prevnode, linknode):
133 prefix = b''
145 prefix = b''
134 if prevnode == nullid:
146 if prevnode == nullid:
135 delta = revlog.rawdata(node)
147 delta = revlog.rawdata(node)
136 prefix = mdiff.trivialdiffheader(len(delta))
148 prefix = mdiff.trivialdiffheader(len(delta))
137 else:
149 else:
138 # Actually uses remotefilelog.revdiff which works on nodes, not revs
150 # Actually uses remotefilelog.revdiff which works on nodes, not revs
139 delta = revlog.revdiff(prevnode, node)
151 delta = revlog.revdiff(prevnode, node)
140 p1, p2 = revlog.parents(node)
152 p1, p2 = revlog.parents(node)
141 flags = revlog.flags(node)
153 flags = revlog.flags(node)
142 meta = self.builddeltaheader(node, p1, p2, prevnode, linknode, flags)
154 meta = self.builddeltaheader(node, p1, p2, prevnode, linknode, flags)
143 meta += prefix
155 meta += prefix
144 l = len(meta) + len(delta)
156 l = len(meta) + len(delta)
145 yield changegroup.chunkheader(l)
157 yield changegroup.chunkheader(l)
146 yield meta
158 yield meta
147 yield delta
159 yield delta
148
160
149
161
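Each yielded chunk is framed by changegroup.chunkheader(l). If memory serves,
that header is just a 4-byte big-endian length that counts itself, roughly:

    import struct

    def chunkheader(length):
        # 4-byte big-endian frame length, including the header's own 4 bytes
        return struct.pack(">l", length + 4)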
150 def makechangegroup(orig, repo, outgoing, version, source, *args, **kwargs):
162 def makechangegroup(orig, repo, outgoing, version, source, *args, **kwargs):
151 if not shallowutil.isenabled(repo):
163 if not shallowutil.isenabled(repo):
152 return orig(repo, outgoing, version, source, *args, **kwargs)
164 return orig(repo, outgoing, version, source, *args, **kwargs)
153
165
154 original = repo.shallowmatch
166 original = repo.shallowmatch
155 try:
167 try:
156 # if serving, only send files the client has patterns for
168 # if serving, only send files the client has patterns for
157 if source == b'serve':
169 if source == b'serve':
158 bundlecaps = kwargs.get('bundlecaps')
170 bundlecaps = kwargs.get('bundlecaps')
159 includepattern = None
171 includepattern = None
160 excludepattern = None
172 excludepattern = None
161 for cap in bundlecaps or []:
173 for cap in bundlecaps or []:
162 if cap.startswith(b"includepattern="):
174 if cap.startswith(b"includepattern="):
163 raw = cap[len(b"includepattern=") :]
175 raw = cap[len(b"includepattern=") :]
164 if raw:
176 if raw:
165 includepattern = raw.split(b'\0')
177 includepattern = raw.split(b'\0')
166 elif cap.startswith(b"excludepattern="):
178 elif cap.startswith(b"excludepattern="):
167 raw = cap[len(b"excludepattern=") :]
179 raw = cap[len(b"excludepattern=") :]
168 if raw:
180 if raw:
169 excludepattern = raw.split(b'\0')
181 excludepattern = raw.split(b'\0')
170 if includepattern or excludepattern:
182 if includepattern or excludepattern:
171 repo.shallowmatch = match.match(
183 repo.shallowmatch = match.match(
172 repo.root, b'', None, includepattern, excludepattern
184 repo.root, b'', None, includepattern, excludepattern
173 )
185 )
174 else:
186 else:
175 repo.shallowmatch = match.always()
187 repo.shallowmatch = match.always()
176 return orig(repo, outgoing, version, source, *args, **kwargs)
188 return orig(repo, outgoing, version, source, *args, **kwargs)
177 finally:
189 finally:
178 repo.shallowmatch = original
190 repo.shallowmatch = original
179
191
180
192
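The include/exclude capabilities parsed above are NUL-separated pattern
lists; a hypothetical cap might read includepattern=path:lib\0path:src,
which the split on b'\0' turns into the two patterns path:lib and path:src.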
181 def addchangegroupfiles(
193 def addchangegroupfiles(
182 orig, repo, source, revmap, trp, expectedfiles, *args, **kwargs
194 orig, repo, source, revmap, trp, expectedfiles, *args, **kwargs
183 ):
195 ):
184 if not shallowutil.isenabled(repo):
196 if not shallowutil.isenabled(repo):
185 return orig(repo, source, revmap, trp, expectedfiles, *args, **kwargs)
197 return orig(repo, source, revmap, trp, expectedfiles, *args, **kwargs)
186
198
187 newfiles = 0
199 newfiles = 0
188 visited = set()
200 visited = set()
189 revisiondatas = {}
201 revisiondatas = {}
190 queue = []
202 queue = []
191
203
192 # Normal Mercurial processes each file one at a time, adding all
204 # Normal Mercurial processes each file one at a time, adding all
193 # the new revisions for that file at once. In remotefilelog a file
205 # the new revisions for that file at once. In remotefilelog a file
194 # revision may depend on a different file's revision (in the case
206 # revision may depend on a different file's revision (in the case
195 # of a rename/copy), so we must lay all revisions down across all
207 # of a rename/copy), so we must lay all revisions down across all
196 # files in topological order.
208 # files in topological order.
197
209
198 # read all the file chunks but don't add them
210 # read all the file chunks but don't add them
199 progress = repo.ui.makeprogress(_(b'files'), total=expectedfiles)
211 progress = repo.ui.makeprogress(_(b'files'), total=expectedfiles)
200 while True:
212 while True:
201 chunkdata = source.filelogheader()
213 chunkdata = source.filelogheader()
202 if not chunkdata:
214 if not chunkdata:
203 break
215 break
204 f = chunkdata[b"filename"]
216 f = chunkdata[b"filename"]
205 repo.ui.debug(b"adding %s revisions\n" % f)
217 repo.ui.debug(b"adding %s revisions\n" % f)
206 progress.increment()
218 progress.increment()
207
219
208 if not repo.shallowmatch(f):
220 if not repo.shallowmatch(f):
209 fl = repo.file(f)
221 fl = repo.file(f)
210 deltas = source.deltaiter()
222 deltas = source.deltaiter()
211 fl.addgroup(deltas, revmap, trp)
223 fl.addgroup(deltas, revmap, trp)
212 continue
224 continue
213
225
214 chain = None
226 chain = None
215 while True:
227 while True:
216 # returns: (node, p1, p2, cs, deltabase, delta, flags) or None
228 # returns: (node, p1, p2, cs, deltabase, delta, flags) or None
217 revisiondata = source.deltachunk(chain)
229 revisiondata = source.deltachunk(chain)
218 if not revisiondata:
230 if not revisiondata:
219 break
231 break
220
232
221 chain = revisiondata[0]
233 chain = revisiondata[0]
222
234
223 revisiondatas[(f, chain)] = revisiondata
235 revisiondatas[(f, chain)] = revisiondata
224 queue.append((f, chain))
236 queue.append((f, chain))
225
237
226 if f not in visited:
238 if f not in visited:
227 newfiles += 1
239 newfiles += 1
228 visited.add(f)
240 visited.add(f)
229
241
230 if chain is None:
242 if chain is None:
231 raise error.Abort(_(b"received file revlog group is empty"))
243 raise error.Abort(_(b"received file revlog group is empty"))
232
244
233 processed = set()
245 processed = set()
234
246
235 def available(f, node, depf, depnode):
247 def available(f, node, depf, depnode):
236 if depnode != nullid and (depf, depnode) not in processed:
248 if depnode != nullid and (depf, depnode) not in processed:
237 if (depf, depnode) not in revisiondatas:
249 if (depf, depnode) not in revisiondatas:
238 # It's not in the changegroup, assume it's already
250 # It's not in the changegroup, assume it's already
239 # in the repo
251 # in the repo
240 return True
252 return True
241 # re-add self to queue
253 # re-add self to queue
242 queue.insert(0, (f, node))
254 queue.insert(0, (f, node))
243 # add dependency in front
255 # add dependency in front
244 queue.insert(0, (depf, depnode))
256 queue.insert(0, (depf, depnode))
245 return False
257 return False
246 return True
258 return True
247
259
248 skipcount = 0
260 skipcount = 0
249
261
250 # Prefetch the non-bundled revisions that we will need
262 # Prefetch the non-bundled revisions that we will need
251 prefetchfiles = []
263 prefetchfiles = []
252 for f, node in queue:
264 for f, node in queue:
253 revisiondata = revisiondatas[(f, node)]
265 revisiondata = revisiondatas[(f, node)]
254 # revisiondata: (node, p1, p2, cs, deltabase, delta, flags)
266 # revisiondata: (node, p1, p2, cs, deltabase, delta, flags)
255 dependents = [revisiondata[1], revisiondata[2], revisiondata[4]]
267 dependents = [revisiondata[1], revisiondata[2], revisiondata[4]]
256
268
257 for dependent in dependents:
269 for dependent in dependents:
258 if dependent == nullid or (f, dependent) in revisiondatas:
270 if dependent == nullid or (f, dependent) in revisiondatas:
259 continue
271 continue
260 prefetchfiles.append((f, hex(dependent)))
272 prefetchfiles.append((f, hex(dependent)))
261
273
262 repo.fileservice.prefetch(prefetchfiles)
274 repo.fileservice.prefetch(prefetchfiles)
263
275
264 # Apply the revisions in topological order such that a revision
276 # Apply the revisions in topological order such that a revision
265 # is only written once it's deltabase and parents have been written.
277 # is only written once it's deltabase and parents have been written.
266 while queue:
278 while queue:
267 f, node = queue.pop(0)
279 f, node = queue.pop(0)
268 if (f, node) in processed:
280 if (f, node) in processed:
269 continue
281 continue
270
282
271 skipcount += 1
283 skipcount += 1
272 if skipcount > len(queue) + 1:
284 if skipcount > len(queue) + 1:
273 raise error.Abort(_(b"circular node dependency"))
285 raise error.Abort(_(b"circular node dependency"))
274
286
275 fl = repo.file(f)
287 fl = repo.file(f)
276
288
277 revisiondata = revisiondatas[(f, node)]
289 revisiondata = revisiondatas[(f, node)]
278 # revisiondata: (node, p1, p2, cs, deltabase, delta, flags)
290 # revisiondata: (node, p1, p2, cs, deltabase, delta, flags)
279 node, p1, p2, linknode, deltabase, delta, flags, sidedata = revisiondata
291 node, p1, p2, linknode, deltabase, delta, flags, sidedata = revisiondata
280
292
281 if not available(f, node, f, deltabase):
293 if not available(f, node, f, deltabase):
282 continue
294 continue
283
295
284 base = fl.rawdata(deltabase)
296 base = fl.rawdata(deltabase)
285 text = mdiff.patch(base, delta)
297 text = mdiff.patch(base, delta)
286 if not isinstance(text, bytes):
298 if not isinstance(text, bytes):
287 text = bytes(text)
299 text = bytes(text)
288
300
289 meta, text = shallowutil.parsemeta(text)
301 meta, text = shallowutil.parsemeta(text)
290 if b'copy' in meta:
302 if b'copy' in meta:
291 copyfrom = meta[b'copy']
303 copyfrom = meta[b'copy']
292 copynode = bin(meta[b'copyrev'])
304 copynode = bin(meta[b'copyrev'])
293 if not available(f, node, copyfrom, copynode):
305 if not available(f, node, copyfrom, copynode):
294 continue
306 continue
295
307
296 for p in [p1, p2]:
308 for p in [p1, p2]:
297 if p != nullid:
309 if p != nullid:
298 if not available(f, node, f, p):
310 if not available(f, node, f, p):
299 continue
311 continue
300
312
301 fl.add(text, meta, trp, linknode, p1, p2)
313 fl.add(text, meta, trp, linknode, p1, p2)
302 processed.add((f, node))
314 processed.add((f, node))
303 skipcount = 0
315 skipcount = 0
304
316
305 progress.complete()
317 progress.complete()
306
318
307 return len(revisiondatas), newfiles
319 return len(revisiondatas), newfiles
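The queue/skipcount loop above is a general deferred-worklist pattern. A
self-contained sketch of the same idea (a hypothetical helper, not extension
code):

    def apply_in_dependency_order(items, deps):
        # process each item only after everything in deps[item] is done;
        # the skip counter aborts once a full pass makes no progress
        queue, done, skipcount = list(items), set(), 0
        while queue:
            item = queue.pop(0)
            if any(d not in done for d in deps.get(item, ())):
                queue.append(item)  # defer until its dependencies land
                skipcount += 1
                if skipcount > len(queue) + 1:
                    raise RuntimeError("circular node dependency")
                continue
            done.add(item)
            skipcount = 0
        return done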
@@ -1,353 +1,354 b''
1 #require no-windows
1 #require no-windows
2
2
3 $ . "$TESTDIR/remotefilelog-library.sh"
3 $ . "$TESTDIR/remotefilelog-library.sh"
4 $ cat >> $HGRCPATH <<EOF
4 $ cat >> $HGRCPATH <<EOF
5 > [devel]
5 > [devel]
6 > remotefilelog.bg-wait=True
6 > remotefilelog.bg-wait=True
7 > EOF
7 > EOF
8
8
9 $ hg init master
9 $ hg init master
10 $ cd master
10 $ cd master
11 $ cat >> .hg/hgrc <<EOF
11 $ cat >> .hg/hgrc <<EOF
12 > [remotefilelog]
12 > [remotefilelog]
13 > server=True
13 > server=True
14 > EOF
14 > EOF
15 $ echo x > x
15 $ echo x > x
16 $ echo z > z
16 $ echo z > z
17 $ hg commit -qAm x
17 $ hg commit -qAm x
18 $ echo x2 > x
18 $ echo x2 > x
19 $ echo y > y
19 $ echo y > y
20 $ hg commit -qAm y
20 $ hg commit -qAm y
21 $ echo w > w
21 $ echo w > w
22 $ rm z
22 $ rm z
23 $ hg commit -qAm w
23 $ hg commit -qAm w
24 $ hg bookmark foo
24 $ hg bookmark foo
25
25
26 $ cd ..
26 $ cd ..
27
27
28 # clone the repo
28 # clone the repo
29
29
30 $ hgcloneshallow ssh://user@dummy/master shallow --noupdate
30 $ hgcloneshallow ssh://user@dummy/master shallow --noupdate
31 streaming all changes
31 streaming all changes
32 2 files to transfer, 776 bytes of data
32 2 files to transfer, 776 bytes of data
33 transferred 776 bytes in * seconds (*/sec) (glob)
33 transferred 776 bytes in * seconds (*/sec) (glob)
34 searching for changes
34 searching for changes
35 no changes found
35 no changes found
36
36
37 # Set the prefetchdays config to zero so that all commits are prefetched
37 # Set the prefetchdays config to zero so that all commits are prefetched
38 # no matter what their creation date is. Also set prefetchdelay config
38 # no matter what their creation date is. Also set prefetchdelay config
39 # to zero so that there is no delay between prefetches.
39 # to zero so that there is no delay between prefetches.
40 $ cd shallow
40 $ cd shallow
41 $ cat >> .hg/hgrc <<EOF
41 $ cat >> .hg/hgrc <<EOF
42 > [remotefilelog]
42 > [remotefilelog]
43 > prefetchdays=0
43 > prefetchdays=0
44 > prefetchdelay=0
44 > prefetchdelay=0
45 > EOF
45 > EOF
46 $ cd ..
46 $ cd ..
47
47
48 # prefetch a revision
48 # prefetch a revision
49 $ cd shallow
49 $ cd shallow
50
50
51 $ hg prefetch -r 0
51 $ hg prefetch -r 0
52 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
52 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
53
53
54 $ hg cat -r 0 x
54 $ hg cat -r 0 x
55 x
55 x
56
56
57 # background prefetch on pull when configured
57 # background prefetch on pull when configured
58
58
59 $ cat >> .hg/hgrc <<EOF
59 $ cat >> .hg/hgrc <<EOF
60 > [remotefilelog]
60 > [remotefilelog]
61 > pullprefetch=bookmark()
61 > pullprefetch=bookmark()
62 > backgroundprefetch=True
62 > backgroundprefetch=True
63 > EOF
63 > EOF
64 $ hg strip tip
64 $ hg strip tip
65 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/6b4b6f66ef8c-b4b8bdaf-backup.hg (glob)
65 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/6b4b6f66ef8c-b4b8bdaf-backup.hg (glob)
66 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
66
67
67 $ clearcache
68 $ clearcache
68 $ hg pull
69 $ hg pull
69 pulling from ssh://user@dummy/master
70 pulling from ssh://user@dummy/master
70 searching for changes
71 searching for changes
71 adding changesets
72 adding changesets
72 adding manifests
73 adding manifests
73 adding file changes
74 adding file changes
74 updating bookmark foo
75 updating bookmark foo
75 added 1 changesets with 0 changes to 0 files
76 added 1 changesets with 0 changes to 0 files
76 new changesets 6b4b6f66ef8c
77 new changesets 6b4b6f66ef8c
77 (run 'hg update' to get a working copy)
78 (run 'hg update' to get a working copy)
78 prefetching file contents
79 prefetching file contents
79 $ find $CACHEDIR -type f | sort
80 $ find $CACHEDIR -type f | sort
80 $TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072/ef95c5376f34698742fe34f315fd82136f8f68c0
81 $TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072/ef95c5376f34698742fe34f315fd82136f8f68c0
81 $TESTTMP/hgcache/master/95/cb0bfd2977c761298d9624e4b4d4c72a39974a/076f5e2225b3ff0400b98c92aa6cdf403ee24cca
82 $TESTTMP/hgcache/master/95/cb0bfd2977c761298d9624e4b4d4c72a39974a/076f5e2225b3ff0400b98c92aa6cdf403ee24cca
82 $TESTTMP/hgcache/master/af/f024fe4ab0fece4091de044c58c9ae4233383a/bb6ccd5dceaa5e9dc220e0dad65e051b94f69a2c
83 $TESTTMP/hgcache/master/af/f024fe4ab0fece4091de044c58c9ae4233383a/bb6ccd5dceaa5e9dc220e0dad65e051b94f69a2c
83 $TESTTMP/hgcache/repos
84 $TESTTMP/hgcache/repos
84
85
85 # background prefetch with repack on pull when configured
86 # background prefetch with repack on pull when configured
86
87
87 $ cat >> .hg/hgrc <<EOF
88 $ cat >> .hg/hgrc <<EOF
88 > [remotefilelog]
89 > [remotefilelog]
89 > backgroundrepack=True
90 > backgroundrepack=True
90 > EOF
91 > EOF
91 $ hg strip tip
92 $ hg strip tip
92 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/6b4b6f66ef8c-b4b8bdaf-backup.hg (glob)
93 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/6b4b6f66ef8c-b4b8bdaf-backup.hg (glob)
93
94
94 $ clearcache
95 $ clearcache
95 $ hg pull
96 $ hg pull
96 pulling from ssh://user@dummy/master
97 pulling from ssh://user@dummy/master
97 searching for changes
98 searching for changes
98 adding changesets
99 adding changesets
99 adding manifests
100 adding manifests
100 adding file changes
101 adding file changes
101 updating bookmark foo
102 updating bookmark foo
102 added 1 changesets with 0 changes to 0 files
103 added 1 changesets with 0 changes to 0 files
103 new changesets 6b4b6f66ef8c
104 new changesets 6b4b6f66ef8c
104 (run 'hg update' to get a working copy)
105 (run 'hg update' to get a working copy)
105 prefetching file contents
106 prefetching file contents
106 $ find $CACHEDIR -type f | sort
107 $ find $CACHEDIR -type f | sort
107 $TESTTMP/hgcache/master/packs/6e8633deba6e544e5f8edbd7b996d6e31a2c42ae.histidx
108 $TESTTMP/hgcache/master/packs/6e8633deba6e544e5f8edbd7b996d6e31a2c42ae.histidx
108 $TESTTMP/hgcache/master/packs/6e8633deba6e544e5f8edbd7b996d6e31a2c42ae.histpack
109 $TESTTMP/hgcache/master/packs/6e8633deba6e544e5f8edbd7b996d6e31a2c42ae.histpack
109 $TESTTMP/hgcache/master/packs/8ce5ab3745465ab83bba30a7b9c295e0c8404652.dataidx
110 $TESTTMP/hgcache/master/packs/8ce5ab3745465ab83bba30a7b9c295e0c8404652.dataidx
110 $TESTTMP/hgcache/master/packs/8ce5ab3745465ab83bba30a7b9c295e0c8404652.datapack
111 $TESTTMP/hgcache/master/packs/8ce5ab3745465ab83bba30a7b9c295e0c8404652.datapack
111 $TESTTMP/hgcache/repos
112 $TESTTMP/hgcache/repos
112
113
113 # background prefetch with repack on update when wcprevset configured
114 # background prefetch with repack on update when wcprevset configured
114
115
115 $ clearcache
116 $ clearcache
116 $ hg up -r 0
117 $ hg up -r 0
117 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
118 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
118 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
119 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
119 $ find $CACHEDIR -type f | sort
120 $ find $CACHEDIR -type f | sort
120 $TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072/1406e74118627694268417491f018a4a883152f0
121 $TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072/1406e74118627694268417491f018a4a883152f0
121 $TESTTMP/hgcache/master/39/5df8f7c51f007019cb30201c49e884b46b92fa/69a1b67522704ec122181c0890bd16e9d3e7516a
122 $TESTTMP/hgcache/master/39/5df8f7c51f007019cb30201c49e884b46b92fa/69a1b67522704ec122181c0890bd16e9d3e7516a
122 $TESTTMP/hgcache/repos
123 $TESTTMP/hgcache/repos
123
124
124 $ hg up -r 1
125 $ hg up -r 1
125 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
126 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
126 2 files fetched over 2 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
127 2 files fetched over 2 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
127
128
128 $ cat >> .hg/hgrc <<EOF
129 $ cat >> .hg/hgrc <<EOF
129 > [remotefilelog]
130 > [remotefilelog]
130 > bgprefetchrevs=.::
131 > bgprefetchrevs=.::
131 > EOF
132 > EOF
132
133
133 $ clearcache
134 $ clearcache
134 $ hg up -r 0
135 $ hg up -r 0
135 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
136 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
136 * files fetched over * fetches - (* misses, 0.00% hit ratio) over *s (glob)
137 * files fetched over * fetches - (* misses, 0.00% hit ratio) over *s (glob)
137 $ find $CACHEDIR -type f | sort
138 $ find $CACHEDIR -type f | sort
138 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
139 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
139 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
140 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
140 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.dataidx
141 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.dataidx
141 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.datapack
142 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.datapack
142 $TESTTMP/hgcache/repos
143 $TESTTMP/hgcache/repos
143
144
144 # Ensure that file 'w' was prefetched - it was not part of the update operation and therefore
145 # Ensure that file 'w' was prefetched - it was not part of the update operation and therefore
145 # could only be downloaded by the background prefetch
146 # could only be downloaded by the background prefetch
146
147
147 $ hg debugdatapack `ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
148 $ hg debugdatapack `ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
148 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407:
149 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407:
149 w:
150 w:
150 Node Delta Base Delta Length Blob Size
151 Node Delta Base Delta Length Blob Size
151 bb6ccd5dceaa 000000000000 2 2
152 bb6ccd5dceaa 000000000000 2 2
152
153
153 Total: 2 2 (0.0% bigger)
154 Total: 2 2 (0.0% bigger)
154 x:
155 x:
155 Node Delta Base Delta Length Blob Size
156 Node Delta Base Delta Length Blob Size
156 ef95c5376f34 000000000000 3 3
157 ef95c5376f34 000000000000 3 3
157 1406e7411862 ef95c5376f34 14 2
158 1406e7411862 ef95c5376f34 14 2
158
159
159 Total: 17 5 (240.0% bigger)
160 Total: 17 5 (240.0% bigger)
160 y:
161 y:
161 Node Delta Base Delta Length Blob Size
162 Node Delta Base Delta Length Blob Size
162 076f5e2225b3 000000000000 2 2
163 076f5e2225b3 000000000000 2 2
163
164
164 Total: 2 2 (0.0% bigger)
165 Total: 2 2 (0.0% bigger)
165 z:
166 z:
166 Node Delta Base Delta Length Blob Size
167 Node Delta Base Delta Length Blob Size
167 69a1b6752270 000000000000 2 2
168 69a1b6752270 000000000000 2 2
168
169
169 Total: 2 2 (0.0% bigger)
170 Total: 2 2 (0.0% bigger)
170
171
171 # background prefetch with repack on commit when wcprevset configured
172 # background prefetch with repack on commit when wcprevset configured
172
173
173 $ cat >> .hg/hgrc <<EOF
174 $ cat >> .hg/hgrc <<EOF
174 > [remotefilelog]
175 > [remotefilelog]
175 > bgprefetchrevs=0::
176 > bgprefetchrevs=0::
176 > EOF
177 > EOF
177
178
178 $ clearcache
179 $ clearcache
179 $ find $CACHEDIR -type f | sort
180 $ find $CACHEDIR -type f | sort
180 $ echo b > b
181 $ echo b > b
181 .. The following output line about file fetches is globbed because it is
182 .. The following output line about file fetches is globbed because it is
182 .. flaky; the core of the test is checked when checking the cache dir, so
183 .. flaky; the core of the test is checked when checking the cache dir, so
183 .. hopefully this flakiness is not hiding any actual bug.
184 .. hopefully this flakiness is not hiding any actual bug.
184 $ hg commit -qAm b
185 $ hg commit -qAm b
185 * files fetched over 1 fetches - (* misses, 0.00% hit ratio) over *s (glob) (?)
186 * files fetched over 1 fetches - (* misses, 0.00% hit ratio) over *s (glob) (?)
186 $ hg bookmark temporary
187 $ hg bookmark temporary
187 $ find $CACHEDIR -type f | sort
188 $ find $CACHEDIR -type f | sort
188 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
189 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
189 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
190 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
190 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.dataidx
191 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.dataidx
191 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.datapack
192 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.datapack
192 $TESTTMP/hgcache/repos
193 $TESTTMP/hgcache/repos
193
194
194 # Ensure that file 'w' was prefetched - it was not part of the commit operation and therefore
195 # Ensure that file 'w' was prefetched - it was not part of the commit operation and therefore
195 # could only be downloaded by the background prefetch
196 # could only be downloaded by the background prefetch
196
197
197 $ hg debugdatapack `ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
198 $ hg debugdatapack `ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
198 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407:
199 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407:
199 w:
200 w:
200 Node Delta Base Delta Length Blob Size
201 Node Delta Base Delta Length Blob Size
201 bb6ccd5dceaa 000000000000 2 2
202 bb6ccd5dceaa 000000000000 2 2
202
203
203 Total: 2 2 (0.0% bigger)
204 Total: 2 2 (0.0% bigger)
204 x:
205 x:
205 Node Delta Base Delta Length Blob Size
206 Node Delta Base Delta Length Blob Size
206 ef95c5376f34 000000000000 3 3
207 ef95c5376f34 000000000000 3 3
207 1406e7411862 ef95c5376f34 14 2
208 1406e7411862 ef95c5376f34 14 2
208
209
209 Total: 17 5 (240.0% bigger)
210 Total: 17 5 (240.0% bigger)
210 y:
211 y:
211 Node Delta Base Delta Length Blob Size
212 Node Delta Base Delta Length Blob Size
212 076f5e2225b3 000000000000 2 2
213 076f5e2225b3 000000000000 2 2
213
214
214 Total: 2 2 (0.0% bigger)
215 Total: 2 2 (0.0% bigger)
215 z:
216 z:
216 Node Delta Base Delta Length Blob Size
217 Node Delta Base Delta Length Blob Size
217 69a1b6752270 000000000000 2 2
218 69a1b6752270 000000000000 2 2
218
219
219 Total: 2 2 (0.0% bigger)
220 Total: 2 2 (0.0% bigger)
220
221
221 # background prefetch with repack on rebase when wcprevset configured
222 # background prefetch with repack on rebase when wcprevset configured
222
223
223 $ hg up -r 2
224 $ hg up -r 2
224 3 files updated, 0 files merged, 2 files removed, 0 files unresolved
225 3 files updated, 0 files merged, 2 files removed, 0 files unresolved
225 (leaving bookmark temporary)
226 (leaving bookmark temporary)
226 $ clearcache
227 $ clearcache
227 $ find $CACHEDIR -type f | sort
228 $ find $CACHEDIR -type f | sort
228 .. The following output line about files fetches is globbed because it is
229 .. The following output line about files fetches is globbed because it is
229 .. flaky; the core of the test is checked when checking the cache dir, so
230 .. flaky; the core of the test is checked when checking the cache dir, so
230 .. hopefully this flakiness is not hiding any actual bug.
231 .. hopefully this flakiness is not hiding any actual bug.
231 $ hg rebase -s temporary -d foo
232 $ hg rebase -s temporary -d foo
232 rebasing 3:d9cf06e3b5b6 temporary tip "b"
233 rebasing 3:d9cf06e3b5b6 temporary tip "b"
233 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/d9cf06e3b5b6-e5c3dc63-rebase.hg
234 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/d9cf06e3b5b6-e5c3dc63-rebase.hg
234 ? files fetched over ? fetches - (? misses, 0.00% hit ratio) over *s (glob)
235 ? files fetched over ? fetches - (? misses, 0.00% hit ratio) over *s (glob)
235 $ find $CACHEDIR -type f | sort
236 $ find $CACHEDIR -type f | sort
236 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
237 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
237 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
238 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
238 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.dataidx
239 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.dataidx
239 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.datapack
240 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.datapack
240 $TESTTMP/hgcache/repos
241 $TESTTMP/hgcache/repos
241
242
242 # Ensure that file 'y' was prefetched - it was not part of the rebase operation and therefore
243 # Ensure that file 'y' was prefetched - it was not part of the rebase operation and therefore
243 # could only be downloaded by the background prefetch
244 # could only be downloaded by the background prefetch
244
245
245 $ hg debugdatapack `ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
246 $ hg debugdatapack `ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
246 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407:
247 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407:
247 w:
248 w:
248 Node Delta Base Delta Length Blob Size
249 Node Delta Base Delta Length Blob Size
249 bb6ccd5dceaa 000000000000 2 2
250 bb6ccd5dceaa 000000000000 2 2
250
251
251 Total: 2 2 (0.0% bigger)
252 Total: 2 2 (0.0% bigger)
252 x:
253 x:
253 Node Delta Base Delta Length Blob Size
254 Node Delta Base Delta Length Blob Size
254 ef95c5376f34 000000000000 3 3
255 ef95c5376f34 000000000000 3 3
255 1406e7411862 ef95c5376f34 14 2
256 1406e7411862 ef95c5376f34 14 2
256
257
257 Total: 17 5 (240.0% bigger)
258 Total: 17 5 (240.0% bigger)
258 y:
259 y:
259 Node Delta Base Delta Length Blob Size
260 Node Delta Base Delta Length Blob Size
260 076f5e2225b3 000000000000 2 2
261 076f5e2225b3 000000000000 2 2
261
262
262 Total: 2 2 (0.0% bigger)
263 Total: 2 2 (0.0% bigger)
263 z:
264 z:
264 Node Delta Base Delta Length Blob Size
265 Node Delta Base Delta Length Blob Size
265 69a1b6752270 000000000000 2 2
266 69a1b6752270 000000000000 2 2
266
267
267 Total: 2 2 (0.0% bigger)
268 Total: 2 2 (0.0% bigger)
268
269
269 # Check that foreground prefetch with no arguments blocks until background prefetches finish
270 # Check that foreground prefetch with no arguments blocks until background prefetches finish
270
271
271 $ hg up -r 3
272 $ hg up -r 3
272 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
273 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
273 $ clearcache
274 $ clearcache
274 $ hg prefetch --repack --config ui.timeout.warn=-1
275 $ hg prefetch --repack --config ui.timeout.warn=-1
275 (running background incremental repack)
276 (running background incremental repack)
276 * files fetched over 1 fetches - (* misses, 0.00% hit ratio) over *s (glob) (?)
277 * files fetched over 1 fetches - (* misses, 0.00% hit ratio) over *s (glob) (?)
277
278
278 $ find $CACHEDIR -type f | sort
279 $ find $CACHEDIR -type f | sort
279 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
280 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
280 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
281 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
281 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.dataidx
282 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.dataidx
282 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.datapack
283 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.datapack
283 $TESTTMP/hgcache/repos
284 $TESTTMP/hgcache/repos
284
285
285 # Ensure that files were prefetched
286 # Ensure that files were prefetched
286 $ hg debugdatapack `ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
287 $ hg debugdatapack `ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
287 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407:
288 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407:
288 w:
289 w:
289 Node Delta Base Delta Length Blob Size
290 Node Delta Base Delta Length Blob Size
290 bb6ccd5dceaa 000000000000 2 2
291 bb6ccd5dceaa 000000000000 2 2
291
292
292 Total: 2 2 (0.0% bigger)
293 Total: 2 2 (0.0% bigger)
293 x:
294 x:
294 Node Delta Base Delta Length Blob Size
295 Node Delta Base Delta Length Blob Size
295 ef95c5376f34 000000000000 3 3
296 ef95c5376f34 000000000000 3 3
296 1406e7411862 ef95c5376f34 14 2
297 1406e7411862 ef95c5376f34 14 2
297
298
298 Total: 17 5 (240.0% bigger)
299 Total: 17 5 (240.0% bigger)
299 y:
300 y:
300 Node Delta Base Delta Length Blob Size
301 Node Delta Base Delta Length Blob Size
301 076f5e2225b3 000000000000 2 2
302 076f5e2225b3 000000000000 2 2
302
303
303 Total: 2 2 (0.0% bigger)
304 Total: 2 2 (0.0% bigger)
304 z:
305 z:
305 Node Delta Base Delta Length Blob Size
306 Node Delta Base Delta Length Blob Size
306 69a1b6752270 000000000000 2 2
307 69a1b6752270 000000000000 2 2
307
308
308 Total: 2 2 (0.0% bigger)
309 Total: 2 2 (0.0% bigger)
309
310
310 # Check that foreground prefetch fetches revs specified by '. + draft() + bgprefetchrevs + pullprefetch'
311 # Check that foreground prefetch fetches revs specified by '. + draft() + bgprefetchrevs + pullprefetch'
311
312
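In other words, the foreground prefetch set is the union revset
". + draft()" joined with whatever the remotefilelog.bgprefetchrevs and
remotefilelog.pullprefetch options evaluate to. A minimal sketch, with
illustrative values rather than this test's actual configuration:

  [remotefilelog]
  bgprefetchrevs = draft()
  pullprefetch = bookmark()

Both options hold revsets, so any revset expression works.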
312 $ clearcache
313 $ clearcache
313 $ hg prefetch --repack --config ui.timeout.warn=-1
314 $ hg prefetch --repack --config ui.timeout.warn=-1
314 (running background incremental repack)
315 (running background incremental repack)
315 * files fetched over 1 fetches - (* misses, 0.00% hit ratio) over *s (glob) (?)
316 * files fetched over 1 fetches - (* misses, 0.00% hit ratio) over *s (glob) (?)
316
317
317 $ find $CACHEDIR -type f | sort
318 $ find $CACHEDIR -type f | sort
318 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
319 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
319 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
320 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
320 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.dataidx
321 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.dataidx
321 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.datapack
322 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.datapack
322 $TESTTMP/hgcache/repos
323 $TESTTMP/hgcache/repos
323
324
324 # Ensure that files were prefetched
325 # Ensure that files were prefetched
325 $ hg debugdatapack `ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
326 $ hg debugdatapack `ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
326 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407:
327 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407:
327 w:
328 w:
328 Node Delta Base Delta Length Blob Size
329 Node Delta Base Delta Length Blob Size
329 bb6ccd5dceaa 000000000000 2 2
330 bb6ccd5dceaa 000000000000 2 2
330
331
331 Total: 2 2 (0.0% bigger)
332 Total: 2 2 (0.0% bigger)
332 x:
333 x:
333 Node Delta Base Delta Length Blob Size
334 Node Delta Base Delta Length Blob Size
334 ef95c5376f34 000000000000 3 3
335 ef95c5376f34 000000000000 3 3
335 1406e7411862 ef95c5376f34 14 2
336 1406e7411862 ef95c5376f34 14 2
336
337
337 Total: 17 5 (240.0% bigger)
338 Total: 17 5 (240.0% bigger)
338 y:
339 y:
339 Node Delta Base Delta Length Blob Size
340 Node Delta Base Delta Length Blob Size
340 076f5e2225b3 000000000000 2 2
341 076f5e2225b3 000000000000 2 2
341
342
342 Total: 2 2 (0.0% bigger)
343 Total: 2 2 (0.0% bigger)
343 z:
344 z:
344 Node Delta Base Delta Length Blob Size
345 Node Delta Base Delta Length Blob Size
345 69a1b6752270 000000000000 2 2
346 69a1b6752270 000000000000 2 2
346
347
347 Total: 2 2 (0.0% bigger)
348 Total: 2 2 (0.0% bigger)
348
349
349 # Test that if data was prefetched and repacked we don't need to prefetch it again
350 # Test that if data was prefetched and repacked we don't need to prefetch it again
350 # This ensures that Mercurial looks not only in loose files but also in packs
351 # This ensures that Mercurial looks not only in loose files but also in packs
351
352
352 $ hg prefetch --repack
353 $ hg prefetch --repack
353 (running background incremental repack)
354 (running background incremental repack)
@@ -1,75 +1,76 b''
1 #require no-windows
1 #require no-windows
2
2
3 $ . "$TESTDIR/remotefilelog-library.sh"
3 $ . "$TESTDIR/remotefilelog-library.sh"
4
4
5 $ hg init master
5 $ hg init master
6 $ cd master
6 $ cd master
7 $ cat >> .hg/hgrc <<EOF
7 $ cat >> .hg/hgrc <<EOF
8 > [remotefilelog]
8 > [remotefilelog]
9 > server=True
9 > server=True
10 > EOF
10 > EOF
11 $ echo x > x
11 $ echo x > x
12 $ hg commit -qAm x
12 $ hg commit -qAm x
13 $ echo y >> x
13 $ echo y >> x
14 $ hg commit -qAm y
14 $ hg commit -qAm y
15 $ echo z >> x
15 $ echo z >> x
16 $ hg commit -qAm z
16 $ hg commit -qAm z
17
17
18 $ cd ..
18 $ cd ..
19
19
20 $ hgcloneshallow ssh://user@dummy/master shallow -q
20 $ hgcloneshallow ssh://user@dummy/master shallow -q
21 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
21 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
22 $ cd shallow
22 $ cd shallow
23
23
24 Unbundling a shallow bundle
24 Unbundling a shallow bundle
25
25
26 $ hg strip -r 66ee28d0328c
26 $ hg strip -r 66ee28d0328c
27 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
27 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
28 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/66ee28d0328c-3d7aafd1-backup.hg (glob)
28 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/66ee28d0328c-3d7aafd1-backup.hg (glob)
29 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
29 2 files fetched over 2 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
30 $ hg unbundle .hg/strip-backup/66ee28d0328c-3d7aafd1-backup.hg
30 $ hg unbundle .hg/strip-backup/66ee28d0328c-3d7aafd1-backup.hg
31 adding changesets
31 adding changesets
32 adding manifests
32 adding manifests
33 adding file changes
33 adding file changes
34 added 2 changesets with 0 changes to 0 files
34 added 2 changesets with 2 changes to 1 files
35 new changesets 66ee28d0328c:16db62c5946f
35 new changesets 66ee28d0328c:16db62c5946f
36 (run 'hg update' to get a working copy)
36 (run 'hg update' to get a working copy)
37
37
38 Unbundling a full bundle
38 Unbundling a full bundle
39
39
40 $ hg -R ../master bundle -r 66ee28d0328c:: --base "66ee28d0328c^" ../fullbundle.hg
40 $ hg -R ../master bundle -r 66ee28d0328c:: --base "66ee28d0328c^" ../fullbundle.hg
41 2 changesets found
41 2 changesets found
42 $ hg strip -r 66ee28d0328c
42 $ hg strip -r 66ee28d0328c
43 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/66ee28d0328c-3d7aafd1-backup.hg (glob)
43 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/66ee28d0328c-3d7aafd1-backup.hg (glob)
44 $ hg unbundle ../fullbundle.hg
44 $ hg unbundle ../fullbundle.hg
45 adding changesets
45 adding changesets
46 adding manifests
46 adding manifests
47 adding file changes
47 adding file changes
48 added 2 changesets with 2 changes to 1 files
48 added 2 changesets with 2 changes to 1 files
49 new changesets 66ee28d0328c:16db62c5946f (2 drafts)
49 new changesets 66ee28d0328c:16db62c5946f (2 drafts)
50 (run 'hg update' to get a working copy)
50 (run 'hg update' to get a working copy)
51
51
52 Pulling from a shallow bundle
52 Pulling from a shallow bundle
53
53
54 $ hg strip -r 66ee28d0328c
54 $ hg strip -r 66ee28d0328c --config remotefilelog.strip.includefiles=none
55 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/66ee28d0328c-3d7aafd1-backup.hg (glob)
55 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/66ee28d0328c-3d7aafd1-backup.hg (glob)
56 $ hg pull -r 66ee28d0328c .hg/strip-backup/66ee28d0328c-3d7aafd1-backup.hg
56 $ hg pull -r 66ee28d0328c .hg/strip-backup/66ee28d0328c-3d7aafd1-backup.hg
57 pulling from .hg/strip-backup/66ee28d0328c-3d7aafd1-backup.hg
57 pulling from .hg/strip-backup/66ee28d0328c-3d7aafd1-backup.hg
58 searching for changes
58 searching for changes
59 adding changesets
59 adding changesets
60 adding manifests
60 adding manifests
61 adding file changes
61 adding file changes
62 added 1 changesets with 0 changes to 0 files
62 added 1 changesets with 0 changes to 0 files
63 new changesets 66ee28d0328c (1 drafts)
63 new changesets 66ee28d0328c (1 drafts)
64 (run 'hg update' to get a working copy)
64 (run 'hg update' to get a working copy)
65
65
66 Pulling from a full bundle
66 Pulling from a full bundle, also testing that strip produces a full bundle by
67 default.
67
68
68 $ hg strip -r 66ee28d0328c
69 $ hg strip -r 66ee28d0328c
69 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/66ee28d0328c-b6ee89e7-backup.hg (glob)
70 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/66ee28d0328c-b6ee89e7-backup.hg (glob)
70 $ hg pull -r 66ee28d0328c ../fullbundle.hg
71 $ hg pull -r 66ee28d0328c .hg/strip-backup/66ee28d0328c-b6ee89e7-backup.hg
71 pulling from ../fullbundle.hg
72 pulling from .hg/strip-backup/66ee28d0328c-b6ee89e7-backup.hg
72 searching for changes
73 searching for changes
73 abort: cannot pull from full bundles
74 abort: cannot pull from full bundles
74 (use `hg unbundle` instead)
75 (use `hg unbundle` instead)
75 [255]
76 [255]
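The knob exercised above is remotefilelog.strip.includefiles: strip now
defaults to writing full backup bundles (file contents included), and
setting it to "none" restores the previous shallow backups. A minimal hgrc
sketch of opting out globally, assuming the option is honoured from the
config file as well as from --config:

  [remotefilelog]
  strip.includefiles = none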
@@ -1,209 +1,210 b''
1 #require no-windows
1 #require no-windows
2
2
3 $ . "$TESTDIR/remotefilelog-library.sh"
3 $ . "$TESTDIR/remotefilelog-library.sh"
4
4
5 $ hg init master
5 $ hg init master
6 $ cd master
6 $ cd master
7 $ cat >> .hg/hgrc <<EOF
7 $ cat >> .hg/hgrc <<EOF
8 > [remotefilelog]
8 > [remotefilelog]
9 > server=True
9 > server=True
10 > EOF
10 > EOF
11 $ echo x > x
11 $ echo x > x
12 $ echo y > y
12 $ echo y > y
13 $ echo z > z
13 $ echo z > z
14 $ hg commit -qAm xy
14 $ hg commit -qAm xy
15
15
16 $ cd ..
16 $ cd ..
17
17
18 $ hgcloneshallow ssh://user@dummy/master shallow -q
18 $ hgcloneshallow ssh://user@dummy/master shallow -q
19 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over *s (glob)
19 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over *s (glob)
20 $ cd shallow
20 $ cd shallow
21
21
22 # status
22 # status
23
23
24 $ clearcache
24 $ clearcache
25 $ echo xx > x
25 $ echo xx > x
26 $ echo yy > y
26 $ echo yy > y
27 $ touch a
27 $ touch a
28 $ hg status
28 $ hg status
29 M x
29 M x
30 M y
30 M y
31 ? a
31 ? a
32 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
32 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
33 $ hg add a
33 $ hg add a
34 $ hg status
34 $ hg status
35 M x
35 M x
36 M y
36 M y
37 A a
37 A a
38
38
39 # diff
39 # diff
40
40
41 $ hg debugrebuilddirstate # fixes dirstate non-determinism
41 $ hg debugrebuilddirstate # fixes dirstate non-determinism
42 $ hg add a
42 $ hg add a
43 $ clearcache
43 $ clearcache
44 $ hg diff
44 $ hg diff
45 diff -r f3d0bb0d1e48 x
45 diff -r f3d0bb0d1e48 x
46 --- a/x* (glob)
46 --- a/x* (glob)
47 +++ b/x* (glob)
47 +++ b/x* (glob)
48 @@ -1,1 +1,1 @@
48 @@ -1,1 +1,1 @@
49 -x
49 -x
50 +xx
50 +xx
51 diff -r f3d0bb0d1e48 y
51 diff -r f3d0bb0d1e48 y
52 --- a/y* (glob)
52 --- a/y* (glob)
53 +++ b/y* (glob)
53 +++ b/y* (glob)
54 @@ -1,1 +1,1 @@
54 @@ -1,1 +1,1 @@
55 -y
55 -y
56 +yy
56 +yy
57 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over *s (glob)
57 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over *s (glob)
58
58
59 # local commit
59 # local commit
60
60
61 $ clearcache
61 $ clearcache
62 $ echo a > a
62 $ echo a > a
63 $ echo xxx > x
63 $ echo xxx > x
64 $ echo yyy > y
64 $ echo yyy > y
65 $ hg commit -m a
65 $ hg commit -m a
66 ? files fetched over 1 fetches - (? misses, 0.00% hit ratio) over *s (glob)
66 ? files fetched over 1 fetches - (? misses, 0.00% hit ratio) over *s (glob)
67
67
68 # local commit where the dirstate is clean -- ensure that we do just one fetch
68 # local commit where the dirstate is clean -- ensure that we do just one fetch
69 # (update to a commit on the server first)
69 # (update to a commit on the server first)
70
70
71 $ hg --config debug.dirstate.delaywrite=1 up 0
71 $ hg --config debug.dirstate.delaywrite=1 up 0
72 2 files updated, 0 files merged, 1 files removed, 0 files unresolved
72 2 files updated, 0 files merged, 1 files removed, 0 files unresolved
73 $ clearcache
73 $ clearcache
74 $ hg debugdirstate
74 $ hg debugdirstate
75 n 644 2 * x (glob)
75 n 644 2 * x (glob)
76 n 644 2 * y (glob)
76 n 644 2 * y (glob)
77 n 644 2 * z (glob)
77 n 644 2 * z (glob)
78 $ echo xxxx > x
78 $ echo xxxx > x
79 $ echo yyyy > y
79 $ echo yyyy > y
80 $ hg commit -m x
80 $ hg commit -m x
81 created new head
81 created new head
82 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
82 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
83
83
84 # restore state for future tests
84 # restore state for future tests
85
85
86 $ hg -q strip .
86 $ hg -q strip .
87 $ hg -q up tip
87 $ hg -q up tip
88
88
89 # rebase
89 # rebase
90
90
91 $ clearcache
91 $ clearcache
92 $ cd ../master
92 $ cd ../master
93 $ echo w > w
93 $ echo w > w
94 $ hg commit -qAm w
94 $ hg commit -qAm w
95
95
96 $ cd ../shallow
96 $ cd ../shallow
97 $ hg pull
97 $ hg pull
98 pulling from ssh://user@dummy/master
98 pulling from ssh://user@dummy/master
99 searching for changes
99 searching for changes
100 adding changesets
100 adding changesets
101 adding manifests
101 adding manifests
102 adding file changes
102 adding file changes
103 added 1 changesets with 0 changes to 0 files (+1 heads)
103 added 1 changesets with 0 changes to 0 files (+1 heads)
104 new changesets fed61014d323
104 new changesets fed61014d323
105 (run 'hg heads' to see heads, 'hg merge' to merge)
105 (run 'hg heads' to see heads, 'hg merge' to merge)
106
106
107 $ hg rebase -d tip
107 $ hg rebase -d tip
108 rebasing 1:9abfe7bca547 "a"
108 rebasing 1:9abfe7bca547 "a"
109 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/9abfe7bca547-8b11e5ff-rebase.hg (glob)
109 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/9abfe7bca547-8b11e5ff-rebase.hg (glob)
110 3 files fetched over 2 fetches - (3 misses, 0.00% hit ratio) over *s (glob)
110 3 files fetched over 2 fetches - (3 misses, 0.00% hit ratio) over *s (glob)
111
111
112 # strip
112 # strip
113
113
114 $ clearcache
114 $ clearcache
115 $ hg debugrebuilddirstate # fixes dirstate non-determinism
115 $ hg debugrebuilddirstate # fixes dirstate non-determinism
116 $ hg strip -r .
116 $ hg strip -r .
117 2 files updated, 0 files merged, 1 files removed, 0 files unresolved
117 2 files updated, 0 files merged, 1 files removed, 0 files unresolved
118 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/19edf50f4de7-df3d0f74-backup.hg (glob)
118 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/19edf50f4de7-df3d0f74-backup.hg (glob)
119 4 files fetched over 2 fetches - (4 misses, 0.00% hit ratio) over *s (glob)
119 3 files fetched over 2 fetches - (3 misses, 0.00% hit ratio) over *s (glob)
120
120
121 # unbundle
121 # unbundle
122
122
123 $ clearcache
123 $ clearcache
124 $ ls -A
124 $ ls -A
125 .hg
125 .hg
126 w
126 w
127 x
127 x
128 y
128 y
129 z
129 z
130
130
131 $ hg debugrebuilddirstate # fixes dirstate non-determinism
131 $ hg debugrebuilddirstate # fixes dirstate non-determinism
132 $ hg unbundle .hg/strip-backup/19edf50f4de7-df3d0f74-backup.hg
132 $ hg unbundle .hg/strip-backup/19edf50f4de7-df3d0f74-backup.hg
133 adding changesets
133 adding changesets
134 adding manifests
134 adding manifests
135 adding file changes
135 adding file changes
136 added 1 changesets with 0 changes to 0 files
136 added 1 changesets with 3 changes to 3 files
137 new changesets 19edf50f4de7 (1 drafts)
137 new changesets 19edf50f4de7 (1 drafts)
138 (run 'hg update' to get a working copy)
138 (run 'hg update' to get a working copy)
139 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
139
140
140 $ hg up
141 $ hg up
141 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
142 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
142 4 files fetched over 1 fetches - (4 misses, 0.00% hit ratio) over *s (glob)
143 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
143 $ cat a
144 $ cat a
144 a
145 a
145
146
146 # revert
147 # revert
147
148
148 $ clearcache
149 $ clearcache
149 $ hg revert -r .~2 y z
150 $ hg revert -r .~2 y z
150 no changes needed to z
151 no changes needed to z
151 2 files fetched over 2 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
152 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
152 $ hg checkout -C -r . -q
153 $ hg checkout -C -r . -q
153
154
154 # explicit bundle should produce full bundle file
155 # explicit bundle should produce full bundle file
155
156
156 $ hg bundle -r 2 --base 1 ../local.bundle
157 $ hg bundle -r 2 --base 1 ../local.bundle
157 1 changesets found
158 1 changesets found
158 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
159 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
159 $ cd ..
160 $ cd ..
160
161
161 $ hgcloneshallow ssh://user@dummy/master shallow2 -q
162 $ hgcloneshallow ssh://user@dummy/master shallow2 -q
162 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
163 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
163 $ cd shallow2
164 $ cd shallow2
164 $ hg unbundle ../local.bundle
165 $ hg unbundle ../local.bundle
165 adding changesets
166 adding changesets
166 adding manifests
167 adding manifests
167 adding file changes
168 adding file changes
168 added 1 changesets with 3 changes to 3 files
169 added 1 changesets with 3 changes to 3 files
169 new changesets 19edf50f4de7 (1 drafts)
170 new changesets 19edf50f4de7 (1 drafts)
170 (run 'hg update' to get a working copy)
171 (run 'hg update' to get a working copy)
171
172
172 $ hg log -r 2 --stat
173 $ hg log -r 2 --stat
173 changeset: 2:19edf50f4de7
174 changeset: 2:19edf50f4de7
174 tag: tip
175 tag: tip
175 user: test
176 user: test
176 date: Thu Jan 01 00:00:00 1970 +0000
177 date: Thu Jan 01 00:00:00 1970 +0000
177 summary: a
178 summary: a
178
179
179 a | 1 +
180 a | 1 +
180 x | 2 +-
181 x | 2 +-
181 y | 2 +-
182 y | 2 +-
182 3 files changed, 3 insertions(+), 2 deletions(-)
183 3 files changed, 3 insertions(+), 2 deletions(-)
183
184
184 # Merge
185 # Merge
185
186
186 $ echo merge >> w
187 $ echo merge >> w
187 $ hg commit -m w
188 $ hg commit -m w
188 created new head
189 created new head
189 $ hg merge 2
190 $ hg merge 2
190 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
191 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
191 (branch merge, don't forget to commit)
192 (branch merge, don't forget to commit)
192 $ hg commit -m merge
193 $ hg commit -m merge
193 $ hg strip -q -r ".^"
194 $ hg strip -q -r ".^"
194
195
195 # commit without producing new node
196 # commit without producing new node
196
197
197 $ cd $TESTTMP
198 $ cd $TESTTMP
198 $ hgcloneshallow ssh://user@dummy/master shallow3 -q
199 $ hgcloneshallow ssh://user@dummy/master shallow3 -q
199 $ cd shallow3
200 $ cd shallow3
200 $ echo 1 > A
201 $ echo 1 > A
201 $ hg commit -m foo -A A
202 $ hg commit -m foo -A A
202 $ hg log -r . -T '{node}\n'
203 $ hg log -r . -T '{node}\n'
203 383ce605500277f879b7460a16ba620eb6930b7f
204 383ce605500277f879b7460a16ba620eb6930b7f
204 $ hg update -r '.^' -q
205 $ hg update -r '.^' -q
205 $ echo 1 > A
206 $ echo 1 > A
206 $ hg commit -m foo -A A
207 $ hg commit -m foo -A A
207 warning: commit already existed in the repository!
208 warning: commit already existed in the repository!
208 $ hg log -r . -T '{node}\n'
209 $ hg log -r . -T '{node}\n'
209 383ce605500277f879b7460a16ba620eb6930b7f
210 383ce605500277f879b7460a16ba620eb6930b7f
@@ -1,280 +1,281 b''
1 #require no-windows
1 #require no-windows
2
2
3 $ . "$TESTDIR/remotefilelog-library.sh"
3 $ . "$TESTDIR/remotefilelog-library.sh"
4
4
5 $ hg init master
5 $ hg init master
6 $ cd master
6 $ cd master
7 $ cat >> .hg/hgrc <<EOF
7 $ cat >> .hg/hgrc <<EOF
8 > [remotefilelog]
8 > [remotefilelog]
9 > server=True
9 > server=True
10 > EOF
10 > EOF
11 $ echo x > x
11 $ echo x > x
12 $ echo z > z
12 $ echo z > z
13 $ hg commit -qAm x
13 $ hg commit -qAm x
14 $ echo x2 > x
14 $ echo x2 > x
15 $ echo y > y
15 $ echo y > y
16 $ hg commit -qAm y
16 $ hg commit -qAm y
17 $ hg bookmark foo
17 $ hg bookmark foo
18
18
19 $ cd ..
19 $ cd ..
20
20
21 # prefetch a revision
21 # prefetch a revision
22
22
23 $ hgcloneshallow ssh://user@dummy/master shallow --noupdate
23 $ hgcloneshallow ssh://user@dummy/master shallow --noupdate
24 streaming all changes
24 streaming all changes
25 2 files to transfer, 528 bytes of data
25 2 files to transfer, 528 bytes of data
26 transferred 528 bytes in * seconds (*/sec) (glob)
26 transferred 528 bytes in * seconds (*/sec) (glob)
27 searching for changes
27 searching for changes
28 no changes found
28 no changes found
29 $ cd shallow
29 $ cd shallow
30
30
31 $ hg prefetch -r 0
31 $ hg prefetch -r 0
32 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
32 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
33
33
34 $ hg cat -r 0 x
34 $ hg cat -r 0 x
35 x
35 x
36
36
37 # prefetch with base
37 # prefetch with base
38
38
39 $ clearcache
39 $ clearcache
40 $ hg prefetch -r 0::1 -b 0
40 $ hg prefetch -r 0::1 -b 0
41 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
41 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
42
42
43 $ hg cat -r 1 x
43 $ hg cat -r 1 x
44 x2
44 x2
45 $ hg cat -r 1 y
45 $ hg cat -r 1 y
46 y
46 y
47
47
48 $ hg cat -r 0 x
48 $ hg cat -r 0 x
49 x
49 x
50 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
50 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
51
51
52 $ hg cat -r 0 z
52 $ hg cat -r 0 z
53 z
53 z
54 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
54 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
55
55
56 $ hg prefetch -r 0::1 --base 0
56 $ hg prefetch -r 0::1 --base 0
57 $ hg prefetch -r 0::1 -b 1
57 $ hg prefetch -r 0::1 -b 1
58 $ hg prefetch -r 0::1
58 $ hg prefetch -r 0::1
59
59
60 # prefetch a range of revisions
60 # prefetch a range of revisions
61
61
62 $ clearcache
62 $ clearcache
63 $ hg prefetch -r 0::1
63 $ hg prefetch -r 0::1
64 4 files fetched over 1 fetches - (4 misses, 0.00% hit ratio) over *s (glob)
64 4 files fetched over 1 fetches - (4 misses, 0.00% hit ratio) over *s (glob)
65
65
66 $ hg cat -r 0 x
66 $ hg cat -r 0 x
67 x
67 x
68 $ hg cat -r 1 x
68 $ hg cat -r 1 x
69 x2
69 x2
70
70
71 # prefetch certain files
71 # prefetch certain files
72
72
73 $ clearcache
73 $ clearcache
74 $ hg prefetch -r 1 x
74 $ hg prefetch -r 1 x
75 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
75 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
76
76
77 $ hg cat -r 1 x
77 $ hg cat -r 1 x
78 x2
78 x2
79
79
80 $ hg cat -r 1 y
80 $ hg cat -r 1 y
81 y
81 y
82 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
82 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
83
83
84 # prefetch on pull when configured
84 # prefetch on pull when configured
85
85
86 $ printf "[remotefilelog]\npullprefetch=bookmark()\n" >> .hg/hgrc
86 $ printf "[remotefilelog]\npullprefetch=bookmark()\n" >> .hg/hgrc
87 $ hg strip tip
87 $ hg strip tip
88 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/109c3a557a73-3f43405e-backup.hg (glob)
88 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/109c3a557a73-3f43405e-backup.hg (glob)
89 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
89
90
90 $ clearcache
91 $ clearcache
91 $ hg pull
92 $ hg pull
92 pulling from ssh://user@dummy/master
93 pulling from ssh://user@dummy/master
93 searching for changes
94 searching for changes
94 adding changesets
95 adding changesets
95 adding manifests
96 adding manifests
96 adding file changes
97 adding file changes
97 updating bookmark foo
98 updating bookmark foo
98 added 1 changesets with 0 changes to 0 files
99 added 1 changesets with 0 changes to 0 files
99 new changesets 109c3a557a73
100 new changesets 109c3a557a73
100 (run 'hg update' to get a working copy)
101 (run 'hg update' to get a working copy)
101 prefetching file contents
102 prefetching file contents
102 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over *s (glob)
103 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over *s (glob)
103
104
104 $ hg up tip
105 $ hg up tip
105 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
106 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
106
107
107 # prefetch only fetches changes not in working copy
108 # prefetch only fetches changes not in working copy
108
109
109 $ hg strip tip
110 $ hg strip tip
110 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
111 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
111 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/109c3a557a73-3f43405e-backup.hg (glob)
112 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/109c3a557a73-3f43405e-backup.hg (glob)
112 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
113 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
113 $ clearcache
114 $ clearcache
114
115
115 $ hg pull
116 $ hg pull
116 pulling from ssh://user@dummy/master
117 pulling from ssh://user@dummy/master
117 searching for changes
118 searching for changes
118 adding changesets
119 adding changesets
119 adding manifests
120 adding manifests
120 adding file changes
121 adding file changes
121 updating bookmark foo
122 updating bookmark foo
122 added 1 changesets with 0 changes to 0 files
123 added 1 changesets with 0 changes to 0 files
123 new changesets 109c3a557a73
124 new changesets 109c3a557a73
124 (run 'hg update' to get a working copy)
125 (run 'hg update' to get a working copy)
125 prefetching file contents
126 prefetching file contents
126 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
127 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
127
128
128 # Make some local commits that produce the same file versions as are on the
129 # Make some local commits that produce the same file versions as are on the
129 # server. To simulate a situation where we have local commits that were somehow
130 # server. To simulate a situation where we have local commits that were somehow
130 # pushed, and we will soon pull.
131 # pushed, and we will soon pull.
131
132
132 $ hg prefetch -r 'all()'
133 $ hg prefetch -r 'all()'
133 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
134 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
134 $ hg strip -q -r 0
135 $ hg strip -q -r 0
135 $ echo x > x
136 $ echo x > x
136 $ echo z > z
137 $ echo z > z
137 $ hg commit -qAm x
138 $ hg commit -qAm x
138 $ echo x2 > x
139 $ echo x2 > x
139 $ echo y > y
140 $ echo y > y
140 $ hg commit -qAm y
141 $ hg commit -qAm y
141
142
142 # prefetch server versions, even if local versions are available
143 # prefetch server versions, even if local versions are available
143
144
144 $ clearcache
145 $ clearcache
145 $ hg strip -q tip
146 $ hg strip -q tip
146 $ hg pull
147 $ hg pull
147 pulling from ssh://user@dummy/master
148 pulling from ssh://user@dummy/master
148 searching for changes
149 searching for changes
149 adding changesets
150 adding changesets
150 adding manifests
151 adding manifests
151 adding file changes
152 adding file changes
152 updating bookmark foo
153 updating bookmark foo
153 added 1 changesets with 0 changes to 0 files
154 added 1 changesets with 0 changes to 0 files
154 new changesets 109c3a557a73
155 new changesets 109c3a557a73
155 1 local changesets published (?)
156 1 local changesets published (?)
156 (run 'hg update' to get a working copy)
157 (run 'hg update' to get a working copy)
157 prefetching file contents
158 prefetching file contents
158 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
159 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
159
160
160 $ cd ..
161 $ cd ..
161
162
162 # Prefetch unknown files during checkout
163 # Prefetch unknown files during checkout
163
164
164 $ hgcloneshallow ssh://user@dummy/master shallow2
165 $ hgcloneshallow ssh://user@dummy/master shallow2
165 streaming all changes
166 streaming all changes
166 2 files to transfer, 528 bytes of data
167 2 files to transfer, 528 bytes of data
167 transferred 528 bytes in * seconds * (glob)
168 transferred 528 bytes in * seconds * (glob)
168 searching for changes
169 searching for changes
169 no changes found
170 no changes found
170 updating to branch default
171 updating to branch default
171 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
172 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
172 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
173 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
173 $ cd shallow2
174 $ cd shallow2
174 $ hg up -q null
175 $ hg up -q null
175 $ echo x > x
176 $ echo x > x
176 $ echo y > y
177 $ echo y > y
177 $ echo z > z
178 $ echo z > z
178 $ clearcache
179 $ clearcache
179 $ hg up tip
180 $ hg up tip
180 x: untracked file differs
181 x: untracked file differs
181 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over * (glob)
182 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over * (glob)
182 abort: untracked files in working directory differ from files in requested revision
183 abort: untracked files in working directory differ from files in requested revision
183 [20]
184 [20]
184 $ hg revert --all
185 $ hg revert --all
185
186
186 # Test batch fetching of lookup files during hg status
187 # Test batch fetching of lookup files during hg status
187 $ hg up --clean tip
188 $ hg up --clean tip
188 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
189 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
189 $ hg debugrebuilddirstate
190 $ hg debugrebuilddirstate
190 $ clearcache
191 $ clearcache
191 $ hg status
192 $ hg status
192 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over * (glob)
193 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over * (glob)
193
194
194 # Prefetch during addrename detection
195 # Prefetch during addrename detection
195 $ hg up -q --clean tip
196 $ hg up -q --clean tip
196 $ hg revert --all
197 $ hg revert --all
197 $ mv x x2
198 $ mv x x2
198 $ mv y y2
199 $ mv y y2
199 $ mv z z2
200 $ mv z z2
200 $ echo a > a
201 $ echo a > a
201 $ hg add a
202 $ hg add a
202 $ rm a
203 $ rm a
203 $ clearcache
204 $ clearcache
204 $ hg addremove -s 50 > /dev/null
205 $ hg addremove -s 50 > /dev/null
205 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over * (glob)
206 3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over * (glob)
206 $ hg revert --all
207 $ hg revert --all
207 forgetting x2
208 forgetting x2
208 forgetting y2
209 forgetting y2
209 forgetting z2
210 forgetting z2
210 undeleting x
211 undeleting x
211 undeleting y
212 undeleting y
212 undeleting z
213 undeleting z
213
214
214
215
215 # Revert across double renames. Note: the scary "abort" error is because of
216 # Revert across double renames. Note: the scary "abort" error is because of
216 # https://bz.mercurial-scm.org/5419 .
217 # https://bz.mercurial-scm.org/5419 .
217
218
218 $ cd ../master
219 $ cd ../master
219 $ hg mv z z2
220 $ hg mv z z2
220 $ hg commit -m 'move z -> z2'
221 $ hg commit -m 'move z -> z2'
221 $ cd ../shallow2
222 $ cd ../shallow2
222 $ hg pull -q
223 $ hg pull -q
223 $ clearcache
224 $ clearcache
224 $ hg mv y y2
225 $ hg mv y y2
225 y2: not overwriting - file exists
226 y2: not overwriting - file exists
226 ('hg rename --after' to record the rename)
227 ('hg rename --after' to record the rename)
227 [1]
228 [1]
228 $ hg mv x x2
229 $ hg mv x x2
229 x2: not overwriting - file exists
230 x2: not overwriting - file exists
230 ('hg rename --after' to record the rename)
231 ('hg rename --after' to record the rename)
231 [1]
232 [1]
232 $ hg mv z2 z3
233 $ hg mv z2 z3
233 z2: not copying - file is not managed
234 z2: not copying - file is not managed
234 abort: no files to copy
235 abort: no files to copy
235 [10]
236 [10]
236 $ find $CACHEDIR -type f | sort
237 $ find $CACHEDIR -type f | sort
237 .. The following output line about files fetches is globbed because it is
238 .. The following output line about files fetches is globbed because it is
238 .. flaky; the core of the test is checked when checking the cache dir, so
239 .. flaky; the core of the test is checked when checking the cache dir, so
239 .. hopefully this flakiness is not hiding any actual bug.
240 .. hopefully this flakiness is not hiding any actual bug.
240 $ hg revert -a -r 1 || true
241 $ hg revert -a -r 1 || true
241 ? files fetched over 1 fetches - (? misses, 0.00% hit ratio) over * (glob)
242 ? files fetched over 1 fetches - (? misses, 0.00% hit ratio) over * (glob)
242 abort: z2@109c3a557a73: not found in manifest (?)
243 abort: z2@109c3a557a73: not found in manifest (?)
243 $ find $CACHEDIR -type f | sort
244 $ find $CACHEDIR -type f | sort
244 $TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072/ef95c5376f34698742fe34f315fd82136f8f68c0
245 $TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072/ef95c5376f34698742fe34f315fd82136f8f68c0
245 $TESTTMP/hgcache/master/39/5df8f7c51f007019cb30201c49e884b46b92fa/69a1b67522704ec122181c0890bd16e9d3e7516a
246 $TESTTMP/hgcache/master/39/5df8f7c51f007019cb30201c49e884b46b92fa/69a1b67522704ec122181c0890bd16e9d3e7516a
246 $TESTTMP/hgcache/master/95/cb0bfd2977c761298d9624e4b4d4c72a39974a/076f5e2225b3ff0400b98c92aa6cdf403ee24cca
247 $TESTTMP/hgcache/master/95/cb0bfd2977c761298d9624e4b4d4c72a39974a/076f5e2225b3ff0400b98c92aa6cdf403ee24cca
247 $TESTTMP/hgcache/repos
248 $TESTTMP/hgcache/repos
248
249
249 # warning when we have excess remotefilelog fetching
250 # warning when we have excess remotefilelog fetching
250
251
251 $ cat > repeated_fetch.py << EOF
252 $ cat > repeated_fetch.py << EOF
252 > import binascii
253 > import binascii
253 > from mercurial import extensions, registrar
254 > from mercurial import extensions, registrar
254 > cmdtable = {}
255 > cmdtable = {}
255 > command = registrar.command(cmdtable)
256 > command = registrar.command(cmdtable)
256 > @command(b'repeated-fetch', [], b'', inferrepo=True)
257 > @command(b'repeated-fetch', [], b'', inferrepo=True)
257 > def repeated_fetch(ui, repo, *args, **opts):
258 > def repeated_fetch(ui, repo, *args, **opts):
258 > for i in range(20):
259 > for i in range(20):
259 > try:
260 > try:
260 > hexid = (b'%02x' % (i + 1)) * 20
261 > hexid = (b'%02x' % (i + 1)) * 20
261 > repo.fileservice.prefetch([(b'somefile.txt', hexid)])
262 > repo.fileservice.prefetch([(b'somefile.txt', hexid)])
262 > except Exception:
263 > except Exception:
263 > pass
264 > pass
264 > EOF
265 > EOF
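In outline, the sketch above registers a "repeated-fetch" command that asks
the file service to prefetch twenty distinct, deliberately bogus 20-byte
nodes for the same path, swallowing the resulting errors, so that only the
fetch-frequency accounting is exercised.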
265
266
266 We should only output to the user once. We're ignoring most of the output
267 We should only output to the user once. We're ignoring most of the output
267 because we're not actually fetching anything real here; all the hashes are
268 because we're not actually fetching anything real here; all the hashes are
268 bogus, so it's just going to be errors and a final summary of all the misses.
269 bogus, so it's just going to be errors and a final summary of all the misses.
269 $ hg --config extensions.repeated_fetch=repeated_fetch.py \
270 $ hg --config extensions.repeated_fetch=repeated_fetch.py \
270 > --config remotefilelog.fetchwarning="fetch warning!" \
271 > --config remotefilelog.fetchwarning="fetch warning!" \
271 > --config extensions.blackbox= \
272 > --config extensions.blackbox= \
272 > repeated-fetch 2>&1 | grep 'fetch warning'
273 > repeated-fetch 2>&1 | grep 'fetch warning'
273 fetch warning!
274 fetch warning!
274
275
275 We should output to blackbox three times, with a stack trace on each (though
276 We should output to blackbox three times, with a stack trace on each (though
276 that isn't tested here).
277 that isn't tested here).
277 $ grep 'excess remotefilelog fetching' .hg/blackbox.log
278 $ grep 'excess remotefilelog fetching' .hg/blackbox.log
278 .* excess remotefilelog fetching: (re)
279 .* excess remotefilelog fetching: (re)
279 .* excess remotefilelog fetching: (re)
280 .* excess remotefilelog fetching: (re)
280 .* excess remotefilelog fetching: (re)
281 .* excess remotefilelog fetching: (re)
@@ -1,102 +1,103 b''
1 #require no-windows
1 #require no-windows
2
2
3 $ . "$TESTDIR/remotefilelog-library.sh"
3 $ . "$TESTDIR/remotefilelog-library.sh"
4
4
5 $ hg init master
5 $ hg init master
6 $ cd master
6 $ cd master
7 $ cat >> .hg/hgrc <<EOF
7 $ cat >> .hg/hgrc <<EOF
8 > [remotefilelog]
8 > [remotefilelog]
9 > server=True
9 > server=True
10 > EOF
10 > EOF
11 $ echo x > x
11 $ echo x > x
12 $ echo z > z
12 $ echo z > z
13 $ hg commit -qAm x1
13 $ hg commit -qAm x1
14 $ echo x2 > x
14 $ echo x2 > x
15 $ echo z2 > z
15 $ echo z2 > z
16 $ hg commit -qAm x2
16 $ hg commit -qAm x2
17 $ hg bookmark foo
17 $ hg bookmark foo
18
18
19 $ cd ..
19 $ cd ..
20
20
21 # prefetch a revision w/ a sparse checkout
21 # prefetch a revision w/ a sparse checkout
22
22
23 $ hgcloneshallow ssh://user@dummy/master shallow --noupdate
23 $ hgcloneshallow ssh://user@dummy/master shallow --noupdate
24 streaming all changes
24 streaming all changes
25 2 files to transfer, 527 bytes of data
25 2 files to transfer, 527 bytes of data
26 transferred 527 bytes in 0.* seconds (*/sec) (glob)
26 transferred 527 bytes in 0.* seconds (*/sec) (glob)
27 searching for changes
27 searching for changes
28 no changes found
28 no changes found
29 $ cd shallow
29 $ cd shallow
30 $ printf "[extensions]\nsparse=\n" >> .hg/hgrc
30 $ printf "[extensions]\nsparse=\n" >> .hg/hgrc
31
31
32 $ hg debugsparse -I x
32 $ hg debugsparse -I x
33 $ hg prefetch -r 0
33 $ hg prefetch -r 0
34 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
34 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
35
35
36 $ hg cat -r 0 x
36 $ hg cat -r 0 x
37 x
37 x
38
38
39 $ hg debugsparse -I z
39 $ hg debugsparse -I z
40 $ hg prefetch -r 0
40 $ hg prefetch -r 0
41 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
41 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
42
42
43 $ hg cat -r 0 z
43 $ hg cat -r 0 z
44 z
44 z
45
45
46 # prefetch sparse only on pull when configured
46 # prefetch sparse only on pull when configured
47
47
48 $ printf "[remotefilelog]\npullprefetch=bookmark()\n" >> .hg/hgrc
48 $ printf "[remotefilelog]\npullprefetch=bookmark()\n" >> .hg/hgrc
49 $ hg strip tip
49 $ hg strip tip
50 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/876b1317060d-b2e91d8d-backup.hg (glob)
50 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/876b1317060d-b2e91d8d-backup.hg (glob)
51 2 files fetched over 2 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
51
52
52 $ hg debugsparse --delete z
53 $ hg debugsparse --delete z
53
54
54 $ clearcache
55 $ clearcache
55 $ hg pull
56 $ hg pull
56 pulling from ssh://user@dummy/master
57 pulling from ssh://user@dummy/master
57 searching for changes
58 searching for changes
58 adding changesets
59 adding changesets
59 adding manifests
60 adding manifests
60 adding file changes
61 adding file changes
61 updating bookmark foo
62 updating bookmark foo
62 added 1 changesets with 0 changes to 0 files
63 added 1 changesets with 0 changes to 0 files
63 new changesets 876b1317060d
64 new changesets 876b1317060d
64 (run 'hg update' to get a working copy)
65 (run 'hg update' to get a working copy)
65 prefetching file contents
66 prefetching file contents
66 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
67 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
67
68
68 # Don't consider filtered files when doing copy tracing
69 # Don't consider filtered files when doing copy tracing
69
70
70 ## Push an unrelated commit
71 ## Push an unrelated commit
71 $ cd ../
72 $ cd ../
72
73
73 $ hgcloneshallow ssh://user@dummy/master shallow2
74 $ hgcloneshallow ssh://user@dummy/master shallow2
74 streaming all changes
75 streaming all changes
75 2 files to transfer, 527 bytes of data
76 2 files to transfer, 527 bytes of data
76 transferred 527 bytes in 0.* seconds (*) (glob)
77 transferred 527 bytes in 0.* seconds (*) (glob)
77 searching for changes
78 searching for changes
78 no changes found
79 no changes found
79 updating to branch default
80 updating to branch default
80 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
81 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
81 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
82 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
82 $ cd shallow2
83 $ cd shallow2
83 $ printf "[extensions]\nsparse=\n" >> .hg/hgrc
84 $ printf "[extensions]\nsparse=\n" >> .hg/hgrc
84
85
85 $ hg up -q 0
86 $ hg up -q 0
86 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
87 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
87 $ touch a
88 $ touch a
88 $ hg ci -Aqm a
89 $ hg ci -Aqm a
89 $ hg push -q -f
90 $ hg push -q -f
90
91
91 ## Pull the unrelated commit and rebase onto it - verify unrelated file was not
92 ## Pull the unrelated commit and rebase onto it - verify unrelated file was not
92 pulled
93 pulled
93
94
94 $ cd ../shallow
95 $ cd ../shallow
95 $ hg up -q 1
96 $ hg up -q 1
96 $ hg pull -q
97 $ hg pull -q
97 $ hg debugsparse -I z
98 $ hg debugsparse -I z
98 $ clearcache
99 $ clearcache
99 $ hg prefetch -r '. + .^' -I x -I z
100 $ hg prefetch -r '. + .^' -I x -I z
100 4 files fetched over 1 fetches - (4 misses, 0.00% hit ratio) over * (glob)
101 4 files fetched over 1 fetches - (4 misses, 0.00% hit ratio) over * (glob)
101 $ hg rebase -d 2 --keep
102 $ hg rebase -d 2 --keep
102 rebasing 1:876b1317060d foo "x2"
103 rebasing 1:876b1317060d foo "x2"
@@ -1,67 +1,68 b''
1 #require no-windows
1 #require no-windows
2
2
3 $ . "$TESTDIR/remotefilelog-library.sh"
3 $ . "$TESTDIR/remotefilelog-library.sh"
4
4
5 $ hg init master
5 $ hg init master
6 $ cd master
6 $ cd master
7 $ cat >> .hg/hgrc <<EOF
7 $ cat >> .hg/hgrc <<EOF
8 > [remotefilelog]
8 > [remotefilelog]
9 > server=True
9 > server=True
10 > EOF
10 > EOF
11 $ echo x > x
11 $ echo x > x
12 $ hg commit -qAm x
12 $ hg commit -qAm x
13
13
14 $ cd ..
14 $ cd ..
15
15
16 $ hgcloneshallow ssh://user@dummy/master shallow -q
16 $ hgcloneshallow ssh://user@dummy/master shallow -q
17 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
17 1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)
18 $ cd shallow
18 $ cd shallow
19
19
20 $ cat >> $TESTTMP/get_file_linknode.py <<EOF
20 $ cat >> $TESTTMP/get_file_linknode.py <<EOF
21 > from mercurial import node, registrar, scmutil
21 > from mercurial import node, registrar, scmutil
22 > cmdtable = {}
22 > cmdtable = {}
23 > command = registrar.command(cmdtable)
23 > command = registrar.command(cmdtable)
24 > @command(b'debug-file-linknode', [(b'r', b'rev', b'.', b'rev')], b'hg debug-file-linknode FILE')
24 > @command(b'debug-file-linknode', [(b'r', b'rev', b'.', b'rev')], b'hg debug-file-linknode FILE')
25 > def debug_file_linknode(ui, repo, file, **opts):
25 > def debug_file_linknode(ui, repo, file, **opts):
26 > rflctx = scmutil.revsingle(repo.unfiltered(), opts['rev']).filectx(file)
26 > rflctx = scmutil.revsingle(repo.unfiltered(), opts['rev']).filectx(file)
27 > ui.status(b'%s\n' % node.hex(rflctx.ancestormap()[rflctx._filenode][2]))
27 > ui.status(b'%s\n' % node.hex(rflctx.ancestormap()[rflctx._filenode][2]))
28 > EOF
28 > EOF
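In outline, this helper resolves the given file at the given revision and
prints the hex linknode recorded for its filenode. The indexing assumes
remotefilelog's ancestormap entries are (p1, p2, linknode, copyfrom) tuples,
so element [2] is the linknode; that layout is an assumption taken from the
extension's code, not something this test asserts.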
29
29
30 $ cat >> .hg/hgrc <<EOF
30 $ cat >> .hg/hgrc <<EOF
31 > [ui]
31 > [ui]
32 > interactive=1
32 > interactive=1
33 > [extensions]
33 > [extensions]
34 > strip=
34 > strip=
35 > get_file_linknode=$TESTTMP/get_file_linknode.py
35 > get_file_linknode=$TESTTMP/get_file_linknode.py
36 > [experimental]
36 > [experimental]
37 > evolution=createmarkers,allowunstable
37 > evolution=createmarkers,allowunstable
38 > EOF
38 > EOF
39 $ echo a > a
39 $ echo a > a
40 $ hg commit -qAm msg1
40 $ hg commit -qAm msg1
41 $ hg commit --amend 're:^$' -m msg2
41 $ hg commit --amend 're:^$' -m msg2
42 $ hg commit --amend 're:^$' -m msg3
42 $ hg commit --amend 're:^$' -m msg3
43 $ hg --hidden log -G -T '{rev} {node|short}'
43 $ hg --hidden log -G -T '{rev} {node|short}'
44 @ 3 df91f74b871e
44 @ 3 df91f74b871e
45 |
45 |
46 | x 2 70494d7ec5ef
46 | x 2 70494d7ec5ef
47 |/
47 |/
48 | x 1 1e423846dde0
48 | x 1 1e423846dde0
49 |/
49 |/
50 o 0 b292c1e3311f
50 o 0 b292c1e3311f
51
51
52 $ hg debug-file-linknode -r 70494d a
52 $ hg debug-file-linknode -r 70494d a
53 df91f74b871e064c89afa1fe9e2f66afa2c125df
53 df91f74b871e064c89afa1fe9e2f66afa2c125df
54 $ hg --hidden strip -r 1 3
54 $ hg --hidden strip -r 1 3
55 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
55 0 files updated, 0 files merged, 1 files removed, 0 files unresolved
56 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/df91f74b871e-c94d67be-backup.hg
56 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/df91f74b871e-c94d67be-backup.hg
57
57
58 $ hg --hidden log -G -T '{rev} {node|short}'
58 $ hg --hidden log -G -T '{rev} {node|short}'
59 o 1 70494d7ec5ef
59 o 1 70494d7ec5ef
60 |
60 |
61 @ 0 b292c1e3311f
61 @ 0 b292c1e3311f
62
62
63 FIXME: This should point to a commit that actually exists in the repo. Otherwise
63 Demonstrate that the linknode points to a commit that is actually in the repo
64 remotefilelog has to search every commit in the repository looking for a valid
64 after the strip operation. Otherwise remotefilelog has to search every commit in
65 linkrev every time it's queried, such as during push.
65 the repository looking for a valid linkrev every time it's queried, such as
66 during push.
66 $ hg debug-file-linknode -r 70494d a
67 $ hg debug-file-linknode -r 70494d a
67 df91f74b871e064c89afa1fe9e2f66afa2c125df
68 70494d7ec5ef6cd3cd6939a9fd2812f9956bf553
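The changed expectation above is the point of the fix: the linknode now
resolves to 70494d7ec5ef, a commit that still exists after the strip,
instead of the stripped df91f74b871e.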