remotefilelog: remove the `ensurestart` usage...
marmoute
r44303:612b4b63 stable
@@ -1,1303 +1,1289 @@
# __init__.py - remotefilelog extension
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
7 """remotefilelog causes Mercurial to lazilly fetch file contents (EXPERIMENTAL)
7 """remotefilelog causes Mercurial to lazilly fetch file contents (EXPERIMENTAL)

This extension is HIGHLY EXPERIMENTAL. There are NO BACKWARDS COMPATIBILITY
GUARANTEES. This means that repositories created with this extension may
only be usable with the exact version of this extension/Mercurial that was
used. The extension attempts to enforce this in order to prevent repository
corruption.

remotefilelog works by fetching file contents lazily and storing them
in a cache on the client rather than in revlogs. This allows enormous
histories to be transferred only partially, making them easier to
operate on.

Configs:

``packs.maxchainlen`` specifies the maximum delta chain length in pack files

``packs.maxpacksize`` specifies the maximum pack file size

``packs.maxpackfilecount`` specifies the maximum number of packs in the
shared cache (trees only for now)

``remotefilelog.backgroundprefetch`` runs prefetch in background when True

``remotefilelog.bgprefetchrevs`` specifies revisions to fetch on commit and
update, and on other commands that use them. Different from pullprefetch.

``remotefilelog.gcrepack`` does garbage collection during repack when True

``remotefilelog.nodettl`` specifies maximum TTL of a node in seconds before
it is garbage collected

``remotefilelog.repackonhggc`` runs repack on hg gc when True

``remotefilelog.prefetchdays`` specifies the maximum age of a commit in
days after which it is no longer prefetched.

``remotefilelog.prefetchdelay`` specifies delay between background
prefetches in seconds after operations that change the working copy parent

``remotefilelog.data.gencountlimit`` constrains the minimum number of data
pack files required to be considered part of a generation. In particular,
minimum number of pack files > gencountlimit.

``remotefilelog.data.generations`` list for specifying the lower bound of
each generation of the data pack files. For example, list ['100MB','1MB']
or ['1MB', '100MB'] will lead to three generations: [0, 1MB), [
1MB, 100MB) and [100MB, infinity).

``remotefilelog.data.maxrepackpacks`` the maximum number of pack files to
include in an incremental data repack.

``remotefilelog.data.repackmaxpacksize`` the maximum size of a pack file for
it to be considered for an incremental data repack.

``remotefilelog.data.repacksizelimit`` the maximum total size of pack files
to include in an incremental data repack.

``remotefilelog.history.gencountlimit`` constrains the minimum number of
history pack files required to be considered part of a generation. In
particular, minimum number of pack files > gencountlimit.

``remotefilelog.history.generations`` list for specifying the lower bound of
each generation of the history pack files. For example, list [
'100MB', '1MB'] or ['1MB', '100MB'] will lead to three generations: [
0, 1MB), [1MB, 100MB) and [100MB, infinity).

``remotefilelog.history.maxrepackpacks`` the maximum number of pack files to
include in an incremental history repack.

``remotefilelog.history.repackmaxpacksize`` the maximum size of a pack file
for it to be considered for an incremental history repack.

``remotefilelog.history.repacksizelimit`` the maximum total size of pack
files to include in an incremental history repack.

``remotefilelog.backgroundrepack`` automatically consolidate packs in the
background

``remotefilelog.cachepath`` path to cache

``remotefilelog.cachegroup`` if set, make cache directory sgid to this
group

``remotefilelog.cacheprocess`` binary to invoke for fetching file data

``remotefilelog.debug`` turn on remotefilelog-specific debug output

``remotefilelog.excludepattern`` pattern of files to exclude from pulls

``remotefilelog.includepattern`` pattern of files to include in pulls

``remotefilelog.fetchwarning``: message to print when too many
single-file fetches occur

``remotefilelog.getfilesstep`` number of files to request in a single RPC

``remotefilelog.getfilestype`` if set to 'threaded' use threads to fetch
files, otherwise use optimistic fetching

``remotefilelog.pullprefetch`` revset for selecting files that should be
eagerly downloaded rather than lazily

``remotefilelog.reponame`` name of the repo. If set, used to partition
data from other repos in a shared store.

``remotefilelog.server`` if true, enable server-side functionality

``remotefilelog.servercachepath`` path for caching blobs on the server

``remotefilelog.serverexpiration`` number of days to keep cached server
blobs

``remotefilelog.validatecache`` if set, check cache entries for corruption
before returning blobs

``remotefilelog.validatecachelog`` if set, check cache entries for
corruption before returning metadata

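A minimal illustrative client configuration (the path and names below are
placeholders, not recommendations) could look like::

  [extensions]
  remotefilelog =

  [remotefilelog]
  cachepath = /path/to/shared/cache
  reponame = myrepo
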
126 """
126 """
from __future__ import absolute_import

import os
import time
import traceback

from mercurial.node import hex
from mercurial.i18n import _
from mercurial.pycompat import open
from mercurial import (
    changegroup,
    changelog,
    cmdutil,
    commands,
    configitems,
    context,
    copies,
    debugcommands as hgdebugcommands,
    dispatch,
    error,
    exchange,
    extensions,
    hg,
    localrepo,
    match,
    merge,
    node as nodemod,
    patch,
    pycompat,
    registrar,
    repair,
    repoview,
    revset,
    scmutil,
    smartset,
    streamclone,
    util,
)
from . import (
    constants,
    debugcommands,
    fileserverclient,
    remotefilectx,
    remotefilelog,
    remotefilelogserver,
    repack as repackmod,
    shallowbundle,
    shallowrepo,
    shallowstore,
    shallowutil,
    shallowverifier,
)

# ensures debug commands are registered
hgdebugcommands.command

cmdtable = {}
command = registrar.command(cmdtable)

configtable = {}
configitem = registrar.configitem(configtable)

configitem(b'remotefilelog', b'debug', default=False)

configitem(b'remotefilelog', b'reponame', default=b'')
configitem(b'remotefilelog', b'cachepath', default=None)
configitem(b'remotefilelog', b'cachegroup', default=None)
configitem(b'remotefilelog', b'cacheprocess', default=None)
configitem(b'remotefilelog', b'cacheprocess.includepath', default=None)
configitem(b"remotefilelog", b"cachelimit", default=b"1000 GB")

configitem(
    b'remotefilelog',
    b'fallbackpath',
    default=configitems.dynamicdefault,
    alias=[(b'remotefilelog', b'fallbackrepo')],
)

configitem(b'remotefilelog', b'validatecachelog', default=None)
configitem(b'remotefilelog', b'validatecache', default=b'on')
configitem(b'remotefilelog', b'server', default=None)
configitem(b'remotefilelog', b'servercachepath', default=None)
configitem(b"remotefilelog", b"serverexpiration", default=30)
configitem(b'remotefilelog', b'backgroundrepack', default=False)
configitem(b'remotefilelog', b'bgprefetchrevs', default=None)
configitem(b'remotefilelog', b'pullprefetch', default=None)
configitem(b'remotefilelog', b'backgroundprefetch', default=False)
configitem(b'remotefilelog', b'prefetchdelay', default=120)
configitem(b'remotefilelog', b'prefetchdays', default=14)

configitem(b'remotefilelog', b'getfilesstep', default=10000)
configitem(b'remotefilelog', b'getfilestype', default=b'optimistic')
configitem(b'remotefilelog', b'batchsize', configitems.dynamicdefault)
configitem(b'remotefilelog', b'fetchwarning', default=b'')

configitem(b'remotefilelog', b'includepattern', default=None)
configitem(b'remotefilelog', b'excludepattern', default=None)

configitem(b'remotefilelog', b'gcrepack', default=False)
configitem(b'remotefilelog', b'repackonhggc', default=False)
configitem(b'repack', b'chainorphansbysize', default=True, experimental=True)

configitem(b'packs', b'maxpacksize', default=0)
configitem(b'packs', b'maxchainlen', default=1000)

-configitem(b'devel', b'remotefilelog.ensurestart', default=False)
configitem(b'devel', b'remotefilelog.bg-wait', default=False)

# default TTL limit is 30 days
_defaultlimit = 60 * 60 * 24 * 30
configitem(b'remotefilelog', b'nodettl', default=_defaultlimit)

configitem(b'remotefilelog', b'data.gencountlimit', default=2)
configitem(
    b'remotefilelog', b'data.generations', default=[b'1GB', b'100MB', b'1MB']
)
configitem(b'remotefilelog', b'data.maxrepackpacks', default=50)
configitem(b'remotefilelog', b'data.repackmaxpacksize', default=b'4GB')
configitem(b'remotefilelog', b'data.repacksizelimit', default=b'100MB')

configitem(b'remotefilelog', b'history.gencountlimit', default=2)
configitem(b'remotefilelog', b'history.generations', default=[b'100MB'])
configitem(b'remotefilelog', b'history.maxrepackpacks', default=50)
configitem(b'remotefilelog', b'history.repackmaxpacksize', default=b'400MB')
configitem(b'remotefilelog', b'history.repacksizelimit', default=b'100MB')

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = b'ships-with-hg-core'

repoclass = localrepo.localrepository
repoclass._basesupported.add(constants.SHALLOWREPO_REQUIREMENT)

isenabled = shallowutil.isenabled


def uisetup(ui):
    """Wraps user facing Mercurial commands to swap them out with shallow
    versions.
    """
    hg.wirepeersetupfuncs.append(fileserverclient.peersetup)

    entry = extensions.wrapcommand(commands.table, b'clone', cloneshallow)
    entry[1].append(
        (
            b'',
            b'shallow',
            None,
            _(b"create a shallow clone which uses remote file history"),
        )
    )

    extensions.wrapcommand(
        commands.table, b'debugindex', debugcommands.debugindex
    )
    extensions.wrapcommand(
        commands.table, b'debugindexdot', debugcommands.debugindexdot
    )
    extensions.wrapcommand(commands.table, b'log', log)
    extensions.wrapcommand(commands.table, b'pull', pull)

    # Prevent 'hg manifest --all'
    def _manifest(orig, ui, repo, *args, **opts):
        if isenabled(repo) and opts.get(r'all'):
            raise error.Abort(_(b"--all is not supported in a shallow repo"))

        return orig(ui, repo, *args, **opts)

    extensions.wrapcommand(commands.table, b"manifest", _manifest)

    # Wrap remotefilelog with lfs code
    def _lfsloaded(loaded=False):
        lfsmod = None
        try:
            lfsmod = extensions.find(b'lfs')
        except KeyError:
            pass
        if lfsmod:
            lfsmod.wrapfilelog(remotefilelog.remotefilelog)
            fileserverclient._lfsmod = lfsmod

    extensions.afterloaded(b'lfs', _lfsloaded)

    # debugdata needs remotefilelog.len to work
    extensions.wrapcommand(commands.table, b'debugdata', debugdatashallow)

    changegroup.cgpacker = shallowbundle.shallowcg1packer

    extensions.wrapfunction(
        changegroup, b'_addchangegroupfiles', shallowbundle.addchangegroupfiles
    )
    extensions.wrapfunction(
        changegroup, b'makechangegroup', shallowbundle.makechangegroup
    )
    extensions.wrapfunction(localrepo, b'makestore', storewrapper)
    extensions.wrapfunction(exchange, b'pull', exchangepull)
    extensions.wrapfunction(merge, b'applyupdates', applyupdates)
    extensions.wrapfunction(merge, b'_checkunknownfiles', checkunknownfiles)
    extensions.wrapfunction(context.workingctx, b'_checklookup', checklookup)
    extensions.wrapfunction(scmutil, b'_findrenames', findrenames)
    extensions.wrapfunction(
        copies, b'_computeforwardmissing', computeforwardmissing
    )
    extensions.wrapfunction(dispatch, b'runcommand', runcommand)
    extensions.wrapfunction(repair, b'_collectbrokencsets', _collectbrokencsets)
    extensions.wrapfunction(context.changectx, b'filectx', filectx)
    extensions.wrapfunction(context.workingctx, b'filectx', workingfilectx)
    extensions.wrapfunction(patch, b'trydiff', trydiff)
    extensions.wrapfunction(hg, b'verify', _verify)
    scmutil.fileprefetchhooks.add(b'remotefilelog', _fileprefetchhook)

    # disappointing hacks below
    extensions.wrapfunction(scmutil, b'getrenamedfn', getrenamedfn)
    extensions.wrapfunction(revset, b'filelog', filelogrevset)
    revset.symbols[b'filelog'] = revset.filelog
    extensions.wrapfunction(cmdutil, b'walkfilerevs', walkfilerevs)


def cloneshallow(orig, ui, repo, *args, **opts):
    if opts.get(r'shallow'):
        repos = []

        def pull_shallow(orig, self, *args, **kwargs):
            if not isenabled(self):
                repos.append(self.unfiltered())
                # set up the client hooks so the post-clone update works
                setupclient(self.ui, self.unfiltered())

                # setupclient fixed the class on the repo itself
                # but we also need to fix it on the repoview
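                # (a repoview class derives from the repo's class, so its
                # second base is swapped for the freshly wrapped
                # unfiltered class)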
                if isinstance(self, repoview.repoview):
                    self.__class__.__bases__ = (
                        self.__class__.__bases__[0],
                        self.unfiltered().__class__,
                    )
                self.requirements.add(constants.SHALLOWREPO_REQUIREMENT)
                self._writerequirements()

                # Since setupclient hadn't been called, exchange.pull was not
                # wrapped. So we need to manually invoke our version of it.
                return exchangepull(orig, self, *args, **kwargs)
            else:
                return orig(self, *args, **kwargs)

        extensions.wrapfunction(exchange, b'pull', pull_shallow)

        # Wrap the stream logic to add requirements and to pass include/exclude
        # patterns around.
        def setup_streamout(repo, remote):
            # Replace remote.stream_out with a version that sends file
            # patterns.
            def stream_out_shallow(orig):
                caps = remote.capabilities()
                if constants.NETWORK_CAP_LEGACY_SSH_GETFILES in caps:
                    opts = {}
                    if repo.includepattern:
                        opts[r'includepattern'] = b'\0'.join(
                            repo.includepattern
                        )
                    if repo.excludepattern:
                        opts[r'excludepattern'] = b'\0'.join(
                            repo.excludepattern
                        )
                    return remote._callstream(b'stream_out_shallow', **opts)
                else:
                    return orig()

            extensions.wrapfunction(remote, b'stream_out', stream_out_shallow)

        def stream_wrap(orig, op):
            setup_streamout(op.repo, op.remote)
            return orig(op)

        extensions.wrapfunction(
            streamclone, b'maybeperformlegacystreamclone', stream_wrap
        )

        def canperformstreamclone(orig, pullop, bundle2=False):
            # remotefilelog is currently incompatible with the
            # bundle2 flavor of streamclones, so force us to use
            # v1 instead.
            if b'v2' in pullop.remotebundle2caps.get(b'stream', []):
                pullop.remotebundle2caps[b'stream'] = [
                    c for c in pullop.remotebundle2caps[b'stream'] if c != b'v2'
                ]
            if bundle2:
                return False, None
            supported, requirements = orig(pullop, bundle2=bundle2)
            if requirements is not None:
                requirements.add(constants.SHALLOWREPO_REQUIREMENT)
            return supported, requirements

        extensions.wrapfunction(
            streamclone, b'canperformstreamclone', canperformstreamclone
        )

    try:
        orig(ui, repo, *args, **opts)
    finally:
        if opts.get(r'shallow'):
            for r in repos:
                if util.safehasattr(r, b'fileservice'):
                    r.fileservice.close()


def debugdatashallow(orig, *args, **kwds):
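    # debugdata needs a filelog length; remotefilelog has no local revlog
    # to count, so pretend there is exactly one revision for the call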
    oldlen = remotefilelog.remotefilelog.__len__
    try:
        remotefilelog.remotefilelog.__len__ = lambda x: 1
        return orig(*args, **kwds)
    finally:
        remotefilelog.remotefilelog.__len__ = oldlen


def reposetup(ui, repo):
    if not repo.local():
        return

    # put here intentionally because it doesn't work in uisetup
    ui.setconfig(b'hooks', b'update.prefetch', wcpprefetch)
    ui.setconfig(b'hooks', b'commit.prefetch', wcpprefetch)

    isserverenabled = ui.configbool(b'remotefilelog', b'server')
    isshallowclient = isenabled(repo)

    if isserverenabled and isshallowclient:
        raise RuntimeError(b"Cannot be both a server and shallow client.")

    if isshallowclient:
        setupclient(ui, repo)

    if isserverenabled:
        remotefilelogserver.setupserver(ui, repo)


def setupclient(ui, repo):
    if not isinstance(repo, localrepo.localrepository):
        return

    # Even clients get the server setup since they need to have the
    # wireprotocol endpoints registered.
    remotefilelogserver.onetimesetup(ui)
    onetimeclientsetup(ui)

    shallowrepo.wraprepo(repo)
    repo.store = shallowstore.wrapstore(repo.store)


def storewrapper(orig, requirements, path, vfstype):
    s = orig(requirements, path, vfstype)
    if constants.SHALLOWREPO_REQUIREMENT in requirements:
        s = shallowstore.wrapstore(s)

    return s


# prefetch files before update
def applyupdates(
    orig, repo, actions, wctx, mctx, overwrite, wantfiledata, labels=None
):
    if isenabled(repo):
        manifest = mctx.manifest()
        files = []
        for f, args, msg in actions[b'g']:
            files.append((f, hex(manifest[f])))
        # batch fetch the needed files from the server
        repo.fileservice.prefetch(files)
    return orig(
        repo, actions, wctx, mctx, overwrite, wantfiledata, labels=labels
    )


# Prefetch merge checkunknownfiles
def checkunknownfiles(orig, repo, wctx, mctx, force, actions, *args, **kwargs):
    if isenabled(repo):
        files = []
        sparsematch = repo.maybesparsematch(mctx.rev())
        for f, (m, actionargs, msg) in pycompat.iteritems(actions):
            if sparsematch and not sparsematch(f):
                continue
            if m in (b'c', b'dc', b'cm'):
                files.append((f, hex(mctx.filenode(f))))
            elif m == b'dg':
                f2 = actionargs[0]
                files.append((f2, hex(mctx.filenode(f2))))
        # batch fetch the needed files from the server
        repo.fileservice.prefetch(files)
    return orig(repo, wctx, mctx, force, actions, *args, **kwargs)


# Prefetch files before status attempts to look at their size and contents
def checklookup(orig, self, files):
    repo = self._repo
    if isenabled(repo):
        prefetchfiles = []
        for parent in self._parents:
            for f in files:
                if f in parent:
                    prefetchfiles.append((f, hex(parent.filenode(f))))
        # batch fetch the needed files from the server
        repo.fileservice.prefetch(prefetchfiles)
    return orig(self, files)


# Prefetch the logic that compares added and removed files for renames
def findrenames(orig, repo, matcher, added, removed, *args, **kwargs):
    if isenabled(repo):
        files = []
        pmf = repo[b'.'].manifest()
        for f in removed:
            if f in pmf:
                files.append((f, hex(pmf[f])))
        # batch fetch the needed files from the server
        repo.fileservice.prefetch(files)
    return orig(repo, matcher, added, removed, *args, **kwargs)


# prefetch files before pathcopies check
def computeforwardmissing(orig, a, b, match=None):
    missing = orig(a, b, match=match)
    repo = a._repo
    if isenabled(repo):
        mb = b.manifest()

        files = []
        sparsematch = repo.maybesparsematch(b.rev())
        if sparsematch:
            sparsemissing = set()
            for f in missing:
                if sparsematch(f):
                    files.append((f, hex(mb[f])))
                    sparsemissing.add(f)
            missing = sparsemissing

        # batch fetch the needed files from the server
        repo.fileservice.prefetch(files)
    return missing


# close cache miss server connection after the command has finished
def runcommand(orig, lui, repo, *args, **kwargs):
    fileservice = None
    # repo can be None when running in chg:
    # - at startup, reposetup was called because serve is not norepo
    # - a norepo command like "help" is called
    if repo and isenabled(repo):
        fileservice = repo.fileservice
    try:
        return orig(lui, repo, *args, **kwargs)
    finally:
        if fileservice:
            fileservice.close()


# prevent strip from stripping remotefilelogs
def _collectbrokencsets(orig, repo, files, striprev):
    if isenabled(repo):
        files = list([f for f in files if not repo.shallowmatch(f)])
    return orig(repo, files, striprev)


# changectx wrappers
def filectx(orig, self, path, fileid=None, filelog=None):
    if fileid is None:
        fileid = self.filenode(path)
    if isenabled(self._repo) and self._repo.shallowmatch(path):
        return remotefilectx.remotefilectx(
            self._repo, path, fileid=fileid, changectx=self, filelog=filelog
        )
    return orig(self, path, fileid=fileid, filelog=filelog)


def workingfilectx(orig, self, path, filelog=None):
    if isenabled(self._repo) and self._repo.shallowmatch(path):
        return remotefilectx.remoteworkingfilectx(
            self._repo, path, workingctx=self, filelog=filelog
        )
    return orig(self, path, filelog=filelog)


# prefetch required revisions before a diff
def trydiff(
    orig,
    repo,
    revs,
    ctx1,
    ctx2,
    modified,
    added,
    removed,
    copy,
    getfilectx,
    *args,
    **kwargs
):
    if isenabled(repo):
        prefetch = []
        mf1 = ctx1.manifest()
        for fname in modified + added + removed:
            if fname in mf1:
                fnode = getfilectx(fname, ctx1).filenode()
                # fnode can be None if it's an edited working ctx file
                if fnode:
                    prefetch.append((fname, hex(fnode)))
            if fname not in removed:
                fnode = getfilectx(fname, ctx2).filenode()
                if fnode:
                    prefetch.append((fname, hex(fnode)))

        repo.fileservice.prefetch(prefetch)

    return orig(
        repo,
        revs,
        ctx1,
        ctx2,
        modified,
        added,
        removed,
        copy,
        getfilectx,
        *args,
        **kwargs
    )


# Prevent verify from processing files
# a stub for mercurial.hg.verify()
def _verify(orig, repo, level=None):
    lock = repo.lock()
    try:
        return shallowverifier.shallowverifier(repo).verify()
    finally:
        lock.release()


clientonetime = False


def onetimeclientsetup(ui):
    global clientonetime
    if clientonetime:
        return
    clientonetime = True

    # Don't commit filelogs until we know the commit hash, since the hash
    # is present in the filelog blob.
    # This violates Mercurial's filelog->manifest->changelog write order,
    # but is generally fine for client repos.
    pendingfilecommits = []
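    # (this queue is drained by the changelogadd wrapper below: once the
    # changelog entry exists and the real linknode is known, the buffered
    # file revisions are written out with it)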

    def addrawrevision(
        orig,
        self,
        rawtext,
        transaction,
        link,
        p1,
        p2,
        node,
        flags,
        cachedelta=None,
        _metatuple=None,
    ):
        if isinstance(link, int):
            pendingfilecommits.append(
                (
                    self,
                    rawtext,
                    transaction,
                    link,
                    p1,
                    p2,
                    node,
                    flags,
                    cachedelta,
                    _metatuple,
                )
            )
            return node
        else:
            return orig(
                self,
                rawtext,
                transaction,
                link,
                p1,
                p2,
                node,
                flags,
                cachedelta,
                _metatuple=_metatuple,
            )

    extensions.wrapfunction(
        remotefilelog.remotefilelog, b'addrawrevision', addrawrevision
    )

    def changelogadd(orig, self, *args):
        oldlen = len(self)
        node = orig(self, *args)
        newlen = len(self)
        if oldlen != newlen:
            for oldargs in pendingfilecommits:
                log, rt, tr, link, p1, p2, n, fl, c, m = oldargs
                linknode = self.node(link)
                if linknode == node:
                    log.addrawrevision(rt, tr, linknode, p1, p2, n, fl, c, m)
                else:
                    raise error.ProgrammingError(
                        b'pending multiple integer revisions are not supported'
                    )
        else:
            # "link" is actually wrong here (it is set to len(changelog))
            # if changelog remains unchanged, skip writing file revisions
            # but still do a sanity check about pending multiple revisions
            if len(set(x[3] for x in pendingfilecommits)) > 1:
                raise error.ProgrammingError(
                    b'pending multiple integer revisions are not supported'
                )
        del pendingfilecommits[:]
        return node

    extensions.wrapfunction(changelog.changelog, b'add', changelogadd)


def getrenamedfn(orig, repo, endrev=None):
    if not isenabled(repo) or copies.usechangesetcentricalgo(repo):
        return orig(repo, endrev)

    rcache = {}

    def getrenamed(fn, rev):
        '''looks up all renames for a file (up to endrev) the first
        time the file is given. It indexes on the changerev and only
        parses the manifest if linkrev != changerev.
        Returns rename info for fn at changerev rev.'''
        if rev in rcache.setdefault(fn, {}):
            return rcache[fn][rev]

        try:
            fctx = repo[rev].filectx(fn)
            for ancestor in fctx.ancestors():
                if ancestor.path() == fn:
                    renamed = ancestor.renamed()
                    rcache[fn][ancestor.rev()] = renamed and renamed[0]

            renamed = fctx.renamed()
            return renamed and renamed[0]
        except error.LookupError:
            return None

    return getrenamed


def walkfilerevs(orig, repo, match, follow, revs, fncache):
    if not isenabled(repo):
        return orig(repo, match, follow, revs, fncache)

    # remotefilelogs can't be walked in rev order, so throw.
    # The caller will see the exception and walk the commit tree instead.
    if not follow:
        raise cmdutil.FileWalkError(b"Cannot walk via filelog")

    wanted = set()
    minrev, maxrev = min(revs), max(revs)

    pctx = repo[b'.']
    for filename in match.files():
        if filename not in pctx:
            raise error.Abort(
                _(b'cannot follow file not in parent revision: "%s"') % filename
            )
        fctx = pctx[filename]

        linkrev = fctx.linkrev()
        if linkrev >= minrev and linkrev <= maxrev:
            fncache.setdefault(linkrev, []).append(filename)
            wanted.add(linkrev)

        for ancestor in fctx.ancestors():
            linkrev = ancestor.linkrev()
            if linkrev >= minrev and linkrev <= maxrev:
                fncache.setdefault(linkrev, []).append(ancestor.path())
                wanted.add(linkrev)

    return wanted


def filelogrevset(orig, repo, subset, x):
    """``filelog(pattern)``
    Changesets connected to the specified filelog.

    For performance reasons, ``filelog()`` does not show every changeset
    that affects the requested file(s). See :hg:`help log` for details. For
    a slower, more accurate result, use ``file()``.
    """

    if not isenabled(repo):
        return orig(repo, subset, x)

    # i18n: "filelog" is a keyword
    pat = revset.getstring(x, _(b"filelog requires a pattern"))
    m = match.match(
        repo.root, repo.getcwd(), [pat], default=b'relpath', ctx=repo[None]
    )
    s = set()

    if not match.patkind(pat):
        # slow
        for r in subset:
            ctx = repo[r]
            cfiles = ctx.files()
            for f in m.files():
                if f in cfiles:
                    s.add(ctx.rev())
                    break
    else:
        # partial
        files = (f for f in repo[None] if m(f))
        for f in files:
            fctx = repo[None].filectx(f)
            s.add(fctx.linkrev())
            for actx in fctx.ancestors():
                s.add(actx.linkrev())

    return smartset.baseset([r for r in subset if r in s])


@command(b'gc', [], _(b'hg gc [REPO...]'), norepo=True)
def gc(ui, *args, **opts):
    '''garbage collect the client and server filelog caches
    '''
    cachepaths = set()

    # get the system client cache
    systemcache = shallowutil.getcachepath(ui, allowempty=True)
    if systemcache:
        cachepaths.add(systemcache)

    # get repo client and server cache
    repopaths = []
    pwd = ui.environ.get(b'PWD')
    if pwd:
        repopaths.append(pwd)

    repopaths.extend(args)
    repos = []
    for repopath in repopaths:
        try:
            repo = hg.peer(ui, {}, repopath)
            repos.append(repo)

            repocache = shallowutil.getcachepath(repo.ui, allowempty=True)
            if repocache:
                cachepaths.add(repocache)
        except error.RepoError:
            pass

    # gc client cache
    for cachepath in cachepaths:
        gcclient(ui, cachepath)

    # gc server cache
    for repo in repos:
        remotefilelogserver.gcserver(ui, repo._repo)


def gcclient(ui, cachepath):
    # get list of repos that use this cache
    repospath = os.path.join(cachepath, b'repos')
    if not os.path.exists(repospath):
        ui.warn(_(b"no known cache at %s\n") % cachepath)
        return

904 reposfile = open(repospath, b'rb')
903 reposfile = open(repospath, b'rb')
905 repos = {r[:-1] for r in reposfile.readlines()}
904 repos = {r[:-1] for r in reposfile.readlines()}
906 reposfile.close()
905 reposfile.close()
907
906
908 # build list of useful files
907 # build list of useful files
909 validrepos = []
908 validrepos = []
910 keepkeys = set()
909 keepkeys = set()
911
910
912 sharedcache = None
911 sharedcache = None
913 filesrepacked = False
912 filesrepacked = False
914
913
915 count = 0
914 count = 0
916 progress = ui.makeprogress(
915 progress = ui.makeprogress(
917 _(b"analyzing repositories"), unit=b"repos", total=len(repos)
916 _(b"analyzing repositories"), unit=b"repos", total=len(repos)
918 )
917 )
919 for path in repos:
918 for path in repos:
920 progress.update(count)
919 progress.update(count)
921 count += 1
920 count += 1
922 try:
921 try:
923 path = ui.expandpath(os.path.normpath(path))
922 path = ui.expandpath(os.path.normpath(path))
924 except TypeError as e:
923 except TypeError as e:
925 ui.warn(_(b"warning: malformed path: %r:%s\n") % (path, e))
924 ui.warn(_(b"warning: malformed path: %r:%s\n") % (path, e))
926 traceback.print_exc()
925 traceback.print_exc()
927 continue
926 continue
928 try:
927 try:
929 peer = hg.peer(ui, {}, path)
928 peer = hg.peer(ui, {}, path)
930 repo = peer._repo
929 repo = peer._repo
931 except error.RepoError:
930 except error.RepoError:
932 continue
931 continue
933
932
934 validrepos.append(path)
933 validrepos.append(path)
935
934
936 # Protect against any repo or config changes that have happened since
935 # Protect against any repo or config changes that have happened since
937 # this repo was added to the repos file. We'd rather this loop succeed
936 # this repo was added to the repos file. We'd rather this loop succeed
938 # and too much be deleted, than the loop fail and nothing gets deleted.
937 # and too much be deleted, than the loop fail and nothing gets deleted.
939 if not isenabled(repo):
938 if not isenabled(repo):
940 continue
939 continue
941
940
942 if not util.safehasattr(repo, b'name'):
941 if not util.safehasattr(repo, b'name'):
943 ui.warn(
942 ui.warn(
944 _(b"repo %s is a misconfigured remotefilelog repo\n") % path
943 _(b"repo %s is a misconfigured remotefilelog repo\n") % path
945 )
944 )
946 continue
945 continue
947
946
948 # If garbage collection on repack and repack on hg gc are enabled
947 # If garbage collection on repack and repack on hg gc are enabled
949 # then loose files are repacked and garbage collected.
948 # then loose files are repacked and garbage collected.
950 # Otherwise regular garbage collection is performed.
949 # Otherwise regular garbage collection is performed.
951 repackonhggc = repo.ui.configbool(b'remotefilelog', b'repackonhggc')
950 repackonhggc = repo.ui.configbool(b'remotefilelog', b'repackonhggc')
952 gcrepack = repo.ui.configbool(b'remotefilelog', b'gcrepack')
951 gcrepack = repo.ui.configbool(b'remotefilelog', b'gcrepack')
953 if repackonhggc and gcrepack:
952 if repackonhggc and gcrepack:
954 try:
953 try:
955 repackmod.incrementalrepack(repo)
954 repackmod.incrementalrepack(repo)
956 filesrepacked = True
955 filesrepacked = True
957 continue
956 continue
958 except (IOError, repackmod.RepackAlreadyRunning):
957 except (IOError, repackmod.RepackAlreadyRunning):
959 # If repack cannot be performed due to not enough disk space
958 # If repack cannot be performed due to not enough disk space
960 # continue doing garbage collection of loose files w/o repack
959 # continue doing garbage collection of loose files w/o repack
961 pass
960 pass
962
961
963 reponame = repo.name
962 reponame = repo.name
964 if not sharedcache:
963 if not sharedcache:
965 sharedcache = repo.sharedstore
964 sharedcache = repo.sharedstore
966
965
967 # Compute a keepset which is not garbage collected
966 # Compute a keepset which is not garbage collected
968 def keyfn(fname, fnode):
967 def keyfn(fname, fnode):
969 return fileserverclient.getcachekey(reponame, fname, hex(fnode))
968 return fileserverclient.getcachekey(reponame, fname, hex(fnode))
970
969
971 keepkeys = repackmod.keepset(repo, keyfn=keyfn, lastkeepkeys=keepkeys)
970 keepkeys = repackmod.keepset(repo, keyfn=keyfn, lastkeepkeys=keepkeys)
972
971
973 progress.complete()
972 progress.complete()
974
973
975 # write list of valid repos back
974 # write list of valid repos back
976 oldumask = os.umask(0o002)
975 oldumask = os.umask(0o002)
977 try:
976 try:
978 reposfile = open(repospath, b'wb')
977 reposfile = open(repospath, b'wb')
979 reposfile.writelines([(b"%s\n" % r) for r in validrepos])
978 reposfile.writelines([(b"%s\n" % r) for r in validrepos])
980 reposfile.close()
979 reposfile.close()
981 finally:
980 finally:
982 os.umask(oldumask)
981 os.umask(oldumask)
983
982
984 # prune cache
983 # prune cache
985 if sharedcache is not None:
984 if sharedcache is not None:
986 sharedcache.gc(keepkeys)
985 sharedcache.gc(keepkeys)
987 elif not filesrepacked:
986 elif not filesrepacked:
988 ui.warn(_(b"warning: no valid repos in repofile\n"))
987 ui.warn(_(b"warning: no valid repos in repofile\n"))
989
988
990
989
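The shape of gcclient above is: read the list of repos that share the cache, rebuild the set of cache keys that must survive, rewrite the repos file with only the repos that still resolve, and prune everything else. A simplified sketch of the final prune step, assuming a hypothetical layout where each cache key is a path relative to the cache directory (the real shared store has its own format):

    import os

    def prune_cache(cachedir, keepkeys):
        # Delete any cached file whose key is not in the keep set.
        for root, _dirs, files in os.walk(cachedir):
            for name in files:
                path = os.path.join(root, name)
                key = os.path.relpath(path, cachedir)
                if key not in keepkeys:
                    os.unlink(path)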
991 def log(orig, ui, repo, *pats, **opts):
990 def log(orig, ui, repo, *pats, **opts):
992 if not isenabled(repo):
991 if not isenabled(repo):
993 return orig(ui, repo, *pats, **opts)
992 return orig(ui, repo, *pats, **opts)
994
993
995 follow = opts.get(r'follow')
994 follow = opts.get(r'follow')
996 revs = opts.get(r'rev')
995 revs = opts.get(r'rev')
997 if pats:
996 if pats:
998 # Force slowpath for non-follow patterns and follows that start from
997 # Force slowpath for non-follow patterns and follows that start from
999 # non-working-copy-parent revs.
998 # non-working-copy-parent revs.
1000 if not follow or revs:
999 if not follow or revs:
1001 # This forces the slowpath
1000 # This forces the slowpath
1002 opts[r'removed'] = True
1001 opts[r'removed'] = True
1003
1002
1004 # If this is a non-follow log without any revs specified, recommend that
1003 # If this is a non-follow log without any revs specified, recommend that
1005 # the user add -f to speed it up.
1004 # the user add -f to speed it up.
1006 if not follow and not revs:
1005 if not follow and not revs:
1007 match = scmutil.match(repo[b'.'], pats, pycompat.byteskwargs(opts))
1006 match = scmutil.match(repo[b'.'], pats, pycompat.byteskwargs(opts))
1008 isfile = not match.anypats()
1007 isfile = not match.anypats()
1009 if isfile:
1008 if isfile:
1010 for file in match.files():
1009 for file in match.files():
1011 if not os.path.isfile(repo.wjoin(file)):
1010 if not os.path.isfile(repo.wjoin(file)):
1012 isfile = False
1011 isfile = False
1013 break
1012 break
1014
1013
1015 if isfile:
1014 if isfile:
1016 ui.warn(
1015 ui.warn(
1017 _(
1016 _(
1018 b"warning: file log can be slow on large repos - "
1017 b"warning: file log can be slow on large repos - "
1019 + b"use -f to speed it up\n"
1018 + b"use -f to speed it up\n"
1020 )
1019 )
1021 )
1020 )
1022
1021
1023 return orig(ui, repo, *pats, **opts)
1022 return orig(ui, repo, *pats, **opts)
1024
1023
1025
1024
1026 def revdatelimit(ui, revset):
1025 def revdatelimit(ui, revset):
1027 """Update revset so that only changesets no older than 'prefetchdays' days
1026 """Update revset so that only changesets no older than 'prefetchdays' days
1028 are included. The default value is set to 14 days. If 'prefetchdays' is set
1027 are included. The default value is set to 14 days. If 'prefetchdays' is set
1029 to zero or a negative value, the date restriction is not applied.
1028 to zero or a negative value, the date restriction is not applied.
1030 """
1029 """
1031 days = ui.configint(b'remotefilelog', b'prefetchdays')
1030 days = ui.configint(b'remotefilelog', b'prefetchdays')
1032 if days > 0:
1031 if days > 0:
1033 revset = b'(%s) & date(-%s)' % (revset, days)
1032 revset = b'(%s) & date(-%s)' % (revset, days)
1034 return revset
1033 return revset
1035
1034
1036
1035
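Concretely, with prefetchdays set to 14, the revset draft() becomes '(draft()) & date(-14)'. The same string composition in isolation (a sketch, assuming days has already been read from the config):

    def revdatelimit(revset, days):
        # Restrict the revset to changesets at most 'days' days old.
        if days > 0:
            revset = '(%s) & date(-%d)' % (revset, days)
        return revset

    print(revdatelimit('draft()', 14))  # (draft()) & date(-14)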
1037 def readytofetch(repo):
1036 def readytofetch(repo):
1038 """Check that enough time has passed since the last background prefetch.
1037 """Check that enough time has passed since the last background prefetch.
1039 This only relates to prefetches after operations that change the working
1038 This only relates to prefetches after operations that change the working
1040 copy parent. The default delay between background prefetches is 2 minutes.
1039 copy parent. The default delay between background prefetches is 2 minutes.
1041 """
1040 """
1042 timeout = repo.ui.configint(b'remotefilelog', b'prefetchdelay')
1041 timeout = repo.ui.configint(b'remotefilelog', b'prefetchdelay')
1043 fname = repo.vfs.join(b'lastprefetch')
1042 fname = repo.vfs.join(b'lastprefetch')
1044
1043
1045 ready = False
1044 ready = False
1046 with open(fname, b'a'):
1045 with open(fname, b'a'):
1047 # the with construct above is used to avoid race conditions
1046 # the with construct above is used to avoid race conditions
1048 modtime = os.path.getmtime(fname)
1047 modtime = os.path.getmtime(fname)
1049 if (time.time() - modtime) > timeout:
1048 if (time.time() - modtime) > timeout:
1050 os.utime(fname, None)
1049 os.utime(fname, None)
1051 ready = True
1050 ready = True
1052
1051
1053 return ready
1052 return ready
1054
1053
1055
1054
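The throttle above can be expressed without Mercurial: keep a marker file and report ready only when its mtime is older than the timeout, touching the file as a side effect so the next caller waits again. A minimal sketch using plain os/time (the marker path is hypothetical):

    import os
    import time

    def ready_to_fetch(marker, timeout):
        # Append mode creates the marker if needed and avoids a race
        # between the existence check and the mtime read.
        with open(marker, 'a'):
            modtime = os.path.getmtime(marker)
        if time.time() - modtime > timeout:
            os.utime(marker, None)  # reset the clock for the next caller
            return True
        return False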
1056 def wcpprefetch(ui, repo, **kwargs):
1055 def wcpprefetch(ui, repo, **kwargs):
1057 """Prefetches in background revisions specified by bgprefetchrevs revset.
1056 """Prefetches in background revisions specified by bgprefetchrevs revset.
1058 Does background repack if backgroundrepack flag is set in config.
1057 Does background repack if backgroundrepack flag is set in config.
1059 """
1058 """
1060 shallow = isenabled(repo)
1059 shallow = isenabled(repo)
1061 bgprefetchrevs = ui.config(b'remotefilelog', b'bgprefetchrevs')
1060 bgprefetchrevs = ui.config(b'remotefilelog', b'bgprefetchrevs')
1062 isready = readytofetch(repo)
1061 isready = readytofetch(repo)
1063
1062
1064 if not (shallow and bgprefetchrevs and isready):
1063 if not (shallow and bgprefetchrevs and isready):
1065 return
1064 return
1066
1065
1067 bgrepack = repo.ui.configbool(b'remotefilelog', b'backgroundrepack')
1066 bgrepack = repo.ui.configbool(b'remotefilelog', b'backgroundrepack')
1068 # update a revset with a date limit
1067 # update a revset with a date limit
1069 bgprefetchrevs = revdatelimit(ui, bgprefetchrevs)
1068 bgprefetchrevs = revdatelimit(ui, bgprefetchrevs)
1070
1069
1071 def anon():
1070 def anon():
1072 if util.safehasattr(repo, b'ranprefetch') and repo.ranprefetch:
1071 if util.safehasattr(repo, b'ranprefetch') and repo.ranprefetch:
1073 return
1072 return
1074 repo.ranprefetch = True
1073 repo.ranprefetch = True
1075 repo.backgroundprefetch(bgprefetchrevs, repack=bgrepack)
1074 repo.backgroundprefetch(bgprefetchrevs, repack=bgrepack)
1076
1075
1077 repo._afterlock(anon)
1076 repo._afterlock(anon)
1078
1077
1079
1078
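The anon closure above is a run-once guard: the first call after the lock drops sets ranprefetch, and any later call returns immediately. The same idiom in isolation (a sketch; 'obj' is any hypothetical object, and single-threaded use is assumed):

    def run_once(obj, flag, action):
        # Run 'action' only the first time this flag is seen on 'obj'.
        if getattr(obj, flag, False):
            return
        setattr(obj, flag, True)
        action()

    class Repo(object):
        pass

    r = Repo()
    run_once(r, 'ranprefetch', lambda: print('prefetching'))  # prints once
    run_once(r, 'ranprefetch', lambda: print('prefetching'))  # no-op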
1080 def pull(orig, ui, repo, *pats, **opts):
1079 def pull(orig, ui, repo, *pats, **opts):
1081 result = orig(ui, repo, *pats, **opts)
1080 result = orig(ui, repo, *pats, **opts)
1082
1081
1083 if isenabled(repo):
1082 if isenabled(repo):
1084 # prefetch if it's configured
1083 # prefetch if it's configured
1085 prefetchrevset = ui.config(b'remotefilelog', b'pullprefetch')
1084 prefetchrevset = ui.config(b'remotefilelog', b'pullprefetch')
1086 bgrepack = repo.ui.configbool(b'remotefilelog', b'backgroundrepack')
1085 bgrepack = repo.ui.configbool(b'remotefilelog', b'backgroundrepack')
1087 bgprefetch = repo.ui.configbool(b'remotefilelog', b'backgroundprefetch')
1086 bgprefetch = repo.ui.configbool(b'remotefilelog', b'backgroundprefetch')
1088 ensurestart = repo.ui.configbool(b'devel', b'remotefilelog.ensurestart')
1089
1087
1090 if prefetchrevset:
1088 if prefetchrevset:
1091 ui.status(_(b"prefetching file contents\n"))
1089 ui.status(_(b"prefetching file contents\n"))
1092 revs = scmutil.revrange(repo, [prefetchrevset])
1090 revs = scmutil.revrange(repo, [prefetchrevset])
1093 base = repo[b'.'].rev()
1091 base = repo[b'.'].rev()
1094 if bgprefetch:
1092 if bgprefetch:
1095 repo.backgroundprefetch(
1093 repo.backgroundprefetch(prefetchrevset, repack=bgrepack)
1096 prefetchrevset, repack=bgrepack, ensurestart=ensurestart
1097 )
1098 else:
1094 else:
1099 repo.prefetch(revs, base=base)
1095 repo.prefetch(revs, base=base)
1100 if bgrepack:
1096 if bgrepack:
1101 repackmod.backgroundrepack(
1097 repackmod.backgroundrepack(repo, incremental=True)
1102 repo, incremental=True, ensurestart=ensurestart
1103 )
1104 elif bgrepack:
1098 elif bgrepack:
1105 repackmod.backgroundrepack(
1099 repackmod.backgroundrepack(repo, incremental=True)
1106 repo, incremental=True, ensurestart=ensurestart
1107 )
1108
1100
1109 return result
1101 return result
1110
1102
1111
1103
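With ensurestart gone, the pull hook's decision tree reduces to three outcomes: a background prefetch (with the repack folded into the child process), a foreground prefetch followed by an optional background repack, or a background repack alone. A pure-Python sketch of that dispatch, with booleans standing in for the config reads above:

    def plan_after_pull(prefetchrevset, bgprefetch, bgrepack):
        # Return the ordered list of actions the pull hook would take.
        actions = []
        if prefetchrevset:
            if bgprefetch:
                actions.append('background prefetch (+repack)' if bgrepack
                               else 'background prefetch')
            else:
                actions.append('foreground prefetch')
                if bgrepack:
                    actions.append('background repack')
        elif bgrepack:
            actions.append('background repack')
        return actions

    print(plan_after_pull('draft()', False, True))
    # ['foreground prefetch', 'background repack']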
1112 def exchangepull(orig, repo, remote, *args, **kwargs):
1104 def exchangepull(orig, repo, remote, *args, **kwargs):
1113 # Hook into the callstream/getbundle to insert bundle capabilities
1105 # Hook into the callstream/getbundle to insert bundle capabilities
1114 # during a pull.
1106 # during a pull.
1115 def localgetbundle(
1107 def localgetbundle(
1116 orig, source, heads=None, common=None, bundlecaps=None, **kwargs
1108 orig, source, heads=None, common=None, bundlecaps=None, **kwargs
1117 ):
1109 ):
1118 if not bundlecaps:
1110 if not bundlecaps:
1119 bundlecaps = set()
1111 bundlecaps = set()
1120 bundlecaps.add(constants.BUNDLE2_CAPABLITY)
1112 bundlecaps.add(constants.BUNDLE2_CAPABLITY)
1121 return orig(
1113 return orig(
1122 source, heads=heads, common=common, bundlecaps=bundlecaps, **kwargs
1114 source, heads=heads, common=common, bundlecaps=bundlecaps, **kwargs
1123 )
1115 )
1124
1116
1125 if util.safehasattr(remote, b'_callstream'):
1117 if util.safehasattr(remote, b'_callstream'):
1126 remote._localrepo = repo
1118 remote._localrepo = repo
1127 elif util.safehasattr(remote, b'getbundle'):
1119 elif util.safehasattr(remote, b'getbundle'):
1128 extensions.wrapfunction(remote, b'getbundle', localgetbundle)
1120 extensions.wrapfunction(remote, b'getbundle', localgetbundle)
1129
1121
1130 return orig(repo, remote, *args, **kwargs)
1122 return orig(repo, remote, *args, **kwargs)
1131
1123
1132
1124
1133 def _fileprefetchhook(repo, revs, match):
1125 def _fileprefetchhook(repo, revs, match):
1134 if isenabled(repo):
1126 if isenabled(repo):
1135 allfiles = []
1127 allfiles = []
1136 for rev in revs:
1128 for rev in revs:
1137 if rev == nodemod.wdirrev or rev is None:
1129 if rev == nodemod.wdirrev or rev is None:
1138 continue
1130 continue
1139 ctx = repo[rev]
1131 ctx = repo[rev]
1140 mf = ctx.manifest()
1132 mf = ctx.manifest()
1141 sparsematch = repo.maybesparsematch(ctx.rev())
1133 sparsematch = repo.maybesparsematch(ctx.rev())
1142 for path in ctx.walk(match):
1134 for path in ctx.walk(match):
1143 if (not sparsematch or sparsematch(path)) and path in mf:
1135 if (not sparsematch or sparsematch(path)) and path in mf:
1144 allfiles.append((path, hex(mf[path])))
1136 allfiles.append((path, hex(mf[path])))
1145 repo.fileservice.prefetch(allfiles)
1137 repo.fileservice.prefetch(allfiles)
1146
1138
1147
1139
1148 @command(
1140 @command(
1149 b'debugremotefilelog',
1141 b'debugremotefilelog',
1150 [(b'd', b'decompress', None, _(b'decompress the filelog first')),],
1142 [(b'd', b'decompress', None, _(b'decompress the filelog first')),],
1151 _(b'hg debugremotefilelog <path>'),
1143 _(b'hg debugremotefilelog <path>'),
1152 norepo=True,
1144 norepo=True,
1153 )
1145 )
1154 def debugremotefilelog(ui, path, **opts):
1146 def debugremotefilelog(ui, path, **opts):
1155 return debugcommands.debugremotefilelog(ui, path, **opts)
1147 return debugcommands.debugremotefilelog(ui, path, **opts)
1156
1148
1157
1149
1158 @command(
1150 @command(
1159 b'verifyremotefilelog',
1151 b'verifyremotefilelog',
1160 [(b'd', b'decompress', None, _(b'decompress the filelogs first')),],
1152 [(b'd', b'decompress', None, _(b'decompress the filelogs first')),],
1161 _(b'hg verifyremotefilelogs <directory>'),
1153 _(b'hg verifyremotefilelogs <directory>'),
1162 norepo=True,
1154 norepo=True,
1163 )
1155 )
1164 def verifyremotefilelog(ui, path, **opts):
1156 def verifyremotefilelog(ui, path, **opts):
1165 return debugcommands.verifyremotefilelog(ui, path, **opts)
1157 return debugcommands.verifyremotefilelog(ui, path, **opts)
1166
1158
1167
1159
1168 @command(
1160 @command(
1169 b'debugdatapack',
1161 b'debugdatapack',
1170 [
1162 [
1171 (b'', b'long', None, _(b'print the long hashes')),
1163 (b'', b'long', None, _(b'print the long hashes')),
1172 (b'', b'node', b'', _(b'dump the contents of node'), b'NODE'),
1164 (b'', b'node', b'', _(b'dump the contents of node'), b'NODE'),
1173 ],
1165 ],
1174 _(b'hg debugdatapack <paths>'),
1166 _(b'hg debugdatapack <paths>'),
1175 norepo=True,
1167 norepo=True,
1176 )
1168 )
1177 def debugdatapack(ui, *paths, **opts):
1169 def debugdatapack(ui, *paths, **opts):
1178 return debugcommands.debugdatapack(ui, *paths, **opts)
1170 return debugcommands.debugdatapack(ui, *paths, **opts)
1179
1171
1180
1172
1181 @command(b'debughistorypack', [], _(b'hg debughistorypack <path>'), norepo=True)
1173 @command(b'debughistorypack', [], _(b'hg debughistorypack <path>'), norepo=True)
1182 def debughistorypack(ui, path, **opts):
1174 def debughistorypack(ui, path, **opts):
1183 return debugcommands.debughistorypack(ui, path)
1175 return debugcommands.debughistorypack(ui, path)
1184
1176
1185
1177
1186 @command(b'debugkeepset', [], _(b'hg debugkeepset'))
1178 @command(b'debugkeepset', [], _(b'hg debugkeepset'))
1187 def debugkeepset(ui, repo, **opts):
1179 def debugkeepset(ui, repo, **opts):
1188 # The command is used to measure keepset computation time
1180 # The command is used to measure keepset computation time
1189 def keyfn(fname, fnode):
1181 def keyfn(fname, fnode):
1190 return fileserverclient.getcachekey(repo.name, fname, hex(fnode))
1182 return fileserverclient.getcachekey(repo.name, fname, hex(fnode))
1191
1183
1192 repackmod.keepset(repo, keyfn)
1184 repackmod.keepset(repo, keyfn)
1193 return
1185 return
1194
1186
1195
1187
1196 @command(b'debugwaitonrepack', [], _(b'hg debugwaitonrepack'))
1188 @command(b'debugwaitonrepack', [], _(b'hg debugwaitonrepack'))
1197 def debugwaitonrepack(ui, repo, **opts):
1189 def debugwaitonrepack(ui, repo, **opts):
1198 return debugcommands.debugwaitonrepack(repo)
1190 return debugcommands.debugwaitonrepack(repo)
1199
1191
1200
1192
1201 @command(b'debugwaitonprefetch', [], _(b'hg debugwaitonprefetch'))
1193 @command(b'debugwaitonprefetch', [], _(b'hg debugwaitonprefetch'))
1202 def debugwaitonprefetch(ui, repo, **opts):
1194 def debugwaitonprefetch(ui, repo, **opts):
1203 return debugcommands.debugwaitonprefetch(repo)
1195 return debugcommands.debugwaitonprefetch(repo)
1204
1196
1205
1197
1206 def resolveprefetchopts(ui, opts):
1198 def resolveprefetchopts(ui, opts):
1207 if not opts.get(b'rev'):
1199 if not opts.get(b'rev'):
1208 revset = [b'.', b'draft()']
1200 revset = [b'.', b'draft()']
1209
1201
1210 prefetchrevset = ui.config(b'remotefilelog', b'pullprefetch', None)
1202 prefetchrevset = ui.config(b'remotefilelog', b'pullprefetch', None)
1211 if prefetchrevset:
1203 if prefetchrevset:
1212 revset.append(b'(%s)' % prefetchrevset)
1204 revset.append(b'(%s)' % prefetchrevset)
1213 bgprefetchrevs = ui.config(b'remotefilelog', b'bgprefetchrevs', None)
1205 bgprefetchrevs = ui.config(b'remotefilelog', b'bgprefetchrevs', None)
1214 if bgprefetchrevs:
1206 if bgprefetchrevs:
1215 revset.append(b'(%s)' % bgprefetchrevs)
1207 revset.append(b'(%s)' % bgprefetchrevs)
1216 revset = b'+'.join(revset)
1208 revset = b'+'.join(revset)
1217
1209
1218 # update a revset with a date limit
1210 # update a revset with a date limit
1219 revset = revdatelimit(ui, revset)
1211 revset = revdatelimit(ui, revset)
1220
1212
1221 opts[b'rev'] = [revset]
1213 opts[b'rev'] = [revset]
1222
1214
1223 if not opts.get(b'base'):
1215 if not opts.get(b'base'):
1224 opts[b'base'] = None
1216 opts[b'base'] = None
1225
1217
1226 return opts
1218 return opts
1227
1219
1228
1220
1229 @command(
1221 @command(
1230 b'prefetch',
1222 b'prefetch',
1231 [
1223 [
1232 (b'r', b'rev', [], _(b'prefetch the specified revisions'), _(b'REV')),
1224 (b'r', b'rev', [], _(b'prefetch the specified revisions'), _(b'REV')),
1233 (b'', b'repack', False, _(b'run repack after prefetch')),
1225 (b'', b'repack', False, _(b'run repack after prefetch')),
1234 (b'b', b'base', b'', _(b"rev that is assumed to already be local")),
1226 (b'b', b'base', b'', _(b"rev that is assumed to already be local")),
1235 ]
1227 ]
1236 + commands.walkopts,
1228 + commands.walkopts,
1237 _(b'hg prefetch [OPTIONS] [FILE...]'),
1229 _(b'hg prefetch [OPTIONS] [FILE...]'),
1238 helpcategory=command.CATEGORY_MAINTENANCE,
1230 helpcategory=command.CATEGORY_MAINTENANCE,
1239 )
1231 )
1240 def prefetch(ui, repo, *pats, **opts):
1232 def prefetch(ui, repo, *pats, **opts):
1241 """prefetch file revisions from the server
1233 """prefetch file revisions from the server
1242
1234
1243 Prefetches file revisions for the specified revs and stores them in the
1235 Prefetches file revisions for the specified revs and stores them in the
1244 local remotefilelog cache. If no rev is specified, the default rev is
1236 local remotefilelog cache. If no rev is specified, the default rev is
1245 used, which is the union of dot, draft, pullprefetch and bgprefetchrevs.
1237 used, which is the union of dot, draft, pullprefetch and bgprefetchrevs.
1246 File names or patterns can be used to limit which files are downloaded.
1238 File names or patterns can be used to limit which files are downloaded.
1247
1239
1248 Return 0 on success.
1240 Return 0 on success.
1249 """
1241 """
1250 opts = pycompat.byteskwargs(opts)
1242 opts = pycompat.byteskwargs(opts)
1251 if not isenabled(repo):
1243 if not isenabled(repo):
1252 raise error.Abort(_(b"repo is not shallow"))
1244 raise error.Abort(_(b"repo is not shallow"))
1253
1245
1254 opts = resolveprefetchopts(ui, opts)
1246 opts = resolveprefetchopts(ui, opts)
1255 revs = scmutil.revrange(repo, opts.get(b'rev'))
1247 revs = scmutil.revrange(repo, opts.get(b'rev'))
1256 repo.prefetch(revs, opts.get(b'base'), pats, opts)
1248 repo.prefetch(revs, opts.get(b'base'), pats, opts)
1257
1249
1258 ensurestart = repo.ui.configbool(b'devel', b'remotefilelog.ensurestart')
1259
1260 # Run repack in background
1250 # Run repack in background
1261 if opts.get(b'repack'):
1251 if opts.get(b'repack'):
1262 repackmod.backgroundrepack(
1252 repackmod.backgroundrepack(repo, incremental=True)
1263 repo, incremental=True, ensurestart=ensurestart
1264 )
1265
1253
1266
1254
1267 @command(
1255 @command(
1268 b'repack',
1256 b'repack',
1269 [
1257 [
1270 (b'', b'background', None, _(b'run in a background process'), None),
1258 (b'', b'background', None, _(b'run in a background process'), None),
1271 (b'', b'incremental', None, _(b'do an incremental repack'), None),
1259 (b'', b'incremental', None, _(b'do an incremental repack'), None),
1272 (
1260 (
1273 b'',
1261 b'',
1274 b'packsonly',
1262 b'packsonly',
1275 None,
1263 None,
1276 _(b'only repack packs (skip loose objects)'),
1264 _(b'only repack packs (skip loose objects)'),
1277 None,
1265 None,
1278 ),
1266 ),
1279 ],
1267 ],
1280 _(b'hg repack [OPTIONS]'),
1268 _(b'hg repack [OPTIONS]'),
1281 )
1269 )
1282 def repack_(ui, repo, *pats, **opts):
1270 def repack_(ui, repo, *pats, **opts):
1283 if opts.get(r'background'):
1271 if opts.get(r'background'):
1284 ensurestart = repo.ui.configbool(b'devel', b'remotefilelog.ensurestart')
1285 repackmod.backgroundrepack(
1272 repackmod.backgroundrepack(
1286 repo,
1273 repo,
1287 incremental=opts.get(r'incremental'),
1274 incremental=opts.get(r'incremental'),
1288 packsonly=opts.get(r'packsonly', False),
1275 packsonly=opts.get(r'packsonly', False),
1289 ensurestart=ensurestart,
1290 )
1276 )
1291 return
1277 return
1292
1278
1293 options = {b'packsonly': opts.get(r'packsonly')}
1279 options = {b'packsonly': opts.get(r'packsonly')}
1294
1280
1295 try:
1281 try:
1296 if opts.get(r'incremental'):
1282 if opts.get(r'incremental'):
1297 repackmod.incrementalrepack(repo, options=options)
1283 repackmod.incrementalrepack(repo, options=options)
1298 else:
1284 else:
1299 repackmod.fullrepack(repo, options=options)
1285 repackmod.fullrepack(repo, options=options)
1300 except repackmod.RepackAlreadyRunning as ex:
1286 except repackmod.RepackAlreadyRunning as ex:
1301 # Don't propagate the exception if the repack is already in
1287 # Don't propagate the exception if the repack is already in
1302 # progress, since we want the command to exit 0.
1288 # progress, since we want the command to exit 0.
1303 repo.ui.warn(b'%s\n' % ex)
1289 repo.ui.warn(b'%s\n' % ex)
@@ -1,918 +1,914 b''
1 from __future__ import absolute_import
1 from __future__ import absolute_import
2
2
3 import os
3 import os
4 import time
4 import time
5
5
6 from mercurial.i18n import _
6 from mercurial.i18n import _
7 from mercurial.node import (
7 from mercurial.node import (
8 nullid,
8 nullid,
9 short,
9 short,
10 )
10 )
11 from mercurial import (
11 from mercurial import (
12 encoding,
12 encoding,
13 error,
13 error,
14 lock as lockmod,
14 lock as lockmod,
15 mdiff,
15 mdiff,
16 policy,
16 policy,
17 pycompat,
17 pycompat,
18 scmutil,
18 scmutil,
19 util,
19 util,
20 vfs,
20 vfs,
21 )
21 )
22 from mercurial.utils import procutil
22 from mercurial.utils import procutil
23 from . import (
23 from . import (
24 constants,
24 constants,
25 contentstore,
25 contentstore,
26 datapack,
26 datapack,
27 historypack,
27 historypack,
28 metadatastore,
28 metadatastore,
29 shallowutil,
29 shallowutil,
30 )
30 )
31
31
32 osutil = policy.importmod(r'osutil')
32 osutil = policy.importmod(r'osutil')
33
33
34
34
35 class RepackAlreadyRunning(error.Abort):
35 class RepackAlreadyRunning(error.Abort):
36 pass
36 pass
37
37
38
38
39 def backgroundrepack(
39 def backgroundrepack(repo, incremental=True, packsonly=False):
40 repo, incremental=True, packsonly=False, ensurestart=False
41 ):
42 cmd = [procutil.hgexecutable(), b'-R', repo.origroot, b'repack']
40 cmd = [procutil.hgexecutable(), b'-R', repo.origroot, b'repack']
43 msg = _(b"(running background repack)\n")
41 msg = _(b"(running background repack)\n")
44 if incremental:
42 if incremental:
45 cmd.append(b'--incremental')
43 cmd.append(b'--incremental')
46 msg = _(b"(running background incremental repack)\n")
44 msg = _(b"(running background incremental repack)\n")
47 if packsonly:
45 if packsonly:
48 cmd.append(b'--packsonly')
46 cmd.append(b'--packsonly')
49 repo.ui.warn(msg)
47 repo.ui.warn(msg)
50 # We know this command will find a binary, so don't block on it starting.
48 # We know this command will find a binary, so don't block on it starting.
51 kwargs = {}
49 kwargs = {}
52 if repo.ui.configbool(b'devel', b'remotefilelog.bg-wait'):
50 if repo.ui.configbool(b'devel', b'remotefilelog.bg-wait'):
53 kwargs['record_wait'] = repo.ui.atexit
51 kwargs['record_wait'] = repo.ui.atexit
54
52
55 procutil.runbgcommand(
53 procutil.runbgcommand(cmd, encoding.environ, ensurestart=False, **kwargs)
56 cmd, encoding.environ, ensurestart=ensurestart, **kwargs
57 )
58
54
59
55
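procutil.runbgcommand spawns the hg child without waiting for it to exit, and with ensurestart=False it no longer blocks until the child has even started. A rough standard-library stand-in for that behavior (subprocess.Popen is an assumption here; the real helper also handles daemonization details this sketch ignores):

    import subprocess

    def background(cmd, env=None):
        # Fire and forget: do not wait for the child to start or finish.
        subprocess.Popen(cmd, env=env, close_fds=True,
                         stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)

    # e.g. background(['hg', '-R', '/path/to/repo', 'repack', '--incremental'])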
60 def fullrepack(repo, options=None):
56 def fullrepack(repo, options=None):
61 """If ``packsonly`` is True, stores creating only loose objects are skipped.
57 """If ``packsonly`` is True, stores creating only loose objects are skipped.
62 """
58 """
63 if util.safehasattr(repo, 'shareddatastores'):
59 if util.safehasattr(repo, 'shareddatastores'):
64 datasource = contentstore.unioncontentstore(*repo.shareddatastores)
60 datasource = contentstore.unioncontentstore(*repo.shareddatastores)
65 historysource = metadatastore.unionmetadatastore(
61 historysource = metadatastore.unionmetadatastore(
66 *repo.sharedhistorystores, allowincomplete=True
62 *repo.sharedhistorystores, allowincomplete=True
67 )
63 )
68
64
69 packpath = shallowutil.getcachepackpath(
65 packpath = shallowutil.getcachepackpath(
70 repo, constants.FILEPACK_CATEGORY
66 repo, constants.FILEPACK_CATEGORY
71 )
67 )
72 _runrepack(
68 _runrepack(
73 repo,
69 repo,
74 datasource,
70 datasource,
75 historysource,
71 historysource,
76 packpath,
72 packpath,
77 constants.FILEPACK_CATEGORY,
73 constants.FILEPACK_CATEGORY,
78 options=options,
74 options=options,
79 )
75 )
80
76
81 if util.safehasattr(repo.manifestlog, 'datastore'):
77 if util.safehasattr(repo.manifestlog, 'datastore'):
82 localdata, shareddata = _getmanifeststores(repo)
78 localdata, shareddata = _getmanifeststores(repo)
83 lpackpath, ldstores, lhstores = localdata
79 lpackpath, ldstores, lhstores = localdata
84 spackpath, sdstores, shstores = shareddata
80 spackpath, sdstores, shstores = shareddata
85
81
86 # Repack the shared manifest store
82 # Repack the shared manifest store
87 datasource = contentstore.unioncontentstore(*sdstores)
83 datasource = contentstore.unioncontentstore(*sdstores)
88 historysource = metadatastore.unionmetadatastore(
84 historysource = metadatastore.unionmetadatastore(
89 *shstores, allowincomplete=True
85 *shstores, allowincomplete=True
90 )
86 )
91 _runrepack(
87 _runrepack(
92 repo,
88 repo,
93 datasource,
89 datasource,
94 historysource,
90 historysource,
95 spackpath,
91 spackpath,
96 constants.TREEPACK_CATEGORY,
92 constants.TREEPACK_CATEGORY,
97 options=options,
93 options=options,
98 )
94 )
99
95
100 # Repack the local manifest store
96 # Repack the local manifest store
101 datasource = contentstore.unioncontentstore(
97 datasource = contentstore.unioncontentstore(
102 *ldstores, allowincomplete=True
98 *ldstores, allowincomplete=True
103 )
99 )
104 historysource = metadatastore.unionmetadatastore(
100 historysource = metadatastore.unionmetadatastore(
105 *lhstores, allowincomplete=True
101 *lhstores, allowincomplete=True
106 )
102 )
107 _runrepack(
103 _runrepack(
108 repo,
104 repo,
109 datasource,
105 datasource,
110 historysource,
106 historysource,
111 lpackpath,
107 lpackpath,
112 constants.TREEPACK_CATEGORY,
108 constants.TREEPACK_CATEGORY,
113 options=options,
109 options=options,
114 )
110 )
115
111
116
112
117 def incrementalrepack(repo, options=None):
113 def incrementalrepack(repo, options=None):
118 """This repacks the repo by looking at the distribution of pack files in the
114 """This repacks the repo by looking at the distribution of pack files in the
119 repo and performing the smallest repack needed to keep the repo in good shape.
115 repo and performing the smallest repack needed to keep the repo in good shape.
120 """
116 """
121 if util.safehasattr(repo, 'shareddatastores'):
117 if util.safehasattr(repo, 'shareddatastores'):
122 packpath = shallowutil.getcachepackpath(
118 packpath = shallowutil.getcachepackpath(
123 repo, constants.FILEPACK_CATEGORY
119 repo, constants.FILEPACK_CATEGORY
124 )
120 )
125 _incrementalrepack(
121 _incrementalrepack(
126 repo,
122 repo,
127 repo.shareddatastores,
123 repo.shareddatastores,
128 repo.sharedhistorystores,
124 repo.sharedhistorystores,
129 packpath,
125 packpath,
130 constants.FILEPACK_CATEGORY,
126 constants.FILEPACK_CATEGORY,
131 options=options,
127 options=options,
132 )
128 )
133
129
134 if util.safehasattr(repo.manifestlog, 'datastore'):
130 if util.safehasattr(repo.manifestlog, 'datastore'):
135 localdata, shareddata = _getmanifeststores(repo)
131 localdata, shareddata = _getmanifeststores(repo)
136 lpackpath, ldstores, lhstores = localdata
132 lpackpath, ldstores, lhstores = localdata
137 spackpath, sdstores, shstores = shareddata
133 spackpath, sdstores, shstores = shareddata
138
134
139 # Repack the shared manifest store
135 # Repack the shared manifest store
140 _incrementalrepack(
136 _incrementalrepack(
141 repo,
137 repo,
142 sdstores,
138 sdstores,
143 shstores,
139 shstores,
144 spackpath,
140 spackpath,
145 constants.TREEPACK_CATEGORY,
141 constants.TREEPACK_CATEGORY,
146 options=options,
142 options=options,
147 )
143 )
148
144
149 # Repack the local manifest store
145 # Repack the local manifest store
150 _incrementalrepack(
146 _incrementalrepack(
151 repo,
147 repo,
152 ldstores,
148 ldstores,
153 lhstores,
149 lhstores,
154 lpackpath,
150 lpackpath,
155 constants.TREEPACK_CATEGORY,
151 constants.TREEPACK_CATEGORY,
156 allowincompletedata=True,
152 allowincompletedata=True,
157 options=options,
153 options=options,
158 )
154 )
159
155
160
156
161 def _getmanifeststores(repo):
157 def _getmanifeststores(repo):
162 shareddatastores = repo.manifestlog.shareddatastores
158 shareddatastores = repo.manifestlog.shareddatastores
163 localdatastores = repo.manifestlog.localdatastores
159 localdatastores = repo.manifestlog.localdatastores
164 sharedhistorystores = repo.manifestlog.sharedhistorystores
160 sharedhistorystores = repo.manifestlog.sharedhistorystores
165 localhistorystores = repo.manifestlog.localhistorystores
161 localhistorystores = repo.manifestlog.localhistorystores
166
162
167 sharedpackpath = shallowutil.getcachepackpath(
163 sharedpackpath = shallowutil.getcachepackpath(
168 repo, constants.TREEPACK_CATEGORY
164 repo, constants.TREEPACK_CATEGORY
169 )
165 )
170 localpackpath = shallowutil.getlocalpackpath(
166 localpackpath = shallowutil.getlocalpackpath(
171 repo.svfs.vfs.base, constants.TREEPACK_CATEGORY
167 repo.svfs.vfs.base, constants.TREEPACK_CATEGORY
172 )
168 )
173
169
174 return (
170 return (
175 (localpackpath, localdatastores, localhistorystores),
171 (localpackpath, localdatastores, localhistorystores),
176 (sharedpackpath, shareddatastores, sharedhistorystores),
172 (sharedpackpath, shareddatastores, sharedhistorystores),
177 )
173 )
178
174
179
175
180 def _topacks(packpath, files, constructor):
176 def _topacks(packpath, files, constructor):
181 paths = list(os.path.join(packpath, p) for p in files)
177 paths = list(os.path.join(packpath, p) for p in files)
182 packs = list(constructor(p) for p in paths)
178 packs = list(constructor(p) for p in paths)
183 return packs
179 return packs
184
180
185
181
186 def _deletebigpacks(repo, folder, files):
182 def _deletebigpacks(repo, folder, files):
187 """Deletes packfiles that are bigger than ``packs.maxpacksize``.
183 """Deletes packfiles that are bigger than ``packs.maxpacksize``.
188
184
189 Returns ``files`` with the removed files omitted."""
185 Returns ``files`` with the removed files omitted."""
190 maxsize = repo.ui.configbytes(b"packs", b"maxpacksize")
186 maxsize = repo.ui.configbytes(b"packs", b"maxpacksize")
191 if maxsize <= 0:
187 if maxsize <= 0:
192 return files
188 return files
193
189
194 # This only considers datapacks today, but we could broaden it to include
190 # This only considers datapacks today, but we could broaden it to include
195 # historypacks.
191 # historypacks.
196 VALIDEXTS = [b".datapack", b".dataidx"]
192 VALIDEXTS = [b".datapack", b".dataidx"]
197
193
198 # Either an oversize index or datapack will trigger cleanup of the whole
194 # Either an oversize index or datapack will trigger cleanup of the whole
199 # pack:
195 # pack:
200 oversized = {
196 oversized = {
201 os.path.splitext(path)[0]
197 os.path.splitext(path)[0]
202 for path, ftype, stat in files
198 for path, ftype, stat in files
203 if (stat.st_size > maxsize and (os.path.splitext(path)[1] in VALIDEXTS))
199 if (stat.st_size > maxsize and (os.path.splitext(path)[1] in VALIDEXTS))
204 }
200 }
205
201
206 for rootfname in oversized:
202 for rootfname in oversized:
207 rootpath = os.path.join(folder, rootfname)
203 rootpath = os.path.join(folder, rootfname)
208 for ext in VALIDEXTS:
204 for ext in VALIDEXTS:
209 path = rootpath + ext
205 path = rootpath + ext
210 repo.ui.debug(
206 repo.ui.debug(
211 b'removing oversize packfile %s (%s)\n'
207 b'removing oversize packfile %s (%s)\n'
212 % (path, util.bytecount(os.stat(path).st_size))
208 % (path, util.bytecount(os.stat(path).st_size))
213 )
209 )
214 os.unlink(path)
210 os.unlink(path)
215 return [row for row in files if os.path.basename(row[0]) not in oversized]
211 return [row for row in files if os.path.basename(row[0]) not in oversized]
216
212
217
213
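The pruning rule is: if either member of a pack/index pair exceeds maxsize, the whole pair is removed. The same selection over toy (path, size) tuples (a sketch mirroring the set comprehension above, not the on-disk layout):

    import os

    VALIDEXTS = ['.datapack', '.dataidx']

    def oversized_roots(files, maxsize):
        # A pack is doomed if its data file *or* its index is too big.
        return {os.path.splitext(path)[0]
                for path, size in files
                if size > maxsize and os.path.splitext(path)[1] in VALIDEXTS}

    files = [('abc.datapack', 900), ('abc.dataidx', 10),
             ('def.datapack', 100), ('def.dataidx', 5)]
    print(oversized_roots(files, 500))  # {'abc'}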
218 def _incrementalrepack(
214 def _incrementalrepack(
219 repo,
215 repo,
220 datastore,
216 datastore,
221 historystore,
217 historystore,
222 packpath,
218 packpath,
223 category,
219 category,
224 allowincompletedata=False,
220 allowincompletedata=False,
225 options=None,
221 options=None,
226 ):
222 ):
227 shallowutil.mkstickygroupdir(repo.ui, packpath)
223 shallowutil.mkstickygroupdir(repo.ui, packpath)
228
224
229 files = osutil.listdir(packpath, stat=True)
225 files = osutil.listdir(packpath, stat=True)
230 files = _deletebigpacks(repo, packpath, files)
226 files = _deletebigpacks(repo, packpath, files)
231 datapacks = _topacks(
227 datapacks = _topacks(
232 packpath, _computeincrementaldatapack(repo.ui, files), datapack.datapack
228 packpath, _computeincrementaldatapack(repo.ui, files), datapack.datapack
233 )
229 )
234 datapacks.extend(
230 datapacks.extend(
235 s for s in datastore if not isinstance(s, datapack.datapackstore)
231 s for s in datastore if not isinstance(s, datapack.datapackstore)
236 )
232 )
237
233
238 historypacks = _topacks(
234 historypacks = _topacks(
239 packpath,
235 packpath,
240 _computeincrementalhistorypack(repo.ui, files),
236 _computeincrementalhistorypack(repo.ui, files),
241 historypack.historypack,
237 historypack.historypack,
242 )
238 )
243 historypacks.extend(
239 historypacks.extend(
244 s
240 s
245 for s in historystore
241 for s in historystore
246 if not isinstance(s, historypack.historypackstore)
242 if not isinstance(s, historypack.historypackstore)
247 )
243 )
248
244
249 # ``allhistory{files,packs}`` contains all known history packs, even ones we
245 # ``allhistory{files,packs}`` contains all known history packs, even ones we
250 # don't plan to repack. They are used during the datapack repack to ensure
246 # don't plan to repack. They are used during the datapack repack to ensure
251 # good ordering of nodes.
247 # good ordering of nodes.
252 allhistoryfiles = _allpackfileswithsuffix(
248 allhistoryfiles = _allpackfileswithsuffix(
253 files, historypack.PACKSUFFIX, historypack.INDEXSUFFIX
249 files, historypack.PACKSUFFIX, historypack.INDEXSUFFIX
254 )
250 )
255 allhistorypacks = _topacks(
251 allhistorypacks = _topacks(
256 packpath,
252 packpath,
257 (f for f, mode, stat in allhistoryfiles),
253 (f for f, mode, stat in allhistoryfiles),
258 historypack.historypack,
254 historypack.historypack,
259 )
255 )
260 allhistorypacks.extend(
256 allhistorypacks.extend(
261 s
257 s
262 for s in historystore
258 for s in historystore
263 if not isinstance(s, historypack.historypackstore)
259 if not isinstance(s, historypack.historypackstore)
264 )
260 )
265 _runrepack(
261 _runrepack(
266 repo,
262 repo,
267 contentstore.unioncontentstore(
263 contentstore.unioncontentstore(
268 *datapacks, allowincomplete=allowincompletedata
264 *datapacks, allowincomplete=allowincompletedata
269 ),
265 ),
270 metadatastore.unionmetadatastore(*historypacks, allowincomplete=True),
266 metadatastore.unionmetadatastore(*historypacks, allowincomplete=True),
271 packpath,
267 packpath,
272 category,
268 category,
273 fullhistory=metadatastore.unionmetadatastore(
269 fullhistory=metadatastore.unionmetadatastore(
274 *allhistorypacks, allowincomplete=True
270 *allhistorypacks, allowincomplete=True
275 ),
271 ),
276 options=options,
272 options=options,
277 )
273 )
278
274
279
275
280 def _computeincrementaldatapack(ui, files):
276 def _computeincrementaldatapack(ui, files):
281 opts = {
277 opts = {
282 b'gencountlimit': ui.configint(b'remotefilelog', b'data.gencountlimit'),
278 b'gencountlimit': ui.configint(b'remotefilelog', b'data.gencountlimit'),
283 b'generations': ui.configlist(b'remotefilelog', b'data.generations'),
279 b'generations': ui.configlist(b'remotefilelog', b'data.generations'),
284 b'maxrepackpacks': ui.configint(
280 b'maxrepackpacks': ui.configint(
285 b'remotefilelog', b'data.maxrepackpacks'
281 b'remotefilelog', b'data.maxrepackpacks'
286 ),
282 ),
287 b'repackmaxpacksize': ui.configbytes(
283 b'repackmaxpacksize': ui.configbytes(
288 b'remotefilelog', b'data.repackmaxpacksize'
284 b'remotefilelog', b'data.repackmaxpacksize'
289 ),
285 ),
290 b'repacksizelimit': ui.configbytes(
286 b'repacksizelimit': ui.configbytes(
291 b'remotefilelog', b'data.repacksizelimit'
287 b'remotefilelog', b'data.repacksizelimit'
292 ),
288 ),
293 }
289 }
294
290
295 packfiles = _allpackfileswithsuffix(
291 packfiles = _allpackfileswithsuffix(
296 files, datapack.PACKSUFFIX, datapack.INDEXSUFFIX
292 files, datapack.PACKSUFFIX, datapack.INDEXSUFFIX
297 )
293 )
298 return _computeincrementalpack(packfiles, opts)
294 return _computeincrementalpack(packfiles, opts)
299
295
300
296
301 def _computeincrementalhistorypack(ui, files):
297 def _computeincrementalhistorypack(ui, files):
302 opts = {
298 opts = {
303 b'gencountlimit': ui.configint(
299 b'gencountlimit': ui.configint(
304 b'remotefilelog', b'history.gencountlimit'
300 b'remotefilelog', b'history.gencountlimit'
305 ),
301 ),
306 b'generations': ui.configlist(
302 b'generations': ui.configlist(
307 b'remotefilelog', b'history.generations', [b'100MB']
303 b'remotefilelog', b'history.generations', [b'100MB']
308 ),
304 ),
309 b'maxrepackpacks': ui.configint(
305 b'maxrepackpacks': ui.configint(
310 b'remotefilelog', b'history.maxrepackpacks'
306 b'remotefilelog', b'history.maxrepackpacks'
311 ),
307 ),
312 b'repackmaxpacksize': ui.configbytes(
308 b'repackmaxpacksize': ui.configbytes(
313 b'remotefilelog', b'history.repackmaxpacksize', b'400MB'
309 b'remotefilelog', b'history.repackmaxpacksize', b'400MB'
314 ),
310 ),
315 b'repacksizelimit': ui.configbytes(
311 b'repacksizelimit': ui.configbytes(
316 b'remotefilelog', b'history.repacksizelimit'
312 b'remotefilelog', b'history.repacksizelimit'
317 ),
313 ),
318 }
314 }
319
315
320 packfiles = _allpackfileswithsuffix(
316 packfiles = _allpackfileswithsuffix(
321 files, historypack.PACKSUFFIX, historypack.INDEXSUFFIX
317 files, historypack.PACKSUFFIX, historypack.INDEXSUFFIX
322 )
318 )
323 return _computeincrementalpack(packfiles, opts)
319 return _computeincrementalpack(packfiles, opts)
324
320
325
321
326 def _allpackfileswithsuffix(files, packsuffix, indexsuffix):
322 def _allpackfileswithsuffix(files, packsuffix, indexsuffix):
327 result = []
323 result = []
328 fileset = set(fn for fn, mode, stat in files)
324 fileset = set(fn for fn, mode, stat in files)
329 for filename, mode, stat in files:
325 for filename, mode, stat in files:
330 if not filename.endswith(packsuffix):
326 if not filename.endswith(packsuffix):
331 continue
327 continue
332
328
333 prefix = filename[: -len(packsuffix)]
329 prefix = filename[: -len(packsuffix)]
334
330
335 # Don't process a pack if it doesn't have an index.
331 # Don't process a pack if it doesn't have an index.
336 if (prefix + indexsuffix) not in fileset:
332 if (prefix + indexsuffix) not in fileset:
337 continue
333 continue
338 result.append((prefix, mode, stat))
334 result.append((prefix, mode, stat))
339
335
340 return result
336 return result
341
337
342
338
343 def _computeincrementalpack(files, opts):
339 def _computeincrementalpack(files, opts):
344 """Given a set of pack files along with the configuration options, this
340 """Given a set of pack files along with the configuration options, this
345 function computes the list of files that should be packed as part of an
341 function computes the list of files that should be packed as part of an
346 incremental repack.
342 incremental repack.
347
343
348 It tries to strike a balance between keeping incremental repacks cheap
344 It tries to strike a balance between keeping incremental repacks cheap
349 (i.e. packing small things when possible) and rolling the packs up into
345 (i.e. packing small things when possible) and rolling the packs up into
350 the big ones over time.
346 the big ones over time.
351 """
347 """
352
348
353 limits = list(
349 limits = list(
354 sorted((util.sizetoint(s) for s in opts[b'generations']), reverse=True)
350 sorted((util.sizetoint(s) for s in opts[b'generations']), reverse=True)
355 )
351 )
356 limits.append(0)
352 limits.append(0)
357
353
358 # Group the packs by generation (i.e. by size)
354 # Group the packs by generation (i.e. by size)
359 generations = []
355 generations = []
360 for i in pycompat.xrange(len(limits)):
356 for i in pycompat.xrange(len(limits)):
361 generations.append([])
357 generations.append([])
362
358
363 sizes = {}
359 sizes = {}
364 for prefix, mode, stat in files:
360 for prefix, mode, stat in files:
365 size = stat.st_size
361 size = stat.st_size
366 if size > opts[b'repackmaxpacksize']:
362 if size > opts[b'repackmaxpacksize']:
367 continue
363 continue
368
364
369 sizes[prefix] = size
365 sizes[prefix] = size
370 for i, limit in enumerate(limits):
366 for i, limit in enumerate(limits):
371 if size > limit:
367 if size > limit:
372 generations[i].append(prefix)
368 generations[i].append(prefix)
373 break
369 break
374
370
375 # Steps for picking what packs to repack:
371 # Steps for picking what packs to repack:
376 # 1. Pick the largest generation with > gencountlimit pack files.
372 # 1. Pick the largest generation with > gencountlimit pack files.
377 # 2. Take the smallest three packs.
373 # 2. Take the smallest three packs.
378 # 3. While total-size-of-packs < repacksizelimit: add another pack
374 # 3. While total-size-of-packs < repacksizelimit: add another pack
379
375
380 # Find the largest generation with more than gencountlimit packs
376 # Find the largest generation with more than gencountlimit packs
381 genpacks = []
377 genpacks = []
382 for i, limit in enumerate(limits):
378 for i, limit in enumerate(limits):
383 if len(generations[i]) > opts[b'gencountlimit']:
379 if len(generations[i]) > opts[b'gencountlimit']:
384 # Sort to be smallest last, for easy popping later
380 # Sort to be smallest last, for easy popping later
385 genpacks.extend(
381 genpacks.extend(
386 sorted(generations[i], reverse=True, key=lambda x: sizes[x])
382 sorted(generations[i], reverse=True, key=lambda x: sizes[x])
387 )
383 )
388 break
384 break
389
385
390 # Take as many packs from the generation as we can
386 # Take as many packs from the generation as we can
391 chosenpacks = genpacks[-3:]
387 chosenpacks = genpacks[-3:]
392 genpacks = genpacks[:-3]
388 genpacks = genpacks[:-3]
393 repacksize = sum(sizes[n] for n in chosenpacks)
389 repacksize = sum(sizes[n] for n in chosenpacks)
394 while (
390 while (
395 repacksize < opts[b'repacksizelimit']
391 repacksize < opts[b'repacksizelimit']
396 and genpacks
392 and genpacks
397 and len(chosenpacks) < opts[b'maxrepackpacks']
393 and len(chosenpacks) < opts[b'maxrepackpacks']
398 ):
394 ):
399 chosenpacks.append(genpacks.pop())
395 chosenpacks.append(genpacks.pop())
400 repacksize += sizes[chosenpacks[-1]]
396 repacksize += sizes[chosenpacks[-1]]
401
397
402 return chosenpacks
398 return chosenpacks
403
399
404
400
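A worked example of the selection above: packs are bucketed by size against the generation limits, the largest generation with more than gencountlimit members is picked, the three smallest packs are taken, and more are added while the running total stays under repacksizelimit. A toy run with shrunken thresholds (sizes are arbitrary units; this mirrors the logic, not the real config values):

    def choose_packs(sizes, limits, gencountlimit, sizelimit, maxpacks):
        limits = sorted(limits, reverse=True) + [0]
        gens = [[] for _ in limits]
        for name, size in sizes.items():
            for i, limit in enumerate(limits):
                if size > limit:
                    gens[i].append(name)
                    break
        for gen in gens:  # largest generation first
            if len(gen) > gencountlimit:
                # sort biggest first so the smallest are popped off the end
                pool = sorted(gen, key=lambda n: sizes[n], reverse=True)
                chosen = pool[-3:]
                total = sum(sizes[n] for n in chosen)
                pool = pool[:-3]
                while total < sizelimit and pool and len(chosen) < maxpacks:
                    chosen.append(pool.pop())
                    total += sizes[chosen[-1]]
                return chosen
        return []

    sizes = {'a': 5, 'b': 7, 'c': 9, 'd': 120, 'e': 2}
    print(choose_packs(sizes, [100, 10], 2, 25, 10))
    # ['b', 'a', 'e', 'c']: the three smallest, then one more under the limit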
405 def _runrepack(
401 def _runrepack(
406 repo, data, history, packpath, category, fullhistory=None, options=None
402 repo, data, history, packpath, category, fullhistory=None, options=None
407 ):
403 ):
408 shallowutil.mkstickygroupdir(repo.ui, packpath)
404 shallowutil.mkstickygroupdir(repo.ui, packpath)
409
405
410 def isold(repo, filename, node):
406 def isold(repo, filename, node):
411 """Check if the file node is older than a limit.
407 """Check if the file node is older than a limit.
412 Unless a limit is specified in the config, the default limit is used.
408 Unless a limit is specified in the config, the default limit is used.
413 """
409 """
414 filectx = repo.filectx(filename, fileid=node)
410 filectx = repo.filectx(filename, fileid=node)
415 filetime = repo[filectx.linkrev()].date()
411 filetime = repo[filectx.linkrev()].date()
416
412
417 ttl = repo.ui.configint(b'remotefilelog', b'nodettl')
413 ttl = repo.ui.configint(b'remotefilelog', b'nodettl')
418
414
419 limit = time.time() - ttl
415 limit = time.time() - ttl
420 return filetime[0] < limit
416 return filetime[0] < limit
421
417
422 garbagecollect = repo.ui.configbool(b'remotefilelog', b'gcrepack')
418 garbagecollect = repo.ui.configbool(b'remotefilelog', b'gcrepack')
423 if not fullhistory:
419 if not fullhistory:
424 fullhistory = history
420 fullhistory = history
425 packer = repacker(
421 packer = repacker(
426 repo,
422 repo,
427 data,
423 data,
428 history,
424 history,
429 fullhistory,
425 fullhistory,
430 category,
426 category,
431 gc=garbagecollect,
427 gc=garbagecollect,
432 isold=isold,
428 isold=isold,
433 options=options,
429 options=options,
434 )
430 )
435
431
436 with datapack.mutabledatapack(repo.ui, packpath) as dpack:
432 with datapack.mutabledatapack(repo.ui, packpath) as dpack:
437 with historypack.mutablehistorypack(repo.ui, packpath) as hpack:
433 with historypack.mutablehistorypack(repo.ui, packpath) as hpack:
438 try:
434 try:
439 packer.run(dpack, hpack)
435 packer.run(dpack, hpack)
440 except error.LockHeld:
436 except error.LockHeld:
441 raise RepackAlreadyRunning(
437 raise RepackAlreadyRunning(
442 _(
438 _(
443 b"skipping repack - another repack "
439 b"skipping repack - another repack "
444 b"is already running"
440 b"is already running"
445 )
441 )
446 )
442 )
447
443
448
444
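The isold predicate above reduces to one timestamp comparison. In isolation (assuming filetime is a Unix timestamp and ttl is in seconds):

    import time

    def is_old(filetime, ttl):
        # A node is collectable once it is older than the TTL.
        return filetime < time.time() - ttl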
449 def keepset(repo, keyfn, lastkeepkeys=None):
445 def keepset(repo, keyfn, lastkeepkeys=None):
450 """Computes a keepset which is not garbage collected.
446 """Computes a keepset which is not garbage collected.
451 'keyfn' is a function that maps filename, node to a unique key.
447 'keyfn' is a function that maps filename, node to a unique key.
452 'lastkeepkeys' is an optional argument; if provided, the keepset
448 'lastkeepkeys' is an optional argument; if provided, the keepset
453 function adds more keys to lastkeepkeys and returns the result.
449 function adds more keys to lastkeepkeys and returns the result.
454 """
450 """
455 if not lastkeepkeys:
451 if not lastkeepkeys:
456 keepkeys = set()
452 keepkeys = set()
457 else:
453 else:
458 keepkeys = lastkeepkeys
454 keepkeys = lastkeepkeys
459
455
460 # We want to keep:
456 # We want to keep:
461 # 1. Working copy parent
457 # 1. Working copy parent
462 # 2. Draft commits
458 # 2. Draft commits
463 # 3. Parents of draft commits
459 # 3. Parents of draft commits
464 # 4. Pullprefetch and bgprefetchrevs revsets if specified
460 # 4. Pullprefetch and bgprefetchrevs revsets if specified
465 revs = [b'.', b'draft()', b'parents(draft())']
461 revs = [b'.', b'draft()', b'parents(draft())']
466 prefetchrevs = repo.ui.config(b'remotefilelog', b'pullprefetch', None)
462 prefetchrevs = repo.ui.config(b'remotefilelog', b'pullprefetch', None)
467 if prefetchrevs:
463 if prefetchrevs:
468 revs.append(b'(%s)' % prefetchrevs)
464 revs.append(b'(%s)' % prefetchrevs)
469 prefetchrevs = repo.ui.config(b'remotefilelog', b'bgprefetchrevs', None)
465 prefetchrevs = repo.ui.config(b'remotefilelog', b'bgprefetchrevs', None)
470 if prefetchrevs:
466 if prefetchrevs:
471 revs.append(b'(%s)' % prefetchrevs)
467 revs.append(b'(%s)' % prefetchrevs)
472 revs = b'+'.join(revs)
468 revs = b'+'.join(revs)
473
469
474 revs = [b'sort((%s), "topo")' % revs]
470 revs = [b'sort((%s), "topo")' % revs]
475 keep = scmutil.revrange(repo, revs)
471 keep = scmutil.revrange(repo, revs)
476
472
477 processed = set()
473 processed = set()
478 lastmanifest = None
474 lastmanifest = None
479
475
480 # process the commits in toposorted order starting from the oldest
476 # process the commits in toposorted order starting from the oldest
481 for r in reversed(keep._list):
477 for r in reversed(keep._list):
482 if repo[r].p1().rev() in processed:
478 if repo[r].p1().rev() in processed:
483 # if the direct parent has already been processed
479 # if the direct parent has already been processed
484 # then we only need to process the delta
480 # then we only need to process the delta
485 m = repo[r].manifestctx().readdelta()
481 m = repo[r].manifestctx().readdelta()
486 else:
482 else:
487 # otherwise take the manifest and diff it
483 # otherwise take the manifest and diff it
488 # with the previous manifest if one exists
484 # with the previous manifest if one exists
489 if lastmanifest:
485 if lastmanifest:
490 m = repo[r].manifest().diff(lastmanifest)
486 m = repo[r].manifest().diff(lastmanifest)
491 else:
487 else:
492 m = repo[r].manifest()
488 m = repo[r].manifest()
493 lastmanifest = repo[r].manifest()
489 lastmanifest = repo[r].manifest()
494 processed.add(r)
490 processed.add(r)
495
491
496 # populate keepkeys with keys from the current manifest
492 # populate keepkeys with keys from the current manifest
497 if type(m) is dict:
493 if type(m) is dict:
498 # m is a result of diff of two manifests and is a dictionary that
494 # m is a result of diff of two manifests and is a dictionary that
499 # maps filename to ((newnode, newflag), (oldnode, oldflag)) tuple
495 # maps filename to ((newnode, newflag), (oldnode, oldflag)) tuple
500 for filename, diff in pycompat.iteritems(m):
496 for filename, diff in pycompat.iteritems(m):
501 if diff[0][0] is not None:
497 if diff[0][0] is not None:
502 keepkeys.add(keyfn(filename, diff[0][0]))
498 keepkeys.add(keyfn(filename, diff[0][0]))
503 else:
499 else:
504 # m is a manifest object
500 # m is a manifest object
505 for filename, filenode in pycompat.iteritems(m):
501 for filename, filenode in pycompat.iteritems(m):
506 keepkeys.add(keyfn(filename, filenode))
502 keepkeys.add(keyfn(filename, filenode))
507
503
508 return keepkeys
504 return keepkeys
509
505
510
506
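keepset can be called repeatedly, threading the previous result through lastkeepkeys so the keep set grows across repos, as gcclient does earlier in this change. A toy model of that accumulation (the key function is hypothetical; real keys come from getcachekey):

    def keepset(manifest, keyfn, lastkeepkeys=None):
        # 'manifest' is a toy mapping of filename -> node.
        keep = lastkeepkeys if lastkeepkeys else set()
        for fname, node in manifest.items():
            keep.add(keyfn(fname, node))
        return keep

    keyfn = lambda f, n: '%s:%s' % (f, n)
    keys = keepset({'a.txt': 'n1'}, keyfn)
    keys = keepset({'b.txt': 'n2'}, keyfn, lastkeepkeys=keys)
    print(sorted(keys))  # ['a.txt:n1', 'b.txt:n2']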
class repacker(object):
    """Class for orchestrating the repack of data and history information
    into a new format.
    """

    def __init__(
        self,
        repo,
        data,
        history,
        fullhistory,
        category,
        gc=False,
        isold=None,
        options=None,
    ):
        self.repo = repo
        self.data = data
        self.history = history
        self.fullhistory = fullhistory
        self.unit = constants.getunits(category)
        self.garbagecollect = gc
        self.options = options
        if self.garbagecollect:
            if not isold:
                raise ValueError(b"Function 'isold' is not properly specified")
            # use (filename, node) tuple as a keepset key
            self.keepkeys = keepset(repo, lambda f, n: (f, n))
            self.isold = isold

    def run(self, targetdata, targethistory):
        ledger = repackledger()

        with lockmod.lock(
            repacklockvfs(self.repo), b"repacklock", desc=None, timeout=0
        ):
            self.repo.hook(b'prerepack')

            # Populate ledger from source
            self.data.markledger(ledger, options=self.options)
            self.history.markledger(ledger, options=self.options)

            # Run repack
            self.repackdata(ledger, targetdata)
            self.repackhistory(ledger, targethistory)

            # Call cleanup on each source
            for source in ledger.sources:
                source.cleanup(ledger)

    def _chainorphans(self, ui, filename, nodes, orphans, deltabases):
        """Reorders ``orphans`` into a single chain inside ``nodes`` and
        ``deltabases``.

        We often have orphan entries (nodes without a base that aren't
        referenced by other nodes -- i.e., part of a chain) due to gaps in
        history. Rather than store them as individual fulltexts, we prefer to
        insert them as one chain sorted by size.
        """
        if not orphans:
            return nodes

        def getsize(node, default=0):
            meta = self.data.getmeta(filename, node)
            if constants.METAKEYSIZE in meta:
                return meta[constants.METAKEYSIZE]
            else:
                return default

        # Sort orphans by size; biggest first is preferred, since it's more
        # likely to be the newest version assuming files grow over time.
        # (Sort by node first to ensure the sort is stable.)
        orphans = sorted(orphans)
        orphans = list(sorted(orphans, key=getsize, reverse=True))
        if ui.debugflag:
            ui.debug(
                b"%s: orphan chain: %s\n"
                % (filename, b", ".join([short(s) for s in orphans]))
            )

        # Create one contiguous chain and reassign deltabases.
        for i, node in enumerate(orphans):
            if i == 0:
                deltabases[node] = (nullid, 0)
            else:
                parent = orphans[i - 1]
                deltabases[node] = (parent, deltabases[parent][1] + 1)
        nodes = [n for n in nodes if n not in orphans]
        nodes += orphans
        return nodes

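A standalone sketch of the chaining strategy ``_chainorphans`` implements, with invented node names and sizes: sort largest-first, then thread each orphan onto the previous one as its delta base:

# Self-contained sketch of orphan chaining by size; `sizes` is made-up data.
NULLID = b'\x00' * 20

sizes = {b'n1': 10, b'n2': 300, b'n3': 42}
orphans = sorted(sizes)                                  # stable order first
orphans = sorted(orphans, key=sizes.get, reverse=True)   # biggest first

deltabases = {}
for i, node in enumerate(orphans):
    if i == 0:
        deltabases[node] = (NULLID, 0)   # chain head is stored as a fulltext
    else:
        parent = orphans[i - 1]
        deltabases[node] = (parent, deltabases[parent][1] + 1)

chain = [n for n, _ in sorted(deltabases.items(), key=lambda kv: kv[1][1])]
assert chain == [b'n2', b'n3', b'n1']
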
    def repackdata(self, ledger, target):
        ui = self.repo.ui
        maxchainlen = ui.configint(b'packs', b'maxchainlen', 1000)

        byfile = {}
        for entry in pycompat.itervalues(ledger.entries):
            if entry.datasource:
                byfile.setdefault(entry.filename, {})[entry.node] = entry

        count = 0
        repackprogress = ui.makeprogress(
            _(b"repacking data"), unit=self.unit, total=len(byfile)
        )
        for filename, entries in sorted(pycompat.iteritems(byfile)):
            repackprogress.update(count)

            ancestors = {}
            nodes = list(node for node in entries)
            nohistory = []
            buildprogress = ui.makeprogress(
                _(b"building history"), unit=b'nodes', total=len(nodes)
            )
            for i, node in enumerate(nodes):
                if node in ancestors:
                    continue
                buildprogress.update(i)
                try:
                    ancestors.update(
                        self.fullhistory.getancestors(
                            filename, node, known=ancestors
                        )
                    )
                except KeyError:
                    # Since we're packing data entries, we may not have the
                    # corresponding history entries for them. It's not a big
                    # deal, but the entries won't be delta'd perfectly.
                    nohistory.append(node)
            buildprogress.complete()

            # Order the nodes children first, so we can produce reverse deltas
            orderednodes = list(reversed(self._toposort(ancestors)))
            if len(nohistory) > 0:
                ui.debug(
                    b'repackdata: %d nodes without history\n' % len(nohistory)
                )
            orderednodes.extend(sorted(nohistory))

            # Filter orderednodes to just the nodes we want to serialize (it
            # currently also has the edge nodes' ancestors).
            orderednodes = list(
                filter(lambda node: node in nodes, orderednodes)
            )

            # Garbage collect old nodes:
            if self.garbagecollect:
                neworderednodes = []
                for node in orderednodes:
                    # If the node is old and is not in the keepset, we skip it,
                    # and mark it as garbage collected
                    if (filename, node) not in self.keepkeys and self.isold(
                        self.repo, filename, node
                    ):
                        entries[node].gced = True
                        continue
                    neworderednodes.append(node)
                orderednodes = neworderednodes

            # Compute delta bases for nodes:
            deltabases = {}
            nobase = set()
            referenced = set()
            nodes = set(nodes)
            processprogress = ui.makeprogress(
                _(b"processing nodes"), unit=b'nodes', total=len(orderednodes)
            )
            for i, node in enumerate(orderednodes):
                processprogress.update(i)
                # Find delta base
                # TODO: allow delta'ing against most recent descendant instead
                # of immediate child
                deltatuple = deltabases.get(node, None)
                if deltatuple is None:
                    deltabase, chainlen = nullid, 0
                    deltabases[node] = (nullid, 0)
                    nobase.add(node)
                else:
                    deltabase, chainlen = deltatuple
                    referenced.add(deltabase)

                # Use available ancestor information to inform our delta choices
                ancestorinfo = ancestors.get(node)
                if ancestorinfo:
                    p1, p2, linknode, copyfrom = ancestorinfo

                    # The presence of copyfrom means we're at a point where the
                    # file was copied from elsewhere. So don't attempt to do any
                    # deltas with the other file.
                    if copyfrom:
                        p1 = nullid

                    if chainlen < maxchainlen:
                        # Record this child as the delta base for its parents.
                        # This may be non-optimal, since the parents may have
                        # many children, and this will only choose the last one.
                        # TODO: record all children and try all deltas to find
                        # best
                        if p1 != nullid:
                            deltabases[p1] = (node, chainlen + 1)
                        if p2 != nullid:
                            deltabases[p2] = (node, chainlen + 1)

            # experimental config: repack.chainorphansbysize
            if ui.configbool(b'repack', b'chainorphansbysize'):
                orphans = nobase - referenced
                orderednodes = self._chainorphans(
                    ui, filename, orderednodes, orphans, deltabases
                )

            # Compute deltas and write to the pack
            for i, node in enumerate(orderednodes):
                deltabase, chainlen = deltabases[node]
                # Compute delta
                # TODO: Optimize the deltachain fetching. Since we're
                # iterating over the different versions of the file, we may
                # be fetching the same deltachain over and over again.
                if deltabase != nullid:
                    deltaentry = self.data.getdelta(filename, node)
                    delta, deltabasename, origdeltabase, meta = deltaentry
                    size = meta.get(constants.METAKEYSIZE)
                    if (
                        deltabasename != filename
                        or origdeltabase != deltabase
                        or size is None
                    ):
                        deltabasetext = self.data.get(filename, deltabase)
                        original = self.data.get(filename, node)
                        size = len(original)
                        delta = mdiff.textdiff(deltabasetext, original)
                else:
                    delta = self.data.get(filename, node)
                    size = len(delta)
                    meta = self.data.getmeta(filename, node)

                # TODO: don't use the delta if it's larger than the fulltext
                if constants.METAKEYSIZE not in meta:
                    meta[constants.METAKEYSIZE] = size
                target.add(filename, node, deltabase, delta, meta)

                entries[node].datarepacked = True

            processprogress.complete()
            count += 1

        repackprogress.complete()
        target.close(ledger=ledger)

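A toy sketch of the reverse-delta bookkeeping above, using an invented three-node linear history: walk the nodes children-first and record each child as the delta base of its parents, bounded by ``maxchainlen``:

# Invented linear history: child -> parent; NULLID marks the root.
NULLID = b'\x00' * 20
parents = {b'c': b'b', b'b': b'a', b'a': NULLID}
orderednodes = [b'c', b'b', b'a']   # children first
maxchainlen = 1000

deltabases = {}
for node in orderednodes:
    deltabase, chainlen = deltabases.get(node, (NULLID, 0))
    p1 = parents[node]
    if chainlen < maxchainlen and p1 != NULLID:
        # each parent will be stored as a delta against this child
        deltabases[p1] = (node, chainlen + 1)

assert deltabases[b'a'] == (b'b', 2)  # 'a' deltas against 'b', chain length 2
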
    def repackhistory(self, ledger, target):
        ui = self.repo.ui

        byfile = {}
        for entry in pycompat.itervalues(ledger.entries):
            if entry.historysource:
                byfile.setdefault(entry.filename, {})[entry.node] = entry

        progress = ui.makeprogress(
            _(b"repacking history"), unit=self.unit, total=len(byfile)
        )
        for filename, entries in sorted(pycompat.iteritems(byfile)):
            ancestors = {}
            nodes = list(node for node in entries)

            for node in nodes:
                if node in ancestors:
                    continue
                ancestors.update(
                    self.history.getancestors(filename, node, known=ancestors)
                )

            # Order the nodes children first
            orderednodes = reversed(self._toposort(ancestors))

            # Write to the pack
            dontprocess = set()
            for node in orderednodes:
                p1, p2, linknode, copyfrom = ancestors[node]

                # If the node is marked dontprocess, but it's also in the
                # explicit entries set, that means the node exists both in this
                # file and in another file that was copied to this file.
                # Usually this happens if the file was copied to another file,
                # then the copy was deleted, then reintroduced without copy
                # metadata. The original add and the new add have the same hash
                # since the content is identical and the parents are null.
                if node in dontprocess and node not in entries:
                    # If copyfrom == filename, it means the copy history
                    # went to some other file, then came back to this one, so
                    # we should continue processing it.
                    if p1 != nullid and copyfrom != filename:
                        dontprocess.add(p1)
                    if p2 != nullid:
                        dontprocess.add(p2)
                    continue

                if copyfrom:
                    dontprocess.add(p1)

                target.add(filename, node, p1, p2, linknode, copyfrom)

                if node in entries:
                    entries[node].historyrepacked = True

            progress.increment()

        progress.complete()
        target.close(ledger=ledger)

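A small sketch of the ``dontprocess`` rule above, with made-up nodes: once a node's history crosses a copy, its p1 chain belongs to the source file and is skipped:

# Invented ancestry: node -> (p1, p2, linknode, copyfrom)
NULLID = b'\x00' * 20
ancestors = {
    b'n2': (b'n1', NULLID, b'l2', b'other.txt'),  # copied from other.txt
    b'n1': (NULLID, NULLID, b'l1', None),
}
entries = {b'n2': object()}  # only n2 is being repacked explicitly

dontprocess = set()
written = []
for node in (b'n2', b'n1'):  # children first
    p1, p2, linknode, copyfrom = ancestors[node]
    if node in dontprocess and node not in entries:
        continue
    if copyfrom:
        dontprocess.add(p1)  # p1's chain lives in the copy source
    written.append(node)

assert written == [b'n2']  # n1 is skipped as part of other.txt's history
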
    def _toposort(self, ancestors):
        def parentfunc(node):
            p1, p2, linknode, copyfrom = ancestors[node]
            parents = []
            if p1 != nullid:
                parents.append(p1)
            if p2 != nullid:
                parents.append(p2)
            return parents

        sortednodes = shallowutil.sortnodes(ancestors.keys(), parentfunc)
        return sortednodes

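For reference, a minimal standalone topological sort driven by a parent function, equivalent in spirit to what ``shallowutil.sortnodes`` is used for here (parents sort before children); the graph is invented:

# Minimal DFS-based topological sort; graph data is invented.
NULLID = b'\x00' * 20
ancestry = {
    b'c': (b'b', NULLID),
    b'b': (b'a', NULLID),
    b'a': (NULLID, NULLID),
}

def parentfunc(node):
    return [p for p in ancestry[node] if p != NULLID]

def toposort(nodes, parentfunc):
    seen, result = set(), []

    def visit(node):
        if node in seen:
            return
        seen.add(node)
        for p in parentfunc(node):
            visit(p)
        result.append(node)  # parents land before their children

    for node in sorted(nodes):
        visit(node)
    return result

assert toposort(ancestry.keys(), parentfunc) == [b'a', b'b', b'c']
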

class repackledger(object):
    """Storage for all the bookkeeping that happens during a repack. It
    contains the list of revisions being repacked, what happened to each
    revision, and which source store contained which revision originally
    (for later cleanup).
    """

    def __init__(self):
        self.entries = {}
        self.sources = {}
        self.created = set()

    def markdataentry(self, source, filename, node):
        """Mark the given filename+node revision as having a data rev in the
        given source.
        """
        entry = self._getorcreateentry(filename, node)
        entry.datasource = True
        entries = self.sources.get(source)
        if not entries:
            entries = set()
            self.sources[source] = entries
        entries.add(entry)

    def markhistoryentry(self, source, filename, node):
        """Mark the given filename+node revision as having a history rev in
        the given source.
        """
        entry = self._getorcreateentry(filename, node)
        entry.historysource = True
        entries = self.sources.get(source)
        if not entries:
            entries = set()
            self.sources[source] = entries
        entries.add(entry)

    def _getorcreateentry(self, filename, node):
        key = (filename, node)
        value = self.entries.get(key)
        if not value:
            value = repackentry(filename, node)
            self.entries[key] = value

        return value

    def addcreated(self, value):
        self.created.add(value)


class repackentry(object):
    """Simple class representing a single revision entry in the repackledger.
    """

    __slots__ = (
        r'filename',
        r'node',
        r'datasource',
        r'historysource',
        r'datarepacked',
        r'historyrepacked',
        r'gced',
    )

    def __init__(self, filename, node):
        self.filename = filename
        self.node = node
        # If the revision has a data entry in the source
        self.datasource = False
        # If the revision has a history entry in the source
        self.historysource = False
        # If the revision's data entry was repacked into the repack target
        self.datarepacked = False
        # If the revision's history entry was repacked into the repack target
        self.historyrepacked = False
        # If garbage collected
        self.gced = False

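A hypothetical usage sketch of the ledger API above (``'oldpack'`` is a stand-in source name): both mark calls resolve to the same shared entry via ``_getorcreateentry``:

# Hypothetical driving of repackledger; sample names are invented.
ledger = repackledger()
ledger.markdataentry(b'oldpack', b'a.txt', b'\x01' * 20)
ledger.markhistoryentry(b'oldpack', b'a.txt', b'\x01' * 20)

entry = ledger.entries[(b'a.txt', b'\x01' * 20)]
assert entry.datasource and entry.historysource
# the same entry object was added to the source's set twice, so it's one item
assert len(ledger.sources[b'oldpack']) == 1
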

def repacklockvfs(repo):
    if util.safehasattr(repo, 'name'):
        # Lock in the shared cache so repacks across multiple copies of the
        # same repo are coordinated.
        sharedcachepath = shallowutil.getcachepackpath(
            repo, constants.FILEPACK_CATEGORY
        )
        return vfs.vfs(sharedcachepath)
    else:
        return repo.svfs
@@ -1,358 +1,352 @@
# shallowrepo.py - shallow repository that uses remote filelogs
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
from __future__ import absolute_import

import os

from mercurial.i18n import _
from mercurial.node import hex, nullid, nullrev
from mercurial import (
    encoding,
    error,
    localrepo,
    match,
    pycompat,
    scmutil,
    sparse,
    util,
)
from mercurial.utils import procutil
from . import (
    connectionpool,
    constants,
    contentstore,
    datapack,
    fileserverclient,
    historypack,
    metadatastore,
    remotefilectx,
    remotefilelog,
    shallowutil,
)

# These make*stores functions are global so that other extensions can replace
# them.
def makelocalstores(repo):
    """In-repo stores, like .hg/store/data; cannot be discarded."""
    localpath = os.path.join(repo.svfs.vfs.base, b'data')
    if not os.path.exists(localpath):
        os.makedirs(localpath)

    # Instantiate local data stores
    localcontent = contentstore.remotefilelogcontentstore(
        repo, localpath, repo.name, shared=False
    )
    localmetadata = metadatastore.remotefilelogmetadatastore(
        repo, localpath, repo.name, shared=False
    )
    return localcontent, localmetadata


def makecachestores(repo):
    """Typically machine-wide, cache of remote data; can be discarded."""
    # Instantiate shared cache stores
    cachepath = shallowutil.getcachepath(repo.ui)
    cachecontent = contentstore.remotefilelogcontentstore(
        repo, cachepath, repo.name, shared=True
    )
    cachemetadata = metadatastore.remotefilelogmetadatastore(
        repo, cachepath, repo.name, shared=True
    )

    repo.sharedstore = cachecontent
    repo.shareddatastores.append(cachecontent)
    repo.sharedhistorystores.append(cachemetadata)

    return cachecontent, cachemetadata


def makeremotestores(repo, cachecontent, cachemetadata):
    """These stores fetch data from a remote server."""
    # Instantiate remote stores
    repo.fileservice = fileserverclient.fileserverclient(repo)
    remotecontent = contentstore.remotecontentstore(
        repo.ui, repo.fileservice, cachecontent
    )
    remotemetadata = metadatastore.remotemetadatastore(
        repo.ui, repo.fileservice, cachemetadata
    )
    return remotecontent, remotemetadata


def makepackstores(repo):
    """Packs are more efficient (to read from) cache stores."""
    # Instantiate pack stores
    packpath = shallowutil.getcachepackpath(repo, constants.FILEPACK_CATEGORY)
    packcontentstore = datapack.datapackstore(repo.ui, packpath)
    packmetadatastore = historypack.historypackstore(repo.ui, packpath)

    repo.shareddatastores.append(packcontentstore)
    repo.sharedhistorystores.append(packmetadatastore)
    shallowutil.reportpackmetrics(
        repo.ui, b'filestore', packcontentstore, packmetadatastore
    )
    return packcontentstore, packmetadatastore


def makeunionstores(repo):
    """Union stores iterate the other stores and return the first result."""
    repo.shareddatastores = []
    repo.sharedhistorystores = []

    packcontentstore, packmetadatastore = makepackstores(repo)
    cachecontent, cachemetadata = makecachestores(repo)
    localcontent, localmetadata = makelocalstores(repo)
    remotecontent, remotemetadata = makeremotestores(
        repo, cachecontent, cachemetadata
    )

    # Instantiate union stores
    repo.contentstore = contentstore.unioncontentstore(
        packcontentstore,
        cachecontent,
        localcontent,
        remotecontent,
        writestore=localcontent,
    )
    repo.metadatastore = metadatastore.unionmetadatastore(
        packmetadatastore,
        cachemetadata,
        localmetadata,
        remotemetadata,
        writestore=localmetadata,
    )

    fileservicedatawrite = cachecontent
    fileservicehistorywrite = cachemetadata
    repo.fileservice.setstore(
        repo.contentstore,
        repo.metadatastore,
        fileservicedatawrite,
        fileservicehistorywrite,
    )
    shallowutil.reportpackmetrics(
        repo.ui, b'filestore', packcontentstore, packmetadatastore
    )

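The union stores built above boil down to first-hit lookup across ordered backends; a generic sketch with plain dicts standing in for the real store API:

# Generic first-hit union over ordered stores; the dict-backed stores are
# purely for illustration, not the real contentstore interface.
class unionstore(object):
    def __init__(self, *stores):
        self.stores = stores  # fastest (packs) first, remote last

    def get(self, name, node):
        for store in self.stores:
            try:
                return store[(name, node)]
            except KeyError:
                continue
        raise KeyError((name, node))

local = {(b'a.txt', b'n1'): b'local data'}
remote = {(b'a.txt', b'n2'): b'remote data'}
union = unionstore(local, remote)
assert union.get(b'a.txt', b'n2') == b'remote data'
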
def wraprepo(repo):
    class shallowrepository(repo.__class__):
        @util.propertycache
        def name(self):
            return self.ui.config(b'remotefilelog', b'reponame')

        @util.propertycache
        def fallbackpath(self):
            path = repo.ui.config(
                b"remotefilelog",
                b"fallbackpath",
                repo.ui.config(b'paths', b'default'),
            )
            if not path:
                raise error.Abort(
                    b"no remotefilelog server "
                    b"configured - is your .hg/hgrc trusted?"
                )

            return path

        def maybesparsematch(self, *revs, **kwargs):
            '''
            A wrapper that allows the remotefilelog to invoke sparsematch() if
            this is a sparse repository, or returns None if this is not a
            sparse repository.
            '''
            if revs:
                ret = sparse.matcher(repo, revs=revs)
            else:
                ret = sparse.matcher(repo)

            if ret.always():
                return None
            return ret

        def file(self, f):
            if f[0] == b'/':
                f = f[1:]

            if self.shallowmatch(f):
                return remotefilelog.remotefilelog(self.svfs, f, self)
            else:
                return super(shallowrepository, self).file(f)

        def filectx(self, path, *args, **kwargs):
            if self.shallowmatch(path):
                return remotefilectx.remotefilectx(self, path, *args, **kwargs)
            else:
                return super(shallowrepository, self).filectx(
                    path, *args, **kwargs
                )

        @localrepo.unfilteredmethod
        def commitctx(self, ctx, error=False, origctx=None):
            """Add a new revision to the current repository.
            Revision information is passed via the context argument.
            """

            # some contexts already have manifest nodes, they don't need any
            # prefetching (for example if we're just editing a commit message
            # we can reuse the manifest)
            if not ctx.manifestnode():
                # prefetch files that will likely be compared
                m1 = ctx.p1().manifest()
                files = []
                for f in ctx.modified() + ctx.added():
                    fparent1 = m1.get(f, nullid)
                    if fparent1 != nullid:
                        files.append((f, hex(fparent1)))
                self.fileservice.prefetch(files)
            return super(shallowrepository, self).commitctx(
                ctx, error=error, origctx=origctx
            )

        def backgroundprefetch(
-            self,
-            revs,
-            base=None,
-            repack=False,
-            pats=None,
-            opts=None,
-            ensurestart=False,
+            self, revs, base=None, repack=False, pats=None, opts=None
        ):
            """Runs prefetch in background with optional repack
            """
            cmd = [procutil.hgexecutable(), b'-R', repo.origroot, b'prefetch']
            if repack:
                cmd.append(b'--repack')
            if revs:
                cmd += [b'-r', revs]
            # We know this command will find a binary, so don't block
            # on it starting.
            kwargs = {}
            if repo.ui.configbool(b'devel', b'remotefilelog.bg-wait'):
                kwargs['record_wait'] = repo.ui.atexit

            procutil.runbgcommand(
-                cmd, encoding.environ, ensurestart=ensurestart, **kwargs
+                cmd, encoding.environ, ensurestart=False, **kwargs
            )

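This hunk is the core of the change: ``ensurestart`` is dropped from the signature and ``runbgcommand`` is always called with ``ensurestart=False``, so callers never block while the background ``hg prefetch`` starts; determinism in tests comes from ``devel.remotefilelog.bg-wait`` recording the process with ``ui.atexit`` instead. A rough stdlib analogue of that fire-and-forget-with-optional-wait shape (the command and helper name are invented):

# Rough standard-library analogue of a fire-and-forget background command
# with an optional test-only hook that waits at interpreter exit.
import atexit
import subprocess

def runbg(cmd, wait_at_exit=False):
    proc = subprocess.Popen(cmd)  # returns immediately; no start handshake
    if wait_at_exit:
        # deterministic tests: block on completion when the caller exits
        atexit.register(proc.wait)
    return proc

runbg(['sleep', '0'], wait_at_exit=True)  # placeholder POSIX command
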
        def prefetch(self, revs, base=None, pats=None, opts=None):
            """Prefetches all the necessary file revisions for the given revs.
            Optionally runs repack in background.
            """
            with repo._lock(
                repo.svfs,
                b'prefetchlock',
                True,
                None,
                None,
                _(b'prefetching in %s') % repo.origroot,
            ):
                self._prefetch(revs, base, pats, opts)

        def _prefetch(self, revs, base=None, pats=None, opts=None):
            fallbackpath = self.fallbackpath
            if fallbackpath:
                # If we know a rev is on the server, we should fetch the server
                # version of those files, since our local file versions might
                # become obsolete if the local commits are stripped.
                localrevs = repo.revs(b'outgoing(%s)', fallbackpath)
                if base is not None and base != nullrev:
                    serverbase = list(
                        repo.revs(
                            b'first(reverse(::%s) - %ld)', base, localrevs
                        )
                    )
                    if serverbase:
                        base = serverbase[0]
            else:
                localrevs = repo

            mfl = repo.manifestlog
            mfrevlog = mfl.getstorage(b'')
            if base is not None:
                mfdict = mfl[repo[base].manifestnode()].read()
                skip = set(pycompat.iteritems(mfdict))
            else:
                skip = set()

            # Copy the skip set to start large and avoid constant resizing,
            # and since it's likely to be very similar to the prefetch set.
            files = skip.copy()
            serverfiles = skip.copy()
            visited = set()
            visited.add(nullrev)
            revcount = len(revs)
            progress = self.ui.makeprogress(_(b'prefetching'), total=revcount)
            progress.update(0)
            for rev in sorted(revs):
                ctx = repo[rev]
                if pats:
                    m = scmutil.match(ctx, pats, opts)
                sparsematch = repo.maybesparsematch(rev)

                mfnode = ctx.manifestnode()
                mfrev = mfrevlog.rev(mfnode)

                # Decompressing manifests is expensive.
                # When possible, only read the deltas.
                p1, p2 = mfrevlog.parentrevs(mfrev)
                if p1 in visited and p2 in visited:
                    mfdict = mfl[mfnode].readfast()
                else:
                    mfdict = mfl[mfnode].read()

                diff = pycompat.iteritems(mfdict)
                if pats:
                    diff = (pf for pf in diff if m(pf[0]))
                if sparsematch:
                    diff = (pf for pf in diff if sparsematch(pf[0]))
                if rev not in localrevs:
                    serverfiles.update(diff)
                else:
                    files.update(diff)

                visited.add(mfrev)
                progress.increment()

            files.difference_update(skip)
            serverfiles.difference_update(skip)
            progress.complete()

            # Fetch files known to be on the server
            if serverfiles:
                results = [(path, hex(fnode)) for (path, fnode) in serverfiles]
                repo.fileservice.prefetch(results, force=True)

            # Fetch files that may or may not be on the server
            if files:
                results = [(path, hex(fnode)) for (path, fnode) in files]
                repo.fileservice.prefetch(results)

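A compact sketch of the skip-set arithmetic ``_prefetch`` relies on, with tiny invented manifests: seed the candidate set with the base manifest, accumulate entries per rev, then subtract the skip set so only files the base lacks are fetched:

# Invented manifests: filename -> filenode, keyed by revision.
manifests = {
    0: {b'x': b'n1', b'z': b'n2'},                   # base
    1: {b'x': b'n3', b'y': b'n4', b'z': b'n2'},
}
skip = set(manifests[0].items())

files = skip.copy()   # start large to avoid constant resizing
for rev in (1,):
    files.update(manifests[rev].items())
files.difference_update(skip)

assert files == {(b'x', b'n3'), (b'y', b'n4')}  # only what the base lacks
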
        def close(self):
            super(shallowrepository, self).close()
            self.connectionpool.close()

    repo.__class__ = shallowrepository

    repo.shallowmatch = match.always()

    makeunionstores(repo)

    repo.includepattern = repo.ui.configlist(
        b"remotefilelog", b"includepattern", None
    )
    repo.excludepattern = repo.ui.configlist(
        b"remotefilelog", b"excludepattern", None
    )
    if not util.safehasattr(repo, 'connectionpool'):
        repo.connectionpool = connectionpool.connectionpool(repo)

    if repo.includepattern or repo.excludepattern:
        repo.shallowmatch = match.match(
            repo.root, b'', None, repo.includepattern, repo.excludepattern
        )
@@ -1,348 +1,345 @@
#require no-windows

  $ . "$TESTDIR/remotefilelog-library.sh"
-# devel.remotefilelog.ensurestart: reduce race condition with
-# waiton{repack/prefetch}
  $ cat >> $HGRCPATH <<EOF
  > [devel]
-  > remotefilelog.ensurestart=True
  > remotefilelog.bg-wait=True
  > EOF
11
8
12 $ hg init master
9 $ hg init master
13 $ cd master
10 $ cd master
14 $ cat >> .hg/hgrc <<EOF
11 $ cat >> .hg/hgrc <<EOF
15 > [remotefilelog]
12 > [remotefilelog]
16 > server=True
13 > server=True
17 > EOF
14 > EOF
18 $ echo x > x
15 $ echo x > x
19 $ echo z > z
16 $ echo z > z
20 $ hg commit -qAm x
17 $ hg commit -qAm x
21 $ echo x2 > x
18 $ echo x2 > x
22 $ echo y > y
19 $ echo y > y
23 $ hg commit -qAm y
20 $ hg commit -qAm y
24 $ echo w > w
21 $ echo w > w
25 $ rm z
22 $ rm z
26 $ hg commit -qAm w
23 $ hg commit -qAm w
27 $ hg bookmark foo
24 $ hg bookmark foo
28
25
29 $ cd ..
26 $ cd ..
30
27
31 # clone the repo
28 # clone the repo
32
29
33 $ hgcloneshallow ssh://user@dummy/master shallow --noupdate
30 $ hgcloneshallow ssh://user@dummy/master shallow --noupdate
34 streaming all changes
31 streaming all changes
35 2 files to transfer, 776 bytes of data
32 2 files to transfer, 776 bytes of data
36 transferred 776 bytes in * seconds (*/sec) (glob)
33 transferred 776 bytes in * seconds (*/sec) (glob)
37 searching for changes
34 searching for changes
38 no changes found
35 no changes found
39
36
40 # Set the prefetchdays config to zero so that all commits are prefetched
37 # Set the prefetchdays config to zero so that all commits are prefetched
41 # no matter what their creation date is. Also set prefetchdelay config
38 # no matter what their creation date is. Also set prefetchdelay config
42 # to zero so that there is no delay between prefetches.
39 # to zero so that there is no delay between prefetches.
43 $ cd shallow
40 $ cd shallow
44 $ cat >> .hg/hgrc <<EOF
41 $ cat >> .hg/hgrc <<EOF
45 > [remotefilelog]
42 > [remotefilelog]
46 > prefetchdays=0
43 > prefetchdays=0
47 > prefetchdelay=0
44 > prefetchdelay=0
48 > EOF
45 > EOF
49 $ cd ..
46 $ cd ..
50
47
51 # prefetch a revision
48 # prefetch a revision
52 $ cd shallow
49 $ cd shallow
53
50
54 $ hg prefetch -r 0
51 $ hg prefetch -r 0
55 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
52 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
56
53
57 $ hg cat -r 0 x
54 $ hg cat -r 0 x
58 x
55 x
59
56
60 # background prefetch on pull when configured
57 # background prefetch on pull when configured
61
58
62 $ cat >> .hg/hgrc <<EOF
59 $ cat >> .hg/hgrc <<EOF
63 > [remotefilelog]
60 > [remotefilelog]
64 > pullprefetch=bookmark()
61 > pullprefetch=bookmark()
65 > backgroundprefetch=True
62 > backgroundprefetch=True
66 > EOF
63 > EOF
67 $ hg strip tip
64 $ hg strip tip
68 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/6b4b6f66ef8c-b4b8bdaf-backup.hg (glob)
65 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/6b4b6f66ef8c-b4b8bdaf-backup.hg (glob)
69
66
70 $ clearcache
67 $ clearcache
71 $ hg pull
68 $ hg pull
72 pulling from ssh://user@dummy/master
69 pulling from ssh://user@dummy/master
73 searching for changes
70 searching for changes
74 adding changesets
71 adding changesets
75 adding manifests
72 adding manifests
76 adding file changes
73 adding file changes
77 updating bookmark foo
74 updating bookmark foo
78 added 1 changesets with 0 changes to 0 files
75 added 1 changesets with 0 changes to 0 files
79 new changesets 6b4b6f66ef8c
76 new changesets 6b4b6f66ef8c
80 (run 'hg update' to get a working copy)
77 (run 'hg update' to get a working copy)
81 prefetching file contents
78 prefetching file contents
82 $ find $CACHEDIR -type f | sort
79 $ find $CACHEDIR -type f | sort
83 $TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072/ef95c5376f34698742fe34f315fd82136f8f68c0
80 $TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072/ef95c5376f34698742fe34f315fd82136f8f68c0
84 $TESTTMP/hgcache/master/95/cb0bfd2977c761298d9624e4b4d4c72a39974a/076f5e2225b3ff0400b98c92aa6cdf403ee24cca
81 $TESTTMP/hgcache/master/95/cb0bfd2977c761298d9624e4b4d4c72a39974a/076f5e2225b3ff0400b98c92aa6cdf403ee24cca
85 $TESTTMP/hgcache/master/af/f024fe4ab0fece4091de044c58c9ae4233383a/bb6ccd5dceaa5e9dc220e0dad65e051b94f69a2c
82 $TESTTMP/hgcache/master/af/f024fe4ab0fece4091de044c58c9ae4233383a/bb6ccd5dceaa5e9dc220e0dad65e051b94f69a2c
86 $TESTTMP/hgcache/repos
83 $TESTTMP/hgcache/repos
87
84
88 # background prefetch with repack on pull when configured
85 # background prefetch with repack on pull when configured
89
86
90 $ cat >> .hg/hgrc <<EOF
87 $ cat >> .hg/hgrc <<EOF
91 > [remotefilelog]
88 > [remotefilelog]
92 > backgroundrepack=True
89 > backgroundrepack=True
93 > EOF
90 > EOF
94 $ hg strip tip
91 $ hg strip tip
95 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/6b4b6f66ef8c-b4b8bdaf-backup.hg (glob)
92 saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/6b4b6f66ef8c-b4b8bdaf-backup.hg (glob)
96
93
97 $ clearcache
94 $ clearcache
98 $ hg pull
95 $ hg pull
99 pulling from ssh://user@dummy/master
96 pulling from ssh://user@dummy/master
100 searching for changes
97 searching for changes
101 adding changesets
98 adding changesets
102 adding manifests
99 adding manifests
103 adding file changes
100 adding file changes
104 updating bookmark foo
101 updating bookmark foo
105 added 1 changesets with 0 changes to 0 files
102 added 1 changesets with 0 changes to 0 files
106 new changesets 6b4b6f66ef8c
103 new changesets 6b4b6f66ef8c
107 (run 'hg update' to get a working copy)
104 (run 'hg update' to get a working copy)
108 prefetching file contents
105 prefetching file contents
109 $ find $CACHEDIR -type f | sort
106 $ find $CACHEDIR -type f | sort
110 $TESTTMP/hgcache/master/packs/6e8633deba6e544e5f8edbd7b996d6e31a2c42ae.histidx
107 $TESTTMP/hgcache/master/packs/6e8633deba6e544e5f8edbd7b996d6e31a2c42ae.histidx
111 $TESTTMP/hgcache/master/packs/6e8633deba6e544e5f8edbd7b996d6e31a2c42ae.histpack
108 $TESTTMP/hgcache/master/packs/6e8633deba6e544e5f8edbd7b996d6e31a2c42ae.histpack
112 $TESTTMP/hgcache/master/packs/8ce5ab3745465ab83bba30a7b9c295e0c8404652.dataidx
109 $TESTTMP/hgcache/master/packs/8ce5ab3745465ab83bba30a7b9c295e0c8404652.dataidx
113 $TESTTMP/hgcache/master/packs/8ce5ab3745465ab83bba30a7b9c295e0c8404652.datapack
110 $TESTTMP/hgcache/master/packs/8ce5ab3745465ab83bba30a7b9c295e0c8404652.datapack
114 $TESTTMP/hgcache/repos
111 $TESTTMP/hgcache/repos
115
112
116 # background prefetch with repack on update when wcprevset configured
113 # background prefetch with repack on update when wcprevset configured
117
114
118 $ clearcache
115 $ clearcache
119 $ hg up -r 0
116 $ hg up -r 0
120 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
117 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
121 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
118 2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
122 $ find $CACHEDIR -type f | sort
119 $ find $CACHEDIR -type f | sort
123 $TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072/1406e74118627694268417491f018a4a883152f0
120 $TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072/1406e74118627694268417491f018a4a883152f0
124 $TESTTMP/hgcache/master/39/5df8f7c51f007019cb30201c49e884b46b92fa/69a1b67522704ec122181c0890bd16e9d3e7516a
121 $TESTTMP/hgcache/master/39/5df8f7c51f007019cb30201c49e884b46b92fa/69a1b67522704ec122181c0890bd16e9d3e7516a
125 $TESTTMP/hgcache/repos
122 $TESTTMP/hgcache/repos
126
123
127 $ hg up -r 1
124 $ hg up -r 1
128 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
125 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
129 2 files fetched over 2 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
126 2 files fetched over 2 fetches - (2 misses, 0.00% hit ratio) over *s (glob)
130
127
131 $ cat >> .hg/hgrc <<EOF
128 $ cat >> .hg/hgrc <<EOF
132 > [remotefilelog]
129 > [remotefilelog]
133 > bgprefetchrevs=.::
130 > bgprefetchrevs=.::
134 > EOF
131 > EOF
135
132
136 $ clearcache
133 $ clearcache
137 $ hg up -r 0
134 $ hg up -r 0
138 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
135 1 files updated, 0 files merged, 1 files removed, 0 files unresolved
139 * files fetched over * fetches - (* misses, 0.00% hit ratio) over *s (glob)
136 * files fetched over * fetches - (* misses, 0.00% hit ratio) over *s (glob)
140 $ find $CACHEDIR -type f | sort
137 $ find $CACHEDIR -type f | sort
141 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
138 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
142 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
139 $TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
143 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.dataidx
140 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.dataidx
144 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.datapack
141 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.datapack
145 $TESTTMP/hgcache/repos
142 $TESTTMP/hgcache/repos
146
143
147 # Ensure that file 'w' was prefetched - it was not part of the update operation and therefore
144 # Ensure that file 'w' was prefetched - it was not part of the update operation and therefore
148 # could only be downloaded by the background prefetch
145 # could only be downloaded by the background prefetch
149
146
150 $ hg debugdatapack `ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
147 $ hg debugdatapack `ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
151 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407:
148 $TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407:
152 w:
149 w:
153 Node Delta Base Delta Length Blob Size
150 Node Delta Base Delta Length Blob Size
154 bb6ccd5dceaa 000000000000 2 2
151 bb6ccd5dceaa 000000000000 2 2
155
152
156 Total: 2 2 (0.0% bigger)
153 Total: 2 2 (0.0% bigger)
157 x:
154 x:
158 Node Delta Base Delta Length Blob Size
155 Node Delta Base Delta Length Blob Size
159 ef95c5376f34 000000000000 3 3
156 ef95c5376f34 000000000000 3 3
160 1406e7411862 ef95c5376f34 14 2
157 1406e7411862 ef95c5376f34 14 2
161
158
162 Total: 17 5 (240.0% bigger)
159 Total: 17 5 (240.0% bigger)
163 y:
160 y:
164 Node Delta Base Delta Length Blob Size
161 Node Delta Base Delta Length Blob Size
165 076f5e2225b3 000000000000 2 2
162 076f5e2225b3 000000000000 2 2
166
163
167 Total: 2 2 (0.0% bigger)
164 Total: 2 2 (0.0% bigger)
168 z:
165 z:
169 Node Delta Base Delta Length Blob Size
166 Node Delta Base Delta Length Blob Size
170 69a1b6752270 000000000000 2 2
167 69a1b6752270 000000000000 2 2
171
168
172 Total: 2 2 (0.0% bigger)
169 Total: 2 2 (0.0% bigger)
173
170
174 # background prefetch with repack on commit when wcprevset configured
171 # background prefetch with repack on commit when wcprevset configured
175
172
176 $ cat >> .hg/hgrc <<EOF
173 $ cat >> .hg/hgrc <<EOF
177 > [remotefilelog]
174 > [remotefilelog]
178 > bgprefetchrevs=0::
175 > bgprefetchrevs=0::
179 > EOF
176 > EOF
180
177
$ clearcache
$ find $CACHEDIR -type f | sort
$ echo b > b
$ hg commit -qAm b
* files fetched over 1 fetches - (* misses, 0.00% hit ratio) over *s (glob)
$ hg bookmark temporary
$ find $CACHEDIR -type f | sort
$TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
$TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
$TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.dataidx
$TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.datapack
$TESTTMP/hgcache/repos

# Ensure that file 'w' was prefetched - it was not part of the commit operation and therefore
# could only be downloaded by the background prefetch

$ hg debugdatapack `ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
$TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407:
w:
Node Delta Base Delta Length Blob Size
bb6ccd5dceaa 000000000000 2 2

Total: 2 2 (0.0% bigger)
x:
Node Delta Base Delta Length Blob Size
ef95c5376f34 000000000000 3 3
1406e7411862 ef95c5376f34 14 2

Total: 17 5 (240.0% bigger)
y:
Node Delta Base Delta Length Blob Size
076f5e2225b3 000000000000 2 2

Total: 2 2 (0.0% bigger)
z:
Node Delta Base Delta Length Blob Size
69a1b6752270 000000000000 2 2

Total: 2 2 (0.0% bigger)

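Note: the backquoted `ls -ct ... | head -n 1` above picks the most recently
created datapack (-t sorts newest first, -c uses the change time), so
debugdatapack inspects exactly the pack the background job just wrote. An
equivalent two-step sketch, for illustration only:

    $ newest=`ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
    $ hg debugdatapack "$newest"
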
# background prefetch with repack on rebase when bgprefetchrevs is configured

$ hg up -r 2
3 files updated, 0 files merged, 2 files removed, 0 files unresolved
(leaving bookmark temporary)
$ clearcache
$ find $CACHEDIR -type f | sort
$ hg rebase -s temporary -d foo
rebasing 3:d9cf06e3b5b6 "b" (temporary tip)
saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/d9cf06e3b5b6-e5c3dc63-rebase.hg
3 files fetched over 1 fetches - (3 misses, 0.00% hit ratio) over *s (glob)

# Ensure that file 'y' was prefetched - it was not part of the rebase operation and therefore
# could only be downloaded by the background prefetch

$ hg debugdatapack `ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
$TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407:
w:
Node Delta Base Delta Length Blob Size
bb6ccd5dceaa 000000000000 2 2

Total: 2 2 (0.0% bigger)
x:
Node Delta Base Delta Length Blob Size
ef95c5376f34 000000000000 3 3
1406e7411862 ef95c5376f34 14 2

Total: 17 5 (240.0% bigger)
y:
Node Delta Base Delta Length Blob Size
076f5e2225b3 000000000000 2 2

Total: 2 2 (0.0% bigger)
z:
Node Delta Base Delta Length Blob Size
69a1b6752270 000000000000 2 2

Total: 2 2 (0.0% bigger)

# Check that foreground prefetch with no arguments blocks until background prefetches finish

$ hg up -r 3
1 files updated, 0 files merged, 0 files removed, 0 files unresolved
$ clearcache
$ hg prefetch --repack
waiting for lock on prefetching in $TESTTMP/shallow held by process * on host * (glob) (?)
got lock after * seconds (glob) (?)
(running background incremental repack)
* files fetched over 1 fetches - (* misses, 0.00% hit ratio) over *s (glob) (?)

$ find $CACHEDIR -type f | sort
$TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
$TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
$TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.dataidx
$TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.datapack
$TESTTMP/hgcache/repos

# Ensure that files were prefetched
$ hg debugdatapack `ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
$TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407:
w:
Node Delta Base Delta Length Blob Size
bb6ccd5dceaa 000000000000 2 2

Total: 2 2 (0.0% bigger)
x:
Node Delta Base Delta Length Blob Size
ef95c5376f34 000000000000 3 3
1406e7411862 ef95c5376f34 14 2

Total: 17 5 (240.0% bigger)
y:
Node Delta Base Delta Length Blob Size
076f5e2225b3 000000000000 2 2

Total: 2 2 (0.0% bigger)
z:
Node Delta Base Delta Length Blob Size
69a1b6752270 000000000000 2 2

Total: 2 2 (0.0% bigger)

# Check that foreground prefetch fetches revs specified by '. + draft() + bgprefetchrevs + pullprefetch'

$ clearcache
$ hg prefetch --repack
waiting for lock on prefetching in $TESTTMP/shallow held by process * on host * (glob) (?)
got lock after * seconds (glob) (?)
(running background incremental repack)
* files fetched over 1 fetches - (* misses, 0.00% hit ratio) over *s (glob) (?)

$ find $CACHEDIR -type f | sort
$TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histidx
$TESTTMP/hgcache/master/packs/8f1443d44e57fec96f72fb2412e01d2818767ef2.histpack
$TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.dataidx
$TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407.datapack
$TESTTMP/hgcache/repos

# Ensure that files were prefetched
$ hg debugdatapack `ls -ct $TESTTMP/hgcache/master/packs/*.datapack | head -n 1`
$TESTTMP/hgcache/master/packs/f4d50848e0b465e9bfd2875f213044c06cfd7407:
w:
Node Delta Base Delta Length Blob Size
bb6ccd5dceaa 000000000000 2 2

Total: 2 2 (0.0% bigger)
x:
Node Delta Base Delta Length Blob Size
ef95c5376f34 000000000000 3 3
1406e7411862 ef95c5376f34 14 2

Total: 17 5 (240.0% bigger)
y:
Node Delta Base Delta Length Blob Size
076f5e2225b3 000000000000 2 2

Total: 2 2 (0.0% bigger)
z:
Node Delta Base Delta Length Blob Size
69a1b6752270 000000000000 2 2

Total: 2 2 (0.0% bigger)

# Test that if data was prefetched and repacked we don't need to prefetch it again
# It ensures that Mercurial looks not only in loose files but in packs as well

$ hg prefetch --repack
(running background incremental repack)

@@ -1,376 +1,374 @@
#require no-windows

$ . "$TESTDIR/remotefilelog-library.sh"
- # devel.remotefilelog.ensurestart: reduce race condition with
- # waiton{repack/prefetch}
+
$ cat >> $HGRCPATH <<EOF
> [remotefilelog]
> fastdatapack=True
> [devel]
- > remotefilelog.ensurestart=True
> remotefilelog.bg-wait=True
> EOF

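Note: with the ensurestart knob removed above, these tests rely solely on
devel.remotefilelog.bg-wait=True (kept in the config block) to have commands
wait for the background prefetch/repack they spawn - the race that the
removed comment's waiton{repack/prefetch} note warned about. Presumably the
same behaviour can be requested for a single invocation, e.g.:

    $ hg repack --background --config devel.remotefilelog.bg-wait=True
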
$ hg init master
$ cd master
$ cat >> .hg/hgrc <<EOF
> [remotefilelog]
> server=True
> serverexpiration=-1
> EOF
$ echo x > x
$ hg commit -qAm x
$ echo x >> x
$ hg commit -qAm x2
$ cd ..

$ hgcloneshallow ssh://user@dummy/master shallow -q
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)

# Set the prefetchdays config to zero so that all commits are prefetched
# no matter what their creation date is.
$ cd shallow
$ cat >> .hg/hgrc <<EOF
> [remotefilelog]
> prefetchdays=0
> EOF
$ cd ..

# Test that repack cleans up the old files and creates new packs

$ cd shallow
$ find $CACHEDIR | sort
$TESTTMP/hgcache
$TESTTMP/hgcache/master
$TESTTMP/hgcache/master/11
$TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072
$TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072/aee31534993a501858fb6dd96a065671922e7d51
$TESTTMP/hgcache/repos

$ hg repack

$ find $CACHEDIR | sort
$TESTTMP/hgcache
$TESTTMP/hgcache/master
$TESTTMP/hgcache/master/packs
$TESTTMP/hgcache/master/packs/1e91b207daf5d7b48f1be9c587d6b5ae654ce78c.histidx
$TESTTMP/hgcache/master/packs/1e91b207daf5d7b48f1be9c587d6b5ae654ce78c.histpack
$TESTTMP/hgcache/master/packs/b1e0cfc7f345e408a7825e3081501959488d59ce.dataidx
$TESTTMP/hgcache/master/packs/b1e0cfc7f345e408a7825e3081501959488d59ce.datapack
$TESTTMP/hgcache/repos

# Test that the packs are readonly
$ ls_l $CACHEDIR/master/packs
-r--r--r-- 1145 1e91b207daf5d7b48f1be9c587d6b5ae654ce78c.histidx
-r--r--r-- 172 1e91b207daf5d7b48f1be9c587d6b5ae654ce78c.histpack
-r--r--r-- 1074 b1e0cfc7f345e408a7825e3081501959488d59ce.dataidx
-r--r--r-- 72 b1e0cfc7f345e408a7825e3081501959488d59ce.datapack

# Test that the data in the new packs is accessible
$ hg cat -r . x
x
x

# Test that adding new data and repacking it results in the loose data and the
# old packs being combined.

$ cd ../master
$ echo x >> x
$ hg commit -m x3
$ cd ../shallow
$ hg pull -q
$ hg up -q tip
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)

$ find $CACHEDIR -type f | sort
$TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072/d4a3ed9310e5bd9887e3bf779da5077efab28216
$TESTTMP/hgcache/master/packs/1e91b207daf5d7b48f1be9c587d6b5ae654ce78c.histidx
$TESTTMP/hgcache/master/packs/1e91b207daf5d7b48f1be9c587d6b5ae654ce78c.histpack
$TESTTMP/hgcache/master/packs/b1e0cfc7f345e408a7825e3081501959488d59ce.dataidx
$TESTTMP/hgcache/master/packs/b1e0cfc7f345e408a7825e3081501959488d59ce.datapack
$TESTTMP/hgcache/repos

$ hg repack --traceback

$ find $CACHEDIR -type f | sort
$TESTTMP/hgcache/master/packs/78840d69389c7404327f7477e3931c89945c37d1.dataidx
$TESTTMP/hgcache/master/packs/78840d69389c7404327f7477e3931c89945c37d1.datapack
$TESTTMP/hgcache/master/packs/8abe7889aae389337d12ebe6085d4ee13854c7c9.histidx
$TESTTMP/hgcache/master/packs/8abe7889aae389337d12ebe6085d4ee13854c7c9.histpack
$TESTTMP/hgcache/repos

# Verify all the file data is still available
$ hg cat -r . x
x
x
x
$ hg cat -r '.^' x
x
x

# Test that repacking again without new data does not delete the pack files
# and does not change the pack names
$ hg repack
$ find $CACHEDIR -type f | sort
$TESTTMP/hgcache/master/packs/78840d69389c7404327f7477e3931c89945c37d1.dataidx
$TESTTMP/hgcache/master/packs/78840d69389c7404327f7477e3931c89945c37d1.datapack
$TESTTMP/hgcache/master/packs/8abe7889aae389337d12ebe6085d4ee13854c7c9.histidx
$TESTTMP/hgcache/master/packs/8abe7889aae389337d12ebe6085d4ee13854c7c9.histpack
$TESTTMP/hgcache/repos

# Run two repacks at once
$ hg repack --config "hooks.prerepack=sleep 3" &
$ sleep 1
$ hg repack
skipping repack - another repack is already running
$ hg debugwaitonrepack >/dev/null 2>&1
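Note: this check leans on repack taking a lock: the prerepack hook stalls
the first repack long enough that the second one reliably finds the lock
held and bails out. The same pattern, sketched with a longer stall (values
illustrative only):

    $ hg repack --config 'hooks.prerepack=sleep 5' &
    $ hg repack                        # should print the "skipping repack" line
    $ hg debugwaitonrepack >/dev/null 2>&1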

# Run repack in the background
$ cd ../master
$ echo x >> x
$ hg commit -m x4
$ cd ../shallow
$ hg pull -q
$ hg up -q tip
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
$ find $CACHEDIR -type f | sort
$TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072/1bb2e6237e035c8f8ef508e281f1ce075bc6db72
$TESTTMP/hgcache/master/packs/78840d69389c7404327f7477e3931c89945c37d1.dataidx
$TESTTMP/hgcache/master/packs/78840d69389c7404327f7477e3931c89945c37d1.datapack
$TESTTMP/hgcache/master/packs/8abe7889aae389337d12ebe6085d4ee13854c7c9.histidx
$TESTTMP/hgcache/master/packs/8abe7889aae389337d12ebe6085d4ee13854c7c9.histpack
$TESTTMP/hgcache/repos

$ hg repack --background
(running background repack)
$ find $CACHEDIR -type f | sort
$TESTTMP/hgcache/master/packs/39443fa1064182e93d968b5cba292eb5283260d0.dataidx
$TESTTMP/hgcache/master/packs/39443fa1064182e93d968b5cba292eb5283260d0.datapack
$TESTTMP/hgcache/master/packs/604552d403a1381749faf656feca0ca265a6d52c.histidx
$TESTTMP/hgcache/master/packs/604552d403a1381749faf656feca0ca265a6d52c.histpack
$TESTTMP/hgcache/repos

# Test debug commands

$ hg debugdatapack $TESTTMP/hgcache/master/packs/*.datapack
$TESTTMP/hgcache/master/packs/39443fa1064182e93d968b5cba292eb5283260d0:
x:
Node Delta Base Delta Length Blob Size
1bb2e6237e03 000000000000 8 8
d4a3ed9310e5 1bb2e6237e03 12 6
aee31534993a d4a3ed9310e5 12 4

Total: 32 18 (77.8% bigger)
$ hg debugdatapack --long $TESTTMP/hgcache/master/packs/*.datapack
$TESTTMP/hgcache/master/packs/39443fa1064182e93d968b5cba292eb5283260d0:
x:
Node Delta Base Delta Length Blob Size
1bb2e6237e035c8f8ef508e281f1ce075bc6db72 0000000000000000000000000000000000000000 8 8
d4a3ed9310e5bd9887e3bf779da5077efab28216 1bb2e6237e035c8f8ef508e281f1ce075bc6db72 12 6
aee31534993a501858fb6dd96a065671922e7d51 d4a3ed9310e5bd9887e3bf779da5077efab28216 12 4

Total: 32 18 (77.8% bigger)
$ hg debugdatapack $TESTTMP/hgcache/master/packs/*.datapack --node d4a3ed9310e5bd9887e3bf779da5077efab28216
$TESTTMP/hgcache/master/packs/39443fa1064182e93d968b5cba292eb5283260d0:

x
Node Delta Base Delta SHA1 Delta Length
d4a3ed9310e5bd9887e3bf779da5077efab28216 1bb2e6237e035c8f8ef508e281f1ce075bc6db72 77029ab56e83ea2115dd53ff87483682abe5d7ca 12
Node Delta Base Delta SHA1 Delta Length
1bb2e6237e035c8f8ef508e281f1ce075bc6db72 0000000000000000000000000000000000000000 7ca8c71a64f7b56380e77573da2f7a5fdd2ecdb5 8
$ hg debughistorypack $TESTTMP/hgcache/master/packs/*.histidx

x
Node P1 Node P2 Node Link Node Copy From
1bb2e6237e03 d4a3ed9310e5 000000000000 0b03bbc9e1e7
d4a3ed9310e5 aee31534993a 000000000000 421535db10b6
aee31534993a 1406e7411862 000000000000 a89d614e2364
1406e7411862 000000000000 000000000000 b292c1e3311f
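Note: in the history pack each entry records a file node's parents plus a
link node, the changeset that introduced that file revision, and
debughistorypack prints the chain newest-first. The short hashes resolve
like any revision; for instance (illustrative, reusing a link node from the
listing above):

    $ hg log -r 0b03bbc9e1e7 -T '{rev}: {desc}\n'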

# Test copy tracing from a pack
$ cd ../master
$ hg mv x y
$ hg commit -m 'move x to y'
$ cd ../shallow
$ hg pull -q
$ hg up -q tip
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
$ hg repack
$ hg log -f y -T '{desc}\n'
move x to y
x4
x3
x2
x

# Test copy trace across rename and back
$ cp -R $TESTTMP/hgcache/master/packs $TESTTMP/backuppacks
$ cd ../master
$ hg mv y x
$ hg commit -m 'move y back to x'
$ hg revert -r 0 x
$ mv x y
$ hg add y
$ echo >> y
$ hg revert x
$ hg commit -m 'add y back without metadata'
$ cd ../shallow
$ hg pull -q
$ hg up -q tip
2 files fetched over 2 fetches - (2 misses, 0.00% hit ratio) over * (glob)
$ hg repack
$ ls $TESTTMP/hgcache/master/packs
bfd60adb76018bb952e27cd23fc151bf94865d7d.histidx
bfd60adb76018bb952e27cd23fc151bf94865d7d.histpack
fb3aa57b22789ebcc45706c352e2d6af099c5816.dataidx
fb3aa57b22789ebcc45706c352e2d6af099c5816.datapack
$ hg debughistorypack $TESTTMP/hgcache/master/packs/*.histidx

x
Node P1 Node P2 Node Link Node Copy From
cd410a44d584 577959738234 000000000000 609547eda446 y
1bb2e6237e03 d4a3ed9310e5 000000000000 0b03bbc9e1e7
d4a3ed9310e5 aee31534993a 000000000000 421535db10b6
aee31534993a 1406e7411862 000000000000 a89d614e2364
1406e7411862 000000000000 000000000000 b292c1e3311f

y
Node P1 Node P2 Node Link Node Copy From
577959738234 1bb2e6237e03 000000000000 c7faf2fc439a x
21f46f2721e7 000000000000 000000000000 d6868642b790
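Note: the Copy From column is the rename metadata stored with each file
revision. 577959738234 records x because it was created by 'hg mv x y',
while 21f46f2721e7 has no copy source because the 'add y back without
metadata' commit re-added y with a plain 'hg add'. A sketch for checking
copy metadata at a given revision:

    $ hg log -r tip -T '{file_copies}\n'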
$ hg strip -r '.^'
1 files updated, 0 files merged, 1 files removed, 0 files unresolved
saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/609547eda446-b26b56a8-backup.hg (glob)
$ hg -R ../master strip -r '.^'
1 files updated, 0 files merged, 1 files removed, 0 files unresolved
saved backup bundle to $TESTTMP/master/.hg/strip-backup/609547eda446-b26b56a8-backup.hg (glob)

$ rm -rf $TESTTMP/hgcache/master/packs
$ cp -R $TESTTMP/backuppacks $TESTTMP/hgcache/master/packs

# Test repacking datapack without history
$ rm -rf $CACHEDIR/master/packs/*hist*
$ hg repack
$ hg debugdatapack $TESTTMP/hgcache/master/packs/*.datapack
$TESTTMP/hgcache/master/packs/922aca43dbbeda4d250565372e8892ec7b08da6a:
x:
Node Delta Base Delta Length Blob Size
1bb2e6237e03 000000000000 8 8
d4a3ed9310e5 1bb2e6237e03 12 6
aee31534993a d4a3ed9310e5 12 4

Total: 32 18 (77.8% bigger)
y:
Node Delta Base Delta Length Blob Size
577959738234 000000000000 70 8

Total: 70 8 (775.0% bigger)

$ hg cat -r ".^" x
x
x
x
x

Incremental repack
$ rm -rf $CACHEDIR/master/packs/*
$ cat >> .hg/hgrc <<EOF
> [remotefilelog]
> data.generations=60
> 150
> EOF

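Note: as I read it, data.generations gives size boundaries in bytes (60 and
150 here, the second supplied as a continuation line of the same value) that
bucket datapacks into generations; incremental repack then only merges packs
within the same generation, which the "3 gen1 packs ... packs 3 gen1 into 1"
check below exercises. Different boundaries would be configured the same way
(values purely illustrative):

    $ cat >> .hg/hgrc <<EOF
    > [remotefilelog]
    > data.generations=100
    >   1000
    > EOF
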
Single pack - repack does nothing
$ hg prefetch -r 0
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
$ ls_l $TESTTMP/hgcache/master/packs/ | grep datapack
[1]
$ ls_l $TESTTMP/hgcache/master/packs/ | grep histpack
[1]
$ hg repack --incremental
$ ls_l $TESTTMP/hgcache/master/packs/ | grep datapack
-r--r--r-- 70 052643fdcdebbd42d7c180a651a30d46098e6fe1.datapack
$ ls_l $TESTTMP/hgcache/master/packs/ | grep histpack
-r--r--r-- 90 955a622173324b2d8b53e1147f209f1cf125302e.histpack

3 gen1 packs, 1 gen0 pack - packs 3 gen1 into 1
$ hg prefetch -r 1
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
$ hg prefetch -r 2
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
$ hg prefetch -r 3
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
$ ls_l $TESTTMP/hgcache/master/packs/ | grep datapack
-r--r--r-- 70 052643fdcdebbd42d7c180a651a30d46098e6fe1.datapack
$ ls_l $TESTTMP/hgcache/master/packs/ | grep histpack
-r--r--r-- 90 955a622173324b2d8b53e1147f209f1cf125302e.histpack
$ hg repack --incremental
$ ls_l $TESTTMP/hgcache/master/packs/ | grep datapack
-r--r--r-- 70 052643fdcdebbd42d7c180a651a30d46098e6fe1.datapack
-r--r--r-- 226 39443fa1064182e93d968b5cba292eb5283260d0.datapack
$ ls_l $TESTTMP/hgcache/master/packs/ | grep histpack
-r--r--r-- 336 604552d403a1381749faf656feca0ca265a6d52c.histpack
-r--r--r-- 90 955a622173324b2d8b53e1147f209f1cf125302e.histpack

1 gen3 pack, 1 gen0 pack - does nothing
$ hg repack --incremental
$ ls_l $TESTTMP/hgcache/master/packs/ | grep datapack
-r--r--r-- 70 052643fdcdebbd42d7c180a651a30d46098e6fe1.datapack
-r--r--r-- 226 39443fa1064182e93d968b5cba292eb5283260d0.datapack
$ ls_l $TESTTMP/hgcache/master/packs/ | grep histpack
-r--r--r-- 336 604552d403a1381749faf656feca0ca265a6d52c.histpack
-r--r--r-- 90 955a622173324b2d8b53e1147f209f1cf125302e.histpack

Pull should run background repack
$ cat >> .hg/hgrc <<EOF
> [remotefilelog]
> backgroundrepack=True
> EOF
$ clearcache
$ hg prefetch -r 0
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
$ hg prefetch -r 1
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
$ hg prefetch -r 2
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
$ hg prefetch -r 3
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)

$ hg pull
pulling from ssh://user@dummy/master
searching for changes
no changes found
(running background incremental repack)
$ ls_l $TESTTMP/hgcache/master/packs/ | grep datapack
-r--r--r-- 303 156a6c1c83aeb69422d7936e0a46ba9bc06a71c0.datapack
$ ls_l $TESTTMP/hgcache/master/packs/ | grep histpack
-r--r--r-- 336 604552d403a1381749faf656feca0ca265a6d52c.histpack

Test environment variable resolution
$ CACHEPATH=$TESTTMP/envcache hg prefetch --config 'remotefilelog.cachepath=$CACHEPATH'
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
$ find $TESTTMP/envcache | sort
$TESTTMP/envcache
$TESTTMP/envcache/master
$TESTTMP/envcache/master/95
$TESTTMP/envcache/master/95/cb0bfd2977c761298d9624e4b4d4c72a39974a
$TESTTMP/envcache/master/95/cb0bfd2977c761298d9624e4b4d4c72a39974a/577959738234a1eb241ed3ed4b22a575833f56e0
$TESTTMP/envcache/repos

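Note: the single quotes keep the shell from expanding $CACHEPATH, so it is
Mercurial itself that resolves the environment variable inside
remotefilelog.cachepath. Presumably (not exercised by this test) the same
expansion applies when the value is set in an hgrc:

    [remotefilelog]
    cachepath = $CACHEPATH
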
Test local remotefilelog blob is correct when based on a pack
$ hg prefetch -r .
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
$ echo >> y
$ hg commit -m y2
$ hg debugremotefilelog .hg/store/data/95cb0bfd2977c761298d9624e4b4d4c72a39974a/b70860edba4f8242a1d52f2a94679dd23cb76808
size: 9 bytes
path: .hg/store/data/95cb0bfd2977c761298d9624e4b4d4c72a39974a/b70860edba4f8242a1d52f2a94679dd23cb76808
key: b70860edba4f

node => p1 p2 linknode copyfrom
b70860edba4f => 577959738234 000000000000 08d3fbc98c48
577959738234 => 1bb2e6237e03 000000000000 c7faf2fc439a x
1bb2e6237e03 => d4a3ed9310e5 000000000000 0b03bbc9e1e7
d4a3ed9310e5 => aee31534993a 000000000000 421535db10b6
aee31534993a => 1406e7411862 000000000000 a89d614e2364
1406e7411862 => 000000000000 000000000000 b292c1e3311f
@@ -1,456 +1,453 @@
#require no-windows

$ . "$TESTDIR/remotefilelog-library.sh"
- # devel.remotefilelog.ensurestart: reduce race condition with
- # waiton{repack/prefetch}
$ cat >> $HGRCPATH <<EOF
> [devel]
- > remotefilelog.ensurestart=True
> remotefilelog.bg-wait=True
> EOF

$ hg init master
$ cd master
$ cat >> .hg/hgrc <<EOF
> [remotefilelog]
> server=True
> serverexpiration=-1
> EOF
$ echo x > x
$ hg commit -qAm x
$ echo x >> x
$ hg commit -qAm x2
$ cd ..

$ hgcloneshallow ssh://user@dummy/master shallow -q
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over *s (glob)

# Set the prefetchdays config to zero so that all commits are prefetched
# no matter what their creation date is.
$ cd shallow
$ cat >> .hg/hgrc <<EOF
> [remotefilelog]
> prefetchdays=0
> EOF
$ cd ..

# Test that repack cleans up the old files and creates new packs

$ cd shallow
$ find $CACHEDIR | sort
$TESTTMP/hgcache
$TESTTMP/hgcache/master
$TESTTMP/hgcache/master/11
$TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072
$TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072/aee31534993a501858fb6dd96a065671922e7d51
$TESTTMP/hgcache/repos

$ hg repack

$ find $CACHEDIR | sort
$TESTTMP/hgcache
$TESTTMP/hgcache/master
$TESTTMP/hgcache/master/packs
$TESTTMP/hgcache/master/packs/1e91b207daf5d7b48f1be9c587d6b5ae654ce78c.histidx
$TESTTMP/hgcache/master/packs/1e91b207daf5d7b48f1be9c587d6b5ae654ce78c.histpack
$TESTTMP/hgcache/master/packs/b1e0cfc7f345e408a7825e3081501959488d59ce.dataidx
$TESTTMP/hgcache/master/packs/b1e0cfc7f345e408a7825e3081501959488d59ce.datapack
$TESTTMP/hgcache/repos

# Test that the packs are readonly
$ ls_l $CACHEDIR/master/packs
-r--r--r-- 1145 1e91b207daf5d7b48f1be9c587d6b5ae654ce78c.histidx
-r--r--r-- 172 1e91b207daf5d7b48f1be9c587d6b5ae654ce78c.histpack
-r--r--r-- 1074 b1e0cfc7f345e408a7825e3081501959488d59ce.dataidx
-r--r--r-- 72 b1e0cfc7f345e408a7825e3081501959488d59ce.datapack

# Test that the data in the new packs is accessible
$ hg cat -r . x
x
x

# Test that adding new data and repacking it results in the loose data and the
# old packs being combined.

$ cd ../master
$ echo x >> x
$ hg commit -m x3
$ cd ../shallow
$ hg pull -q
$ hg up -q tip
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)

$ find $CACHEDIR -type f | sort
$TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072/d4a3ed9310e5bd9887e3bf779da5077efab28216
$TESTTMP/hgcache/master/packs/1e91b207daf5d7b48f1be9c587d6b5ae654ce78c.histidx
$TESTTMP/hgcache/master/packs/1e91b207daf5d7b48f1be9c587d6b5ae654ce78c.histpack
$TESTTMP/hgcache/master/packs/b1e0cfc7f345e408a7825e3081501959488d59ce.dataidx
$TESTTMP/hgcache/master/packs/b1e0cfc7f345e408a7825e3081501959488d59ce.datapack
$TESTTMP/hgcache/repos

# First assert that with --packsonly, the loose object will be ignored:

$ hg repack --packsonly

$ find $CACHEDIR -type f | sort
$TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072/d4a3ed9310e5bd9887e3bf779da5077efab28216
$TESTTMP/hgcache/master/packs/1e91b207daf5d7b48f1be9c587d6b5ae654ce78c.histidx
$TESTTMP/hgcache/master/packs/1e91b207daf5d7b48f1be9c587d6b5ae654ce78c.histpack
$TESTTMP/hgcache/master/packs/b1e0cfc7f345e408a7825e3081501959488d59ce.dataidx
$TESTTMP/hgcache/master/packs/b1e0cfc7f345e408a7825e3081501959488d59ce.datapack
$TESTTMP/hgcache/repos

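Note: --packsonly restricts repack to existing pack files, which is why the
loose blob d4a3ed9310e5... is still present in the listing above; the
unrestricted repack that follows is what folds it into the new packs. In
short:

    $ hg repack --packsonly   # leaves loose cache blobs untouched
    $ hg repack               # loose blobs and old packs combined
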
$ hg repack --traceback

$ find $CACHEDIR -type f | sort
$TESTTMP/hgcache/master/packs/78840d69389c7404327f7477e3931c89945c37d1.dataidx
$TESTTMP/hgcache/master/packs/78840d69389c7404327f7477e3931c89945c37d1.datapack
$TESTTMP/hgcache/master/packs/8abe7889aae389337d12ebe6085d4ee13854c7c9.histidx
$TESTTMP/hgcache/master/packs/8abe7889aae389337d12ebe6085d4ee13854c7c9.histpack
$TESTTMP/hgcache/repos

# Verify all the file data is still available
$ hg cat -r . x
x
x
x
$ hg cat -r '.^' x
x
x

# Test that repacking again without new data does not delete the pack files
# and does not change the pack names
$ hg repack
$ find $CACHEDIR -type f | sort
$TESTTMP/hgcache/master/packs/78840d69389c7404327f7477e3931c89945c37d1.dataidx
$TESTTMP/hgcache/master/packs/78840d69389c7404327f7477e3931c89945c37d1.datapack
$TESTTMP/hgcache/master/packs/8abe7889aae389337d12ebe6085d4ee13854c7c9.histidx
$TESTTMP/hgcache/master/packs/8abe7889aae389337d12ebe6085d4ee13854c7c9.histpack
$TESTTMP/hgcache/repos

# Run two repacks at once
$ hg repack --config "hooks.prerepack=sleep 3" &
$ sleep 1
$ hg repack
skipping repack - another repack is already running
$ hg debugwaitonrepack >/dev/null 2>&1

# Run repack in the background
$ cd ../master
$ echo x >> x
$ hg commit -m x4
$ cd ../shallow
$ hg pull -q
$ hg up -q tip
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
$ find $CACHEDIR -type f | sort
$TESTTMP/hgcache/master/11/f6ad8ec52a2984abaafd7c3b516503785c2072/1bb2e6237e035c8f8ef508e281f1ce075bc6db72
$TESTTMP/hgcache/master/packs/78840d69389c7404327f7477e3931c89945c37d1.dataidx
$TESTTMP/hgcache/master/packs/78840d69389c7404327f7477e3931c89945c37d1.datapack
$TESTTMP/hgcache/master/packs/8abe7889aae389337d12ebe6085d4ee13854c7c9.histidx
$TESTTMP/hgcache/master/packs/8abe7889aae389337d12ebe6085d4ee13854c7c9.histpack
$TESTTMP/hgcache/repos

$ hg repack --background
(running background repack)
$ find $CACHEDIR -type f | sort
$TESTTMP/hgcache/master/packs/39443fa1064182e93d968b5cba292eb5283260d0.dataidx
$TESTTMP/hgcache/master/packs/39443fa1064182e93d968b5cba292eb5283260d0.datapack
$TESTTMP/hgcache/master/packs/604552d403a1381749faf656feca0ca265a6d52c.histidx
$TESTTMP/hgcache/master/packs/604552d403a1381749faf656feca0ca265a6d52c.histpack
$TESTTMP/hgcache/repos

# Test debug commands

$ hg debugdatapack $TESTTMP/hgcache/master/packs/*.datapack
$TESTTMP/hgcache/master/packs/39443fa1064182e93d968b5cba292eb5283260d0:
x:
Node Delta Base Delta Length Blob Size
1bb2e6237e03 000000000000 8 8
d4a3ed9310e5 1bb2e6237e03 12 6
aee31534993a d4a3ed9310e5 12 4

Total: 32 18 (77.8% bigger)
$ hg debugdatapack --long $TESTTMP/hgcache/master/packs/*.datapack
$TESTTMP/hgcache/master/packs/39443fa1064182e93d968b5cba292eb5283260d0:
x:
Node Delta Base Delta Length Blob Size
1bb2e6237e035c8f8ef508e281f1ce075bc6db72 0000000000000000000000000000000000000000 8 8
d4a3ed9310e5bd9887e3bf779da5077efab28216 1bb2e6237e035c8f8ef508e281f1ce075bc6db72 12 6
aee31534993a501858fb6dd96a065671922e7d51 d4a3ed9310e5bd9887e3bf779da5077efab28216 12 4

Total: 32 18 (77.8% bigger)
$ hg debugdatapack $TESTTMP/hgcache/master/packs/*.datapack --node d4a3ed9310e5bd9887e3bf779da5077efab28216
$TESTTMP/hgcache/master/packs/39443fa1064182e93d968b5cba292eb5283260d0:

x
Node Delta Base Delta SHA1 Delta Length
d4a3ed9310e5bd9887e3bf779da5077efab28216 1bb2e6237e035c8f8ef508e281f1ce075bc6db72 77029ab56e83ea2115dd53ff87483682abe5d7ca 12
Node Delta Base Delta SHA1 Delta Length
1bb2e6237e035c8f8ef508e281f1ce075bc6db72 0000000000000000000000000000000000000000 7ca8c71a64f7b56380e77573da2f7a5fdd2ecdb5 8
$ hg debughistorypack $TESTTMP/hgcache/master/packs/*.histidx

x
Node P1 Node P2 Node Link Node Copy From
1bb2e6237e03 d4a3ed9310e5 000000000000 0b03bbc9e1e7
d4a3ed9310e5 aee31534993a 000000000000 421535db10b6
aee31534993a 1406e7411862 000000000000 a89d614e2364
1406e7411862 000000000000 000000000000 b292c1e3311f

# Test copy tracing from a pack
$ cd ../master
$ hg mv x y
$ hg commit -m 'move x to y'
$ cd ../shallow
$ hg pull -q
$ hg up -q tip
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
$ hg repack
$ hg log -f y -T '{desc}\n'
move x to y
x4
x3
x2
x

# Test copy trace across rename and back
$ cp -R $TESTTMP/hgcache/master/packs $TESTTMP/backuppacks
$ cd ../master
$ hg mv y x
$ hg commit -m 'move y back to x'
$ hg revert -r 0 x
$ mv x y
$ hg add y
$ echo >> y
$ hg revert x
$ hg commit -m 'add y back without metadata'
$ cd ../shallow
$ hg pull -q
$ hg up -q tip
2 files fetched over 2 fetches - (2 misses, 0.00% hit ratio) over * (glob)
$ hg repack
$ ls $TESTTMP/hgcache/master/packs
bfd60adb76018bb952e27cd23fc151bf94865d7d.histidx
bfd60adb76018bb952e27cd23fc151bf94865d7d.histpack
fb3aa57b22789ebcc45706c352e2d6af099c5816.dataidx
fb3aa57b22789ebcc45706c352e2d6af099c5816.datapack
$ hg debughistorypack $TESTTMP/hgcache/master/packs/*.histidx

x
Node P1 Node P2 Node Link Node Copy From
cd410a44d584 577959738234 000000000000 609547eda446 y
1bb2e6237e03 d4a3ed9310e5 000000000000 0b03bbc9e1e7
d4a3ed9310e5 aee31534993a 000000000000 421535db10b6
aee31534993a 1406e7411862 000000000000 a89d614e2364
1406e7411862 000000000000 000000000000 b292c1e3311f

y
Node P1 Node P2 Node Link Node Copy From
577959738234 1bb2e6237e03 000000000000 c7faf2fc439a x
21f46f2721e7 000000000000 000000000000 d6868642b790
$ hg strip -r '.^'
1 files updated, 0 files merged, 1 files removed, 0 files unresolved
saved backup bundle to $TESTTMP/shallow/.hg/strip-backup/609547eda446-b26b56a8-backup.hg (glob)
$ hg -R ../master strip -r '.^'
1 files updated, 0 files merged, 1 files removed, 0 files unresolved
saved backup bundle to $TESTTMP/master/.hg/strip-backup/609547eda446-b26b56a8-backup.hg (glob)

$ rm -rf $TESTTMP/hgcache/master/packs
$ cp -R $TESTTMP/backuppacks $TESTTMP/hgcache/master/packs

261 # Test repacking datapack without history
258 # Test repacking datapack without history
262 $ rm -rf $CACHEDIR/master/packs/*hist*
259 $ rm -rf $CACHEDIR/master/packs/*hist*
263 $ hg repack
260 $ hg repack
264 $ hg debugdatapack $TESTTMP/hgcache/master/packs/*.datapack
261 $ hg debugdatapack $TESTTMP/hgcache/master/packs/*.datapack
265 $TESTTMP/hgcache/master/packs/922aca43dbbeda4d250565372e8892ec7b08da6a:
262 $TESTTMP/hgcache/master/packs/922aca43dbbeda4d250565372e8892ec7b08da6a:
266 x:
263 x:
267 Node Delta Base Delta Length Blob Size
264 Node Delta Base Delta Length Blob Size
268 1bb2e6237e03 000000000000 8 8
265 1bb2e6237e03 000000000000 8 8
269 d4a3ed9310e5 1bb2e6237e03 12 6
266 d4a3ed9310e5 1bb2e6237e03 12 6
270 aee31534993a d4a3ed9310e5 12 4
267 aee31534993a d4a3ed9310e5 12 4
271
268
272 Total: 32 18 (77.8% bigger)
269 Total: 32 18 (77.8% bigger)
273 y:
270 y:
274 Node Delta Base Delta Length Blob Size
271 Node Delta Base Delta Length Blob Size
275 577959738234 000000000000 70 8
272 577959738234 000000000000 70 8
276
273
277 Total: 70 8 (775.0% bigger)
274 Total: 70 8 (775.0% bigger)
278
275
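The "bigger" percentages that debugdatapack prints compare the total delta
length against the total blob size; for the two totals above:

    (32 - 18) / 18 = 0.778  ->  "77.8% bigger"
    (70 - 8) / 8 = 7.75     ->  "775.0% bigger"
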
$ hg cat -r ".^" x
x
x
x
x

Incremental repack
$ rm -rf $CACHEDIR/master/packs/*
$ cat >> .hg/hgrc <<EOF
> [remotefilelog]
> data.generations=60
> 150
> EOF

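The data.generations boundaries above (60 and 150 bytes) split the cached
packs into size generations, and incremental repack only merges packs that
fall into the same generation. A rough sketch of that bucketing, with
hypothetical names rather than the extension's real code:

    def bucket_packs(pack_sizes, boundaries=(60, 150)):
        # Count how many boundaries each pack size reaches; packs sharing
        # a count land in the same generation.
        gens = {}
        for size in pack_sizes:
            gen = sum(1 for b in sorted(boundaries) if size >= b)
            gens.setdefault(gen, []).append(size)
        return gens

    print(bucket_packs([70, 90, 149, 303]))  # {1: [70, 90, 149], 2: [303]}
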
Single pack - repack does nothing
$ hg prefetch -r 0
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
$ ls_l $TESTTMP/hgcache/master/packs/ | grep datapack
[1]
$ ls_l $TESTTMP/hgcache/master/packs/ | grep histpack
[1]
$ hg repack --incremental
$ ls_l $TESTTMP/hgcache/master/packs/ | grep datapack
-r--r--r-- 70 052643fdcdebbd42d7c180a651a30d46098e6fe1.datapack
$ ls_l $TESTTMP/hgcache/master/packs/ | grep histpack
-r--r--r-- 90 955a622173324b2d8b53e1147f209f1cf125302e.histpack

3 gen1 packs, 1 gen0 pack - packs 3 gen1 into 1
$ hg prefetch -r 1
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
$ hg prefetch -r 2
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
$ hg prefetch -r 38
abort: unknown revision '38'!
[255]
$ ls_l $TESTTMP/hgcache/master/packs/ | grep datapack
-r--r--r-- 70 052643fdcdebbd42d7c180a651a30d46098e6fe1.datapack
$ ls_l $TESTTMP/hgcache/master/packs/ | grep histpack
-r--r--r-- 90 955a622173324b2d8b53e1147f209f1cf125302e.histpack

For the data packs, set repackmaxpacksize to 64 so that a data pack of size
65 exceeds the limit. This effectively ensures that no generation has 3
packs, so no packs are chosen for the incremental repacking. For the
history packs, set repackmaxpacksize to 0, which should always result in no
repacking.
$ hg repack --incremental --config remotefilelog.data.repackmaxpacksize=64 \
> --config remotefilelog.history.repackmaxpacksize=0
$ ls_l $TESTTMP/hgcache/master/packs/ | grep datapack
-r--r--r-- 70 052643fdcdebbd42d7c180a651a30d46098e6fe1.datapack
-r--r--r-- 149 78840d69389c7404327f7477e3931c89945c37d1.datapack
$ ls_l $TESTTMP/hgcache/master/packs/ | grep histpack
-r--r--r-- 254 8abe7889aae389337d12ebe6085d4ee13854c7c9.histpack
-r--r--r-- 90 955a622173324b2d8b53e1147f209f1cf125302e.histpack

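The size filter that both knobs exercise can be pictured as follows; this
is a hedged sketch with a hypothetical helper, not remotefilelog's actual
function. Packs whose size reaches the limit are dropped from the candidate
set, so a limit of 0 excludes every pack:

    def repack_candidates(pack_sizes, maxpacksize):
        # Keep only packs strictly under the limit; with maxpacksize=0
        # nothing qualifies, so nothing is repacked.
        return [size for size in pack_sizes if size < maxpacksize]

    print(repack_candidates([65, 149], 64))  # []
    print(repack_candidates([254, 90], 0))   # []
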
Setting repackmaxpacksize to the size of the biggest pack file ensures that
it is effectively ignored in the incremental repacking.
$ hg repack --incremental --config remotefilelog.data.repackmaxpacksize=65 \
> --config remotefilelog.history.repackmaxpacksize=336
$ ls_l $TESTTMP/hgcache/master/packs/ | grep datapack
-r--r--r-- 70 052643fdcdebbd42d7c180a651a30d46098e6fe1.datapack
-r--r--r-- 149 78840d69389c7404327f7477e3931c89945c37d1.datapack
$ ls_l $TESTTMP/hgcache/master/packs/ | grep histpack
-r--r--r-- 254 8abe7889aae389337d12ebe6085d4ee13854c7c9.histpack
-r--r--r-- 90 955a622173324b2d8b53e1147f209f1cf125302e.histpack

1 gen3 pack, 1 gen0 pack - does nothing
$ hg repack --incremental
$ ls_l $TESTTMP/hgcache/master/packs/ | grep datapack
-r--r--r-- 70 052643fdcdebbd42d7c180a651a30d46098e6fe1.datapack
-r--r--r-- 149 78840d69389c7404327f7477e3931c89945c37d1.datapack
$ ls_l $TESTTMP/hgcache/master/packs/ | grep histpack
-r--r--r-- 254 8abe7889aae389337d12ebe6085d4ee13854c7c9.histpack
-r--r--r-- 90 955a622173324b2d8b53e1147f209f1cf125302e.histpack

Pull should run background repack
$ cat >> .hg/hgrc <<EOF
> [remotefilelog]
> backgroundrepack=True
> EOF
$ clearcache
$ hg prefetch -r 0
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
$ hg prefetch -r 1
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
$ hg prefetch -r 2
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
$ hg prefetch -r 3
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)

$ hg pull
pulling from ssh://user@dummy/master
searching for changes
no changes found
(running background incremental repack)
$ ls_l $TESTTMP/hgcache/master/packs/ | grep datapack
-r--r--r-- 303 156a6c1c83aeb69422d7936e0a46ba9bc06a71c0.datapack
$ ls_l $TESTTMP/hgcache/master/packs/ | grep histpack
-r--r--r-- 336 604552d403a1381749faf656feca0ca265a6d52c.histpack

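With backgroundrepack=True the pull above returns without waiting for the
repack; conceptually the client just spawns a detached `hg repack
--incremental` child, roughly like this standard-library sketch (not the
extension's actual spawning code):

    import subprocess

    def background_repack(repo_path):
        # Fire and forget: the child performs the incremental repack
        # while the foreground pull finishes immediately.
        return subprocess.Popen(
            ["hg", "--repository", repo_path, "repack", "--incremental"],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
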
Test environment variable resolution
$ CACHEPATH=$TESTTMP/envcache hg prefetch --config 'remotefilelog.cachepath=$CACHEPATH'
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
$ find $TESTTMP/envcache | sort
$TESTTMP/envcache
$TESTTMP/envcache/master
$TESTTMP/envcache/master/95
$TESTTMP/envcache/master/95/cb0bfd2977c761298d9624e4b4d4c72a39974a
$TESTTMP/envcache/master/95/cb0bfd2977c761298d9624e4b4d4c72a39974a/577959738234a1eb241ed3ed4b22a575833f56e0
$TESTTMP/envcache/repos

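The cachepath setting goes through environment-variable expansion before it
is used, which is why the literal '$CACHEPATH' above resolves to the
exported directory. The standard library reproduces the behavior exercised
here (paths are stand-ins for the test's):

    import os

    os.environ["CACHEPATH"] = "/tmp/envcache"  # stand-in for $TESTTMP/envcache
    print(os.path.expandvars("$CACHEPATH"))    # -> /tmp/envcache
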
Test local remotefilelog blob is correct when based on a pack
$ hg prefetch -r .
1 files fetched over 1 fetches - (1 misses, 0.00% hit ratio) over * (glob)
$ echo >> y
$ hg commit -m y2
$ hg debugremotefilelog .hg/store/data/95cb0bfd2977c761298d9624e4b4d4c72a39974a/b70860edba4f8242a1d52f2a94679dd23cb76808
size: 9 bytes
path: .hg/store/data/95cb0bfd2977c761298d9624e4b4d4c72a39974a/b70860edba4f8242a1d52f2a94679dd23cb76808
key: b70860edba4f

node => p1 p2 linknode copyfrom
b70860edba4f => 577959738234 000000000000 08d3fbc98c48
577959738234 => 1bb2e6237e03 000000000000 c7faf2fc439a x
1bb2e6237e03 => d4a3ed9310e5 000000000000 0b03bbc9e1e7
d4a3ed9310e5 => aee31534993a 000000000000 421535db10b6
aee31534993a => 1406e7411862 000000000000 a89d614e2364
1406e7411862 => 000000000000 000000000000 b292c1e3311f

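The debugremotefilelog output above reflects the logical content of a local
remotefilelog blob: the file text plus an ancestry table mapping each node
to (p1, p2, linknode, copyfrom). As an illustrative structure only, not the
on-disk encoding:

    blob = {
        "size": 9,  # length of the file text in bytes
        "ancestors": {
            # node: (p1, p2, linknode, copyfrom)
            "b70860edba4f": ("577959738234", "000000000000", "08d3fbc98c48", None),
            "577959738234": ("1bb2e6237e03", "000000000000", "c7faf2fc439a", "x"),
        },
    }

    for node, (p1, p2, link, copyfrom) in blob["ancestors"].items():
        print(node, "=>", p1, p2, link, copyfrom or "")
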
Test limiting the max delta chain length
$ hg repack --config packs.maxchainlen=1
$ hg debugdatapack $TESTTMP/hgcache/master/packs/*.dataidx
$TESTTMP/hgcache/master/packs/f258af4c033dd5cd32b4dbc42a1efcd8e4c7d909:
x:
Node Delta Base Delta Length Blob Size
1bb2e6237e03 000000000000 8 8
d4a3ed9310e5 1bb2e6237e03 12 6
aee31534993a 000000000000 4 4
1406e7411862 aee31534993a 12 2

Total: 36 20 (80.0% bigger)
y:
Node Delta Base Delta Length Blob Size
577959738234 000000000000 70 8

Total: 70 8 (775.0% bigger)

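The aee31534993a row above is the visible effect of packs.maxchainlen=1:
once a chain already holds one delta, the next revision is stored as a full
text (delta base 000000000000) and the chain restarts. A minimal sketch of
that decision, with hypothetical names:

    NULLID = "000000000000"

    def choose_delta_base(prev_node, chain_length, maxchainlen):
        # Restart the chain with a full text once the limit is reached.
        if chain_length >= maxchainlen:
            return NULLID
        return prev_node

    print(choose_delta_base("d4a3ed9310e5", 1, 1))  # 000000000000
    print(choose_delta_base("aee31534993a", 0, 1))  # aee31534993a
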
Test huge pack cleanup using different values of packs.maxpacksize:
$ hg repack --incremental --debug
$ hg repack --incremental --debug --config packs.maxpacksize=512
removing oversize packfile $TESTTMP/hgcache/master/packs/f258af4c033dd5cd32b4dbc42a1efcd8e4c7d909.datapack (425 bytes)
removing oversize packfile $TESTTMP/hgcache/master/packs/f258af4c033dd5cd32b4dbc42a1efcd8e4c7d909.dataidx (1.21 KB)

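A rough sketch of the cleanup shown in the --debug output, assuming a
hypothetical helper rather than the extension's real implementation: any
data pack above packs.maxpacksize is deleted together with its index file.

    import os

    def remove_oversize_packs(packdir, maxpacksize):
        for name in os.listdir(packdir):
            if not name.endswith(".datapack"):
                continue
            path = os.path.join(packdir, name)
            size = os.path.getsize(path)
            if size <= maxpacksize:
                continue
            print("removing oversize packfile %s (%d bytes)" % (path, size))
            os.unlink(path)
            idx = path[: -len(".datapack")] + ".dataidx"
            if os.path.exists(idx):
                os.unlink(idx)
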
Do a repack where the new pack reuses a delta from the old pack
$ clearcache
$ hg prefetch -r '2::3'
2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over * (glob)
$ hg repack
$ hg debugdatapack $CACHEDIR/master/packs/*.datapack
$TESTTMP/hgcache/master/packs/9ec6b30891bd851320acb7c66b69a2bdf41c8df3:
x:
Node Delta Base Delta Length Blob Size
1bb2e6237e03 000000000000 8 8
d4a3ed9310e5 1bb2e6237e03 12 6

Total: 20 14 (42.9% bigger)
$ hg prefetch -r '0::1'
2 files fetched over 1 fetches - (2 misses, 0.00% hit ratio) over * (glob)
$ hg repack
$ hg debugdatapack $CACHEDIR/master/packs/*.datapack
$TESTTMP/hgcache/master/packs/156a6c1c83aeb69422d7936e0a46ba9bc06a71c0:
x:
Node Delta Base Delta Length Blob Size
1bb2e6237e03 000000000000 8 8
d4a3ed9310e5 1bb2e6237e03 12 6
aee31534993a d4a3ed9310e5 12 4
1406e7411862 aee31534993a 12 2

Total: 44 20 (120.0% bigger)
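
The second repack above can serve d4a3ed9310e5's delta against
1bb2e6237e03 straight out of the earlier pack instead of recomputing it. A
hedged sketch of that reuse decision; the structure and names are
illustrative:

    def delta_for(node, base, old_pack, compute_delta):
        # Reuse the stored delta when the old pack already holds this
        # exact (node, base) pair; otherwise compute a fresh one.
        entry = old_pack.get(node)
        if entry is not None and entry["base"] == base:
            return entry["delta"]
        return compute_delta(node, base)

    old_pack = {"d4a3ed9310e5": {"base": "1bb2e6237e03", "delta": b"..."}}
    print(delta_for("d4a3ed9310e5", "1bb2e6237e03", old_pack,
                    lambda n, b: b"fresh"))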