lfs: migrate to the fileprefetch callback mechanism
Matt Harbison
r36155:a991fcc4 default
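This change migrates the lfs extension from wrapping cmdutil._prefetchfiles
with extensions.wrapfunction() to registering wrapper._prefetchfiles on the
scmutil.fileprefetchhooks callback list. The callback list follows
Mercurial's util.hooks pattern; the sketch below illustrates that pattern
under the assumption that it matches the core class (it is not the verbatim
implementation)::

    class hooks(object):
        """A list of callbacks that extend a function's behavior.

        Each callback is registered under a source name (e.g. 'lfs'),
        and one call invokes every registered callback with the same
        arguments.
        """

        def __init__(self):
            self._hooks = []

        def add(self, source, hook):
            # 'source' identifies the registering extension
            self._hooks.append((source, hook))

        def __call__(self, *args):
            # call hooks in a stable order, sorted by source name
            self._hooks.sort(key=lambda x: x[0])
            for source, hook in self._hooks:
                hook(*args)

With a callback list, several extensions can request prefetching for the
same command without wrapping each other's functions, avoiding the ordering
problems of stacked wrapfunction() calls.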
@@ -1,392 +1,393
1 1 # lfs - hash-preserving large file support using Git-LFS protocol
2 2 #
3 3 # Copyright 2017 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 """lfs - large file support (EXPERIMENTAL)
9 9
10 10 This extension allows large files to be tracked outside of the normal
11 11 repository storage and stored on a centralized server, similar to the
12 12 ``largefiles`` extension. The ``git-lfs`` protocol is used when
13 13 communicating with the server, so existing git infrastructure can be
14 14 harnessed. Even though the files are stored outside of the repository,
15 15 they are still integrity checked in the same manner as normal files.
16 16
17 17 The files stored outside of the repository are downloaded on demand,
18 18 which reduces the time to clone, and possibly the local disk usage.
19 19 This changes fundamental workflows in a DVCS, so careful thought
20 20 should be given before deploying it. :hg:`convert` can be used to
21 21 convert LFS repositories to normal repositories that no longer
22 22 require this extension, and do so without changing the commit hashes.
23 23 This allows the extension to be disabled if the centralized workflow
24 24 becomes burdensome. However, the pre- and post-convert clones will
25 25 not be able to communicate with each other unless the extension is
26 26 enabled on both.
27 27
28 28 To start a new repository, or to add LFS files to an existing one, just
29 29 create an ``.hglfs`` file as described below in the root directory of
30 30 the repository. Typically, this file should be put under version
31 31 control, so that the settings will propagate to other repositories with
32 32 push and pull. During any commit, Mercurial will consult this file to
33 33 determine if an added or modified file should be stored externally. The
34 34 type of storage depends on the characteristics of the file at each
35 35 commit. A file that is near a size threshold may switch back and forth
36 36 between LFS and normal storage, as needed.
37 37
38 38 Alternatively, both normal repositories and largefiles-controlled
39 39 repositories can be converted to LFS by using :hg:`convert` and the
40 40 ``lfs.track`` config option described below. The ``.hglfs`` file
41 41 should then be created and added, to control subsequent LFS selection.
42 42 The hashes are also unchanged in this case. The LFS and non-LFS
43 43 repositories can be distinguished because the LFS repository will
44 44 abort any command if this extension is disabled.
45 45
46 46 Committed LFS files are held locally until the repository is pushed.
47 47 Prior to pushing the normal repository data, the LFS files that are
48 48 tracked by the outgoing commits are automatically uploaded to the
49 49 configured central server. No LFS files are transferred on
50 50 :hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
51 51 demand as they need to be read, if a cached copy cannot be found
52 52 locally. Both committing and downloading an LFS file will link the
53 53 file to a usercache, to speed up future access. See the `usercache`
54 54 config setting described below.
55 55
56 56 .hglfs::
57 57
58 58 The extension reads its configuration from a versioned ``.hglfs``
59 59 configuration file found in the root of the working directory. The
60 60 ``.hglfs`` file uses the same syntax as all other Mercurial
61 61 configuration files. It uses a single section, ``[track]``.
62 62
63 63 The ``[track]`` section specifies which files are stored as LFS (or
64 64 not). Each line is keyed by a file pattern, with a predicate value.
65 65 The first file pattern match is used, so put more specific patterns
66 66 first. The available predicates are ``all()``, ``none()``, and
67 67 ``size()``. See "hg help filesets.size" for the latter.
68 68
69 69 Example versioned ``.hglfs`` file::
70 70
71 71 [track]
72 72 # No Makefile or python file, anywhere, will be LFS
73 73 **Makefile = none()
74 74 **.py = none()
75 75
76 76 **.zip = all()
77 77 **.exe = size(">1MB")
78 78
79 79 # Catchall for everything not matched above
80 80 ** = size(">10MB")
81 81
82 82 Configs::
83 83
84 84 [lfs]
85 85 # Remote endpoint. Multiple protocols are supported:
86 86 # - http(s)://user:pass@example.com/path
87 87 # git-lfs endpoint
88 88 # - file:///tmp/path
89 89 # local filesystem, usually for testing
90 90 # if unset, lfs will prompt to set this when it must use this value.
91 91 # (default: unset)
92 92 url = https://example.com/repo.git/info/lfs
93 93
94 94 # Which files to track in LFS. Path tests are "**.extname" for file
95 95 # extensions, and "path:under/some/directory" for path prefix. Both
96 96 # are relative to the repository root.
97 97 # File size can be tested with the "size()" fileset, and tests can be
98 98 # joined with fileset operators. (See "hg help filesets.operators".)
99 99 #
100 100 # Some examples:
101 101 # - all() # everything
102 102 # - none() # nothing
103 103 # - size(">20MB") # larger than 20MB
104 104 # - !**.txt # anything not a *.txt file
105 105 # - **.zip | **.tar.gz | **.7z # some types of compressed files
106 106 # - path:bin # files under "bin" in the project root
107 107 # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
108 108 # | (path:bin & !path:/bin/README) | size(">1GB")
109 109 # (default: none())
110 110 #
111 111 # This is ignored if there is a tracked '.hglfs' file, and this setting
112 112 # will eventually be deprecated and removed.
113 113 track = size(">10M")
114 114
115 115 # how many times to retry before giving up on transferring an object
116 116 retry = 5
117 117
118 118 # the local directory to store lfs files for sharing across local clones.
119 119 # If not set, the cache is located in an OS specific cache location.
120 120 usercache = /path/to/global/cache
121 121 """
122 122
123 123 from __future__ import absolute_import
124 124
125 125 from mercurial.i18n import _
126 126
127 127 from mercurial import (
128 128 bundle2,
129 129 changegroup,
130 130 cmdutil,
131 131 config,
132 132 context,
133 133 error,
134 134 exchange,
135 135 extensions,
136 136 filelog,
137 137 fileset,
138 138 hg,
139 139 localrepo,
140 140 merge,
141 141 minifileset,
142 142 node,
143 143 pycompat,
144 144 registrar,
145 145 revlog,
146 146 scmutil,
147 147 templatekw,
148 148 upgrade,
149 149 util,
150 150 vfs as vfsmod,
151 151 wireproto,
152 152 )
153 153
154 154 from . import (
155 155 blobstore,
156 156 wrapper,
157 157 )
158 158
159 159 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
160 160 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
161 161 # be specifying the version(s) of Mercurial they are tested with, or
162 162 # leave the attribute unspecified.
163 163 testedwith = 'ships-with-hg-core'
164 164
165 165 configtable = {}
166 166 configitem = registrar.configitem(configtable)
167 167
168 168 configitem('experimental', 'lfs.user-agent',
169 169 default=None,
170 170 )
171 171 configitem('experimental', 'lfs.worker-enable',
172 172 default=False,
173 173 )
174 174
175 175 configitem('lfs', 'url',
176 176 default=None,
177 177 )
178 178 configitem('lfs', 'usercache',
179 179 default=None,
180 180 )
181 181 # Deprecated
182 182 configitem('lfs', 'threshold',
183 183 default=None,
184 184 )
185 185 configitem('lfs', 'track',
186 186 default='none()',
187 187 )
188 188 configitem('lfs', 'retry',
189 189 default=5,
190 190 )
191 191
192 192 cmdtable = {}
193 193 command = registrar.command(cmdtable)
194 194
195 195 templatekeyword = registrar.templatekeyword()
196 196 filesetpredicate = registrar.filesetpredicate()
197 197
198 198 def featuresetup(ui, supported):
199 199 # don't die on seeing a repo with the lfs requirement
200 200 supported |= {'lfs'}
201 201
202 202 def uisetup(ui):
203 203 localrepo.localrepository.featuresetupfuncs.add(featuresetup)
204 204
205 205 def reposetup(ui, repo):
206 206 # Nothing to do with a remote repo
207 207 if not repo.local():
208 208 return
209 209
210 210 repo.svfs.lfslocalblobstore = blobstore.local(repo)
211 211 repo.svfs.lfsremoteblobstore = blobstore.remote(repo)
212 212
213 213 class lfsrepo(repo.__class__):
214 214 @localrepo.unfilteredmethod
215 215 def commitctx(self, ctx, error=False):
216 216 repo.svfs.options['lfstrack'] = _trackedmatcher(self)
217 217 return super(lfsrepo, self).commitctx(ctx, error)
218 218
219 219 repo.__class__ = lfsrepo
220 220
221 221 if 'lfs' not in repo.requirements:
222 222 def checkrequireslfs(ui, repo, **kwargs):
223 223 if 'lfs' not in repo.requirements:
224 224 last = kwargs.get('node_last')
225 225 _bin = node.bin
226 226 if last:
227 227 s = repo.set('%n:%n', _bin(kwargs['node']), _bin(last))
228 228 else:
229 229 s = repo.set('%n', _bin(kwargs['node']))
230 230 for ctx in s:
231 231 # TODO: is there a way to just walk the files in the commit?
232 232 if any(ctx[f].islfs() for f in ctx.files() if f in ctx):
233 233 repo.requirements.add('lfs')
234 234 repo._writerequirements()
235 235 repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
236 236 break
237 237
238 238 ui.setconfig('hooks', 'commit.lfs', checkrequireslfs, 'lfs')
239 239 ui.setconfig('hooks', 'pretxnchangegroup.lfs', checkrequireslfs, 'lfs')
240 240 else:
241 241 repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
242 242
243 243 def _trackedmatcher(repo):
244 244 """Return a function (path, size) -> bool indicating whether or not to
245 245 track a given file with lfs."""
246 246 if not repo.wvfs.exists('.hglfs'):
247 247 # No '.hglfs' in wdir. Fallback to config for now.
248 248 trackspec = repo.ui.config('lfs', 'track')
249 249
250 250 # deprecated config: lfs.threshold
251 251 threshold = repo.ui.configbytes('lfs', 'threshold')
252 252 if threshold:
253 253 fileset.parse(trackspec) # make sure syntax errors are confined
254 254 trackspec = "(%s) | size('>%d')" % (trackspec, threshold)
255 255
256 256 return minifileset.compile(trackspec)
257 257
258 258 data = repo.wvfs.tryread('.hglfs')
259 259 if not data:
260 260 return lambda p, s: False
261 261
262 262 # Parse errors here will abort with a message that points to the .hglfs file
263 263 # and line number.
264 264 cfg = config.config()
265 265 cfg.parse('.hglfs', data)
266 266
267 267 try:
268 268 rules = [(minifileset.compile(pattern), minifileset.compile(rule))
269 269 for pattern, rule in cfg.items('track')]
270 270 except error.ParseError as e:
271 271 # The original exception gives no indicator that the error is in the
272 272 # .hglfs file, so add that.
273 273
274 274 # TODO: See if the line number of the file can be made available.
275 275 raise error.Abort(_('parse error in .hglfs: %s') % e)
276 276
277 277 def _match(path, size):
278 278 for pat, rule in rules:
279 279 if pat(path, size):
280 280 return rule(path, size)
281 281
282 282 return False
283 283
284 284 return _match
285 285
286 286 def wrapfilelog(filelog):
287 287 wrapfunction = extensions.wrapfunction
288 288
289 289 wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
290 290 wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
291 291 wrapfunction(filelog, 'size', wrapper.filelogsize)
292 292
293 293 def extsetup(ui):
294 294 wrapfilelog(filelog.filelog)
295 295
296 296 wrapfunction = extensions.wrapfunction
297 297
298 298 wrapfunction(cmdutil, '_updatecatformatter', wrapper._updatecatformatter)
299 299 wrapfunction(scmutil, 'wrapconvertsink', wrapper.convertsink)
300 300
301 301 wrapfunction(upgrade, '_finishdatamigration',
302 302 wrapper.upgradefinishdatamigration)
303 303
304 304 wrapfunction(upgrade, 'preservedrequirements',
305 305 wrapper.upgraderequirements)
306 306
307 307 wrapfunction(upgrade, 'supporteddestrequirements',
308 308 wrapper.upgraderequirements)
309 309
310 310 wrapfunction(changegroup,
311 311 'supportedoutgoingversions',
312 312 wrapper.supportedoutgoingversions)
313 313 wrapfunction(changegroup,
314 314 'allsupportedversions',
315 315 wrapper.allsupportedversions)
316 316
317 317 wrapfunction(exchange, 'push', wrapper.push)
318 318 wrapfunction(wireproto, '_capabilities', wrapper._capabilities)
319 319
320 320 wrapfunction(context.basefilectx, 'cmp', wrapper.filectxcmp)
321 321 wrapfunction(context.basefilectx, 'isbinary', wrapper.filectxisbinary)
322 322 context.basefilectx.islfs = wrapper.filectxislfs
323 323
324 324 revlog.addflagprocessor(
325 325 revlog.REVIDX_EXTSTORED,
326 326 (
327 327 wrapper.readfromstore,
328 328 wrapper.writetostore,
329 329 wrapper.bypasscheckhash,
330 330 ),
331 331 )
332 332
333 333 wrapfunction(hg, 'clone', wrapper.hgclone)
334 334 wrapfunction(hg, 'postshare', wrapper.hgpostshare)
335 335
336 336 wrapfunction(merge, 'applyupdates', wrapper.mergemodapplyupdates)
337 wrapfunction(cmdutil, '_prefetchfiles', wrapper.cmdutilprefetchfiles)
337
338 scmutil.fileprefetchhooks.add('lfs', wrapper._prefetchfiles)
338 339
339 340 # Make bundle choose changegroup3 instead of changegroup2. This affects
340 341 # "hg bundle" command. Note: it does not cover all bundle formats like
341 342 # "packed1". Using "packed1" with lfs will likely cause trouble.
342 343 names = [k for k, v in exchange._bundlespeccgversions.items() if v == '02']
343 344 for k in names:
344 345 exchange._bundlespeccgversions[k] = '03'
345 346
346 347 # bundlerepo uses "vfsmod.readonlyvfs(othervfs)", we need to make sure lfs
347 348 # options and blob stores are passed from othervfs to the new readonlyvfs.
348 349 wrapfunction(vfsmod.readonlyvfs, '__init__', wrapper.vfsinit)
349 350
350 351 # when writing a bundle via "hg bundle" command, upload related LFS blobs
351 352 wrapfunction(bundle2, 'writenewbundle', wrapper.writenewbundle)
352 353
353 354 @filesetpredicate('lfs()', callstatus=True)
354 355 def lfsfileset(mctx, x):
355 356 """File that uses LFS storage."""
356 357 # i18n: "lfs" is a keyword
357 358 fileset.getargs(x, 0, 0, _("lfs takes no arguments"))
358 359 return [f for f in mctx.subset
359 360 if wrapper.pointerfromctx(mctx.ctx, f, removed=True) is not None]
360 361
361 362 @templatekeyword('lfs_files')
362 363 def lfsfiles(repo, ctx, **args):
363 364 """List of strings. All files modified, added, or removed by this
364 365 changeset."""
365 366 args = pycompat.byteskwargs(args)
366 367
367 368 pointers = wrapper.pointersfromctx(ctx, removed=True) # {path: pointer}
368 369 files = sorted(pointers.keys())
369 370
370 371 def pointer(v):
371 372 # In the file spec, version is first and the other keys are sorted.
372 373 sortkeyfunc = lambda x: (x[0] != 'version', x)
373 374 items = sorted(pointers[v].iteritems(), key=sortkeyfunc)
374 375 return util.sortdict(items)
375 376
376 377 makemap = lambda v: {
377 378 'file': v,
378 379 'lfsoid': pointers[v].oid() if pointers[v] else None,
379 380 'lfspointer': templatekw.hybriddict(pointer(v)),
380 381 }
381 382
382 383 # TODO: make the separator ', '?
383 384 f = templatekw._showlist('lfs_file', files, args)
384 385 return templatekw._hybrid(f, files, makemap, pycompat.identity)
385 386
386 387 @command('debuglfsupload',
387 388 [('r', 'rev', [], _('upload large files introduced by REV'))])
388 389 def debuglfsupload(ui, repo, **opts):
389 390 """upload lfs blobs added by the working copy parent or given revisions"""
390 391 revs = opts.get('rev', [])
391 392 pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
392 393 wrapper.uploadblobs(repo, pointers)
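The wrapper.py hunk below deletes the cmdutilprefetchfiles() shim: with the
callback mechanism, core code is expected to invoke every registered hook
itself before a command reads file contents. A minimal sketch of such a
caller, assuming this is roughly what cmdutil does after this change::

    from mercurial import scmutil

    def _prefetchfiles(repo, ctx, files):
        """Run registered prefetch callbacks before files are accessed."""
        # every callback added via scmutil.fileprefetchhooks.add() is
        # invoked here with the same (repo, ctx, files) arguments
        scmutil.fileprefetchhooks(repo, ctx, files)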
@@ -1,418 +1,412
1 1 # wrapper.py - methods wrapping core mercurial logic
2 2 #
3 3 # Copyright 2017 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import hashlib
11 11
12 12 from mercurial.i18n import _
13 13 from mercurial.node import bin, nullid, short
14 14
15 15 from mercurial import (
16 16 error,
17 17 filelog,
18 18 revlog,
19 19 util,
20 20 )
21 21
22 22 from ..largefiles import lfutil
23 23
24 24 from . import (
25 25 blobstore,
26 26 pointer,
27 27 )
28 28
29 29 def supportedoutgoingversions(orig, repo):
30 30 versions = orig(repo)
31 31 if 'lfs' in repo.requirements:
32 32 versions.discard('01')
33 33 versions.discard('02')
34 34 versions.add('03')
35 35 return versions
36 36
37 37 def allsupportedversions(orig, ui):
38 38 versions = orig(ui)
39 39 versions.add('03')
40 40 return versions
41 41
42 42 def _capabilities(orig, repo, proto):
43 43 '''Wrap server command to announce lfs server capability'''
44 44 caps = orig(repo, proto)
45 45 # XXX: change to 'lfs=serve' when separate git server isn't required?
46 46 caps.append('lfs')
47 47 return caps
48 48
49 49 def bypasscheckhash(self, text):
50 50 return False
51 51
52 52 def readfromstore(self, text):
53 53 """Read filelog content from local blobstore transform for flagprocessor.
54 54
55 55 Default transform for flagprocessor, returning contents from blobstore.
56 56 Returns a 2-tuple (text, validatehash) where validatehash is True, as the
57 57 contents of the blobstore should be checked using checkhash.
58 58 """
59 59 p = pointer.deserialize(text)
60 60 oid = p.oid()
61 61 store = self.opener.lfslocalblobstore
62 62 if not store.has(oid):
63 63 p.filename = self.filename
64 64 self.opener.lfsremoteblobstore.readbatch([p], store)
65 65
66 66 # The caller will validate the content
67 67 text = store.read(oid, verify=False)
68 68
69 69 # pack hg filelog metadata
70 70 hgmeta = {}
71 71 for k in p.keys():
72 72 if k.startswith('x-hg-'):
73 73 name = k[len('x-hg-'):]
74 74 hgmeta[name] = p[k]
75 75 if hgmeta or text.startswith('\1\n'):
76 76 text = filelog.packmeta(hgmeta, text)
77 77
78 78 return (text, True)
79 79
80 80 def writetostore(self, text):
81 81 # hg filelog metadata (includes rename, etc)
82 82 hgmeta, offset = filelog.parsemeta(text)
83 83 if offset and offset > 0:
84 84 # lfs blob does not contain hg filelog metadata
85 85 text = text[offset:]
86 86
87 87 # git-lfs only supports sha256
88 88 oid = hashlib.sha256(text).hexdigest()
89 89 self.opener.lfslocalblobstore.write(oid, text)
90 90
91 91 # replace contents with metadata
92 92 longoid = 'sha256:%s' % oid
93 93 metadata = pointer.gitlfspointer(oid=longoid, size=str(len(text)))
94 94
95 95 # by default, we expect the content to be binary. however, LFS could also
96 96 # be used for non-binary content. add a special entry for non-binary data.
97 97 # this will be used by filectx.isbinary().
98 98 if not util.binary(text):
99 99 # not hg filelog metadata (affecting commit hash), no "x-hg-" prefix
100 100 metadata['x-is-binary'] = '0'
101 101
102 102 # translate hg filelog metadata to lfs metadata with "x-hg-" prefix
103 103 if hgmeta is not None:
104 104 for k, v in hgmeta.iteritems():
105 105 metadata['x-hg-%s' % k] = v
106 106
107 107 rawtext = metadata.serialize()
108 108 return (rawtext, False)
109 109
110 110 def _islfs(rlog, node=None, rev=None):
111 111 if rev is None:
112 112 if node is None:
113 113 # both None - likely working copy content where node is not ready
114 114 return False
115 115 rev = rlog.rev(node)
116 116 else:
117 117 node = rlog.node(rev)
118 118 if node == nullid:
119 119 return False
120 120 flags = rlog.flags(rev)
121 121 return bool(flags & revlog.REVIDX_EXTSTORED)
122 122
123 123 def filelogaddrevision(orig, self, text, transaction, link, p1, p2,
124 124 cachedelta=None, node=None,
125 125 flags=revlog.REVIDX_DEFAULT_FLAGS, **kwds):
126 126 textlen = len(text)
127 127 # exclude hg rename meta from file size
128 128 meta, offset = filelog.parsemeta(text)
129 129 if offset:
130 130 textlen -= offset
131 131
132 132 lfstrack = self.opener.options['lfstrack']
133 133
134 134 if lfstrack(self.filename, textlen):
135 135 flags |= revlog.REVIDX_EXTSTORED
136 136
137 137 return orig(self, text, transaction, link, p1, p2, cachedelta=cachedelta,
138 138 node=node, flags=flags, **kwds)
139 139
140 140 def filelogrenamed(orig, self, node):
141 141 if _islfs(self, node):
142 142 rawtext = self.revision(node, raw=True)
143 143 if not rawtext:
144 144 return False
145 145 metadata = pointer.deserialize(rawtext)
146 146 if 'x-hg-copy' in metadata and 'x-hg-copyrev' in metadata:
147 147 return metadata['x-hg-copy'], bin(metadata['x-hg-copyrev'])
148 148 else:
149 149 return False
150 150 return orig(self, node)
151 151
152 152 def filelogsize(orig, self, rev):
153 153 if _islfs(self, rev=rev):
154 154 # fast path: use lfs metadata to answer size
155 155 rawtext = self.revision(rev, raw=True)
156 156 metadata = pointer.deserialize(rawtext)
157 157 return int(metadata['size'])
158 158 return orig(self, rev)
159 159
160 160 def filectxcmp(orig, self, fctx):
161 161 """returns True if text is different than fctx"""
162 162 # some fctx (e.g. hg-git) are not based on basefilectx and do not have islfs
163 163 if self.islfs() and getattr(fctx, 'islfs', lambda: False)():
164 164 # fast path: check LFS oid
165 165 p1 = pointer.deserialize(self.rawdata())
166 166 p2 = pointer.deserialize(fctx.rawdata())
167 167 return p1.oid() != p2.oid()
168 168 return orig(self, fctx)
169 169
170 170 def filectxisbinary(orig, self):
171 171 if self.islfs():
172 172 # fast path: use lfs metadata to answer isbinary
173 173 metadata = pointer.deserialize(self.rawdata())
174 174 # if lfs metadata says nothing, assume it's binary by default
175 175 return bool(int(metadata.get('x-is-binary', 1)))
176 176 return orig(self)
177 177
178 178 def filectxislfs(self):
179 179 return _islfs(self.filelog(), self.filenode())
180 180
181 181 def _updatecatformatter(orig, fm, ctx, matcher, path, decode):
182 182 orig(fm, ctx, matcher, path, decode)
183 183 fm.data(rawdata=ctx[path].rawdata())
184 184
185 185 def convertsink(orig, sink):
186 186 sink = orig(sink)
187 187 if sink.repotype == 'hg':
188 188 class lfssink(sink.__class__):
189 189 def putcommit(self, files, copies, parents, commit, source, revmap,
190 190 full, cleanp2):
191 191 pc = super(lfssink, self).putcommit
192 192 node = pc(files, copies, parents, commit, source, revmap, full,
193 193 cleanp2)
194 194
195 195 if 'lfs' not in self.repo.requirements:
196 196 ctx = self.repo[node]
197 197
198 198 # The file list may contain removed files, so check for
199 199 # membership before assuming it is in the context.
200 200 if any(f in ctx and ctx[f].islfs() for f, n in files):
201 201 self.repo.requirements.add('lfs')
202 202 self.repo._writerequirements()
203 203
204 204 # Permanently enable lfs locally
205 205 self.repo.vfs.append(
206 206 'hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))
207 207
208 208 return node
209 209
210 210 sink.__class__ = lfssink
211 211
212 212 return sink
213 213
214 214 def vfsinit(orig, self, othervfs):
215 215 orig(self, othervfs)
216 216 # copy lfs related options
217 217 for k, v in othervfs.options.items():
218 218 if k.startswith('lfs'):
219 219 self.options[k] = v
220 220 # also copy lfs blobstores. note: this can run before reposetup, so lfs
221 221 # blobstore attributes are not always ready at this time.
222 222 for name in ['lfslocalblobstore', 'lfsremoteblobstore']:
223 223 if util.safehasattr(othervfs, name):
224 224 setattr(self, name, getattr(othervfs, name))
225 225
226 226 def hgclone(orig, ui, opts, *args, **kwargs):
227 227 result = orig(ui, opts, *args, **kwargs)
228 228
229 229 if result is not None:
230 230 sourcerepo, destrepo = result
231 231 repo = destrepo.local()
232 232
233 233 # When cloning to a remote repo (like through SSH), no repo is available
234 234 # from the peer. Therefore the hgrc can't be updated.
235 235 if not repo:
236 236 return result
237 237
238 238 # If lfs is required for this repo, permanently enable it locally
239 239 if 'lfs' in repo.requirements:
240 240 repo.vfs.append('hgrc',
241 241 util.tonativeeol('\n[extensions]\nlfs=\n'))
242 242
243 243 return result
244 244
245 245 def hgpostshare(orig, sourcerepo, destrepo, bookmarks=True, defaultpath=None):
246 246 orig(sourcerepo, destrepo, bookmarks, defaultpath)
247 247
248 248 # If lfs is required for this repo, permanently enable it locally
249 249 if 'lfs' in destrepo.requirements:
250 250 destrepo.vfs.append('hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))
251 251
252 252 def _prefetchfiles(repo, ctx, files):
253 253 """Ensure that required LFS blobs are present, fetching them as a group if
254 254 needed.
255 255
256 256 This is centralized logic for various prefetch hooks."""
257 257 pointers = []
258 258 localstore = repo.svfs.lfslocalblobstore
259 259
260 260 for f in files:
261 261 p = pointerfromctx(ctx, f)
262 262 if p and not localstore.has(p.oid()):
263 263 p.filename = f
264 264 pointers.append(p)
265 265
266 266 if pointers:
267 267 repo.svfs.lfsremoteblobstore.readbatch(pointers, localstore)
268 268
269 def cmdutilprefetchfiles(orig, repo, ctx, files):
270 """Prefetch the indicated files before they are accessed by a command."""
271 orig(repo, ctx, files)
272
273 _prefetchfiles(repo, ctx, files)
274
275 269 def mergemodapplyupdates(orig, repo, actions, wctx, mctx, overwrite,
276 270 labels=None):
277 271 """Ensure that the required LFS blobs are present before applying updates,
278 272 fetching them as a group if needed.
279 273
280 274 This has the effect of ensuring all necessary LFS blobs are present before
281 275 making working directory changes during an update (including after clone and
282 276 share) or merge."""
283 277
284 278 # Skipping 'a', 'am', 'f', 'r', 'dm', 'e', 'k', 'p' and 'pr', because they
285 279 # don't touch mctx. 'cd' is skipped, because changed/deleted never resolves
286 280 # to something from the remote side.
287 281 oplist = [actions[a] for a in 'g dc dg m'.split()]
288 282
289 283 _prefetchfiles(repo, mctx,
290 284 [f for sublist in oplist for f, args, msg in sublist])
291 285
292 286 return orig(repo, actions, wctx, mctx, overwrite, labels)
293 287
294 288 def _canskipupload(repo):
295 289 # if remotestore is a null store, upload is a no-op and can be skipped
296 290 return isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
297 291
298 292 def candownload(repo):
299 293 # if remotestore is a null store, downloads will lead to nothing
300 294 return not isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
301 295
302 296 def uploadblobsfromrevs(repo, revs):
303 297 '''upload lfs blobs introduced by revs
304 298
305 299 Note: also used by other extensions, e.g. infinitepush. Avoid renaming.
306 300 '''
307 301 if _canskipupload(repo):
308 302 return
309 303 pointers = extractpointers(repo, revs)
310 304 uploadblobs(repo, pointers)
311 305
312 306 def prepush(pushop):
313 307 """Prepush hook.
314 308
315 309 Read through the revisions to push, looking for filelog entries that can be
316 310 deserialized into metadata so that we can block the push on their upload to
317 311 the remote blobstore.
318 312 """
319 313 return uploadblobsfromrevs(pushop.repo, pushop.outgoing.missing)
320 314
321 315 def push(orig, repo, remote, *args, **kwargs):
322 316 """bail on push if the extension isn't enabled on remote when needed"""
323 317 if 'lfs' in repo.requirements:
324 318 # If the remote peer is for a local repo, the requirement tests in the
325 319 # base class method enforce lfs support. Otherwise, some revisions in
326 320 # this repo use lfs, and the remote repo needs the extension loaded.
327 321 if not remote.local() and not remote.capable('lfs'):
328 322 # This is a copy of the message in exchange.push() when requirements
329 323 # are missing between local repos.
330 324 m = _("required features are not supported in the destination: %s")
331 325 raise error.Abort(m % 'lfs',
332 326 hint=_('enable the lfs extension on the server'))
333 327 return orig(repo, remote, *args, **kwargs)
334 328
335 329 def writenewbundle(orig, ui, repo, source, filename, bundletype, outgoing,
336 330 *args, **kwargs):
337 331 """upload LFS blobs added by outgoing revisions on 'hg bundle'"""
338 332 uploadblobsfromrevs(repo, outgoing.missing)
339 333 return orig(ui, repo, source, filename, bundletype, outgoing, *args,
340 334 **kwargs)
341 335
342 336 def extractpointers(repo, revs):
343 337 """return a list of lfs pointers added by given revs"""
344 338 repo.ui.debug('lfs: computing set of blobs to upload\n')
345 339 pointers = {}
346 340 for r in revs:
347 341 ctx = repo[r]
348 342 for p in pointersfromctx(ctx).values():
349 343 pointers[p.oid()] = p
350 344 return sorted(pointers.values())
351 345
352 346 def pointerfromctx(ctx, f, removed=False):
353 347 """return a pointer for the named file from the given changectx, or None if
354 348 the file isn't LFS.
355 349
356 350 Optionally, the pointer for a file deleted from the context can be returned.
357 351 Since no such pointer is actually stored, and to distinguish from a non-LFS
358 352 file, this pointer is represented by an empty dict.
359 353 """
360 354 _ctx = ctx
361 355 if f not in ctx:
362 356 if not removed:
363 357 return None
364 358 if f in ctx.p1():
365 359 _ctx = ctx.p1()
366 360 elif f in ctx.p2():
367 361 _ctx = ctx.p2()
368 362 else:
369 363 return None
370 364 fctx = _ctx[f]
371 365 if not _islfs(fctx.filelog(), fctx.filenode()):
372 366 return None
373 367 try:
374 368 p = pointer.deserialize(fctx.rawdata())
375 369 if ctx == _ctx:
376 370 return p
377 371 return {}
378 372 except pointer.InvalidPointer as ex:
379 373 raise error.Abort(_('lfs: corrupted pointer (%s@%s): %s\n')
380 374 % (f, short(_ctx.node()), ex))
381 375
382 376 def pointersfromctx(ctx, removed=False):
383 377 """return a dict {path: pointer} for given single changectx.
384 378
385 379 If ``removed`` == True and the LFS file was removed from ``ctx``, the value
386 380 stored for the path is an empty dict.
387 381 """
388 382 result = {}
389 383 for f in ctx.files():
390 384 p = pointerfromctx(ctx, f, removed=removed)
391 385 if p is not None:
392 386 result[f] = p
393 387 return result
394 388
395 389 def uploadblobs(repo, pointers):
396 390 """upload given pointers from local blobstore"""
397 391 if not pointers:
398 392 return
399 393
400 394 remoteblob = repo.svfs.lfsremoteblobstore
401 395 remoteblob.writebatch(pointers, repo.svfs.lfslocalblobstore)
402 396
403 397 def upgradefinishdatamigration(orig, ui, srcrepo, dstrepo, requirements):
404 398 orig(ui, srcrepo, dstrepo, requirements)
405 399
406 400 srclfsvfs = srcrepo.svfs.lfslocalblobstore.vfs
407 401 dstlfsvfs = dstrepo.svfs.lfslocalblobstore.vfs
408 402
409 403 for dirpath, dirs, files in srclfsvfs.walk():
410 404 for oid in files:
411 405 ui.write(_('copying lfs blob %s\n') % oid)
412 406 lfutil.link(srclfsvfs.join(oid), dstlfsvfs.join(oid))
413 407
414 408 def upgraderequirements(orig, repo):
415 409 reqs = orig(repo)
416 410 if 'lfs' in repo.requirements:
417 411 reqs.add('lfs')
418 412 return reqs
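For comparison, any extension that needs files prefetched can now register
its own callback with the same (repo, ctx, files) signature used by
wrapper._prefetchfiles above. A sketch with hypothetical extension and
function names::

    from mercurial import scmutil

    def _myprefetch(repo, ctx, files):
        # group-fetch whatever external data the listed files require
        for f in files:
            repo.ui.debug('myext: would prefetch %s\n' % f)

    def extsetup(ui):
        # register under this extension's name, alongside the 'lfs' hook
        scmutil.fileprefetchhooks.add('myext', _myprefetch)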