wrapfunction: use sysstr instead of bytes as argument in "lfs"...
marmoute
r51679:dde4b55a default
@@ -1,446 +1,446 @@
1 1 # lfs - hash-preserving large file support using Git-LFS protocol
2 2 #
3 3 # Copyright 2017 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 """lfs - large file support (EXPERIMENTAL)
9 9
10 10 This extension allows large files to be tracked outside of the normal
11 11 repository storage and stored on a centralized server, similar to the
12 12 ``largefiles`` extension. The ``git-lfs`` protocol is used when
13 13 communicating with the server, so existing git infrastructure can be
14 14 harnessed. Even though the files are stored outside of the repository,
15 15 they are still integrity checked in the same manner as normal files.
16 16
17 17 The files stored outside of the repository are downloaded on demand,
18 18 which reduces the time to clone, and possibly the local disk usage.
19 19 This changes fundamental workflows in a DVCS, so careful thought
20 20 should be given before deploying it. :hg:`convert` can be used to
21 21 convert LFS repositories to normal repositories that no longer
22 22 require this extension, and do so without changing the commit hashes.
23 23 This allows the extension to be disabled if the centralized workflow
24 24 becomes burdensome. However, the pre- and post-convert clones will
25 25 not be able to communicate with each other unless the extension is
26 26 enabled on both.
27 27
28 28 To start a new repository, or to add LFS files to an existing one, just
29 29 create an ``.hglfs`` file as described below in the root directory of
30 30 the repository. Typically, this file should be put under version
31 31 control, so that the settings will propagate to other repositories with
32 32 push and pull. During any commit, Mercurial will consult this file to
33 33 determine if an added or modified file should be stored externally. The
34 34 type of storage depends on the characteristics of the file at each
35 35 commit. A file that is near a size threshold may switch back and forth
36 36 between LFS and normal storage, as needed.
37 37
38 38 Alternately, both normal repositories and largefile controlled
39 39 repositories can be converted to LFS by using :hg:`convert` and the
40 40 ``lfs.track`` config option described below. The ``.hglfs`` file
41 41 should then be created and added, to control subsequent LFS selection.
42 42 The hashes are also unchanged in this case. The LFS and non-LFS
43 43 repositories can be distinguished because the LFS repository will
44 44 abort any command if this extension is disabled.
45 45
46 46 Committed LFS files are held locally, until the repository is pushed.
47 47 Prior to pushing the normal repository data, the LFS files that are
48 48 tracked by the outgoing commits are automatically uploaded to the
49 49 configured central server. No LFS files are transferred on
50 50 :hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
51 51 demand as they need to be read, if a cached copy cannot be found
52 52 locally. Both committing and downloading an LFS file will link the
53 53 file to a usercache, to speed up future access. See the `usercache`
54 54 config setting described below.
55 55
56 56 The extension reads its configuration from a versioned ``.hglfs``
57 57 configuration file found in the root of the working directory. The
58 58 ``.hglfs`` file uses the same syntax as all other Mercurial
59 59 configuration files. It uses a single section, ``[track]``.
60 60
61 61 The ``[track]`` section specifies which files are stored as LFS (or
62 62 not). Each line is keyed by a file pattern, with a predicate value.
63 63 The first file pattern match is used, so put more specific patterns
64 64 first. The available predicates are ``all()``, ``none()``, and
65 65 ``size()``. See "hg help filesets.size" for the latter.
66 66
67 67 Example versioned ``.hglfs`` file::
68 68
69 69 [track]
70 70 # No Makefile or python file, anywhere, will be LFS
71 71 **Makefile = none()
72 72 **.py = none()
73 73
74 74 **.zip = all()
75 75 **.exe = size(">1MB")
76 76
77 77 # Catchall for everything not matched above
78 78 ** = size(">10MB")
79 79
80 80 Configs::
81 81
82 82 [lfs]
83 83 # Remote endpoint. Multiple protocols are supported:
84 84 # - http(s)://user:pass@example.com/path
85 85 # git-lfs endpoint
86 86 # - file:///tmp/path
87 87 # local filesystem, usually for testing
88 88 # if unset, lfs will assume the remote repository also handles blob storage
89 89 # for http(s) URLs. Otherwise, lfs will prompt to set this when it must
90 90 # use this value.
91 91 # (default: unset)
92 92 url = https://example.com/repo.git/info/lfs
93 93
94 94 # Which files to track in LFS. Path tests are "**.extname" for file
95 95 # extensions, and "path:under/some/directory" for path prefix. Both
96 96 # are relative to the repository root.
97 97 # File size can be tested with the "size()" fileset, and tests can be
98 98 # joined with fileset operators. (See "hg help filesets.operators".)
99 99 #
100 100 # Some examples:
101 101 # - all() # everything
102 102 # - none() # nothing
103 103 # - size(">20MB") # larger than 20MB
104 104 # - !**.txt # anything not a *.txt file
105 105 # - **.zip | **.tar.gz | **.7z # some types of compressed files
106 106 # - path:bin # files under "bin" in the project root
107 107 # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
108 108 # | (path:bin & !path:/bin/README) | size(">1GB")
109 109 # (default: none())
110 110 #
111 111 # This is ignored if there is a tracked '.hglfs' file, and this setting
112 112 # will eventually be deprecated and removed.
113 113 track = size(">10M")
114 114
115 115 # how many times to retry before giving up on transferring an object
116 116 retry = 5
117 117
118 118 # the local directory to store lfs files for sharing across local clones.
119 119 # If not set, the cache is located in an OS specific cache location.
120 120 usercache = /path/to/global/cache
121 121 """
122 122
123 123
124 124 import sys
125 125
126 126 from mercurial.i18n import _
127 127 from mercurial.node import bin
128 128
129 129 from mercurial import (
130 130 bundlecaches,
131 131 config,
132 132 context,
133 133 error,
134 134 extensions,
135 135 exthelper,
136 136 filelog,
137 137 filesetlang,
138 138 localrepo,
139 139 logcmdutil,
140 140 minifileset,
141 141 pycompat,
142 142 revlog,
143 143 scmutil,
144 144 templateutil,
145 145 util,
146 146 )
147 147
148 148 from mercurial.interfaces import repository
149 149
150 150 from . import (
151 151 blobstore,
152 152 wireprotolfsserver,
153 153 wrapper,
154 154 )
155 155
156 156 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
157 157 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
158 158 # be specifying the version(s) of Mercurial they are tested with, or
159 159 # leave the attribute unspecified.
160 160 testedwith = b'ships-with-hg-core'
161 161
162 162 eh = exthelper.exthelper()
163 163 eh.merge(wrapper.eh)
164 164 eh.merge(wireprotolfsserver.eh)
165 165
166 166 cmdtable = eh.cmdtable
167 167 configtable = eh.configtable
168 168 extsetup = eh.finalextsetup
169 169 uisetup = eh.finaluisetup
170 170 filesetpredicate = eh.filesetpredicate
171 171 reposetup = eh.finalreposetup
172 172 templatekeyword = eh.templatekeyword
173 173
174 174 eh.configitem(
175 175 b'experimental',
176 176 b'lfs.serve',
177 177 default=True,
178 178 )
179 179 eh.configitem(
180 180 b'experimental',
181 181 b'lfs.user-agent',
182 182 default=None,
183 183 )
184 184 eh.configitem(
185 185 b'experimental',
186 186 b'lfs.disableusercache',
187 187 default=False,
188 188 )
189 189 eh.configitem(
190 190 b'experimental',
191 191 b'lfs.worker-enable',
192 192 default=True,
193 193 )
194 194
195 195 eh.configitem(
196 196 b'lfs',
197 197 b'url',
198 198 default=None,
199 199 )
200 200 eh.configitem(
201 201 b'lfs',
202 202 b'usercache',
203 203 default=None,
204 204 )
205 205 # Deprecated
206 206 eh.configitem(
207 207 b'lfs',
208 208 b'threshold',
209 209 default=None,
210 210 )
211 211 eh.configitem(
212 212 b'lfs',
213 213 b'track',
214 214 default=b'none()',
215 215 )
216 216 eh.configitem(
217 217 b'lfs',
218 218 b'retry',
219 219 default=5,
220 220 )
221 221
222 222 lfsprocessor = (
223 223 wrapper.readfromstore,
224 224 wrapper.writetostore,
225 225 wrapper.bypasscheckhash,
226 226 )
227 227
228 228
229 229 def featuresetup(ui, supported):
230 230 # don't die on seeing a repo with the lfs requirement
231 231 supported |= {b'lfs'}
232 232
233 233
234 234 @eh.uisetup
235 235 def _uisetup(ui):
236 236 localrepo.featuresetupfuncs.add(featuresetup)
237 237
238 238
239 239 @eh.reposetup
240 240 def _reposetup(ui, repo):
241 241 # Nothing to do with a remote repo
242 242 if not repo.local():
243 243 return
244 244
245 245 repo.svfs.lfslocalblobstore = blobstore.local(repo)
246 246 repo.svfs.lfsremoteblobstore = blobstore.remote(repo)
247 247
248 248 class lfsrepo(repo.__class__):
249 249 @localrepo.unfilteredmethod
250 250 def commitctx(self, ctx, error=False, origctx=None):
251 251 repo.svfs.options[b'lfstrack'] = _trackedmatcher(self)
252 252 return super(lfsrepo, self).commitctx(ctx, error, origctx=origctx)
253 253
254 254 repo.__class__ = lfsrepo
255 255
256 256 if b'lfs' not in repo.requirements:
257 257
258 258 def checkrequireslfs(ui, repo, **kwargs):
259 259 with repo.lock():
260 260 if b'lfs' in repo.requirements:
261 261 return 0
262 262
263 263 last = kwargs.get('node_last')
264 264 if last:
265 265 s = repo.set(b'%n:%n', bin(kwargs['node']), bin(last))
266 266 else:
267 267 s = repo.set(b'%n', bin(kwargs['node']))
268 268 match = repo._storenarrowmatch
269 269 for ctx in s:
270 270 # TODO: is there a way to just walk the files in the commit?
271 271 if any(
272 272 ctx[f].islfs()
273 273 for f in ctx.files()
274 274 if f in ctx and match(f)
275 275 ):
276 276 repo.requirements.add(b'lfs')
277 277 repo.features.add(repository.REPO_FEATURE_LFS)
278 278 scmutil.writereporequirements(repo)
279 279 repo.prepushoutgoinghooks.add(b'lfs', wrapper.prepush)
280 280 break
281 281
282 282 ui.setconfig(b'hooks', b'commit.lfs', checkrequireslfs, b'lfs')
283 283 ui.setconfig(
284 284 b'hooks', b'pretxnchangegroup.lfs', checkrequireslfs, b'lfs'
285 285 )
286 286 else:
287 287 repo.prepushoutgoinghooks.add(b'lfs', wrapper.prepush)
288 288
289 289
290 290 def _trackedmatcher(repo):
291 291 """Return a function (path, size) -> bool indicating whether or not to
292 292 track a given file with lfs."""
293 293 if not repo.wvfs.exists(b'.hglfs'):
294 294 # No '.hglfs' in wdir. Fall back to config for now.
295 295 trackspec = repo.ui.config(b'lfs', b'track')
296 296
297 297 # deprecated config: lfs.threshold
298 298 threshold = repo.ui.configbytes(b'lfs', b'threshold')
299 299 if threshold:
300 300 filesetlang.parse(trackspec) # make sure syntax errors are confined
301 301 trackspec = b"(%s) | size('>%d')" % (trackspec, threshold)
302 302
303 303 return minifileset.compile(trackspec)
304 304
305 305 data = repo.wvfs.tryread(b'.hglfs')
306 306 if not data:
307 307 return lambda p, s: False
308 308
309 309 # Parse errors here will abort with a message that points to the .hglfs file
310 310 # and line number.
311 311 cfg = config.config()
312 312 cfg.parse(b'.hglfs', data)
313 313
314 314 try:
315 315 rules = [
316 316 (minifileset.compile(pattern), minifileset.compile(rule))
317 317 for pattern, rule in cfg.items(b'track')
318 318 ]
319 319 except error.ParseError as e:
320 320 # The original exception gives no indicator that the error is in the
321 321 # .hglfs file, so add that.
322 322
323 323 # TODO: See if the line number of the file can be made available.
324 324 raise error.Abort(_(b'parse error in .hglfs: %s') % e)
325 325
326 326 def _match(path, size):
327 327 for pat, rule in rules:
328 328 if pat(path, size):
329 329 return rule(path, size)
330 330
331 331 return False
332 332
333 333 return _match
334 334
335 335
336 336 # Called by remotefilelog
337 337 def wrapfilelog(filelog):
338 338 wrapfunction = extensions.wrapfunction
339 339
340 340 wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
341 341 wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
342 342 wrapfunction(filelog, 'size', wrapper.filelogsize)
343 343
344 344
345 @eh.wrapfunction(localrepo, b'resolverevlogstorevfsoptions')
345 @eh.wrapfunction(localrepo, 'resolverevlogstorevfsoptions')
346 346 def _resolverevlogstorevfsoptions(orig, ui, requirements, features):
347 347 opts = orig(ui, requirements, features)
348 348 for name, module in extensions.extensions(ui):
349 349 if module is sys.modules[__name__]:
350 350 if revlog.REVIDX_EXTSTORED in opts[b'flagprocessors']:
351 351 msg = (
352 352 _(b"cannot register multiple processors on flag '%#x'.")
353 353 % revlog.REVIDX_EXTSTORED
354 354 )
355 355 raise error.Abort(msg)
356 356
357 357 opts[b'flagprocessors'][revlog.REVIDX_EXTSTORED] = lfsprocessor
358 358 break
359 359
360 360 return opts
361 361
362 362
363 363 @eh.extsetup
364 364 def _extsetup(ui):
365 365 wrapfilelog(filelog.filelog)
366 366
367 367 context.basefilectx.islfs = wrapper.filectxislfs
368 368
369 369 scmutil.fileprefetchhooks.add(b'lfs', wrapper._prefetchfiles)
370 370
371 371 # Make bundle choose changegroup3 instead of changegroup2. This affects
372 372 # "hg bundle" command. Note: it does not cover all bundle formats like
373 373 # "packed1". Using "packed1" with lfs will likely cause trouble.
374 374 bundlecaches._bundlespeccontentopts[b"v2"][b"cg.version"] = b"03"
375 375
376 376
377 377 @eh.filesetpredicate(b'lfs()')
378 378 def lfsfileset(mctx, x):
379 379 """File that uses LFS storage."""
380 380 # i18n: "lfs" is a keyword
381 381 filesetlang.getargs(x, 0, 0, _(b"lfs takes no arguments"))
382 382 ctx = mctx.ctx
383 383
384 384 def lfsfilep(f):
385 385 return wrapper.pointerfromctx(ctx, f, removed=True) is not None
386 386
387 387 return mctx.predicate(lfsfilep, predrepr=b'<lfs>')
388 388
389 389
390 390 @eh.templatekeyword(b'lfs_files', requires={b'ctx'})
391 391 def lfsfiles(context, mapping):
392 392 """List of strings. All files modified, added, or removed by this
393 393 changeset."""
394 394 ctx = context.resource(mapping, b'ctx')
395 395
396 396 pointers = wrapper.pointersfromctx(ctx, removed=True) # {path: pointer}
397 397 files = sorted(pointers.keys())
398 398
399 399 def pointer(v):
400 400 # In the file spec, version is first and the other keys are sorted.
401 401 sortkeyfunc = lambda x: (x[0] != b'version', x)
402 402 items = sorted(pointers[v].items(), key=sortkeyfunc)
403 403 return util.sortdict(items)
404 404
405 405 makemap = lambda v: {
406 406 b'file': v,
407 407 b'lfsoid': pointers[v].oid() if pointers[v] else None,
408 408 b'lfspointer': templateutil.hybriddict(pointer(v)),
409 409 }
410 410
411 411 # TODO: make the separator ', '?
412 412 f = templateutil._showcompatlist(context, mapping, b'lfs_file', files)
413 413 return templateutil.hybrid(f, files, makemap, pycompat.identity)
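# Illustrative template usage for the keyword above (the command line is a
# made-up example, but it only uses the 'file' and 'lfsoid' sub-keywords
# defined in makemap):
#
#     hg log -r . -T '{lfs_files % "{file}: {lfsoid}\n"}'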
414 414
415 415
416 416 @eh.command(
417 417 b'debuglfsupload',
418 418 [(b'r', b'rev', [], _(b'upload large files introduced by REV'))],
419 419 )
420 420 def debuglfsupload(ui, repo, **opts):
421 421 """upload lfs blobs added by the working copy parent or given revisions"""
422 422 revs = opts.get('rev', [])
423 423 pointers = wrapper.extractpointers(repo, logcmdutil.revrange(repo, revs))
424 424 wrapper.uploadblobs(repo, pointers)
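# Illustrative invocation of the command above (the revision is a made-up
# example):
#
#     hg debuglfsupload -r tip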
425 425
426 426
427 427 @eh.wrapcommand(
428 428 b'verify',
429 429 opts=[(b'', b'no-lfs', None, _(b'skip missing lfs blob content'))],
430 430 )
431 431 def verify(orig, ui, repo, **opts):
432 432 skipflags = repo.ui.configint(b'verify', b'skipflags')
433 433 no_lfs = opts.pop('no_lfs')
434 434
435 435 if skipflags:
436 436 # --lfs overrides the config bit, if set.
437 437 if no_lfs is False:
438 438 skipflags &= ~repository.REVISION_FLAG_EXTSTORED
439 439 else:
440 440 skipflags = 0
441 441
442 442 if no_lfs is True:
443 443 skipflags |= repository.REVISION_FLAG_EXTSTORED
444 444
445 445 with ui.configoverride({(b'verify', b'skipflags'): skipflags}):
446 446 return orig(ui, repo, **opts)
@@ -1,369 +1,369 @@
1 1 # wireprotolfsserver.py - lfs protocol server side implementation
2 2 #
3 3 # Copyright 2018 Matt Harbison <matt_harbison@yahoo.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8
9 9 import datetime
10 10 import errno
11 11 import json
12 12 import traceback
13 13
14 14 from mercurial.hgweb import common as hgwebcommon
15 15
16 16 from mercurial import (
17 17 exthelper,
18 18 pycompat,
19 19 util,
20 20 wireprotoserver,
21 21 )
22 22
23 23 from . import blobstore
24 24
25 25 HTTP_OK = hgwebcommon.HTTP_OK
26 26 HTTP_CREATED = hgwebcommon.HTTP_CREATED
27 27 HTTP_BAD_REQUEST = hgwebcommon.HTTP_BAD_REQUEST
28 28 HTTP_NOT_FOUND = hgwebcommon.HTTP_NOT_FOUND
29 29 HTTP_METHOD_NOT_ALLOWED = hgwebcommon.HTTP_METHOD_NOT_ALLOWED
30 30 HTTP_NOT_ACCEPTABLE = hgwebcommon.HTTP_NOT_ACCEPTABLE
31 31 HTTP_UNSUPPORTED_MEDIA_TYPE = hgwebcommon.HTTP_UNSUPPORTED_MEDIA_TYPE
32 32
33 33 eh = exthelper.exthelper()
34 34
35 35
36 @eh.wrapfunction(wireprotoserver, b'handlewsgirequest')
36 @eh.wrapfunction(wireprotoserver, 'handlewsgirequest')
37 37 def handlewsgirequest(orig, rctx, req, res, checkperm):
38 38 """Wrap wireprotoserver.handlewsgirequest() to possibly process an LFS
39 39 request if it is left unprocessed by the wrapped method.
40 40 """
41 41 if orig(rctx, req, res, checkperm):
42 42 return True
43 43
44 44 if not rctx.repo.ui.configbool(b'experimental', b'lfs.serve'):
45 45 return False
46 46
47 47 if not util.safehasattr(rctx.repo.svfs, 'lfslocalblobstore'):
48 48 return False
49 49
50 50 if not req.dispatchpath:
51 51 return False
52 52
53 53 try:
54 54 if req.dispatchpath == b'.git/info/lfs/objects/batch':
55 55 checkperm(rctx, req, b'pull')
56 56 return _processbatchrequest(rctx.repo, req, res)
57 57 # TODO: reserve and use a path in the proposed http wireprotocol /api/
58 58 # namespace?
59 59 elif req.dispatchpath.startswith(b'.hg/lfs/objects'):
60 60 return _processbasictransfer(
61 61 rctx.repo, req, res, lambda perm: checkperm(rctx, req, perm)
62 62 )
63 63 return False
64 64 except hgwebcommon.ErrorResponse as e:
65 65 # XXX: copied from the handler surrounding wireprotoserver._callhttp()
66 66 # in the wrapped function. Should this be moved back to hgweb to
67 67 # be a common handler?
68 68 for k, v in e.headers:
69 69 res.headers[k] = v
70 70 res.status = hgwebcommon.statusmessage(e.code, pycompat.bytestr(e))
71 71 res.setbodybytes(b'0\n%s\n' % pycompat.bytestr(e))
72 72 return True
73 73
74 74
75 75 def _sethttperror(res, code, message=None):
76 76 res.status = hgwebcommon.statusmessage(code, message=message)
77 77 res.headers[b'Content-Type'] = b'text/plain; charset=utf-8'
78 78 res.setbodybytes(b'')
79 79
80 80
81 81 def _logexception(req):
82 82 """Write information about the current exception to wsgi.errors."""
83 83 tb = pycompat.sysbytes(traceback.format_exc())
84 84 errorlog = req.rawenv[b'wsgi.errors']
85 85
86 86 uri = b''
87 87 if req.apppath:
88 88 uri += req.apppath
89 89 uri += b'/' + req.dispatchpath
90 90
91 91 errorlog.write(
92 92 b"Exception happened while processing request '%s':\n%s" % (uri, tb)
93 93 )
94 94
95 95
96 96 def _processbatchrequest(repo, req, res):
97 97 """Handle a request for the Batch API, which is the gateway to granting file
98 98 access.
99 99
100 100 https://github.com/git-lfs/git-lfs/blob/master/docs/api/batch.md
101 101 """
102 102
103 103 # Mercurial client request:
104 104 #
105 105 # HOST: localhost:$HGPORT
106 106 # ACCEPT: application/vnd.git-lfs+json
107 107 # ACCEPT-ENCODING: identity
108 108 # USER-AGENT: git-lfs/2.3.4 (Mercurial 4.5.2+1114-f48b9754f04c+20180316)
109 109 # Content-Length: 125
110 110 # Content-Type: application/vnd.git-lfs+json
111 111 #
112 112 # {
113 113 # "objects": [
114 114 # {
115 115 # "oid": "31cf...8e5b"
116 116 # "size": 12
117 117 # }
118 118 # ]
119 119 # "operation": "upload"
120 120 # }
121 121
122 122 if req.method != b'POST':
123 123 _sethttperror(res, HTTP_METHOD_NOT_ALLOWED)
124 124 return True
125 125
126 126 if req.headers[b'Content-Type'] != b'application/vnd.git-lfs+json':
127 127 _sethttperror(res, HTTP_UNSUPPORTED_MEDIA_TYPE)
128 128 return True
129 129
130 130 if req.headers[b'Accept'] != b'application/vnd.git-lfs+json':
131 131 _sethttperror(res, HTTP_NOT_ACCEPTABLE)
132 132 return True
133 133
134 134 # XXX: specify an encoding?
135 135 lfsreq = pycompat.json_loads(req.bodyfh.read())
136 136
137 137 # If no transfer handlers are explicitly requested, 'basic' is assumed.
138 138 if 'basic' not in lfsreq.get('transfers', ['basic']):
139 139 _sethttperror(
140 140 res,
141 141 HTTP_BAD_REQUEST,
142 142 b'Only the basic LFS transfer handler is supported',
143 143 )
144 144 return True
145 145
146 146 operation = lfsreq.get('operation')
147 147 operation = pycompat.bytestr(operation)
148 148
149 149 if operation not in (b'upload', b'download'):
150 150 _sethttperror(
151 151 res,
152 152 HTTP_BAD_REQUEST,
153 153 b'Unsupported LFS transfer operation: %s' % operation,
154 154 )
155 155 return True
156 156
157 157 localstore = repo.svfs.lfslocalblobstore
158 158
159 159 objects = [
160 160 p
161 161 for p in _batchresponseobjects(
162 162 req, lfsreq.get('objects', []), operation, localstore
163 163 )
164 164 ]
165 165
166 166 rsp = {
167 167 'transfer': 'basic',
168 168 'objects': objects,
169 169 }
170 170
171 171 res.status = hgwebcommon.statusmessage(HTTP_OK)
172 172 res.headers[b'Content-Type'] = b'application/vnd.git-lfs+json'
173 173 res.setbodybytes(pycompat.bytestr(json.dumps(rsp)))
174 174
175 175 return True
176 176
177 177
178 178 def _batchresponseobjects(req, objects, action, store):
179 179 """Yield one dictionary of attributes for the Batch API response for each
180 180 object in the list.
181 181
182 182 req: The parsedrequest for the Batch API request
183 183 objects: The list of objects in the Batch API object request list
184 184 action: 'upload' or 'download'
185 185 store: The local blob store for servicing requests"""
186 186
187 187 # Successful lfs-test-server response to solicit an upload:
188 188 # {
189 189 # u'objects': [{
190 190 # u'size': 12,
191 191 # u'oid': u'31cf...8e5b',
192 192 # u'actions': {
193 193 # u'upload': {
194 194 # u'href': u'http://localhost:$HGPORT/objects/31cf...8e5b',
195 195 # u'expires_at': u'0001-01-01T00:00:00Z',
196 196 # u'header': {
197 197 # u'Accept': u'application/vnd.git-lfs'
198 198 # }
199 199 # }
200 200 # }
201 201 # }]
202 202 # }
203 203
204 204 # TODO: Sort out the expires_at/expires_in/authenticated keys.
205 205
206 206 for obj in objects:
207 207 # Convert unicode to ASCII to create a filesystem path
208 208 soid = obj.get('oid')
209 209 oid = soid.encode('ascii')
210 210 rsp = {
211 211 'oid': soid,
212 212 'size': obj.get('size'), # XXX: should this check the local size?
213 213 # 'authenticated': True,
214 214 }
215 215
216 216 exists = True
217 217 verifies = False
218 218
219 219 # Verify an existing file on the upload request, so that the client is
220 220 # solicited to re-upload if it is corrupt locally. Download requests are
221 221 # also verified, so the error can be flagged in the Batch API response.
222 222 # (Maybe we can use this to short circuit the download for `hg verify`,
223 223 # IFF the client can assert that the remote end is an hg server.)
224 224 # Otherwise, it's potentially overkill on download, since it is also
225 225 # verified as the file is streamed to the caller.
226 226 try:
227 227 verifies = store.verify(oid)
228 228 if verifies and action == b'upload':
229 229 # The client will skip this upload, but make sure it remains
230 230 # available locally.
231 231 store.linkfromusercache(oid)
232 232 except IOError as inst:
233 233 if inst.errno != errno.ENOENT:
234 234 _logexception(req)
235 235
236 236 rsp['error'] = {
237 237 'code': 500,
238 238 'message': inst.strerror or 'Internal Server Error',
239 239 }
240 240 yield rsp
241 241 continue
242 242
243 243 exists = False
244 244
245 245 # Items are always listed for downloads. They are dropped for uploads
246 246 # IFF they already exist locally.
247 247 if action == b'download':
248 248 if not exists:
249 249 rsp['error'] = {
250 250 'code': 404,
251 251 'message': "The object does not exist",
252 252 }
253 253 yield rsp
254 254 continue
255 255
256 256 elif not verifies:
257 257 rsp['error'] = {
258 258 'code': 422, # XXX: is this the right code?
259 259 'message': "The object is corrupt",
260 260 }
261 261 yield rsp
262 262 continue
263 263
264 264 elif verifies:
265 265 yield rsp # Skip 'actions': already uploaded
266 266 continue
267 267
268 268 expiresat = datetime.datetime.now() + datetime.timedelta(minutes=10)
269 269
270 270 def _buildheader():
271 271 # The spec doesn't mention the Accept header here, but avoid
272 272 # a gratuitous deviation from lfs-test-server in the test
273 273 # output.
274 274 hdr = {'Accept': 'application/vnd.git-lfs'}
275 275
276 276 auth = req.headers.get(b'Authorization', b'')
277 277 if auth.startswith(b'Basic '):
278 278 hdr['Authorization'] = pycompat.strurl(auth)
279 279
280 280 return hdr
281 281
282 282 rsp['actions'] = {
283 283 '%s'
284 284 % pycompat.strurl(action): {
285 285 'href': pycompat.strurl(
286 286 b'%s%s/.hg/lfs/objects/%s' % (req.baseurl, req.apppath, oid)
287 287 ),
288 288 # datetime.isoformat() doesn't include the 'Z' suffix
289 289 "expires_at": expiresat.strftime('%Y-%m-%dT%H:%M:%SZ'),
290 290 'header': _buildheader(),
291 291 }
292 292 }
293 293
294 294 yield rsp
295 295
296 296
297 297 def _processbasictransfer(repo, req, res, checkperm):
298 298 """Handle a single file upload (PUT) or download (GET) action for the Basic
299 299 Transfer Adapter.
300 300
301 301 After determining if the request is for an upload or download, the access
302 302 must be checked by calling ``checkperm()`` with either 'pull' or 'upload'
303 303 before accessing the files.
304 304
305 305 https://github.com/git-lfs/git-lfs/blob/master/docs/api/basic-transfers.md
306 306 """
307 307
308 308 method = req.method
309 309 oid = req.dispatchparts[-1]
310 310 localstore = repo.svfs.lfslocalblobstore
311 311
312 312 if len(req.dispatchparts) != 4:
313 313 _sethttperror(res, HTTP_NOT_FOUND)
314 314 return True
315 315
316 316 if method == b'PUT':
317 317 checkperm(b'upload')
318 318
319 319 # TODO: verify Content-Type?
320 320
321 321 existed = localstore.has(oid)
322 322
323 323 # TODO: how to handle timeouts? The body proxy handles limiting to
324 324 # Content-Length, but what happens if a client sends less than it
325 325 # says it will?
326 326
327 327 statusmessage = hgwebcommon.statusmessage
328 328 try:
329 329 localstore.download(oid, req.bodyfh, req.headers[b'Content-Length'])
330 330 res.status = statusmessage(HTTP_OK if existed else HTTP_CREATED)
331 331 except blobstore.LfsCorruptionError:
332 332 _logexception(req)
333 333
334 334 # XXX: Is this the right code?
335 335 res.status = statusmessage(422, b'corrupt blob')
336 336
337 337 # There's no payload here, but this is the header that lfs-test-server
338 338 # sends back. This eliminates some gratuitous test output conditionals.
339 339 res.headers[b'Content-Type'] = b'text/plain; charset=utf-8'
340 340 res.setbodybytes(b'')
341 341
342 342 return True
343 343 elif method == b'GET':
344 344 checkperm(b'pull')
345 345
346 346 res.status = hgwebcommon.statusmessage(HTTP_OK)
347 347 res.headers[b'Content-Type'] = b'application/octet-stream'
348 348
349 349 try:
350 350 # TODO: figure out how to send back the file in chunks, instead of
351 351 # reading the whole thing. (Also figure out how to send back
352 352 # an error status if an IOError occurs after a partial write
353 353 # in that case. Here, everything is read before starting.)
354 354 res.setbodybytes(localstore.read(oid))
355 355 except blobstore.LfsCorruptionError:
356 356 _logexception(req)
357 357
358 358 # XXX: Is this the right code?
359 359 res.status = hgwebcommon.statusmessage(422, b'corrupt blob')
360 360 res.setbodybytes(b'')
361 361
362 362 return True
363 363 else:
364 364 _sethttperror(
365 365 res,
366 366 HTTP_METHOD_NOT_ALLOWED,
367 367 message=b'Unsupported LFS transfer method: %s' % method,
368 368 )
369 369 return True
@@ -1,548 +1,548 @@
1 1 # wrapper.py - methods wrapping core mercurial logic
2 2 #
3 3 # Copyright 2017 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8
9 9 import hashlib
10 10
11 11 from mercurial.i18n import _
12 12 from mercurial.node import bin, hex, short
13 13 from mercurial.pycompat import (
14 14 getattr,
15 15 setattr,
16 16 )
17 17
18 18 from mercurial import (
19 19 bundle2,
20 20 changegroup,
21 21 cmdutil,
22 22 context,
23 23 error,
24 24 exchange,
25 25 exthelper,
26 26 localrepo,
27 27 revlog,
28 28 scmutil,
29 29 util,
30 30 vfs as vfsmod,
31 31 wireprotov1server,
32 32 )
33 33
34 34 from mercurial.upgrade_utils import (
35 35 actions as upgrade_actions,
36 36 engine as upgrade_engine,
37 37 )
38 38
39 39 from mercurial.interfaces import repository
40 40
41 41 from mercurial.utils import (
42 42 storageutil,
43 43 stringutil,
44 44 )
45 45
46 46 from ..largefiles import lfutil
47 47
48 48 from . import (
49 49 blobstore,
50 50 pointer,
51 51 )
52 52
53 53 eh = exthelper.exthelper()
54 54
55 55
56 @eh.wrapfunction(localrepo, b'makefilestorage')
56 @eh.wrapfunction(localrepo, 'makefilestorage')
57 57 def localrepomakefilestorage(orig, requirements, features, **kwargs):
58 58 if b'lfs' in requirements:
59 59 features.add(repository.REPO_FEATURE_LFS)
60 60
61 61 return orig(requirements=requirements, features=features, **kwargs)
62 62
63 63
64 @eh.wrapfunction(changegroup, b'allsupportedversions')
64 @eh.wrapfunction(changegroup, 'allsupportedversions')
65 65 def allsupportedversions(orig, ui):
66 66 versions = orig(ui)
67 67 versions.add(b'03')
68 68 return versions
69 69
70 70
71 @eh.wrapfunction(wireprotov1server, b'_capabilities')
71 @eh.wrapfunction(wireprotov1server, '_capabilities')
72 72 def _capabilities(orig, repo, proto):
73 73 '''Wrap server command to announce lfs server capability'''
74 74 caps = orig(repo, proto)
75 75 if util.safehasattr(repo.svfs, b'lfslocalblobstore'):
76 76 # Advertise a slightly different capability when lfs is *required*, so
77 77 # that the client knows it MUST load the extension. If lfs is not
78 78 # required on the server, there's no reason to autoload the extension
79 79 # on the client.
80 80 if b'lfs' in repo.requirements:
81 81 caps.append(b'lfs-serve')
82 82
83 83 caps.append(b'lfs')
84 84 return caps
85 85
86 86
87 87 def bypasscheckhash(self, text):
88 88 return False
89 89
90 90
91 91 def readfromstore(self, text):
92 92 """Read filelog content from local blobstore transform for flagprocessor.
93 93
94 94 Default tranform for flagprocessor, returning contents from blobstore.
95 95 Returns a 2-typle (text, validatehash) where validatehash is True as the
96 96 contents of the blobstore should be checked using checkhash.
97 97 """
98 98 p = pointer.deserialize(text)
99 99 oid = p.oid()
100 100 store = self.opener.lfslocalblobstore
101 101 if not store.has(oid):
102 102 p.filename = self.filename
103 103 self.opener.lfsremoteblobstore.readbatch([p], store)
104 104
105 105 # The caller will validate the content
106 106 text = store.read(oid, verify=False)
107 107
108 108 # pack hg filelog metadata
109 109 hgmeta = {}
110 110 for k in p.keys():
111 111 if k.startswith(b'x-hg-'):
112 112 name = k[len(b'x-hg-') :]
113 113 hgmeta[name] = p[k]
114 114 if hgmeta or text.startswith(b'\1\n'):
115 115 text = storageutil.packmeta(hgmeta, text)
116 116
117 117 return (text, True)
118 118
119 119
120 120 def writetostore(self, text):
121 121 # hg filelog metadata (includes rename, etc)
122 122 hgmeta, offset = storageutil.parsemeta(text)
123 123 if offset and offset > 0:
124 124 # lfs blob does not contain hg filelog metadata
125 125 text = text[offset:]
126 126
127 127 # git-lfs only supports sha256
128 128 oid = hex(hashlib.sha256(text).digest())
129 129 self.opener.lfslocalblobstore.write(oid, text)
130 130
131 131 # replace contents with metadata
132 132 longoid = b'sha256:%s' % oid
133 133 metadata = pointer.gitlfspointer(oid=longoid, size=b'%d' % len(text))
134 134
135 135 # by default, we expect the content to be binary. however, LFS could also
136 136 # be used for non-binary content. add a special entry for non-binary data.
137 137 # this will be used by filectx.isbinary().
138 138 if not stringutil.binary(text):
139 139 # not hg filelog metadata (affecting commit hash), no "x-hg-" prefix
140 140 metadata[b'x-is-binary'] = b'0'
141 141
142 142 # translate hg filelog metadata to lfs metadata with "x-hg-" prefix
143 143 if hgmeta is not None:
144 144 for k, v in hgmeta.items():
145 145 metadata[b'x-hg-%s' % k] = v
146 146
147 147 rawtext = metadata.serialize()
148 148 return (rawtext, False)
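# For reference, a raw pointer as produced by writetostore() above looks like
# the following (the oid and size are made-up examples; version is always
# first and the remaining keys are sorted):
#
#     version https://git-lfs.github.com/spec/v1
#     oid sha256:31cf...8e5b
#     size 12
#     x-is-binary 0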
149 149
150 150
151 151 def _islfs(rlog, node=None, rev=None):
152 152 if rev is None:
153 153 if node is None:
154 154 # both None - likely working copy content where node is not ready
155 155 return False
156 156 rev = rlog.rev(node)
157 157 else:
158 158 node = rlog.node(rev)
159 159 if node == rlog.nullid:
160 160 return False
161 161 flags = rlog.flags(rev)
162 162 return bool(flags & revlog.REVIDX_EXTSTORED)
163 163
164 164
165 165 # Wrapping may also be applied by remotefilelog
166 166 def filelogaddrevision(
167 167 orig,
168 168 self,
169 169 text,
170 170 transaction,
171 171 link,
172 172 p1,
173 173 p2,
174 174 cachedelta=None,
175 175 node=None,
176 176 flags=revlog.REVIDX_DEFAULT_FLAGS,
177 177 **kwds
178 178 ):
179 179 # The matcher isn't available if reposetup() wasn't called.
180 180 lfstrack = self._revlog.opener.options.get(b'lfstrack')
181 181
182 182 if lfstrack:
183 183 textlen = len(text)
184 184 # exclude hg rename meta from file size
185 185 meta, offset = storageutil.parsemeta(text)
186 186 if offset:
187 187 textlen -= offset
188 188
189 189 if lfstrack(self._revlog.filename, textlen):
190 190 flags |= revlog.REVIDX_EXTSTORED
191 191
192 192 return orig(
193 193 self,
194 194 text,
195 195 transaction,
196 196 link,
197 197 p1,
198 198 p2,
199 199 cachedelta=cachedelta,
200 200 node=node,
201 201 flags=flags,
202 202 **kwds
203 203 )
204 204
205 205
206 206 # Wrapping may also be applied by remotefilelog
207 207 def filelogrenamed(orig, self, node):
208 208 if _islfs(self._revlog, node):
209 209 rawtext = self._revlog.rawdata(node)
210 210 if not rawtext:
211 211 return False
212 212 metadata = pointer.deserialize(rawtext)
213 213 if b'x-hg-copy' in metadata and b'x-hg-copyrev' in metadata:
214 214 return metadata[b'x-hg-copy'], bin(metadata[b'x-hg-copyrev'])
215 215 else:
216 216 return False
217 217 return orig(self, node)
218 218
219 219
220 220 # Wrapping may also be applied by remotefilelog
221 221 def filelogsize(orig, self, rev):
222 222 if _islfs(self._revlog, rev=rev):
223 223 # fast path: use lfs metadata to answer size
224 224 rawtext = self._revlog.rawdata(rev)
225 225 metadata = pointer.deserialize(rawtext)
226 226 return int(metadata[b'size'])
227 227 return orig(self, rev)
228 228
229 229
230 @eh.wrapfunction(revlog, b'_verify_revision')
230 @eh.wrapfunction(revlog, '_verify_revision')
231 231 def _verify_revision(orig, rl, skipflags, state, node):
232 232 if _islfs(rl, node=node):
233 233 rawtext = rl.rawdata(node)
234 234 metadata = pointer.deserialize(rawtext)
235 235
236 236 # Don't skip blobs that are stored locally, as local verification is
237 237 # relatively cheap and there's no other way to verify the raw data in
238 238 # the revlog.
239 239 if rl.opener.lfslocalblobstore.has(metadata.oid()):
240 240 skipflags &= ~revlog.REVIDX_EXTSTORED
241 241 elif skipflags & revlog.REVIDX_EXTSTORED:
242 242 # The wrapped method will set `skipread`, but there's enough local
243 243 # info to check renames.
244 244 state[b'safe_renamed'].add(node)
245 245
246 246 orig(rl, skipflags, state, node)
247 247
248 248
249 @eh.wrapfunction(context.basefilectx, b'cmp')
249 @eh.wrapfunction(context.basefilectx, 'cmp')
250 250 def filectxcmp(orig, self, fctx):
251 251 """returns True if text is different than fctx"""
252 252 # some fctx (e.g. hg-git) are not based on basefilectx and do not have islfs
253 253 if self.islfs() and getattr(fctx, 'islfs', lambda: False)():
254 254 # fast path: check LFS oid
255 255 p1 = pointer.deserialize(self.rawdata())
256 256 p2 = pointer.deserialize(fctx.rawdata())
257 257 return p1.oid() != p2.oid()
258 258 return orig(self, fctx)
259 259
260 260
261 @eh.wrapfunction(context.basefilectx, b'isbinary')
261 @eh.wrapfunction(context.basefilectx, 'isbinary')
262 262 def filectxisbinary(orig, self):
263 263 if self.islfs():
264 264 # fast path: use lfs metadata to answer isbinary
265 265 metadata = pointer.deserialize(self.rawdata())
266 266 # if lfs metadata says nothing, assume it's binary by default
267 267 return bool(int(metadata.get(b'x-is-binary', 1)))
268 268 return orig(self)
269 269
270 270
271 271 def filectxislfs(self):
272 272 return _islfs(self.filelog()._revlog, self.filenode())
273 273
274 274
275 @eh.wrapfunction(cmdutil, b'_updatecatformatter')
275 @eh.wrapfunction(cmdutil, '_updatecatformatter')
276 276 def _updatecatformatter(orig, fm, ctx, matcher, path, decode):
277 277 orig(fm, ctx, matcher, path, decode)
278 278 fm.data(rawdata=ctx[path].rawdata())
279 279
280 280
281 @eh.wrapfunction(scmutil, b'wrapconvertsink')
281 @eh.wrapfunction(scmutil, 'wrapconvertsink')
282 282 def convertsink(orig, sink):
283 283 sink = orig(sink)
284 284 if sink.repotype == b'hg':
285 285
286 286 class lfssink(sink.__class__):
287 287 def putcommit(
288 288 self,
289 289 files,
290 290 copies,
291 291 parents,
292 292 commit,
293 293 source,
294 294 revmap,
295 295 full,
296 296 cleanp2,
297 297 ):
298 298 pc = super(lfssink, self).putcommit
299 299 node = pc(
300 300 files,
301 301 copies,
302 302 parents,
303 303 commit,
304 304 source,
305 305 revmap,
306 306 full,
307 307 cleanp2,
308 308 )
309 309
310 310 if b'lfs' not in self.repo.requirements:
311 311 ctx = self.repo[node]
312 312
313 313 # The file list may contain removed files, so check for
314 314 # membership before assuming it is in the context.
315 315 if any(f in ctx and ctx[f].islfs() for f, n in files):
316 316 self.repo.requirements.add(b'lfs')
317 317 scmutil.writereporequirements(self.repo)
318 318
319 319 return node
320 320
321 321 sink.__class__ = lfssink
322 322
323 323 return sink
324 324
325 325
326 326 # bundlerepo uses "vfsmod.readonlyvfs(othervfs)", we need to make sure lfs
327 327 # options and blob stores are passed from othervfs to the new readonlyvfs.
328 @eh.wrapfunction(vfsmod.readonlyvfs, b'__init__')
328 @eh.wrapfunction(vfsmod.readonlyvfs, '__init__')
329 329 def vfsinit(orig, self, othervfs):
330 330 orig(self, othervfs)
331 331 # copy lfs related options
332 332 for k, v in othervfs.options.items():
333 333 if k.startswith(b'lfs'):
334 334 self.options[k] = v
335 335 # also copy lfs blobstores. note: this can run before reposetup, so lfs
336 336 # blobstore attributes are not always ready at this time.
337 337 for name in [b'lfslocalblobstore', b'lfsremoteblobstore']:
338 338 if util.safehasattr(othervfs, name):
339 339 setattr(self, name, getattr(othervfs, name))
340 340
341 341
342 342 def _prefetchfiles(repo, revmatches):
343 343 """Ensure that required LFS blobs are present, fetching them as a group if
344 344 needed."""
345 345 if not util.safehasattr(repo.svfs, b'lfslocalblobstore'):
346 346 return
347 347
348 348 pointers = []
349 349 oids = set()
350 350 localstore = repo.svfs.lfslocalblobstore
351 351
352 352 for rev, match in revmatches:
353 353 ctx = repo[rev]
354 354 for f in ctx.walk(match):
355 355 p = pointerfromctx(ctx, f)
356 356 if p and p.oid() not in oids and not localstore.has(p.oid()):
357 357 p.filename = f
358 358 pointers.append(p)
359 359 oids.add(p.oid())
360 360
361 361 if pointers:
362 362 # Recalculating the repo store here allows 'paths.default' that is set
363 363 # on the repo by a clone command to be used for the update.
364 364 blobstore.remote(repo).readbatch(pointers, localstore)
365 365
366 366
367 367 def _canskipupload(repo):
368 368 # Skip if this hasn't been passed to reposetup()
369 369 if not util.safehasattr(repo.svfs, b'lfsremoteblobstore'):
370 370 return True
371 371
372 372 # if remotestore is a null store, upload is a no-op and can be skipped
373 373 return isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
374 374
375 375
376 376 def candownload(repo):
377 377 # Skip if this hasn't been passed to reposetup()
378 378 if not util.safehasattr(repo.svfs, b'lfsremoteblobstore'):
379 379 return False
380 380
381 381 # if remotestore is a null store, downloads will lead to nothing
382 382 return not isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
383 383
384 384
385 385 def uploadblobsfromrevs(repo, revs):
386 386 """upload lfs blobs introduced by revs
387 387
388 388 Note: also used by other extensions, e.g. infinitepush. Avoid renaming.
389 389 """
390 390 if _canskipupload(repo):
391 391 return
392 392 pointers = extractpointers(repo, revs)
393 393 uploadblobs(repo, pointers)
394 394
395 395
396 396 def prepush(pushop):
397 397 """Prepush hook.
398 398
399 399 Read through the revisions to push, looking for filelog entries that can be
400 400 deserialized into metadata so that we can block the push on their upload to
401 401 the remote blobstore.
402 402 """
403 403 return uploadblobsfromrevs(pushop.repo, pushop.outgoing.missing)
404 404
405 405
406 @eh.wrapfunction(exchange, b'push')
406 @eh.wrapfunction(exchange, 'push')
407 407 def push(orig, repo, remote, *args, **kwargs):
408 408 """bail on push if the extension isn't enabled on remote when needed, and
409 409 update the remote store based on the destination path."""
410 410 if b'lfs' in repo.requirements:
411 411 # If the remote peer is for a local repo, the requirement tests in the
412 412 # base class method enforce lfs support. Otherwise, some revisions in
413 413 # this repo use lfs, and the remote repo needs the extension loaded.
414 414 if not remote.local() and not remote.capable(b'lfs'):
415 415 # This is a copy of the message in exchange.push() when requirements
416 416 # are missing between local repos.
417 417 m = _(b"required features are not supported in the destination: %s")
418 418 raise error.Abort(
419 419 m % b'lfs', hint=_(b'enable the lfs extension on the server')
420 420 )
421 421
422 422 # Repositories where this extension is disabled won't have the field.
423 423 # But if there's a requirement, then the extension must be loaded AND
424 424 # there may be blobs to push.
425 425 remotestore = repo.svfs.lfsremoteblobstore
426 426 try:
427 427 repo.svfs.lfsremoteblobstore = blobstore.remote(repo, remote.url())
428 428 return orig(repo, remote, *args, **kwargs)
429 429 finally:
430 430 repo.svfs.lfsremoteblobstore = remotestore
431 431 else:
432 432 return orig(repo, remote, *args, **kwargs)
433 433
434 434
435 435 # when writing a bundle via "hg bundle" command, upload related LFS blobs
436 @eh.wrapfunction(bundle2, b'writenewbundle')
436 @eh.wrapfunction(bundle2, 'writenewbundle')
437 437 def writenewbundle(
438 438 orig, ui, repo, source, filename, bundletype, outgoing, *args, **kwargs
439 439 ):
440 440 """upload LFS blobs added by outgoing revisions on 'hg bundle'"""
441 441 uploadblobsfromrevs(repo, outgoing.missing)
442 442 return orig(
443 443 ui, repo, source, filename, bundletype, outgoing, *args, **kwargs
444 444 )
445 445
446 446
447 447 def extractpointers(repo, revs):
448 448 """return a list of lfs pointers added by given revs"""
449 449 repo.ui.debug(b'lfs: computing set of blobs to upload\n')
450 450 pointers = {}
451 451
452 452 makeprogress = repo.ui.makeprogress
453 453 with makeprogress(
454 454 _(b'lfs search'), _(b'changesets'), len(revs)
455 455 ) as progress:
456 456 for r in revs:
457 457 ctx = repo[r]
458 458 for p in pointersfromctx(ctx).values():
459 459 pointers[p.oid()] = p
460 460 progress.increment()
461 461 return sorted(pointers.values(), key=lambda p: p.oid())
462 462
463 463
464 464 def pointerfromctx(ctx, f, removed=False):
465 465 """return a pointer for the named file from the given changectx, or None if
466 466 the file isn't LFS.
467 467
468 468 Optionally, the pointer for a file deleted from the context can be returned.
469 469 Since no such pointer is actually stored, and to distinguish from a non LFS
470 470 file, this pointer is represented by an empty dict.
471 471 """
472 472 _ctx = ctx
473 473 if f not in ctx:
474 474 if not removed:
475 475 return None
476 476 if f in ctx.p1():
477 477 _ctx = ctx.p1()
478 478 elif f in ctx.p2():
479 479 _ctx = ctx.p2()
480 480 else:
481 481 return None
482 482 fctx = _ctx[f]
483 483 if not _islfs(fctx.filelog()._revlog, fctx.filenode()):
484 484 return None
485 485 try:
486 486 p = pointer.deserialize(fctx.rawdata())
487 487 if ctx == _ctx:
488 488 return p
489 489 return {}
490 490 except pointer.InvalidPointer as ex:
491 491 raise error.Abort(
492 492 _(b'lfs: corrupted pointer (%s@%s): %s\n')
493 493 % (f, short(_ctx.node()), ex)
494 494 )
495 495
496 496
497 497 def pointersfromctx(ctx, removed=False):
498 498 """return a dict {path: pointer} for given single changectx.
499 499
500 500 If ``removed`` == True and the LFS file was removed from ``ctx``, the value
501 501 stored for the path is an empty dict.
502 502 """
503 503 result = {}
504 504 m = ctx.repo().narrowmatch()
505 505
506 506 # TODO: consider manifest.fastread() instead
507 507 for f in ctx.files():
508 508 if not m(f):
509 509 continue
510 510 p = pointerfromctx(ctx, f, removed=removed)
511 511 if p is not None:
512 512 result[f] = p
513 513 return result
514 514
515 515
516 516 def uploadblobs(repo, pointers):
517 517 """upload given pointers from local blobstore"""
518 518 if not pointers:
519 519 return
520 520
521 521 remoteblob = repo.svfs.lfsremoteblobstore
522 522 remoteblob.writebatch(pointers, repo.svfs.lfslocalblobstore)
523 523
524 524
525 @eh.wrapfunction(upgrade_engine, b'finishdatamigration')
525 @eh.wrapfunction(upgrade_engine, 'finishdatamigration')
526 526 def upgradefinishdatamigration(orig, ui, srcrepo, dstrepo, requirements):
527 527 orig(ui, srcrepo, dstrepo, requirements)
528 528
529 529 # Skip if this hasn't been passed to reposetup()
530 530 if util.safehasattr(
531 531 srcrepo.svfs, b'lfslocalblobstore'
532 532 ) and util.safehasattr(dstrepo.svfs, b'lfslocalblobstore'):
533 533 srclfsvfs = srcrepo.svfs.lfslocalblobstore.vfs
534 534 dstlfsvfs = dstrepo.svfs.lfslocalblobstore.vfs
535 535
536 536 for dirpath, dirs, files in srclfsvfs.walk():
537 537 for oid in files:
538 538 ui.write(_(b'copying lfs blob %s\n') % oid)
539 539 lfutil.link(srclfsvfs.join(oid), dstlfsvfs.join(oid))
540 540
541 541
542 @eh.wrapfunction(upgrade_actions, b'preservedrequirements')
543 @eh.wrapfunction(upgrade_actions, b'supporteddestrequirements')
542 @eh.wrapfunction(upgrade_actions, 'preservedrequirements')
543 @eh.wrapfunction(upgrade_actions, 'supporteddestrequirements')
544 544 def upgraderequirements(orig, repo):
545 545 reqs = orig(repo)
546 546 if b'lfs' in repo.requirements:
547 547 reqs.add(b'lfs')
548 548 return reqs