lfs: add an experimental knob to disable blob serving...
Matt Harbison
r37265:dfb38c48 default
@@ -0,0 +1,67 b''
1 #require serve
2
3 $ cat >> $HGRCPATH <<EOF
4 > [extensions]
5 > lfs=
6 > [lfs]
7 > url=http://localhost:$HGPORT/.git/info/lfs
8 > track=all()
9 > [web]
10 > push_ssl = False
11 > allow-push = *
12 > EOF
13
14 Serving LFS files can experimentally be turned off. The long term solution is
15 to support the 'verify' action in both client and server, so that the server can
16 tell the client to store files elsewhere.
17
18 $ hg init server
19 $ hg --config "lfs.usercache=$TESTTMP/servercache" \
20 > --config experimental.lfs.serve=False -R server serve -d \
21 > -p $HGPORT --pid-file=hg.pid -A $TESTTMP/access.log -E $TESTTMP/errors.log
22 $ cat hg.pid >> $DAEMON_PIDS
23
24 Uploads fail...
25
26 $ hg init client
27 $ echo 'this-is-an-lfs-file' > client/lfs.bin
28 $ hg -R client ci -Am 'initial commit'
29 adding lfs.bin
30 $ hg -R client push http://localhost:$HGPORT
31 pushing to http://localhost:$HGPORT/
32 searching for changes
33 abort: LFS HTTP error: HTTP Error 400: no such method: .git (action=upload)!
34 [255]
35
36 ... so do a local push to make the data available. Remove the blob from the
37 default cache, so it attempts to download.
38 $ hg --config "lfs.usercache=$TESTTMP/servercache" \
39 > --config "lfs.url=null://" \
40 > -R client push -q server
41 $ rm -rf `hg config lfs.usercache`
42
43 Downloads fail...
44
45 $ hg clone http://localhost:$HGPORT httpclone
46 requesting all changes
47 adding changesets
48 adding manifests
49 adding file changes
50 added 1 changesets with 1 changes to 1 files
51 new changesets 525251863cad
52 updating to branch default
53 abort: LFS HTTP error: HTTP Error 400: no such method: .git (action=download)!
54 [255]
55
56 $ $PYTHON $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
57
58 $ cat $TESTTMP/access.log $TESTTMP/errors.log
59 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
60 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D525251863cad618e55d483555f3d00a2ca99597e x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ (glob)
61 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ (glob)
62 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ (glob)
63 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 400 - (glob)
64 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
65 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ (glob)
66 $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ (glob)
67 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 400 - (glob)
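The test above drives the new knob from the command line with `--config experimental.lfs.serve=False`. The same setting can live in the server's hgrc; a minimal sketch, assuming a typical server-side configuration (section and key names come from this change, the rest is illustrative):

```ini
# Server-side hgrc sketch: host an LFS-enabled repo over hgweb, but
# refuse to serve the blobs themselves. Clients hitting the Batch API
# path then get the HTTP 400 seen in the test output above.
[extensions]
lfs =

[experimental]
lfs.serve = False

[web]
push_ssl = False
allow-push = *
```

With this in place, clients must be pointed at a separate `lfs.url` endpoint for blob transfer, which is the deployment this knob is meant to enable until the `verify` action is supported.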
@@ -1,391 +1,394 b''
1 1 # lfs - hash-preserving large file support using Git-LFS protocol
2 2 #
3 3 # Copyright 2017 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 """lfs - large file support (EXPERIMENTAL)
9 9
10 10 This extension allows large files to be tracked outside of the normal
11 11 repository storage and stored on a centralized server, similar to the
12 12 ``largefiles`` extension. The ``git-lfs`` protocol is used when
13 13 communicating with the server, so existing git infrastructure can be
14 14 harnessed. Even though the files are stored outside of the repository,
15 15 they are still integrity checked in the same manner as normal files.
16 16
17 17 The files stored outside of the repository are downloaded on demand,
18 18 which reduces the time to clone, and possibly the local disk usage.
19 19 This changes fundamental workflows in a DVCS, so careful thought
20 20 should be given before deploying it. :hg:`convert` can be used to
21 21 convert LFS repositories to normal repositories that no longer
22 22 require this extension, and do so without changing the commit hashes.
23 23 This allows the extension to be disabled if the centralized workflow
24 24 becomes burdensome. However, the pre- and post-convert clones will
25 25 not be able to communicate with each other unless the extension is
26 26 enabled on both.
27 27
28 28 To start a new repository, or to add LFS files to an existing one, just
29 29 create an ``.hglfs`` file as described below in the root directory of
30 30 the repository. Typically, this file should be put under version
31 31 control, so that the settings will propagate to other repositories with
32 32 push and pull. During any commit, Mercurial will consult this file to
33 33 determine if an added or modified file should be stored externally. The
34 34 type of storage depends on the characteristics of the file at each
35 35 commit. A file that is near a size threshold may switch back and forth
36 36 between LFS and normal storage, as needed.
37 37
38 38 Alternately, both normal repositories and largefile controlled
39 39 repositories can be converted to LFS by using :hg:`convert` and the
40 40 ``lfs.track`` config option described below. The ``.hglfs`` file
41 41 should then be created and added, to control subsequent LFS selection.
42 42 The hashes are also unchanged in this case. The LFS and non-LFS
43 43 repositories can be distinguished because the LFS repository will
44 44 abort any command if this extension is disabled.
45 45
46 46 Committed LFS files are held locally, until the repository is pushed.
47 47 Prior to pushing the normal repository data, the LFS files that are
48 48 tracked by the outgoing commits are automatically uploaded to the
49 49 configured central server. No LFS files are transferred on
50 50 :hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
51 51 demand as they need to be read, if a cached copy cannot be found
52 52 locally. Both committing and downloading an LFS file will link the
53 53 file to a usercache, to speed up future access. See the `usercache`
54 54 config setting described below.
55 55
56 56 .hglfs::
57 57
58 58 The extension reads its configuration from a versioned ``.hglfs``
59 59 configuration file found in the root of the working directory. The
60 60 ``.hglfs`` file uses the same syntax as all other Mercurial
61 61 configuration files. It uses a single section, ``[track]``.
62 62
63 63 The ``[track]`` section specifies which files are stored as LFS (or
64 64 not). Each line is keyed by a file pattern, with a predicate value.
65 65 The first file pattern match is used, so put more specific patterns
66 66 first. The available predicates are ``all()``, ``none()``, and
67 67 ``size()``. See "hg help filesets.size" for the latter.
68 68
69 69 Example versioned ``.hglfs`` file::
70 70
71 71 [track]
72 72 # No Makefile or python file, anywhere, will be LFS
73 73 **Makefile = none()
74 74 **.py = none()
75 75
76 76 **.zip = all()
77 77 **.exe = size(">1MB")
78 78
79 79 # Catchall for everything not matched above
80 80 ** = size(">10MB")
81 81
82 82 Configs::
83 83
84 84 [lfs]
85 85 # Remote endpoint. Multiple protocols are supported:
86 86 # - http(s)://user:pass@example.com/path
87 87 # git-lfs endpoint
88 88 # - file:///tmp/path
89 89 # local filesystem, usually for testing
90 90 # if unset, lfs will prompt to set this when it must use this value.
91 91 # (default: unset)
92 92 url = https://example.com/repo.git/info/lfs
93 93
94 94 # Which files to track in LFS. Path tests are "**.extname" for file
95 95 # extensions, and "path:under/some/directory" for path prefix. Both
96 96 # are relative to the repository root.
97 97 # File size can be tested with the "size()" fileset, and tests can be
98 98 # joined with fileset operators. (See "hg help filesets.operators".)
99 99 #
100 100 # Some examples:
101 101 # - all() # everything
102 102 # - none() # nothing
103 103 # - size(">20MB") # larger than 20MB
104 104 # - !**.txt # anything not a *.txt file
105 105 # - **.zip | **.tar.gz | **.7z # some types of compressed files
106 106 # - path:bin # files under "bin" in the project root
107 107 # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
108 108 # | (path:bin & !path:/bin/README) | size(">1GB")
109 109 # (default: none())
110 110 #
111 111 # This is ignored if there is a tracked '.hglfs' file, and this setting
112 112 # will eventually be deprecated and removed.
113 113 track = size(">10M")
114 114
115 115 # how many times to retry before giving up on transferring an object
116 116 retry = 5
117 117
118 118 # the local directory to store lfs files for sharing across local clones.
119 119 # If not set, the cache is located in an OS specific cache location.
120 120 usercache = /path/to/global/cache
121 121 """
122 122
123 123 from __future__ import absolute_import
124 124
125 125 from mercurial.i18n import _
126 126
127 127 from mercurial import (
128 128 bundle2,
129 129 changegroup,
130 130 cmdutil,
131 131 config,
132 132 context,
133 133 error,
134 134 exchange,
135 135 extensions,
136 136 filelog,
137 137 fileset,
138 138 hg,
139 139 localrepo,
140 140 minifileset,
141 141 node,
142 142 pycompat,
143 143 registrar,
144 144 revlog,
145 145 scmutil,
146 146 templateutil,
147 147 upgrade,
148 148 util,
149 149 vfs as vfsmod,
150 150 wireproto,
151 151 wireprotoserver,
152 152 )
153 153
154 154 from . import (
155 155 blobstore,
156 156 wireprotolfsserver,
157 157 wrapper,
158 158 )
159 159
160 160 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
161 161 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
162 162 # be specifying the version(s) of Mercurial they are tested with, or
163 163 # leave the attribute unspecified.
164 164 testedwith = 'ships-with-hg-core'
165 165
166 166 configtable = {}
167 167 configitem = registrar.configitem(configtable)
168 168
169 configitem('experimental', 'lfs.serve',
170 default=True,
171 )
169 172 configitem('experimental', 'lfs.user-agent',
170 173 default=None,
171 174 )
172 175 configitem('experimental', 'lfs.worker-enable',
173 176 default=False,
174 177 )
175 178
176 179 configitem('lfs', 'url',
177 180 default=None,
178 181 )
179 182 configitem('lfs', 'usercache',
180 183 default=None,
181 184 )
182 185 # Deprecated
183 186 configitem('lfs', 'threshold',
184 187 default=None,
185 188 )
186 189 configitem('lfs', 'track',
187 190 default='none()',
188 191 )
189 192 configitem('lfs', 'retry',
190 193 default=5,
191 194 )
192 195
193 196 cmdtable = {}
194 197 command = registrar.command(cmdtable)
195 198
196 199 templatekeyword = registrar.templatekeyword()
197 200 filesetpredicate = registrar.filesetpredicate()
198 201
199 202 def featuresetup(ui, supported):
200 203 # don't die on seeing a repo with the lfs requirement
201 204 supported |= {'lfs'}
202 205
203 206 def uisetup(ui):
204 207 localrepo.featuresetupfuncs.add(featuresetup)
205 208
206 209 def reposetup(ui, repo):
207 210 # Nothing to do with a remote repo
208 211 if not repo.local():
209 212 return
210 213
211 214 repo.svfs.lfslocalblobstore = blobstore.local(repo)
212 215 repo.svfs.lfsremoteblobstore = blobstore.remote(repo)
213 216
214 217 class lfsrepo(repo.__class__):
215 218 @localrepo.unfilteredmethod
216 219 def commitctx(self, ctx, error=False):
217 220 repo.svfs.options['lfstrack'] = _trackedmatcher(self)
218 221 return super(lfsrepo, self).commitctx(ctx, error)
219 222
220 223 repo.__class__ = lfsrepo
221 224
222 225 if 'lfs' not in repo.requirements:
223 226 def checkrequireslfs(ui, repo, **kwargs):
224 227 if 'lfs' not in repo.requirements:
225 228 last = kwargs.get(r'node_last')
226 229 _bin = node.bin
227 230 if last:
228 231 s = repo.set('%n:%n', _bin(kwargs[r'node']), _bin(last))
229 232 else:
230 233 s = repo.set('%n', _bin(kwargs[r'node']))
231 234 match = repo.narrowmatch()
232 235 for ctx in s:
233 236 # TODO: is there a way to just walk the files in the commit?
234 237 if any(ctx[f].islfs() for f in ctx.files()
235 238 if f in ctx and match(f)):
236 239 repo.requirements.add('lfs')
237 240 repo._writerequirements()
238 241 repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
239 242 break
240 243
241 244 ui.setconfig('hooks', 'commit.lfs', checkrequireslfs, 'lfs')
242 245 ui.setconfig('hooks', 'pretxnchangegroup.lfs', checkrequireslfs, 'lfs')
243 246 else:
244 247 repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
245 248
246 249 def _trackedmatcher(repo):
247 250 """Return a function (path, size) -> bool indicating whether or not to
248 251 track a given file with lfs."""
249 252 if not repo.wvfs.exists('.hglfs'):
250 253 # No '.hglfs' in wdir. Fallback to config for now.
251 254 trackspec = repo.ui.config('lfs', 'track')
252 255
253 256 # deprecated config: lfs.threshold
254 257 threshold = repo.ui.configbytes('lfs', 'threshold')
255 258 if threshold:
256 259 fileset.parse(trackspec) # make sure syntax errors are confined
257 260 trackspec = "(%s) | size('>%d')" % (trackspec, threshold)
258 261
259 262 return minifileset.compile(trackspec)
260 263
261 264 data = repo.wvfs.tryread('.hglfs')
262 265 if not data:
263 266 return lambda p, s: False
264 267
265 268 # Parse errors here will abort with a message that points to the .hglfs file
266 269 # and line number.
267 270 cfg = config.config()
268 271 cfg.parse('.hglfs', data)
269 272
270 273 try:
271 274 rules = [(minifileset.compile(pattern), minifileset.compile(rule))
272 275 for pattern, rule in cfg.items('track')]
273 276 except error.ParseError as e:
274 277 # The original exception gives no indicator that the error is in the
275 278 # .hglfs file, so add that.
276 279
277 280 # TODO: See if the line number of the file can be made available.
278 281 raise error.Abort(_('parse error in .hglfs: %s') % e)
279 282
280 283 def _match(path, size):
281 284 for pat, rule in rules:
282 285 if pat(path, size):
283 286 return rule(path, size)
284 287
285 288 return False
286 289
287 290 return _match
288 291
289 292 def wrapfilelog(filelog):
290 293 wrapfunction = extensions.wrapfunction
291 294
292 295 wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
293 296 wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
294 297 wrapfunction(filelog, 'size', wrapper.filelogsize)
295 298
296 299 def extsetup(ui):
297 300 wrapfilelog(filelog.filelog)
298 301
299 302 wrapfunction = extensions.wrapfunction
300 303
301 304 wrapfunction(cmdutil, '_updatecatformatter', wrapper._updatecatformatter)
302 305 wrapfunction(scmutil, 'wrapconvertsink', wrapper.convertsink)
303 306
304 307 wrapfunction(upgrade, '_finishdatamigration',
305 308 wrapper.upgradefinishdatamigration)
306 309
307 310 wrapfunction(upgrade, 'preservedrequirements',
308 311 wrapper.upgraderequirements)
309 312
310 313 wrapfunction(upgrade, 'supporteddestrequirements',
311 314 wrapper.upgraderequirements)
312 315
313 316 wrapfunction(changegroup,
314 317 'allsupportedversions',
315 318 wrapper.allsupportedversions)
316 319
317 320 wrapfunction(exchange, 'push', wrapper.push)
318 321 wrapfunction(wireproto, '_capabilities', wrapper._capabilities)
319 322 wrapfunction(wireprotoserver, 'handlewsgirequest',
320 323 wireprotolfsserver.handlewsgirequest)
321 324
322 325 wrapfunction(context.basefilectx, 'cmp', wrapper.filectxcmp)
323 326 wrapfunction(context.basefilectx, 'isbinary', wrapper.filectxisbinary)
324 327 context.basefilectx.islfs = wrapper.filectxislfs
325 328
326 329 revlog.addflagprocessor(
327 330 revlog.REVIDX_EXTSTORED,
328 331 (
329 332 wrapper.readfromstore,
330 333 wrapper.writetostore,
331 334 wrapper.bypasscheckhash,
332 335 ),
333 336 )
334 337
335 338 wrapfunction(hg, 'clone', wrapper.hgclone)
336 339 wrapfunction(hg, 'postshare', wrapper.hgpostshare)
337 340
338 341 scmutil.fileprefetchhooks.add('lfs', wrapper._prefetchfiles)
339 342
340 343 # Make bundle choose changegroup3 instead of changegroup2. This affects
341 344 # "hg bundle" command. Note: it does not cover all bundle formats like
342 345 # "packed1". Using "packed1" with lfs will likely cause trouble.
343 346 exchange._bundlespeccontentopts["v2"]["cg.version"] = "03"
344 347
345 348 # bundlerepo uses "vfsmod.readonlyvfs(othervfs)", we need to make sure lfs
346 349 # options and blob stores are passed from othervfs to the new readonlyvfs.
347 350 wrapfunction(vfsmod.readonlyvfs, '__init__', wrapper.vfsinit)
348 351
349 352 # when writing a bundle via "hg bundle" command, upload related LFS blobs
350 353 wrapfunction(bundle2, 'writenewbundle', wrapper.writenewbundle)
351 354
352 355 @filesetpredicate('lfs()', callstatus=True)
353 356 def lfsfileset(mctx, x):
354 357 """File that uses LFS storage."""
355 358 # i18n: "lfs" is a keyword
356 359 fileset.getargs(x, 0, 0, _("lfs takes no arguments"))
357 360 return [f for f in mctx.subset
358 361 if wrapper.pointerfromctx(mctx.ctx, f, removed=True) is not None]
359 362
360 363 @templatekeyword('lfs_files', requires={'ctx'})
361 364 def lfsfiles(context, mapping):
362 365 """List of strings. All files modified, added, or removed by this
363 366 changeset."""
364 367 ctx = context.resource(mapping, 'ctx')
365 368
366 369 pointers = wrapper.pointersfromctx(ctx, removed=True) # {path: pointer}
367 370 files = sorted(pointers.keys())
368 371
369 372 def pointer(v):
370 373 # In the file spec, version is first and the other keys are sorted.
371 374 sortkeyfunc = lambda x: (x[0] != 'version', x)
372 375 items = sorted(pointers[v].iteritems(), key=sortkeyfunc)
373 376 return util.sortdict(items)
374 377
375 378 makemap = lambda v: {
376 379 'file': v,
377 380 'lfsoid': pointers[v].oid() if pointers[v] else None,
378 381 'lfspointer': templateutil.hybriddict(pointer(v)),
379 382 }
380 383
381 384 # TODO: make the separator ', '?
382 385 f = templateutil._showcompatlist(context, mapping, 'lfs_file', files)
383 386 return templateutil.hybrid(f, files, makemap, pycompat.identity)
384 387
385 388 @command('debuglfsupload',
386 389 [('r', 'rev', [], _('upload large files introduced by REV'))])
387 390 def debuglfsupload(ui, repo, **opts):
388 391 """upload lfs blobs added by the working copy parent or given revisions"""
389 392 revs = opts.get(r'rev', [])
390 393 pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
391 394 wrapper.uploadblobs(repo, pointers)
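The `_trackedmatcher()` logic above compiles each `(pattern, predicate)` pair from the `.hglfs` `[track]` section and returns the result of the first predicate whose pattern matches. A standalone sketch of that first-match-wins dispatch, using `fnmatch` as a hypothetical stand-in for `minifileset.compile()`:

```python
import fnmatch

def compile_rules(items):
    """Build a (path, size) -> bool matcher from (pattern, predicate) pairs.

    The first pattern that matches the path decides; later rules are
    never consulted, which is why specific patterns must come first.
    """
    rules = []
    for pattern, predicate in items:
        # Bind the pattern via a default arg so each closure keeps its own.
        test = lambda path, pat=pattern: fnmatch.fnmatch(path, pat)
        rules.append((test, predicate))

    def match(path, size):
        for test, predicate in rules:
            if test(path):
                return predicate(path, size)
        return False  # no rule matched: not tracked by LFS

    return match

# Mirrors the example .hglfs file: .py never LFS, .zip always,
# everything else only above 10MB.
match = compile_rules([
    ('*.py', lambda p, s: False),
    ('*.zip', lambda p, s: True),
    ('*', lambda p, s: s > 10 * 1024 * 1024),
])
```

Note that a `*.py` file is excluded regardless of size, because its rule shadows the catch-all, exactly the ordering behavior the docstring warns about.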
@@ -1,284 +1,287 b''
1 1 # wireprotolfsserver.py - lfs protocol server side implementation
2 2 #
3 3 # Copyright 2018 Matt Harbison <matt_harbison@yahoo.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import datetime
11 11 import errno
12 12 import json
13 13 import os
14 14
15 15 from mercurial.hgweb import (
16 16 common as hgwebcommon,
17 17 )
18 18
19 19 from mercurial import (
20 20 pycompat,
21 21 )
22 22
23 23 HTTP_OK = hgwebcommon.HTTP_OK
24 24 HTTP_CREATED = hgwebcommon.HTTP_CREATED
25 25 HTTP_BAD_REQUEST = hgwebcommon.HTTP_BAD_REQUEST
26 26
27 27 def handlewsgirequest(orig, rctx, req, res, checkperm):
28 28 """Wrap wireprotoserver.handlewsgirequest() to possibly process an LFS
29 29 request if it is left unprocessed by the wrapped method.
30 30 """
31 31 if orig(rctx, req, res, checkperm):
32 32 return True
33 33
34 if not rctx.repo.ui.configbool('experimental', 'lfs.serve'):
35 return False
36
34 37 if not req.dispatchpath:
35 38 return False
36 39
37 40 try:
38 41 if req.dispatchpath == b'.git/info/lfs/objects/batch':
39 42 checkperm(rctx, req, 'pull')
40 43 return _processbatchrequest(rctx.repo, req, res)
41 44 # TODO: reserve and use a path in the proposed http wireprotocol /api/
42 45 # namespace?
43 46 elif req.dispatchpath.startswith(b'.hg/lfs/objects'):
44 47 return _processbasictransfer(rctx.repo, req, res,
45 48 lambda perm:
46 49 checkperm(rctx, req, perm))
47 50 return False
48 51 except hgwebcommon.ErrorResponse as e:
49 52 # XXX: copied from the handler surrounding wireprotoserver._callhttp()
50 53 # in the wrapped function. Should this be moved back to hgweb to
51 54 # be a common handler?
52 55 for k, v in e.headers:
53 56 res.headers[k] = v
54 57 res.status = hgwebcommon.statusmessage(e.code, pycompat.bytestr(e))
55 58 res.setbodybytes(b'0\n%s\n' % pycompat.bytestr(e))
56 59 return True
57 60
58 61 def _sethttperror(res, code, message=None):
59 62 res.status = hgwebcommon.statusmessage(code, message=message)
60 63 res.headers[b'Content-Type'] = b'text/plain; charset=utf-8'
61 64 res.setbodybytes(b'')
62 65
63 66 def _processbatchrequest(repo, req, res):
64 67 """Handle a request for the Batch API, which is the gateway to granting file
65 68 access.
66 69
67 70 https://github.com/git-lfs/git-lfs/blob/master/docs/api/batch.md
68 71 """
69 72
70 73 # Mercurial client request:
71 74 #
72 75 # HOST: localhost:$HGPORT
73 76 # ACCEPT: application/vnd.git-lfs+json
74 77 # ACCEPT-ENCODING: identity
75 78 # USER-AGENT: git-lfs/2.3.4 (Mercurial 4.5.2+1114-f48b9754f04c+20180316)
76 79 # Content-Length: 125
77 80 # Content-Type: application/vnd.git-lfs+json
78 81 #
79 82 # {
80 83 # "objects": [
81 84 # {
82 85 # "oid": "31cf...8e5b"
83 86 # "size": 12
84 87 # }
85 88 # ]
86 89 # "operation": "upload"
87 90 # }
88 91
89 92 if (req.method != b'POST'
90 93 or req.headers[b'Content-Type'] != b'application/vnd.git-lfs+json'
91 94 or req.headers[b'Accept'] != b'application/vnd.git-lfs+json'):
92 95 # TODO: figure out what the proper handling for a bad request to the
93 96 # Batch API is.
94 97 _sethttperror(res, HTTP_BAD_REQUEST, b'Invalid Batch API request')
95 98 return True
96 99
97 100 # XXX: specify an encoding?
98 101 lfsreq = json.loads(req.bodyfh.read())
99 102
100 103 # If no transfer handlers are explicitly requested, 'basic' is assumed.
101 104 if 'basic' not in lfsreq.get('transfers', ['basic']):
102 105 _sethttperror(res, HTTP_BAD_REQUEST,
103 106 b'Only the basic LFS transfer handler is supported')
104 107 return True
105 108
106 109 operation = lfsreq.get('operation')
107 110 if operation not in ('upload', 'download'):
108 111 _sethttperror(res, HTTP_BAD_REQUEST,
109 112 b'Unsupported LFS transfer operation: %s' % operation)
110 113 return True
111 114
112 115 localstore = repo.svfs.lfslocalblobstore
113 116
114 117 objects = [p for p in _batchresponseobjects(req, lfsreq.get('objects', []),
115 118 operation, localstore)]
116 119
117 120 rsp = {
118 121 'transfer': 'basic',
119 122 'objects': objects,
120 123 }
121 124
122 125 res.status = hgwebcommon.statusmessage(HTTP_OK)
123 126 res.headers[b'Content-Type'] = b'application/vnd.git-lfs+json'
124 127 res.setbodybytes(pycompat.bytestr(json.dumps(rsp)))
125 128
126 129 return True
127 130
128 131 def _batchresponseobjects(req, objects, action, store):
129 132 """Yield one dictionary of attributes for the Batch API response for each
130 133 object in the list.
131 134
132 135 req: The parsedrequest for the Batch API request
133 136 objects: The list of objects in the Batch API object request list
134 137 action: 'upload' or 'download'
135 138 store: The local blob store for servicing requests"""
136 139
137 140 # Successful lfs-test-server response to solicit an upload:
138 141 # {
139 142 # u'objects': [{
140 143 # u'size': 12,
141 144 # u'oid': u'31cf...8e5b',
142 145 # u'actions': {
143 146 # u'upload': {
144 147 # u'href': u'http://localhost:$HGPORT/objects/31cf...8e5b',
145 148 # u'expires_at': u'0001-01-01T00:00:00Z',
146 149 # u'header': {
147 150 # u'Accept': u'application/vnd.git-lfs'
148 151 # }
149 152 # }
150 153 # }
151 154 # }]
152 155 # }
153 156
154 157 # TODO: Sort out the expires_at/expires_in/authenticated keys.
155 158
156 159 for obj in objects:
157 160 # Convert unicode to ASCII to create a filesystem path
158 161 oid = obj.get('oid').encode('ascii')
159 162 rsp = {
160 163 'oid': oid,
161 164 'size': obj.get('size'), # XXX: should this check the local size?
162 165 #'authenticated': True,
163 166 }
164 167
165 168 exists = True
166 169 verifies = False
167 170
168 171 # Verify an existing file on the upload request, so that the client is
169 172 # solicited to re-upload if it is corrupt locally. Download requests are
170 173 # also verified, so the error can be flagged in the Batch API response.
171 174 # (Maybe we can use this to short circuit the download for `hg verify`,
172 175 # IFF the client can assert that the remote end is an hg server.)
173 176 # Otherwise, it's potentially overkill on download, since it is also
174 177 # verified as the file is streamed to the caller.
175 178 try:
176 179 verifies = store.verify(oid)
177 180 except IOError as inst:
178 181 if inst.errno != errno.ENOENT:
179 182 rsp['error'] = {
180 183 'code': 500,
181 184 'message': inst.strerror or 'Internal Server Error'
182 185 }
183 186 yield rsp
184 187 continue
185 188
186 189 exists = False
187 190
188 191 # Items are always listed for downloads. They are dropped for uploads
189 192 # IFF they already exist locally.
190 193 if action == 'download':
191 194 if not exists:
192 195 rsp['error'] = {
193 196 'code': 404,
194 197 'message': "The object does not exist"
195 198 }
196 199 yield rsp
197 200 continue
198 201
199 202 elif not verifies:
200 203 rsp['error'] = {
201 204 'code': 422, # XXX: is this the right code?
202 205 'message': "The object is corrupt"
203 206 }
204 207 yield rsp
205 208 continue
206 209
207 210 elif verifies:
208 211 yield rsp # Skip 'actions': already uploaded
209 212 continue
210 213
211 214 expiresat = datetime.datetime.now() + datetime.timedelta(minutes=10)
212 215
213 216 rsp['actions'] = {
214 217 '%s' % action: {
215 218 # TODO: Account for the --prefix, if any.
216 219 'href': '%s/.hg/lfs/objects/%s' % (req.baseurl, oid),
217 220 # datetime.isoformat() doesn't include the 'Z' suffix
218 221 "expires_at": expiresat.strftime('%Y-%m-%dT%H:%M:%SZ'),
219 222 'header': {
220 223 # The spec doesn't mention the Accept header here, but avoid
221 224 # a gratuitous deviation from lfs-test-server in the test
222 225 # output.
223 226 'Accept': 'application/vnd.git-lfs'
224 227 }
225 228 }
226 229 }
227 230
228 231 yield rsp
229 232
230 233 def _processbasictransfer(repo, req, res, checkperm):
231 234 """Handle a single file upload (PUT) or download (GET) action for the Basic
232 235 Transfer Adapter.
233 236
234 237 After determining if the request is for an upload or download, the access
235 238 must be checked by calling ``checkperm()`` with either 'pull' or 'upload'
236 239 before accessing the files.
237 240
238 241 https://github.com/git-lfs/git-lfs/blob/master/docs/api/basic-transfers.md
239 242 """
240 243
241 244 method = req.method
242 245 oid = os.path.basename(req.dispatchpath)
243 246 localstore = repo.svfs.lfslocalblobstore
244 247
245 248 if method == b'PUT':
246 249 checkperm('upload')
247 250
248 251 # TODO: verify Content-Type?
249 252
250 253 existed = localstore.has(oid)
251 254
252 255 # TODO: how to handle timeouts? The body proxy handles limiting to
253 256 # Content-Length, but what happens if a client sends less than it
254 257 # says it will?
255 258
256 259 # TODO: download() will abort if the checksum fails. It should raise
257 260 # something checksum specific that can be caught here, and turned
258 261 # into an http code.
259 262 localstore.download(oid, req.bodyfh)
260 263
261 264 statusmessage = hgwebcommon.statusmessage
262 265 res.status = statusmessage(HTTP_OK if existed else HTTP_CREATED)
263 266
264 267 # There's no payload here, but this is the header that lfs-test-server
265 268 # sends back. This eliminates some gratuitous test output conditionals.
266 269 res.headers[b'Content-Type'] = b'text/plain; charset=utf-8'
267 270 res.setbodybytes(b'')
268 271
269 272 return True
270 273 elif method == b'GET':
271 274 checkperm('pull')
272 275
273 276 res.status = hgwebcommon.statusmessage(HTTP_OK)
274 277 res.headers[b'Content-Type'] = b'application/octet-stream'
275 278
276 279 # TODO: figure out how to send back the file in chunks, instead of
277 280 # reading the whole thing.
278 281 res.setbodybytes(localstore.read(oid))
279 282
280 283 return True
281 284 else:
282 285 _sethttperror(res, HTTP_BAD_REQUEST,
283 286 message=b'Unsupported LFS transfer method: %s' % method)
284 287 return True
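The shape of the Batch API responses built by `_batchresponseobjects()` can be pictured with a small standalone sketch. The dict layout follows the git-lfs Batch API as used above; representing the local store as a plain set of oids, and the fixed `href` base, are simplifying assumptions for illustration:

```python
def batch_response(objects, action, store):
    """Build git-lfs Batch API response entries.

    objects: list of {'oid': ..., 'size': ...} dicts from the request
    action:  'upload' or 'download'
    store:   set of oids present locally (stand-in for the blob store)
    """
    out = []
    for obj in objects:
        rsp = {'oid': obj['oid'], 'size': obj['size']}
        exists = obj['oid'] in store

        if action == 'download' and not exists:
            # Downloads always get an entry; missing objects are flagged.
            rsp['error'] = {'code': 404,
                            'message': 'The object does not exist'}
        elif action == 'upload' and exists:
            # Already present: omit 'actions' so the client skips the upload.
            pass
        else:
            rsp['actions'] = {
                action: {
                    'href': 'http://localhost/.hg/lfs/objects/%s' % obj['oid'],
                    'header': {'Accept': 'application/vnd.git-lfs'},
                },
            }
        out.append(rsp)
    return {'transfer': 'basic', 'objects': out}
```

This omits the verification and `expires_at` handling from the real code, but shows the three outcomes per object: an error entry, a bare entry (nothing to do), or an `actions` entry telling the client where to transfer the blob.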