lfs: add an experimental knob to disable blob serving...
Matt Harbison
r37265:dfb38c48 default
@@ -0,0 +1,67 b''
#require serve

  $ cat >> $HGRCPATH <<EOF
  > [extensions]
  > lfs=
  > [lfs]
  > url=http://localhost:$HGPORT/.git/info/lfs
  > track=all()
  > [web]
  > push_ssl = False
  > allow-push = *
  > EOF

Serving LFS files can experimentally be turned off.  The long term solution is
to support the 'verify' action in both client and server, so that the server can
tell the client to store files elsewhere.

  $ hg init server
  $ hg --config "lfs.usercache=$TESTTMP/servercache" \
  >    --config experimental.lfs.serve=False -R server serve -d \
  >    -p $HGPORT --pid-file=hg.pid -A $TESTTMP/access.log -E $TESTTMP/errors.log
  $ cat hg.pid >> $DAEMON_PIDS

Uploads fail...

  $ hg init client
  $ echo 'this-is-an-lfs-file' > client/lfs.bin
  $ hg -R client ci -Am 'initial commit'
  adding lfs.bin
  $ hg -R client push http://localhost:$HGPORT
  pushing to http://localhost:$HGPORT/
  searching for changes
  abort: LFS HTTP error: HTTP Error 400: no such method: .git (action=upload)!
  [255]

... so do a local push to make the data available.  Remove the blob from the
default cache, so it attempts to download.
  $ hg --config "lfs.usercache=$TESTTMP/servercache" \
  >    --config "lfs.url=null://" \
  >    -R client push -q server
  $ rm -rf `hg config lfs.usercache`

Downloads fail...

  $ hg clone http://localhost:$HGPORT httpclone
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  new changesets 525251863cad
  updating to branch default
  abort: LFS HTTP error: HTTP Error 400: no such method: .git (action=download)!
  [255]

  $ $PYTHON $RUNTESTDIR/killdaemons.py $DAEMON_PIDS

  $ cat $TESTTMP/access.log $TESTTMP/errors.log
  $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D525251863cad618e55d483555f3d00a2ca99597e x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ (glob)
  $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ (glob)
  $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ (glob)
  $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 400 - (glob)
  $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ (glob)
  $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ (glob)
  $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 400 - (glob)
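The failure mode the test exercises can be sketched as follows. This is a minimal stand-in, not Mercurial's actual classes: the dict-based config lookup and the `handle_lfs_request` name are hypothetical, standing in for `ui.configbool('experimental', 'lfs.serve')` and the wrapped WSGI dispatcher. When the knob is off, the LFS wrapper declines the request, and the stock hgweb server has no handler for the git-style path, producing the "400: no such method: .git" aborts seen above.

```python
def handle_lfs_request(config, dispatchpath):
    """Return an HTTP status for an incoming request path.

    `config` is a plain dict standing in for ui.configbool() lookups;
    the serve knob defaults to True, matching the registered configitem.
    """
    if not config.get('experimental.lfs.serve', True):
        # Wrapper declines; the base server rejects the unknown path.
        return 400
    if dispatchpath == '.git/info/lfs/objects/batch':
        return 200  # would dispatch to the LFS batch-request processor
    return 400  # any other unrecognized path


# Serving enabled (the default): the batch endpoint is handled.
assert handle_lfs_request({}, '.git/info/lfs/objects/batch') == 200
# Serving disabled: the same request falls through and gets a 400.
assert handle_lfs_request({'experimental.lfs.serve': False},
                          '.git/info/lfs/objects/batch') == 400
```

Both the upload and download aborts in the test are this second case: the client still sends its batch request, and the server answers 400 because nothing claims the path.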
@@ -1,391 +1,394 b''
# lfs - hash-preserving large file support using Git-LFS protocol
#
# Copyright 2017 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""lfs - large file support (EXPERIMENTAL)

This extension allows large files to be tracked outside of the normal
repository storage and stored on a centralized server, similar to the
``largefiles`` extension.  The ``git-lfs`` protocol is used when
communicating with the server, so existing git infrastructure can be
harnessed.  Even though the files are stored outside of the repository,
they are still integrity checked in the same manner as normal files.

The files stored outside of the repository are downloaded on demand,
which reduces the time to clone, and possibly the local disk usage.
This changes fundamental workflows in a DVCS, so careful thought
should be given before deploying it.  :hg:`convert` can be used to
convert LFS repositories to normal repositories that no longer
require this extension, and do so without changing the commit hashes.
This allows the extension to be disabled if the centralized workflow
becomes burdensome.  However, the pre and post convert clones will
not be able to communicate with each other unless the extension is
enabled on both.

To start a new repository, or to add LFS files to an existing one, just
create an ``.hglfs`` file as described below in the root directory of
the repository.  Typically, this file should be put under version
control, so that the settings will propagate to other repositories with
push and pull.  During any commit, Mercurial will consult this file to
determine if an added or modified file should be stored externally.  The
type of storage depends on the characteristics of the file at each
commit.  A file that is near a size threshold may switch back and forth
between LFS and normal storage, as needed.

Alternately, both normal repositories and largefile controlled
repositories can be converted to LFS by using :hg:`convert` and the
``lfs.track`` config option described below.  The ``.hglfs`` file
should then be created and added, to control subsequent LFS selection.
The hashes are also unchanged in this case.  The LFS and non-LFS
repositories can be distinguished because the LFS repository will
abort any command if this extension is disabled.

Committed LFS files are held locally, until the repository is pushed.
Prior to pushing the normal repository data, the LFS files that are
tracked by the outgoing commits are automatically uploaded to the
configured central server.  No LFS files are transferred on
:hg:`pull` or :hg:`clone`.  Instead, the files are downloaded on
demand as they need to be read, if a cached copy cannot be found
locally.  Both committing and downloading an LFS file will link the
file to a usercache, to speed up future access.  See the `usercache`
config setting described below.

.hglfs::

    The extension reads its configuration from a versioned ``.hglfs``
    configuration file found in the root of the working directory.  The
    ``.hglfs`` file uses the same syntax as all other Mercurial
    configuration files.  It uses a single section, ``[track]``.

    The ``[track]`` section specifies which files are stored as LFS (or
    not).  Each line is keyed by a file pattern, with a predicate value.
    The first file pattern match is used, so put more specific patterns
    first.  The available predicates are ``all()``, ``none()``, and
    ``size()``.  See "hg help filesets.size" for the latter.

    Example versioned ``.hglfs`` file::

        [track]
        # No Makefile or python file, anywhere, will be LFS
        **Makefile = none()
        **.py = none()

        **.zip = all()
        **.exe = size(">1MB")

        # Catchall for everything not matched above
        ** = size(">10MB")

Configs::

    [lfs]
    # Remote endpoint. Multiple protocols are supported:
    # - http(s)://user:pass@example.com/path
    #   git-lfs endpoint
    # - file:///tmp/path
    #   local filesystem, usually for testing
    # if unset, lfs will prompt setting this when it must use this value.
    # (default: unset)
    url = https://example.com/repo.git/info/lfs

    # Which files to track in LFS.  Path tests are "**.extname" for file
    # extensions, and "path:under/some/directory" for path prefix.  Both
    # are relative to the repository root.
    # File size can be tested with the "size()" fileset, and tests can be
    # joined with fileset operators.  (See "hg help filesets.operators".)
    #
    # Some examples:
    # - all()                       # everything
    # - none()                      # nothing
    # - size(">20MB")               # larger than 20MB
    # - !**.txt                     # anything not a *.txt file
    # - **.zip | **.tar.gz | **.7z  # some types of compressed files
    # - path:bin                    # files under "bin" in the project root
    # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
    #     | (path:bin & !path:/bin/README) | size(">1GB")
    # (default: none())
    #
    # This is ignored if there is a tracked '.hglfs' file, and this setting
    # will eventually be deprecated and removed.
    track = size(">10M")

    # how many times to retry before giving up on transferring an object
    retry = 5

    # the local directory to store lfs files for sharing across local clones.
    # If not set, the cache is located in an OS specific cache location.
    usercache = /path/to/global/cache
"""

from __future__ import absolute_import

from mercurial.i18n import _

from mercurial import (
    bundle2,
    changegroup,
    cmdutil,
    config,
    context,
    error,
    exchange,
    extensions,
    filelog,
    fileset,
    hg,
    localrepo,
    minifileset,
    node,
    pycompat,
    registrar,
    revlog,
    scmutil,
    templateutil,
    upgrade,
    util,
    vfs as vfsmod,
    wireproto,
    wireprotoserver,
)

from . import (
    blobstore,
    wireprotolfsserver,
    wrapper,
)

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

configtable = {}
configitem = registrar.configitem(configtable)

configitem('experimental', 'lfs.serve',
    default=True,
)
configitem('experimental', 'lfs.user-agent',
    default=None,
)
configitem('experimental', 'lfs.worker-enable',
    default=False,
)

configitem('lfs', 'url',
    default=None,
)
configitem('lfs', 'usercache',
    default=None,
)
# Deprecated
configitem('lfs', 'threshold',
    default=None,
)
configitem('lfs', 'track',
    default='none()',
)
configitem('lfs', 'retry',
    default=5,
)

cmdtable = {}
command = registrar.command(cmdtable)

templatekeyword = registrar.templatekeyword()
filesetpredicate = registrar.filesetpredicate()

def featuresetup(ui, supported):
    # don't die on seeing a repo with the lfs requirement
    supported |= {'lfs'}

def uisetup(ui):
    localrepo.featuresetupfuncs.add(featuresetup)

def reposetup(ui, repo):
    # Nothing to do with a remote repo
    if not repo.local():
        return

    repo.svfs.lfslocalblobstore = blobstore.local(repo)
    repo.svfs.lfsremoteblobstore = blobstore.remote(repo)

    class lfsrepo(repo.__class__):
        @localrepo.unfilteredmethod
        def commitctx(self, ctx, error=False):
            repo.svfs.options['lfstrack'] = _trackedmatcher(self)
            return super(lfsrepo, self).commitctx(ctx, error)

    repo.__class__ = lfsrepo

    if 'lfs' not in repo.requirements:
        def checkrequireslfs(ui, repo, **kwargs):
            if 'lfs' not in repo.requirements:
                last = kwargs.get(r'node_last')
                _bin = node.bin
                if last:
                    s = repo.set('%n:%n', _bin(kwargs[r'node']), _bin(last))
                else:
                    s = repo.set('%n', _bin(kwargs[r'node']))
                match = repo.narrowmatch()
                for ctx in s:
                    # TODO: is there a way to just walk the files in the commit?
                    if any(ctx[f].islfs() for f in ctx.files()
                           if f in ctx and match(f)):
                        repo.requirements.add('lfs')
                        repo._writerequirements()
                        repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
                        break

        ui.setconfig('hooks', 'commit.lfs', checkrequireslfs, 'lfs')
        ui.setconfig('hooks', 'pretxnchangegroup.lfs', checkrequireslfs, 'lfs')
    else:
        repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)

def _trackedmatcher(repo):
    """Return a function (path, size) -> bool indicating whether or not to
    track a given file with lfs."""
    if not repo.wvfs.exists('.hglfs'):
        # No '.hglfs' in wdir.  Fallback to config for now.
        trackspec = repo.ui.config('lfs', 'track')

        # deprecated config: lfs.threshold
        threshold = repo.ui.configbytes('lfs', 'threshold')
        if threshold:
            fileset.parse(trackspec)  # make sure syntax errors are confined
            trackspec = "(%s) | size('>%d')" % (trackspec, threshold)

        return minifileset.compile(trackspec)

    data = repo.wvfs.tryread('.hglfs')
    if not data:
        return lambda p, s: False

    # Parse errors here will abort with a message that points to the .hglfs file
    # and line number.
    cfg = config.config()
    cfg.parse('.hglfs', data)

    try:
        rules = [(minifileset.compile(pattern), minifileset.compile(rule))
                 for pattern, rule in cfg.items('track')]
    except error.ParseError as e:
        # The original exception gives no indicator that the error is in the
        # .hglfs file, so add that.

        # TODO: See if the line number of the file can be made available.
        raise error.Abort(_('parse error in .hglfs: %s') % e)

    def _match(path, size):
        for pat, rule in rules:
            if pat(path, size):
                return rule(path, size)

        return False

    return _match

def wrapfilelog(filelog):
    wrapfunction = extensions.wrapfunction

    wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
    wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
    wrapfunction(filelog, 'size', wrapper.filelogsize)

def extsetup(ui):
    wrapfilelog(filelog.filelog)

    wrapfunction = extensions.wrapfunction

    wrapfunction(cmdutil, '_updatecatformatter', wrapper._updatecatformatter)
    wrapfunction(scmutil, 'wrapconvertsink', wrapper.convertsink)

    wrapfunction(upgrade, '_finishdatamigration',
                 wrapper.upgradefinishdatamigration)

    wrapfunction(upgrade, 'preservedrequirements',
                 wrapper.upgraderequirements)

    wrapfunction(upgrade, 'supporteddestrequirements',
                 wrapper.upgraderequirements)

    wrapfunction(changegroup,
                 'allsupportedversions',
                 wrapper.allsupportedversions)

    wrapfunction(exchange, 'push', wrapper.push)
    wrapfunction(wireproto, '_capabilities', wrapper._capabilities)
    wrapfunction(wireprotoserver, 'handlewsgirequest',
                 wireprotolfsserver.handlewsgirequest)

    wrapfunction(context.basefilectx, 'cmp', wrapper.filectxcmp)
    wrapfunction(context.basefilectx, 'isbinary', wrapper.filectxisbinary)
    context.basefilectx.islfs = wrapper.filectxislfs

    revlog.addflagprocessor(
        revlog.REVIDX_EXTSTORED,
        (
            wrapper.readfromstore,
            wrapper.writetostore,
            wrapper.bypasscheckhash,
        ),
    )

    wrapfunction(hg, 'clone', wrapper.hgclone)
    wrapfunction(hg, 'postshare', wrapper.hgpostshare)

    scmutil.fileprefetchhooks.add('lfs', wrapper._prefetchfiles)

    # Make bundle choose changegroup3 instead of changegroup2. This affects
    # "hg bundle" command. Note: it does not cover all bundle formats like
    # "packed1". Using "packed1" with lfs will likely cause trouble.
    exchange._bundlespeccontentopts["v2"]["cg.version"] = "03"

    # bundlerepo uses "vfsmod.readonlyvfs(othervfs)", we need to make sure lfs
    # options and blob stores are passed from othervfs to the new readonlyvfs.
    wrapfunction(vfsmod.readonlyvfs, '__init__', wrapper.vfsinit)

    # when writing a bundle via "hg bundle" command, upload related LFS blobs
    wrapfunction(bundle2, 'writenewbundle', wrapper.writenewbundle)

@filesetpredicate('lfs()', callstatus=True)
def lfsfileset(mctx, x):
    """File that uses LFS storage."""
    # i18n: "lfs" is a keyword
    fileset.getargs(x, 0, 0, _("lfs takes no arguments"))
    return [f for f in mctx.subset
            if wrapper.pointerfromctx(mctx.ctx, f, removed=True) is not None]

@templatekeyword('lfs_files', requires={'ctx'})
def lfsfiles(context, mapping):
    """List of strings. All files modified, added, or removed by this
    changeset."""
    ctx = context.resource(mapping, 'ctx')

    pointers = wrapper.pointersfromctx(ctx, removed=True)  # {path: pointer}
    files = sorted(pointers.keys())

    def pointer(v):
        # In the file spec, version is first and the other keys are sorted.
        sortkeyfunc = lambda x: (x[0] != 'version', x)
        items = sorted(pointers[v].iteritems(), key=sortkeyfunc)
        return util.sortdict(items)

    makemap = lambda v: {
        'file': v,
        'lfsoid': pointers[v].oid() if pointers[v] else None,
        'lfspointer': templateutil.hybriddict(pointer(v)),
    }

    # TODO: make the separator ', '?
    f = templateutil._showcompatlist(context, mapping, 'lfs_file', files)
    return templateutil.hybrid(f, files, makemap, pycompat.identity)

@command('debuglfsupload',
         [('r', 'rev', [], _('upload large files introduced by REV'))])
def debuglfsupload(ui, repo, **opts):
    """upload lfs blobs added by the working copy parent or given revisions"""
    revs = opts.get(r'rev', [])
    pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
    wrapper.uploadblobs(repo, pointers)
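The first-match rule evaluation that `_trackedmatcher` builds from a `.hglfs` file can be sketched in isolation. This is a hedged stand-in: it uses `fnmatch` globs plus byte thresholds instead of real `minifileset`-compiled fileset expressions (whose `**` / `*` semantics differ from `fnmatch`), and the `RULES` table and `track_with_lfs` name are illustrative only.

```python
import fnmatch

# Hypothetical stand-in for compiled [track] rules, in declaration order.
# A threshold of None means "always LFS" (all()), 0 means "never" (none()),
# and any other value models size(">N") in bytes.
RULES = [
    ('**.py', 0),                  # **.py = none()
    ('**.zip', None),              # **.zip = all()
    ('**', 10 * 1024 * 1024),      # ** = size(">10MB") catchall
]

def track_with_lfs(path, size):
    """First matching pattern wins, mirroring _trackedmatcher()'s _match()."""
    for pattern, threshold in RULES:
        if fnmatch.fnmatch(path, pattern):
            if threshold is None:
                return True
            return threshold > 0 and size > threshold
    return False

assert not track_with_lfs('src/app.py', 50 * 1024 * 1024)  # none() wins first
assert track_with_lfs('assets/big.zip', 1)                 # all()
assert track_with_lfs('data.bin', 11 * 1024 * 1024)        # over the catchall
assert not track_with_lfs('data.bin', 1024)                # under the catchall
```

This is why the docstring advises putting more specific patterns first: a huge `.py` file still stays in normal storage because `**.py = none()` matches before the size catchall ever runs.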
@@ -1,284 +1,287 b''
1 # wireprotolfsserver.py - lfs protocol server side implementation
2 #
3 # Copyright 2018 Matt Harbison <matt_harbison@yahoo.com>
4 #
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
7
8 from __future__ import absolute_import
9
10 import datetime
11 import errno
12 import json
13 import os
14
15 from mercurial.hgweb import (
16     common as hgwebcommon,
17 )
18
19 from mercurial import (
20     pycompat,
21 )
22
23 HTTP_OK = hgwebcommon.HTTP_OK
24 HTTP_CREATED = hgwebcommon.HTTP_CREATED
25 HTTP_BAD_REQUEST = hgwebcommon.HTTP_BAD_REQUEST
26
27 def handlewsgirequest(orig, rctx, req, res, checkperm):
28     """Wrap wireprotoserver.handlewsgirequest() to possibly process an LFS
29     request if it is left unprocessed by the wrapped method.
30     """
31     if orig(rctx, req, res, checkperm):
32         return True
33
34     if not rctx.repo.ui.configbool('experimental', 'lfs.serve'):
35         return False
36
37     if not req.dispatchpath:
38         return False
39
40     try:
41         if req.dispatchpath == b'.git/info/lfs/objects/batch':
42             checkperm(rctx, req, 'pull')
43             return _processbatchrequest(rctx.repo, req, res)
44         # TODO: reserve and use a path in the proposed http wireprotocol /api/
45         #       namespace?
46         elif req.dispatchpath.startswith(b'.hg/lfs/objects'):
47             return _processbasictransfer(rctx.repo, req, res,
48                                          lambda perm:
49                                          checkperm(rctx, req, perm))
50         return False
51     except hgwebcommon.ErrorResponse as e:
52         # XXX: copied from the handler surrounding wireprotoserver._callhttp()
53         #      in the wrapped function. Should this be moved back to hgweb to
54         #      be a common handler?
55         for k, v in e.headers:
56             res.headers[k] = v
57         res.status = hgwebcommon.statusmessage(e.code, pycompat.bytestr(e))
58         res.setbodybytes(b'0\n%s\n' % pycompat.bytestr(e))
59         return True
60
61 def _sethttperror(res, code, message=None):
62     res.status = hgwebcommon.statusmessage(code, message=message)
63     res.headers[b'Content-Type'] = b'text/plain; charset=utf-8'
64     res.setbodybytes(b'')
65
66 def _processbatchrequest(repo, req, res):
67     """Handle a request for the Batch API, which is the gateway to granting file
68     access.
69
70     https://github.com/git-lfs/git-lfs/blob/master/docs/api/batch.md
71     """
72
73     # Mercurial client request:
74     #
75     #   HOST: localhost:$HGPORT
76     #   ACCEPT: application/vnd.git-lfs+json
77     #   ACCEPT-ENCODING: identity
78     #   USER-AGENT: git-lfs/2.3.4 (Mercurial 4.5.2+1114-f48b9754f04c+20180316)
79     #   Content-Length: 125
80     #   Content-Type: application/vnd.git-lfs+json
81     #
82     #   {
83     #     "objects": [
84     #       {
85     #         "oid": "31cf...8e5b"
86     #         "size": 12
87     #       }
88     #     ]
89     #     "operation": "upload"
90     #   }
91
92     if (req.method != b'POST'
93         or req.headers[b'Content-Type'] != b'application/vnd.git-lfs+json'
94         or req.headers[b'Accept'] != b'application/vnd.git-lfs+json'):
95         # TODO: figure out what the proper handling for a bad request to the
96         #       Batch API is.
97         _sethttperror(res, HTTP_BAD_REQUEST, b'Invalid Batch API request')
98         return True
99
100     # XXX: specify an encoding?
101     lfsreq = json.loads(req.bodyfh.read())
102
103     # If no transfer handlers are explicitly requested, 'basic' is assumed.
104     if 'basic' not in lfsreq.get('transfers', ['basic']):
105         _sethttperror(res, HTTP_BAD_REQUEST,
106                       b'Only the basic LFS transfer handler is supported')
107         return True
108
109     operation = lfsreq.get('operation')
110     if operation not in ('upload', 'download'):
111         _sethttperror(res, HTTP_BAD_REQUEST,
112                       b'Unsupported LFS transfer operation: %s' % operation)
113         return True
114
115     localstore = repo.svfs.lfslocalblobstore
116
117     objects = [p for p in _batchresponseobjects(req, lfsreq.get('objects', []),
118                                                 operation, localstore)]
119
120     rsp = {
121         'transfer': 'basic',
122         'objects': objects,
123     }
124
125     res.status = hgwebcommon.statusmessage(HTTP_OK)
126     res.headers[b'Content-Type'] = b'application/vnd.git-lfs+json'
127     res.setbodybytes(pycompat.bytestr(json.dumps(rsp)))
128
129     return True
130
131 def _batchresponseobjects(req, objects, action, store):
132     """Yield one dictionary of attributes for the Batch API response for each
133     object in the list.
134
135     req: The parsedrequest for the Batch API request
136     objects: The list of objects in the Batch API object request list
137     action: 'upload' or 'download'
138     store: The local blob store for servicing requests"""
139
140     # Successful lfs-test-server response to solicit an upload:
141     # {
142     #   u'objects': [{
143     #     u'size': 12,
144     #     u'oid': u'31cf...8e5b',
145     #     u'actions': {
146     #       u'upload': {
147     #         u'href': u'http://localhost:$HGPORT/objects/31cf...8e5b',
148     #         u'expires_at': u'0001-01-01T00:00:00Z',
149     #         u'header': {
150     #           u'Accept': u'application/vnd.git-lfs'
151     #         }
152     #       }
153     #     }
154     #   }]
155     # }
156
157     # TODO: Sort out the expires_at/expires_in/authenticated keys.
158
159     for obj in objects:
160         # Convert unicode to ASCII to create a filesystem path
161         oid = obj.get('oid').encode('ascii')
162         rsp = {
163             'oid': oid,
164             'size': obj.get('size'), # XXX: should this check the local size?
165             #'authenticated': True,
166         }
167
168         exists = True
169         verifies = False
170
171         # Verify an existing file on the upload request, so that the client is
172         # solicited to re-upload if it is corrupt locally. Download requests are
173         # also verified, so the error can be flagged in the Batch API response.
174         # (Maybe we can use this to short circuit the download for `hg verify`,
175         # IFF the client can assert that the remote end is an hg server.)
176         # Otherwise, it's potentially overkill on download, since it is also
177         # verified as the file is streamed to the caller.
178         try:
179             verifies = store.verify(oid)
180         except IOError as inst:
181             if inst.errno != errno.ENOENT:
182                 rsp['error'] = {
183                     'code': 500,
184                     'message': inst.strerror or 'Internal Server Error'
185                 }
186                 yield rsp
187                 continue
188
189             exists = False
190
191         # Items are always listed for downloads. They are dropped for uploads
192         # IFF they already exist locally.
193         if action == 'download':
194             if not exists:
195                 rsp['error'] = {
196                     'code': 404,
197                     'message': "The object does not exist"
198                 }
199                 yield rsp
200                 continue
201
202             elif not verifies:
203                 rsp['error'] = {
204                     'code': 422, # XXX: is this the right code?
205                     'message': "The object is corrupt"
206                 }
207                 yield rsp
208                 continue
209
210         elif verifies:
211             yield rsp # Skip 'actions': already uploaded
212             continue
213
214         expiresat = datetime.datetime.now() + datetime.timedelta(minutes=10)
215
216         rsp['actions'] = {
217             '%s' % action: {
218                 # TODO: Account for the --prefix, if any.
219                 'href': '%s/.hg/lfs/objects/%s' % (req.baseurl, oid),
220                 # datetime.isoformat() doesn't include the 'Z' suffix
221                 "expires_at": expiresat.strftime('%Y-%m-%dT%H:%M:%SZ'),
222                 'header': {
223                     # The spec doesn't mention the Accept header here, but avoid
224                     # a gratuitous deviation from lfs-test-server in the test
225                     # output.
226                     'Accept': 'application/vnd.git-lfs'
227                 }
228             }
229         }
230
231         yield rsp
232
233 def _processbasictransfer(repo, req, res, checkperm):
234     """Handle a single file upload (PUT) or download (GET) action for the Basic
235     Transfer Adapter.
236
237     After determining if the request is for an upload or download, the access
238     must be checked by calling ``checkperm()`` with either 'pull' or 'upload'
239     before accessing the files.
240
241     https://github.com/git-lfs/git-lfs/blob/master/docs/api/basic-transfers.md
242     """
243
244     method = req.method
245     oid = os.path.basename(req.dispatchpath)
246     localstore = repo.svfs.lfslocalblobstore
247
248     if method == b'PUT':
249         checkperm('upload')
250
251         # TODO: verify Content-Type?
252
253         existed = localstore.has(oid)
254
255         # TODO: how to handle timeouts? The body proxy handles limiting to
256         #       Content-Length, but what happens if a client sends less than it
257         #       says it will?
258
259         # TODO: download() will abort if the checksum fails. It should raise
260         #       something checksum specific that can be caught here, and turned
261         #       into an http code.
262         localstore.download(oid, req.bodyfh)
263
264         statusmessage = hgwebcommon.statusmessage
265         res.status = statusmessage(HTTP_OK if existed else HTTP_CREATED)
266
267         # There's no payload here, but this is the header that lfs-test-server
268         # sends back. This eliminates some gratuitous test output conditionals.
269         res.headers[b'Content-Type'] = b'text/plain; charset=utf-8'
270         res.setbodybytes(b'')
271
272         return True
273     elif method == b'GET':
274         checkperm('pull')
275
276         res.status = hgwebcommon.statusmessage(HTTP_OK)
277         res.headers[b'Content-Type'] = b'application/octet-stream'
278
279         # TODO: figure out how to send back the file in chunks, instead of
280         #       reading the whole thing.
281         res.setbodybytes(localstore.read(oid))
282
283         return True
284     else:
285         _sethttperror(res, HTTP_BAD_REQUEST,
286                       message=b'Unsupported LFS transfer method: %s' % method)
287         return True
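As a rough standalone illustration of the Batch API response shape that `_batchresponseobjects()` assembles for one existing object, here is a sketch using only the stdlib; the `baseurl` and `oid` values are made up for the example, and the href and `expires_at` formatting mirror the lines in the diff above:

```python
import datetime
import json

# Illustrative values; in the server these come from the request and store.
baseurl = 'http://localhost:8000/repo'
oid = '31cf8e5b'

# 10-minute expiry, formatted with an explicit 'Z' suffix because
# datetime.isoformat() does not append one.
expiresat = datetime.datetime.now() + datetime.timedelta(minutes=10)

rsp = {
    'oid': oid,
    'size': 12,
    'actions': {
        'download': {
            # Same href scheme as the server: <base>/.hg/lfs/objects/<oid>
            'href': '%s/.hg/lfs/objects/%s' % (baseurl, oid),
            'expires_at': expiresat.strftime('%Y-%m-%dT%H:%M:%SZ'),
            'header': {'Accept': 'application/vnd.git-lfs'},
        }
    }
}

# _processbatchrequest() wraps the per-object dicts like this:
body = json.dumps({'transfer': 'basic', 'objects': [rsp]})
print(json.loads(body)['objects'][0]['actions']['download']['href'])
```

A client following the batch response would then issue a plain GET against that href, which is exactly the path `_processbasictransfer()` serves.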