lfs: special case the null:// usercache instead of treating it as a url...
Matt Harbison
r37580:e5cd8d1a default
@@ -1,396 +1,399
# lfs - hash-preserving large file support using Git-LFS protocol
#
# Copyright 2017 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""lfs - large file support (EXPERIMENTAL)

This extension allows large files to be tracked outside of the normal
repository storage and stored on a centralized server, similar to the
``largefiles`` extension. The ``git-lfs`` protocol is used when
communicating with the server, so existing git infrastructure can be
harnessed. Even though the files are stored outside of the repository,
they are still integrity checked in the same manner as normal files.

The files stored outside of the repository are downloaded on demand,
which reduces the time to clone, and possibly the local disk usage.
This changes fundamental workflows in a DVCS, so careful thought
should be given before deploying it. :hg:`convert` can be used to
convert LFS repositories to normal repositories that no longer
require this extension, and do so without changing the commit hashes.
This allows the extension to be disabled if the centralized workflow
becomes burdensome. However, the pre and post convert clones will
not be able to communicate with each other unless the extension is
enabled on both.

To start a new repository, or to add LFS files to an existing one, just
create an ``.hglfs`` file as described below in the root directory of
the repository. Typically, this file should be put under version
control, so that the settings will propagate to other repositories with
push and pull. During any commit, Mercurial will consult this file to
determine if an added or modified file should be stored externally. The
type of storage depends on the characteristics of the file at each
commit. A file that is near a size threshold may switch back and forth
between LFS and normal storage, as needed.

Alternately, both normal repositories and largefile controlled
repositories can be converted to LFS by using :hg:`convert` and the
``lfs.track`` config option described below. The ``.hglfs`` file
should then be created and added, to control subsequent LFS selection.
The hashes are also unchanged in this case. The LFS and non-LFS
repositories can be distinguished because the LFS repository will
abort any command if this extension is disabled.

Committed LFS files are held locally, until the repository is pushed.
Prior to pushing the normal repository data, the LFS files that are
tracked by the outgoing commits are automatically uploaded to the
configured central server. No LFS files are transferred on
:hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
demand as they need to be read, if a cached copy cannot be found
locally. Both committing and downloading an LFS file will link the
file to a usercache, to speed up future access. See the `usercache`
config setting described below.

.hglfs::

    The extension reads its configuration from a versioned ``.hglfs``
    configuration file found in the root of the working directory. The
    ``.hglfs`` file uses the same syntax as all other Mercurial
    configuration files. It uses a single section, ``[track]``.

    The ``[track]`` section specifies which files are stored as LFS (or
    not). Each line is keyed by a file pattern, with a predicate value.
    The first file pattern match is used, so put more specific patterns
    first. The available predicates are ``all()``, ``none()``, and
    ``size()``. See "hg help filesets.size" for the latter.

    Example versioned ``.hglfs`` file::

        [track]
        # No Makefile or python file, anywhere, will be LFS
        **Makefile = none()
        **.py = none()

        **.zip = all()
        **.exe = size(">1MB")

        # Catchall for everything not matched above
        ** = size(">10MB")

Configs::

    [lfs]
    # Remote endpoint. Multiple protocols are supported:
    # - http(s)://user:pass@example.com/path
    #   git-lfs endpoint
    # - file:///tmp/path
    #   local filesystem, usually for testing
    # if unset, lfs will assume the repository at ``paths.default`` also handles
    # blob storage for http(s) URLs. Otherwise, lfs will prompt to set this
    # when it must use this value.
    # (default: unset)
    url = https://example.com/repo.git/info/lfs

    # Which files to track in LFS. Path tests are "**.extname" for file
    # extensions, and "path:under/some/directory" for path prefix. Both
    # are relative to the repository root.
    # File size can be tested with the "size()" fileset, and tests can be
    # joined with fileset operators. (See "hg help filesets.operators".)
    #
    # Some examples:
    # - all()                       # everything
    # - none()                      # nothing
    # - size(">20MB")               # larger than 20MB
    # - !**.txt                     # anything not a *.txt file
    # - **.zip | **.tar.gz | **.7z  # some types of compressed files
    # - path:bin                    # files under "bin" in the project root
    # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
    #     | (path:bin & !path:/bin/README) | size(">1GB")
    # (default: none())
    #
    # This is ignored if there is a tracked '.hglfs' file, and this setting
    # will eventually be deprecated and removed.
    track = size(">10M")

    # how many times to retry before giving up on transferring an object
    retry = 5

    # the local directory to store lfs files for sharing across local clones.
    # If not set, the cache is located in an OS specific cache location.
    usercache = /path/to/global/cache
"""

from __future__ import absolute_import

from mercurial.i18n import _

from mercurial import (
    bundle2,
    changegroup,
    cmdutil,
    config,
    context,
    error,
    exchange,
    extensions,
    filelog,
    fileset,
    hg,
    localrepo,
    minifileset,
    node,
    pycompat,
    registrar,
    revlog,
    scmutil,
    templateutil,
    upgrade,
    util,
    vfs as vfsmod,
    wireproto,
    wireprotoserver,
)

from . import (
    blobstore,
    wireprotolfsserver,
    wrapper,
)

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

configtable = {}
configitem = registrar.configitem(configtable)

configitem('experimental', 'lfs.serve',
    default=True,
)
configitem('experimental', 'lfs.user-agent',
    default=None,
)
configitem('experimental', 'lfs.disableusercache',
    default=False,
)
configitem('experimental', 'lfs.worker-enable',
    default=False,
)

configitem('lfs', 'url',
    default=None,
)
configitem('lfs', 'usercache',
    default=None,
)
# Deprecated
configitem('lfs', 'threshold',
    default=None,
)
configitem('lfs', 'track',
    default='none()',
)
configitem('lfs', 'retry',
    default=5,
)

cmdtable = {}
command = registrar.command(cmdtable)

templatekeyword = registrar.templatekeyword()
filesetpredicate = registrar.filesetpredicate()

def featuresetup(ui, supported):
    # don't die on seeing a repo with the lfs requirement
    supported |= {'lfs'}

def uisetup(ui):
    localrepo.featuresetupfuncs.add(featuresetup)

def reposetup(ui, repo):
    # Nothing to do with a remote repo
    if not repo.local():
        return

    repo.svfs.lfslocalblobstore = blobstore.local(repo)
    repo.svfs.lfsremoteblobstore = blobstore.remote(repo)

    class lfsrepo(repo.__class__):
        @localrepo.unfilteredmethod
        def commitctx(self, ctx, error=False):
            repo.svfs.options['lfstrack'] = _trackedmatcher(self)
            return super(lfsrepo, self).commitctx(ctx, error)

    repo.__class__ = lfsrepo

    if 'lfs' not in repo.requirements:
        def checkrequireslfs(ui, repo, **kwargs):
            if 'lfs' not in repo.requirements:
                last = kwargs.get(r'node_last')
                _bin = node.bin
                if last:
                    s = repo.set('%n:%n', _bin(kwargs[r'node']), _bin(last))
                else:
                    s = repo.set('%n', _bin(kwargs[r'node']))
                match = repo.narrowmatch()
                for ctx in s:
                    # TODO: is there a way to just walk the files in the commit?
                    if any(ctx[f].islfs() for f in ctx.files()
                           if f in ctx and match(f)):
                        repo.requirements.add('lfs')
                        repo._writerequirements()
                        repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
                        break

        ui.setconfig('hooks', 'commit.lfs', checkrequireslfs, 'lfs')
        ui.setconfig('hooks', 'pretxnchangegroup.lfs', checkrequireslfs, 'lfs')
    else:
        repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)

def _trackedmatcher(repo):
    """Return a function (path, size) -> bool indicating whether or not to
    track a given file with lfs."""
    if not repo.wvfs.exists('.hglfs'):
        # No '.hglfs' in wdir. Fallback to config for now.
        trackspec = repo.ui.config('lfs', 'track')

        # deprecated config: lfs.threshold
        threshold = repo.ui.configbytes('lfs', 'threshold')
        if threshold:
            fileset.parse(trackspec)  # make sure syntax errors are confined
            trackspec = "(%s) | size('>%d')" % (trackspec, threshold)

        return minifileset.compile(trackspec)

    data = repo.wvfs.tryread('.hglfs')
    if not data:
        return lambda p, s: False

    # Parse errors here will abort with a message that points to the .hglfs file
    # and line number.
    cfg = config.config()
    cfg.parse('.hglfs', data)

    try:
        rules = [(minifileset.compile(pattern), minifileset.compile(rule))
                 for pattern, rule in cfg.items('track')]
    except error.ParseError as e:
        # The original exception gives no indicator that the error is in the
        # .hglfs file, so add that.

        # TODO: See if the line number of the file can be made available.
        raise error.Abort(_('parse error in .hglfs: %s') % e)

    def _match(path, size):
        for pat, rule in rules:
            if pat(path, size):
                return rule(path, size)

        return False

    return _match

def wrapfilelog(filelog):
    wrapfunction = extensions.wrapfunction

    wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
    wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
    wrapfunction(filelog, 'size', wrapper.filelogsize)

def extsetup(ui):
    wrapfilelog(filelog.filelog)

    wrapfunction = extensions.wrapfunction

    wrapfunction(cmdutil, '_updatecatformatter', wrapper._updatecatformatter)
    wrapfunction(scmutil, 'wrapconvertsink', wrapper.convertsink)

    wrapfunction(upgrade, '_finishdatamigration',
                 wrapper.upgradefinishdatamigration)

    wrapfunction(upgrade, 'preservedrequirements',
                 wrapper.upgraderequirements)

    wrapfunction(upgrade, 'supporteddestrequirements',
                 wrapper.upgraderequirements)

    wrapfunction(changegroup,
                 'allsupportedversions',
                 wrapper.allsupportedversions)

    wrapfunction(exchange, 'push', wrapper.push)
    wrapfunction(wireproto, '_capabilities', wrapper._capabilities)
    wrapfunction(wireprotoserver, 'handlewsgirequest',
                 wireprotolfsserver.handlewsgirequest)

    wrapfunction(context.basefilectx, 'cmp', wrapper.filectxcmp)
    wrapfunction(context.basefilectx, 'isbinary', wrapper.filectxisbinary)
    context.basefilectx.islfs = wrapper.filectxislfs

    revlog.addflagprocessor(
        revlog.REVIDX_EXTSTORED,
        (
            wrapper.readfromstore,
            wrapper.writetostore,
            wrapper.bypasscheckhash,
        ),
    )

    wrapfunction(hg, 'clone', wrapper.hgclone)
    wrapfunction(hg, 'postshare', wrapper.hgpostshare)

    scmutil.fileprefetchhooks.add('lfs', wrapper._prefetchfiles)

    # Make bundle choose changegroup3 instead of changegroup2. This affects
    # "hg bundle" command. Note: it does not cover all bundle formats like
    # "packed1". Using "packed1" with lfs will likely cause trouble.
    exchange._bundlespeccontentopts["v2"]["cg.version"] = "03"

    # bundlerepo uses "vfsmod.readonlyvfs(othervfs)", we need to make sure lfs
    # options and blob stores are passed from othervfs to the new readonlyvfs.
    wrapfunction(vfsmod.readonlyvfs, '__init__', wrapper.vfsinit)

    # when writing a bundle via "hg bundle" command, upload related LFS blobs
    wrapfunction(bundle2, 'writenewbundle', wrapper.writenewbundle)

@filesetpredicate('lfs()', callstatus=True)
def lfsfileset(mctx, x):
    """File that uses LFS storage."""
    # i18n: "lfs" is a keyword
    fileset.getargs(x, 0, 0, _("lfs takes no arguments"))
    return [f for f in mctx.subset
            if wrapper.pointerfromctx(mctx.ctx, f, removed=True) is not None]

@templatekeyword('lfs_files', requires={'ctx'})
def lfsfiles(context, mapping):
    """List of strings. All files modified, added, or removed by this
    changeset."""
    ctx = context.resource(mapping, 'ctx')

    pointers = wrapper.pointersfromctx(ctx, removed=True)  # {path: pointer}
    files = sorted(pointers.keys())

    def pointer(v):
        # In the file spec, version is first and the other keys are sorted.
        sortkeyfunc = lambda x: (x[0] != 'version', x)
        items = sorted(pointers[v].iteritems(), key=sortkeyfunc)
        return util.sortdict(items)

    makemap = lambda v: {
        'file': v,
        'lfsoid': pointers[v].oid() if pointers[v] else None,
        'lfspointer': templateutil.hybriddict(pointer(v)),
    }

    # TODO: make the separator ', '?
    f = templateutil._showcompatlist(context, mapping, 'lfs_file', files)
    return templateutil.hybrid(f, files, makemap, pycompat.identity)

@command('debuglfsupload',
         [('r', 'rev', [], _('upload large files introduced by REV'))])
def debuglfsupload(ui, repo, **opts):
    """upload lfs blobs added by the working copy parent or given revisions"""
    revs = opts.get(r'rev', [])
    pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
    wrapper.uploadblobs(repo, pointers)
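The `_trackedmatcher` logic above resolves `.hglfs` rules by first pattern match: each rule pairs a compiled pattern predicate with a compiled decision predicate, both called as `(path, size)`, and the first pattern that matches decides whether the file goes to LFS. A minimal standalone sketch of that dispatch, with hypothetical lambdas standing in for the output of `minifileset.compile()`:

```python
# Each rule is (pattern_predicate, decision_predicate); both take (path, size).
# The first matching pattern wins, mirroring _match() in the extension above.

def makematcher(rules):
    def _match(path, size):
        for pat, rule in rules:
            if pat(path, size):
                return rule(path, size)
        return False  # nothing matched: store normally
    return _match

# Hypothetical stand-ins for compiled predicates (not the real minifileset API):
rules = [
    (lambda p, s: p.endswith('.py'), lambda p, s: False),    # **.py = none()
    (lambda p, s: p.endswith('.zip'), lambda p, s: True),    # **.zip = all()
    (lambda p, s: True, lambda p, s: s > 10 * 1024 * 1024),  # ** = size(">10MB")
]

match = makematcher(rules)
print(match('src/main.py', 50 * 1024 * 1024))  # False: first (more specific) rule wins
print(match('dist/app.zip', 10))               # True: all() ignores size
print(match('data.bin', 20 * 1024 * 1024))     # True: catchall size test
```

Note how a 50 MB `.py` file is still excluded: rule order, not specificity scoring, decides, which is why the docstring says to put more specific patterns first.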
@@ -1,564 +1,562
1 # blobstore.py - local and remote (speaking Git-LFS protocol) blob storages
1 # blobstore.py - local and remote (speaking Git-LFS protocol) blob storages
2 #
2 #
3 # Copyright 2017 Facebook, Inc.
3 # Copyright 2017 Facebook, Inc.
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import errno
10 import errno
11 import hashlib
11 import hashlib
12 import json
12 import json
13 import os
13 import os
14 import re
14 import re
15 import socket
15 import socket
16
16
17 from mercurial.i18n import _
17 from mercurial.i18n import _
18
18
19 from mercurial import (
19 from mercurial import (
20 error,
20 error,
21 pathutil,
21 pathutil,
22 pycompat,
22 pycompat,
23 url as urlmod,
23 url as urlmod,
24 util,
24 util,
25 vfs as vfsmod,
25 vfs as vfsmod,
26 worker,
26 worker,
27 )
27 )
28
28
29 from ..largefiles import lfutil
29 from ..largefiles import lfutil
30
30
31 # 64 bytes for SHA256
31 # 64 bytes for SHA256
32 _lfsre = re.compile(br'\A[a-f0-9]{64}\Z')
32 _lfsre = re.compile(br'\A[a-f0-9]{64}\Z')
33
33
34 class lfsvfs(vfsmod.vfs):
34 class lfsvfs(vfsmod.vfs):
35 def join(self, path):
35 def join(self, path):
36 """split the path at first two characters, like: XX/XXXXX..."""
36 """split the path at first two characters, like: XX/XXXXX..."""
37 if not _lfsre.match(path):
37 if not _lfsre.match(path):
38 raise error.ProgrammingError('unexpected lfs path: %s' % path)
38 raise error.ProgrammingError('unexpected lfs path: %s' % path)
39 return super(lfsvfs, self).join(path[0:2], path[2:])
39 return super(lfsvfs, self).join(path[0:2], path[2:])
40
40
41 def walk(self, path=None, onerror=None):
41 def walk(self, path=None, onerror=None):
42 """Yield (dirpath, [], oids) tuple for blobs under path
42 """Yield (dirpath, [], oids) tuple for blobs under path
43
43
44 Oids only exist in the root of this vfs, so dirpath is always ''.
44 Oids only exist in the root of this vfs, so dirpath is always ''.
45 """
45 """
46 root = os.path.normpath(self.base)
46 root = os.path.normpath(self.base)
47 # when dirpath == root, dirpath[prefixlen:] becomes empty
47 # when dirpath == root, dirpath[prefixlen:] becomes empty
48 # because len(dirpath) < prefixlen.
48 # because len(dirpath) < prefixlen.
49 prefixlen = len(pathutil.normasprefix(root))
49 prefixlen = len(pathutil.normasprefix(root))
50 oids = []
50 oids = []
51
51
52 for dirpath, dirs, files in os.walk(self.reljoin(self.base, path or ''),
52 for dirpath, dirs, files in os.walk(self.reljoin(self.base, path or ''),
53 onerror=onerror):
53 onerror=onerror):
54 dirpath = dirpath[prefixlen:]
54 dirpath = dirpath[prefixlen:]
55
55
56 # Silently skip unexpected files and directories
56 # Silently skip unexpected files and directories
57 if len(dirpath) == 2:
57 if len(dirpath) == 2:
58 oids.extend([dirpath + f for f in files
58 oids.extend([dirpath + f for f in files
59 if _lfsre.match(dirpath + f)])
59 if _lfsre.match(dirpath + f)])
60
60
61 yield ('', [], oids)
61 yield ('', [], oids)
62
62
class nullvfs(lfsvfs):
    def __init__(self):
        pass

    def exists(self, oid):
        return False

    def read(self, oid):
        # store.read() calls into here if the blob doesn't exist in its
        # self.vfs. Raise the same error as a normal vfs when asked to read a
        # file that doesn't exist. The only difference is the full file path
        # isn't available in the error.
        raise IOError(errno.ENOENT, '%s: No such file or directory' % oid)

    def walk(self, path=None, onerror=None):
        return ('', [], [])

    def write(self, oid, data):
        pass

class filewithprogress(object):
    """a file-like object that supports __len__ and read.

    Useful to provide progress information for how many bytes are read.
    """

    def __init__(self, fp, callback):
        self._fp = fp
        self._callback = callback # func(readsize)
        fp.seek(0, os.SEEK_END)
        self._len = fp.tell()
        fp.seek(0)

    def __len__(self):
        return self._len

    def read(self, size):
        if self._fp is None:
            return b''
        data = self._fp.read(size)
        if data:
            if self._callback:
                self._callback(len(data))
        else:
            self._fp.close()
            self._fp = None
        return data

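filewithprogress boils down to a seek-to-end length probe plus a callback on every read. A minimal self-contained sketch (a simplified re-implementation for illustration, driven by `io.BytesIO` in place of a real file) behaves like this:

```python
import io
import os

class FileWithProgress(object):
    """Simplified re-implementation of filewithprogress, for illustration."""

    def __init__(self, fp, callback):
        self._fp = fp
        self._callback = callback   # func(readsize)
        fp.seek(0, os.SEEK_END)     # probe the total length once...
        self._len = fp.tell()
        fp.seek(0)                  # ...then rewind for reading

    def __len__(self):
        return self._len

    def read(self, size):
        if self._fp is None:
            return b''
        data = self._fp.read(size)
        if data:
            if self._callback:
                self._callback(len(data))  # report progress per chunk
        else:
            self._fp.close()               # EOF: release the file
            self._fp = None
        return data

chunks = []
fp = FileWithProgress(io.BytesIO(b'x' * 10), chunks.append)
while fp.read(4):
    pass
```

Knowing the length up front is what lets an uploader send a Content-Length header while still streaming the body chunk by chunk.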
class local(object):
    """Local blobstore for large file contents.

    This blobstore is used both as a cache and as a staging area for large blobs
    to be uploaded to the remote blobstore.
    """

    def __init__(self, repo):
        fullpath = repo.svfs.join('lfs/objects')
        self.vfs = lfsvfs(fullpath)

-        usercache = util.url(lfutil._usercachedir(repo.ui, 'lfs'))
-        if usercache.scheme in (None, 'file'):
-            self.cachevfs = lfsvfs(usercache.localpath())
-        elif usercache.scheme == 'null':
-            self.cachevfs = nullvfs()
-        else:
-            raise error.Abort(_('unknown lfs cache scheme: %s')
-                              % usercache.scheme)
+        if repo.ui.configbool('experimental', 'lfs.disableusercache'):
+            self.cachevfs = nullvfs()
+        else:
+            usercache = lfutil._usercachedir(repo.ui, 'lfs')
+            self.cachevfs = lfsvfs(usercache)
        self.ui = repo.ui

    def open(self, oid):
        """Open a read-only file descriptor to the named blob, in either the
        usercache or the local store."""
        # The usercache is the most likely place to hold the file. Commit will
        # write to both it and the local store, as will anything that downloads
        # the blobs. However, things like clone without an update won't
        # populate the local store. For an init + push of a local clone,
        # the usercache is the only place it _could_ be. If not present, the
        # missing file msg here will indicate the local repo, not the usercache.
        if self.cachevfs.exists(oid):
            return self.cachevfs(oid, 'rb')

        return self.vfs(oid, 'rb')

    def download(self, oid, src):
        """Read the blob from the remote source in chunks, verify the content,
        and write to this local blobstore."""
        sha256 = hashlib.sha256()

        with self.vfs(oid, 'wb', atomictemp=True) as fp:
            for chunk in util.filechunkiter(src, size=1048576):
                fp.write(chunk)
                sha256.update(chunk)

        realoid = sha256.hexdigest()
        if realoid != oid:
            raise error.Abort(_('corrupt remote lfs object: %s') % oid)

        self._linktousercache(oid)

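`download()` hashes while it writes, so a corrupt transfer is rejected before `_linktousercache()` can spread it. Assuming a file-like source and destination, the pattern is roughly this (names are illustrative; the real method writes through an atomictemp vfs):

```python
import hashlib
import io

def download(oid, src, dst, chunksize=1048576):
    """Sketch of local.download(): copy src to dst in fixed-size chunks,
    hashing as we go, and fail if the content doesn't match its oid."""
    sha256 = hashlib.sha256()
    while True:
        chunk = src.read(chunksize)
        if not chunk:
            break
        dst.write(chunk)
        sha256.update(chunk)
    if sha256.hexdigest() != oid:
        raise IOError('corrupt remote lfs object: %s' % oid)

blob = b'some large payload'
goodoid = hashlib.sha256(blob).hexdigest()
out = io.BytesIO()
download(goodoid, io.BytesIO(blob), out)
```

Hashing per chunk means the blob is never held in memory whole, which matters for the multi-gigabyte files LFS is built for.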
    def write(self, oid, data):
        """Write blob to local blobstore.

        This should only be called from the filelog during a commit or similar.
        As such, there is no need to verify the data. Imports from a remote
        store must use ``download()`` instead."""
        with self.vfs(oid, 'wb', atomictemp=True) as fp:
            fp.write(data)

        self._linktousercache(oid)

    def _linktousercache(self, oid):
        # XXX: should we verify the content of the cache, and hardlink back to
        # the local store on success, but truncate, write and link on failure?
        if (not self.cachevfs.exists(oid)
            and not isinstance(self.cachevfs, nullvfs)):
            self.ui.note(_('lfs: adding %s to the usercache\n') % oid)
            lfutil.link(self.vfs.join(oid), self.cachevfs.join(oid))

    def read(self, oid, verify=True):
        """Read blob from local blobstore."""
        if not self.vfs.exists(oid):
            blob = self._read(self.cachevfs, oid, verify)

            # Even if revlog will verify the content, it needs to be verified
            # now before making the hardlink to avoid propagating corrupt blobs.
            # Don't abort if corruption is detected, because `hg verify` will
            # give more useful info about the corruption - simply don't add the
            # hardlink.
            if verify or hashlib.sha256(blob).hexdigest() == oid:
                self.ui.note(_('lfs: found %s in the usercache\n') % oid)
                lfutil.link(self.cachevfs.join(oid), self.vfs.join(oid))
        else:
            self.ui.note(_('lfs: found %s in the local lfs store\n') % oid)
            blob = self._read(self.vfs, oid, verify)
        return blob

    def _read(self, vfs, oid, verify):
        """Read blob (after verifying) from the given store"""
        blob = vfs.read(oid)
        if verify:
            _verify(oid, blob)
        return blob

    def verify(self, oid):
        """Indicate whether or not the hash of the underlying file matches its
        name."""
        sha256 = hashlib.sha256()

        with self.open(oid) as fp:
            for chunk in util.filechunkiter(fp, size=1048576):
                sha256.update(chunk)

        return oid == sha256.hexdigest()

    def has(self, oid):
        """Returns True if the local blobstore contains the requested blob,
        False otherwise."""
        return self.cachevfs.exists(oid) or self.vfs.exists(oid)

class _gitlfsremote(object):

    def __init__(self, repo, url):
        ui = repo.ui
        self.ui = ui
        baseurl, authinfo = url.authinfo()
        self.baseurl = baseurl.rstrip('/')
        useragent = repo.ui.config('experimental', 'lfs.user-agent')
        if not useragent:
            useragent = 'git-lfs/2.3.4 (Mercurial %s)' % util.version()
        self.urlopener = urlmod.opener(ui, authinfo, useragent)
        self.retry = ui.configint('lfs', 'retry')

    def writebatch(self, pointers, fromstore):
        """Batch upload from local to remote blobstore."""
        self._batch(_deduplicate(pointers), fromstore, 'upload')

    def readbatch(self, pointers, tostore):
        """Batch download from remote to local blobstore."""
        self._batch(_deduplicate(pointers), tostore, 'download')

    def _batchrequest(self, pointers, action):
        """Get metadata about objects pointed by pointers for given action

        Return decoded JSON object like {'objects': [{'oid': '', 'size': 1}]}
        See https://github.com/git-lfs/git-lfs/blob/master/docs/api/batch.md
        """
        objects = [{'oid': p.oid(), 'size': p.size()} for p in pointers]
        requestdata = json.dumps({
            'objects': objects,
            'operation': action,
        })
        batchreq = util.urlreq.request('%s/objects/batch' % self.baseurl,
                                       data=requestdata)
        batchreq.add_header('Accept', 'application/vnd.git-lfs+json')
        batchreq.add_header('Content-Type', 'application/vnd.git-lfs+json')
        try:
            rsp = self.urlopener.open(batchreq)
            rawjson = rsp.read()
        except util.urlerr.httperror as ex:
            raise LfsRemoteError(_('LFS HTTP error: %s (action=%s)')
                                 % (ex, action))
        try:
            response = json.loads(rawjson)
        except ValueError:
            raise LfsRemoteError(_('LFS server returns invalid JSON: %s')
                                 % rawjson)

        if self.ui.debugflag:
            self.ui.debug('Status: %d\n' % rsp.status)
            # lfs-test-server and hg serve return headers in different order
            self.ui.debug('%s\n'
                          % '\n'.join(sorted(str(rsp.info()).splitlines())))

            if 'objects' in response:
                response['objects'] = sorted(response['objects'],
                                             key=lambda p: p['oid'])
            self.ui.debug('%s\n'
                          % json.dumps(response, indent=2,
                                       separators=('', ': '), sort_keys=True))

        return response

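The JSON body that `_batchrequest()` posts to `<baseurl>/objects/batch` is small enough to sketch directly. The helper and oids below are made up for illustration; only the `objects`/`operation` shape comes from the batch API spec linked above:

```python
import json

def batchpayload(pointers, action):
    """Build a git-lfs batch API request body from (oid, size) pairs,
    mirroring the dict assembled in _batchrequest()."""
    objects = [{'oid': oid, 'size': size} for oid, size in pointers]
    return json.dumps({
        'objects': objects,
        'operation': action,
    })

body = batchpayload([('deadbeef', 4), ('cafebabe', 8)], 'download')
decoded = json.loads(body)
```

The server answers with the same `objects` list, annotated per object with an `actions` dict (href, headers) or an `error` dict, which is what `_extractobjects()` and `_checkforservererror()` below pick apart.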
    def _checkforservererror(self, pointers, responses, action):
        """Scans errors from objects

        Raises LfsRemoteError if any objects have an error"""
        for response in responses:
            # The server should return 404 when objects cannot be found. Some
            # server implementation (ex. lfs-test-server) does not set "error"
            # but just removes "download" from "actions". Treat that case
            # as the same as 404 error.
            if 'error' not in response:
                if (action == 'download'
                    and action not in response.get('actions', [])):
                    code = 404
                else:
                    continue
            else:
                # An error dict without a code doesn't make much sense, so
                # treat as a server error.
                code = response.get('error').get('code', 500)

            ptrmap = {p.oid(): p for p in pointers}
            p = ptrmap.get(response['oid'], None)
            if p:
                filename = getattr(p, 'filename', 'unknown')
                errors = {
                    404: 'The object does not exist',
                    410: 'The object was removed by the owner',
                    422: 'Validation error',
                    500: 'Internal server error',
                }
                msg = errors.get(code, 'status code %d' % code)
                raise LfsRemoteError(_('LFS server error for "%s": %s')
                                     % (filename, msg))
            else:
                raise LfsRemoteError(
                    _('LFS server error. Unsolicited response for oid %s')
                    % response['oid'])

    def _extractobjects(self, response, pointers, action):
        """extract objects from response of the batch API

        response: parsed JSON object returned by batch API
        return response['objects'] filtered by action
        raise if any object has an error
        """
        # Scan errors from objects - fail early
        objects = response.get('objects', [])
        self._checkforservererror(pointers, objects, action)

        # Filter objects with given action. Practically, this skips uploading
        # objects which exist in the server.
        filteredobjects = [o for o in objects if action in o.get('actions', [])]

        return filteredobjects

    def _basictransfer(self, obj, action, localstore):
        """Download or upload a single object using basic transfer protocol

        obj: dict, an object description returned by batch API
        action: string, one of ['upload', 'download']
        localstore: blobstore.local

        See https://github.com/git-lfs/git-lfs/blob/master/docs/api/\
basic-transfers.md
        """
        oid = pycompat.bytestr(obj['oid'])

        href = pycompat.bytestr(obj['actions'][action].get('href'))
        headers = obj['actions'][action].get('header', {}).items()

        request = util.urlreq.request(href)
        if action == 'upload':
            # If uploading blobs, read data from local blobstore.
            if not localstore.verify(oid):
                raise error.Abort(_('detected corrupt lfs object: %s') % oid,
                                  hint=_('run hg verify'))
            request.data = filewithprogress(localstore.open(oid), None)
            request.get_method = lambda: 'PUT'
            request.add_header('Content-Type', 'application/octet-stream')

        for k, v in headers:
            request.add_header(k, v)

        response = b''
        try:
            req = self.urlopener.open(request)

            if self.ui.debugflag:
                self.ui.debug('Status: %d\n' % req.status)
                # lfs-test-server and hg serve return headers in different order
                self.ui.debug('%s\n'
                              % '\n'.join(sorted(str(req.info()).splitlines())))

            if action == 'download':
                # If downloading blobs, store downloaded data to local blobstore
                localstore.download(oid, req)
            else:
                while True:
                    data = req.read(1048576)
                    if not data:
                        break
                    response += data
                if response:
                    self.ui.debug('lfs %s response: %s' % (action, response))
        except util.urlerr.httperror as ex:
            if self.ui.debugflag:
                self.ui.debug('%s: %s\n' % (oid, ex.read()))
            raise LfsRemoteError(_('HTTP error: %s (oid=%s, action=%s)')
                                 % (ex, oid, action))

    def _batch(self, pointers, localstore, action):
        if action not in ['upload', 'download']:
            raise error.ProgrammingError('invalid Git-LFS action: %s' % action)

        response = self._batchrequest(pointers, action)
        objects = self._extractobjects(response, pointers, action)
        total = sum(x.get('size', 0) for x in objects)
        sizes = {}
        for obj in objects:
            sizes[obj.get('oid')] = obj.get('size', 0)
        topic = {'upload': _('lfs uploading'),
                 'download': _('lfs downloading')}[action]
        if len(objects) > 1:
            self.ui.note(_('lfs: need to transfer %d objects (%s)\n')
                         % (len(objects), util.bytecount(total)))
        self.ui.progress(topic, 0, total=total)
        def transfer(chunk):
            for obj in chunk:
                objsize = obj.get('size', 0)
                if self.ui.verbose:
                    if action == 'download':
                        msg = _('lfs: downloading %s (%s)\n')
                    elif action == 'upload':
                        msg = _('lfs: uploading %s (%s)\n')
                    self.ui.note(msg % (obj.get('oid'),
                                        util.bytecount(objsize)))
                retry = self.retry
                while True:
                    try:
                        self._basictransfer(obj, action, localstore)
                        yield 1, obj.get('oid')
                        break
                    except socket.error as ex:
                        if retry > 0:
                            self.ui.note(
                                _('lfs: failed: %r (remaining retry %d)\n')
                                % (ex, retry))
                            retry -= 1
                            continue
                        raise

        # Until https multiplexing gets sorted out
        if self.ui.configbool('experimental', 'lfs.worker-enable'):
            oids = worker.worker(self.ui, 0.1, transfer, (),
                                 sorted(objects, key=lambda o: o.get('oid')))
        else:
            oids = transfer(sorted(objects, key=lambda o: o.get('oid')))

        processed = 0
        blobs = 0
        for _one, oid in oids:
            processed += sizes[oid]
            blobs += 1
            self.ui.progress(topic, processed, total=total)
            self.ui.note(_('lfs: processed: %s\n') % oid)
        self.ui.progress(topic, pos=None, total=total)

        if blobs > 0:
            if action == 'upload':
                self.ui.status(_('lfs: uploaded %d files (%s)\n')
                               % (blobs, util.bytecount(processed)))
            # TODO: coalesce the download requests, and comment this in
            #elif action == 'download':
            #    self.ui.status(_('lfs: downloaded %d files (%s)\n')
            #                   % (blobs, util.bytecount(processed)))

    def __del__(self):
        # copied from mercurial/httppeer.py
        urlopener = getattr(self, 'urlopener', None)
        if urlopener:
            for h in urlopener.handlers:
                h.close()
                getattr(h, "close_all", lambda : None)()

class _dummyremote(object):
    """Dummy store storing blobs to temp directory."""

    def __init__(self, repo, url):
        fullpath = repo.vfs.join('lfs', url.path)
        self.vfs = lfsvfs(fullpath)

    def writebatch(self, pointers, fromstore):
        for p in _deduplicate(pointers):
            content = fromstore.read(p.oid(), verify=True)
            with self.vfs(p.oid(), 'wb', atomictemp=True) as fp:
                fp.write(content)

    def readbatch(self, pointers, tostore):
        for p in _deduplicate(pointers):
            with self.vfs(p.oid(), 'rb') as fp:
                tostore.download(p.oid(), fp)

class _nullremote(object):
    """Null store storing blobs to /dev/null."""

    def __init__(self, repo, url):
        pass

    def writebatch(self, pointers, fromstore):
        pass

    def readbatch(self, pointers, tostore):
        pass

class _promptremote(object):
    """Prompt user to set lfs.url when accessed."""

    def __init__(self, repo, url):
        pass

    def writebatch(self, pointers, fromstore, ui=None):
        self._prompt()

    def readbatch(self, pointers, tostore, ui=None):
        self._prompt()

    def _prompt(self):
        raise error.Abort(_('lfs.url needs to be configured'))

_storemap = {
    'https': _gitlfsremote,
    'http': _gitlfsremote,
    'file': _dummyremote,
    'null': _nullremote,
    None: _promptremote,
}

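The _storemap dispatch is a plain scheme-to-class table. A toy version of the lookup done at the end of remote(), with stand-in classes in place of the real stores, looks like this:

```python
class GitLfsRemote(object):
    """Stand-in for _gitlfsremote; the real classes take (repo, url)."""
    def __init__(self, repo, url):
        self.url = url

class DummyRemote(GitLfsRemote):
    pass

class NullRemote(GitLfsRemote):
    pass

class PromptRemote(GitLfsRemote):
    pass

_storemap = {
    'https': GitLfsRemote,
    'http': GitLfsRemote,
    'file': DummyRemote,
    'null': NullRemote,
    None: PromptRemote,
}

def makestore(scheme, repo=None, url=None):
    """Sketch of the scheme dispatch in remote(); unknown schemes abort."""
    if scheme not in _storemap:
        raise ValueError('lfs: unknown url scheme: %s' % scheme)
    return _storemap[scheme](repo, url)

store = makestore('null', url='null://')
```

Mapping `None` to the prompting store is what turns "no lfs.url configured and nothing inferable" into a clear abort rather than a crash.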
def _deduplicate(pointers):
    """Remove any duplicate oids that exist in the list"""
    reduced = util.sortdict()
    for p in pointers:
        reduced[p.oid()] = p
    return reduced.values()

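_deduplicate() relies on util.sortdict keeping insertion order while later writes for the same key replace the value in place. collections.OrderedDict (or a plain dict on Python 3.7+) gives the same effect; an illustrative version over bare oid strings:

```python
from collections import OrderedDict

def deduplicate(oids):
    """Drop repeated oids, keeping first-seen order (sketch of
    _deduplicate(), with strings standing in for pointer objects)."""
    reduced = OrderedDict()
    for oid in oids:
        reduced[oid] = oid
    return list(reduced.values())

result = deduplicate(['aa', 'bb', 'aa', 'cc', 'bb'])
```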
528 def _verify(oid, content):
526 def _verify(oid, content):
529 realoid = hashlib.sha256(content).hexdigest()
527 realoid = hashlib.sha256(content).hexdigest()
530 if realoid != oid:
528 if realoid != oid:
531 raise error.Abort(_('detected corrupt lfs object: %s') % oid,
529 raise error.Abort(_('detected corrupt lfs object: %s') % oid,
532 hint=_('run hg verify'))
530 hint=_('run hg verify'))
533
531
def remote(repo):
    """remotestore factory. return a store in _storemap depending on config

    If ``lfs.url`` is specified, use that remote endpoint. Otherwise, try to
    infer the endpoint, based on the remote repository using the same path
    adjustments as git. As an extension, 'http' is supported as well so that
    ``hg serve`` works out of the box.

    https://github.com/git-lfs/git-lfs/blob/master/docs/api/server-discovery.md
    """
    url = util.url(repo.ui.config('lfs', 'url') or '')
    if url.scheme is None:
        # TODO: investigate 'paths.remote:lfsurl' style path customization,
        # and fall back to inferring from 'paths.remote' if unspecified.
        defaulturl = util.url(repo.ui.config('paths', 'default') or b'')

        # TODO: support local paths as well.
        # TODO: consider the ssh -> https transformation that git applies
        if defaulturl.scheme in (b'http', b'https'):
            defaulturl.path = defaulturl.path or b'' + b'.git/info/lfs'

            url = util.url(bytes(defaulturl))
            repo.ui.note(_('lfs: assuming remote store: %s\n') % url)

    scheme = url.scheme
    if scheme not in _storemap:
        raise error.Abort(_('lfs: unknown url scheme: %s') % scheme)
    return _storemap[scheme](repo, url)

class LfsRemoteError(error.RevlogError):
    pass
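The git-style server discovery that `remote()` performs boils down to: for an http(s) default path, probe the remote path with `.git/info/lfs` appended. A minimal standalone sketch of that rule (the `infer_lfs_url` name and plain-string handling are assumptions; the real code goes through Mercurial's `util.url` objects):

```python
def infer_lfs_url(default_path):
    # Hypothetical helper illustrating the discovery rule used by remote():
    # for an http(s) remote, the LFS endpoint is the remote path with
    # '.git/info/lfs' appended.
    scheme, rest = default_path.split('://', 1)
    if scheme not in ('http', 'https'):
        raise ValueError('cannot infer an LFS endpoint for %r' % default_path)
    return '%s://%s.git/info/lfs' % (scheme, rest)

# e.g. a 'paths.default' of http://example.com/repo would be probed at
# http://example.com/repo.git/info/lfs
```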
@@ -1,283 +1,284
#testcases lfsremote-on lfsremote-off
#require serve no-reposimplestore

This test splits `hg serve` with and without using the extension into separate
test cases. The tests are broken down as follows, where "LFS"/"No-LFS"
indicates whether or not there are commits that use an LFS file, and "D"/"E"
indicates whether or not the extension is loaded. The "X" cases are not tested
individually, because the lfs requirement causes the process to bail early if
the extension is disabled.

.                       Server
.
.                 No-LFS        LFS
.          +----------------------------+
.          |   ||  D  |  E  |  D  |  E  |
.          |---++=======================|
.  C       | D || N/A |  #1 |  X  |  #4 |
.  l No    +---++-----------------------|
.  i LFS   | E ||  #2 |  #2 |  X  |  #5 |
.  e       +---++-----------------------|
.  n       | D ||  X  |  X  |  X  |  X  |
.  t LFS   |---++-----------------------|
.          | E ||  #3 |  #3 |  X  |  #6 |
.          |---++-----------------------+

$ hg init server
$ SERVER_REQUIRES="$TESTTMP/server/.hg/requires"

Skip the experimental.changegroup3=True config. Failure to agree on this comes
first, and causes a "ValueError: no common changegroup version" or "abort:
HTTP Error 500: Internal Server Error", if the extension is only loaded on one
side. If that *is* enabled, the subsequent failure is "abort: missing processor
for flag '0x2000'!" if the extension is only loaded on one side (possibly also
masked by the Internal Server Error message).
$ cat >> $HGRCPATH <<EOF
+ > [experimental]
+ > lfs.disableusercache = True
> [lfs]
- > usercache = null://
> threshold=10
> [web]
> allow_push=*
> push_ssl=False
> EOF

#if lfsremote-on
$ hg --config extensions.lfs= -R server \
> serve -p $HGPORT -d --pid-file=hg.pid --errorlog=$TESTTMP/errors.log
#else
$ hg --config extensions.lfs=! -R server \
> serve -p $HGPORT -d --pid-file=hg.pid --errorlog=$TESTTMP/errors.log
#endif

$ cat hg.pid >> $DAEMON_PIDS
$ hg clone -q http://localhost:$HGPORT client
$ grep 'lfs' client/.hg/requires $SERVER_REQUIRES
[1]

--------------------------------------------------------------------------------
Case #1: client with non-lfs content and the extension disabled; server with
non-lfs content, and the extension enabled.

$ cd client
$ echo 'non-lfs' > nonlfs.txt
$ hg ci -Aqm 'non-lfs'
$ grep 'lfs' .hg/requires $SERVER_REQUIRES
[1]

#if lfsremote-on

$ hg push -q
$ grep 'lfs' .hg/requires $SERVER_REQUIRES
[1]

$ hg clone -q http://localhost:$HGPORT $TESTTMP/client1_clone
$ grep 'lfs' $TESTTMP/client1_clone/.hg/requires $SERVER_REQUIRES
[1]

$ hg init $TESTTMP/client1_pull
$ hg -R $TESTTMP/client1_pull pull -q http://localhost:$HGPORT
$ grep 'lfs' $TESTTMP/client1_pull/.hg/requires $SERVER_REQUIRES
[1]

$ hg identify http://localhost:$HGPORT
d437e1d24fbd

#endif

--------------------------------------------------------------------------------
Case #2: client with non-lfs content and the extension enabled; server with
non-lfs content, and the extension state controlled by #testcases.

$ cat >> $HGRCPATH <<EOF
> [extensions]
> lfs =
> EOF
$ echo 'non-lfs' > nonlfs2.txt
$ hg ci -Aqm 'non-lfs file with lfs client'

Since no lfs content has been added yet, the push is allowed, even when the
extension is not enabled remotely.

$ hg push -q
$ grep 'lfs' .hg/requires $SERVER_REQUIRES
[1]

$ hg clone -q http://localhost:$HGPORT $TESTTMP/client2_clone
$ grep 'lfs' $TESTTMP/client2_clone/.hg/requires $SERVER_REQUIRES
[1]

$ hg init $TESTTMP/client2_pull
$ hg -R $TESTTMP/client2_pull pull -q http://localhost:$HGPORT
$ grep 'lfs' $TESTTMP/client2_pull/.hg/requires $SERVER_REQUIRES
[1]

$ hg identify http://localhost:$HGPORT
1477875038c6

--------------------------------------------------------------------------------
Case #3: client with lfs content and the extension enabled; server with
non-lfs content, and the extension state controlled by #testcases. The server
should have an 'lfs' requirement after it picks up its first commit with a blob.

$ echo 'this is a big lfs file' > lfs.bin
$ hg ci -Aqm 'lfs'
$ grep 'lfs' .hg/requires $SERVER_REQUIRES
.hg/requires:lfs

#if lfsremote-off
$ hg push -q
abort: required features are not supported in the destination: lfs
(enable the lfs extension on the server)
[255]
#else
$ hg push -q
#endif
$ grep 'lfs' .hg/requires $SERVER_REQUIRES
.hg/requires:lfs
$TESTTMP/server/.hg/requires:lfs (lfsremote-on !)

$ hg clone -q http://localhost:$HGPORT $TESTTMP/client3_clone
$ grep 'lfs' $TESTTMP/client3_clone/.hg/requires $SERVER_REQUIRES || true
$TESTTMP/client3_clone/.hg/requires:lfs (lfsremote-on !)
$TESTTMP/server/.hg/requires:lfs (lfsremote-on !)

$ hg init $TESTTMP/client3_pull
$ hg -R $TESTTMP/client3_pull pull -q http://localhost:$HGPORT
$ grep 'lfs' $TESTTMP/client3_pull/.hg/requires $SERVER_REQUIRES || true
$TESTTMP/client3_pull/.hg/requires:lfs (lfsremote-on !)
$TESTTMP/server/.hg/requires:lfs (lfsremote-on !)

The difference here is that the push failed above when the extension isn't
enabled on the server.
$ hg identify http://localhost:$HGPORT
8374dc4052cb (lfsremote-on !)
1477875038c6 (lfsremote-off !)

Don't bother testing the lfsremote-off cases - the server won't be able
to launch if there's lfs content and the extension is disabled.

#if lfsremote-on

--------------------------------------------------------------------------------
Case #4: client with non-lfs content and the extension disabled; server with
lfs content, and the extension enabled.

$ cat >> $HGRCPATH <<EOF
> [extensions]
> lfs = !
> EOF

$ hg init $TESTTMP/client4
$ cd $TESTTMP/client4
$ cat >> .hg/hgrc <<EOF
> [paths]
> default = http://localhost:$HGPORT
> EOF
$ echo 'non-lfs' > nonlfs2.txt
$ hg ci -Aqm 'non-lfs'
$ grep 'lfs' .hg/requires $SERVER_REQUIRES
$TESTTMP/server/.hg/requires:lfs

$ hg push -q --force
warning: repository is unrelated
$ grep 'lfs' .hg/requires $SERVER_REQUIRES
$TESTTMP/server/.hg/requires:lfs

TODO: fail more gracefully.

$ hg clone -q http://localhost:$HGPORT $TESTTMP/client4_clone
abort: HTTP Error 500: Internal Server Error
[255]
$ grep 'lfs' $TESTTMP/client4_clone/.hg/requires $SERVER_REQUIRES
grep: $TESTTMP/client4_clone/.hg/requires: $ENOENT$
$TESTTMP/server/.hg/requires:lfs
[2]

TODO: fail more gracefully.

$ hg init $TESTTMP/client4_pull
$ hg -R $TESTTMP/client4_pull pull -q http://localhost:$HGPORT
abort: HTTP Error 500: Internal Server Error
[255]
$ grep 'lfs' $TESTTMP/client4_pull/.hg/requires $SERVER_REQUIRES
$TESTTMP/server/.hg/requires:lfs

$ hg identify http://localhost:$HGPORT
03b080fa9d93

--------------------------------------------------------------------------------
Case #5: client with non-lfs content and the extension enabled; server with
lfs content, and the extension enabled.

$ cat >> $HGRCPATH <<EOF
> [extensions]
> lfs =
> EOF
$ echo 'non-lfs' > nonlfs3.txt
$ hg ci -Aqm 'non-lfs file with lfs client'

$ hg push -q
$ grep 'lfs' .hg/requires $SERVER_REQUIRES
$TESTTMP/server/.hg/requires:lfs

$ hg clone -q http://localhost:$HGPORT $TESTTMP/client5_clone
$ grep 'lfs' $TESTTMP/client5_clone/.hg/requires $SERVER_REQUIRES
$TESTTMP/client5_clone/.hg/requires:lfs
$TESTTMP/server/.hg/requires:lfs

$ hg init $TESTTMP/client5_pull
$ hg -R $TESTTMP/client5_pull pull -q http://localhost:$HGPORT
$ grep 'lfs' $TESTTMP/client5_pull/.hg/requires $SERVER_REQUIRES
$TESTTMP/client5_pull/.hg/requires:lfs
$TESTTMP/server/.hg/requires:lfs

$ hg identify http://localhost:$HGPORT
c729025cc5e3

--------------------------------------------------------------------------------
Case #6: client with lfs content and the extension enabled; server with
lfs content, and the extension enabled.

$ echo 'this is another lfs file' > lfs2.txt
$ hg ci -Aqm 'lfs file with lfs client'

$ hg push -q
$ grep 'lfs' .hg/requires $SERVER_REQUIRES
.hg/requires:lfs
$TESTTMP/server/.hg/requires:lfs

$ hg clone -q http://localhost:$HGPORT $TESTTMP/client6_clone
$ grep 'lfs' $TESTTMP/client6_clone/.hg/requires $SERVER_REQUIRES
$TESTTMP/client6_clone/.hg/requires:lfs
$TESTTMP/server/.hg/requires:lfs

$ hg init $TESTTMP/client6_pull
$ hg -R $TESTTMP/client6_pull pull -q http://localhost:$HGPORT
$ grep 'lfs' $TESTTMP/client6_pull/.hg/requires $SERVER_REQUIRES
$TESTTMP/client6_pull/.hg/requires:lfs
$TESTTMP/server/.hg/requires:lfs

$ hg identify http://localhost:$HGPORT
d3b84d50eacb

--------------------------------------------------------------------------------
Misc: process dies early if a requirement exists and the extension is disabled

$ hg --config extensions.lfs=! summary
abort: repository requires features unknown to this Mercurial: lfs!
(see https://mercurial-scm.org/wiki/MissingRequirement for more information)
[255]

#endif

$ $PYTHON $TESTDIR/killdaemons.py $DAEMON_PIDS

#if lfsremote-on
$ cat $TESTTMP/errors.log | grep '^[A-Z]'
Traceback (most recent call last):
ValueError: no common changegroup version
Traceback (most recent call last):
ValueError: no common changegroup version
#else
$ cat $TESTTMP/errors.log
#endif
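Per the changeset summary, a `null://` usercache is now special-cased instead of being treated as a url. A hedged sketch of what that kind of special case can look like (the `usercachepath` helper name and its return convention are assumptions for illustration, not the extension's actual code):

```python
def usercachepath(configured, default_path):
    # If the configured usercache is the literal 'null://', return None so
    # callers skip the user-level cache entirely, instead of trying to
    # resolve 'null://' through the URL/store machinery.
    if configured == 'null://':
        return None
    return configured or default_path
```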