lfs: special case the null:// usercache instead of treating it as a url...
Matt Harbison
r37580:e5cd8d1a default
@@ -1,396 +1,399
1 1 # lfs - hash-preserving large file support using Git-LFS protocol
2 2 #
3 3 # Copyright 2017 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 """lfs - large file support (EXPERIMENTAL)
9 9
10 10 This extension allows large files to be tracked outside of the normal
11 11 repository storage and stored on a centralized server, similar to the
12 12 ``largefiles`` extension. The ``git-lfs`` protocol is used when
13 13 communicating with the server, so existing git infrastructure can be
14 14 harnessed. Even though the files are stored outside of the repository,
15 15 they are still integrity checked in the same manner as normal files.
16 16
17 17 The files stored outside of the repository are downloaded on demand,
18 18 which reduces the time to clone, and possibly the local disk usage.
19 19 This changes fundamental workflows in a DVCS, so careful thought
20 20 should be given before deploying it. :hg:`convert` can be used to
21 21 convert LFS repositories to normal repositories that no longer
22 22 require this extension, and do so without changing the commit hashes.
23 23 This allows the extension to be disabled if the centralized workflow
24 24 becomes burdensome. However, the pre- and post-convert clones will
25 25 not be able to communicate with each other unless the extension is
26 26 enabled on both.
27 27
28 28 To start a new repository, or to add LFS files to an existing one, just
29 29 create an ``.hglfs`` file as described below in the root directory of
30 30 the repository. Typically, this file should be put under version
31 31 control, so that the settings will propagate to other repositories with
32 32 push and pull. During any commit, Mercurial will consult this file to
33 33 determine if an added or modified file should be stored externally. The
34 34 type of storage depends on the characteristics of the file at each
35 35 commit. A file that is near a size threshold may switch back and forth
36 36 between LFS and normal storage, as needed.
37 37
38 38 Alternatively, both normal repositories and largefiles-controlled
39 39 repositories can be converted to LFS by using :hg:`convert` and the
40 40 ``lfs.track`` config option described below. The ``.hglfs`` file
41 41 should then be created and added, to control subsequent LFS selection.
42 42 The hashes are also unchanged in this case. The LFS and non-LFS
43 43 repositories can be distinguished because the LFS repository will
44 44 abort any command if this extension is disabled.
45 45
46 46 Committed LFS files are held locally, until the repository is pushed.
47 47 Prior to pushing the normal repository data, the LFS files that are
48 48 tracked by the outgoing commits are automatically uploaded to the
49 49 configured central server. No LFS files are transferred on
50 50 :hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
51 51 demand as they need to be read, if a cached copy cannot be found
52 52 locally. Both committing and downloading an LFS file will link the
53 53 file to a usercache, to speed up future access. See the `usercache`
54 54 config setting described below.
55 55
56 56 .hglfs::
57 57
58 58 The extension reads its configuration from a versioned ``.hglfs``
59 59 configuration file found in the root of the working directory. The
60 60 ``.hglfs`` file uses the same syntax as all other Mercurial
61 61 configuration files. It uses a single section, ``[track]``.
62 62
63 63 The ``[track]`` section specifies which files are stored as LFS (or
64 64 not). Each line is keyed by a file pattern, with a predicate value.
65 65 The first file pattern match is used, so put more specific patterns
66 66 first. The available predicates are ``all()``, ``none()``, and
67 67 ``size()``. See "hg help filesets.size" for the latter.
68 68
69 69 Example versioned ``.hglfs`` file::
70 70
71 71 [track]
72 72 # No Makefile or python file, anywhere, will be LFS
73 73 **Makefile = none()
74 74 **.py = none()
75 75
76 76 **.zip = all()
77 77 **.exe = size(">1MB")
78 78
79 79 # Catchall for everything not matched above
80 80 ** = size(">10MB")
81 81
82 82 Configs::
83 83
84 84 [lfs]
85 85 # Remote endpoint. Multiple protocols are supported:
86 86 # - http(s)://user:pass@example.com/path
87 87 # git-lfs endpoint
88 88 # - file:///tmp/path
89 89 # local filesystem, usually for testing
90 90 # if unset, lfs will assume the repository at ``paths.default`` also handles
91 91 # blob storage for http(s) URLs. Otherwise, lfs will prompt to set this
92 92 # when it must use this value.
93 93 # (default: unset)
94 94 url = https://example.com/repo.git/info/lfs
95 95
96 96 # Which files to track in LFS. Path tests are "**.extname" for file
97 97 # extensions, and "path:under/some/directory" for path prefix. Both
98 98 # are relative to the repository root.
99 99 # File size can be tested with the "size()" fileset, and tests can be
100 100 # joined with fileset operators. (See "hg help filesets.operators".)
101 101 #
102 102 # Some examples:
103 103 # - all() # everything
104 104 # - none() # nothing
105 105 # - size(">20MB") # larger than 20MB
106 106 # - !**.txt # anything not a *.txt file
107 107 # - **.zip | **.tar.gz | **.7z # some types of compressed files
108 108 # - path:bin # files under "bin" in the project root
109 109 # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
110 110 # | (path:bin & !path:/bin/README) | size(">1GB")
111 111 # (default: none())
112 112 #
113 113 # This is ignored if there is a tracked '.hglfs' file, and this setting
114 114 # will eventually be deprecated and removed.
115 115 track = size(">10M")
116 116
117 117 # how many times to retry before giving up on transferring an object
118 118 retry = 5
119 119
120 120 # the local directory to store lfs files for sharing across local clones.
121 121 # If not set, the cache is located in an OS specific cache location.
122 122 usercache = /path/to/global/cache
123 123 """
124 124
125 125 from __future__ import absolute_import
126 126
127 127 from mercurial.i18n import _
128 128
129 129 from mercurial import (
130 130 bundle2,
131 131 changegroup,
132 132 cmdutil,
133 133 config,
134 134 context,
135 135 error,
136 136 exchange,
137 137 extensions,
138 138 filelog,
139 139 fileset,
140 140 hg,
141 141 localrepo,
142 142 minifileset,
143 143 node,
144 144 pycompat,
145 145 registrar,
146 146 revlog,
147 147 scmutil,
148 148 templateutil,
149 149 upgrade,
150 150 util,
151 151 vfs as vfsmod,
152 152 wireproto,
153 153 wireprotoserver,
154 154 )
155 155
156 156 from . import (
157 157 blobstore,
158 158 wireprotolfsserver,
159 159 wrapper,
160 160 )
161 161
162 162 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
163 163 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
164 164 # be specifying the version(s) of Mercurial they are tested with, or
165 165 # leave the attribute unspecified.
166 166 testedwith = 'ships-with-hg-core'
167 167
168 168 configtable = {}
169 169 configitem = registrar.configitem(configtable)
170 170
171 171 configitem('experimental', 'lfs.serve',
172 172 default=True,
173 173 )
174 174 configitem('experimental', 'lfs.user-agent',
175 175 default=None,
176 176 )
177 configitem('experimental', 'lfs.disableusercache',
178 default=False,
179 )
177 180 configitem('experimental', 'lfs.worker-enable',
178 181 default=False,
179 182 )
180 183
181 184 configitem('lfs', 'url',
182 185 default=None,
183 186 )
184 187 configitem('lfs', 'usercache',
185 188 default=None,
186 189 )
187 190 # Deprecated
188 191 configitem('lfs', 'threshold',
189 192 default=None,
190 193 )
191 194 configitem('lfs', 'track',
192 195 default='none()',
193 196 )
194 197 configitem('lfs', 'retry',
195 198 default=5,
196 199 )
197 200
198 201 cmdtable = {}
199 202 command = registrar.command(cmdtable)
200 203
201 204 templatekeyword = registrar.templatekeyword()
202 205 filesetpredicate = registrar.filesetpredicate()
203 206
204 207 def featuresetup(ui, supported):
205 208 # don't die on seeing a repo with the lfs requirement
206 209 supported |= {'lfs'}
207 210
208 211 def uisetup(ui):
209 212 localrepo.featuresetupfuncs.add(featuresetup)
210 213
211 214 def reposetup(ui, repo):
212 215 # Nothing to do with a remote repo
213 216 if not repo.local():
214 217 return
215 218
216 219 repo.svfs.lfslocalblobstore = blobstore.local(repo)
217 220 repo.svfs.lfsremoteblobstore = blobstore.remote(repo)
218 221
219 222 class lfsrepo(repo.__class__):
220 223 @localrepo.unfilteredmethod
221 224 def commitctx(self, ctx, error=False):
222 225 repo.svfs.options['lfstrack'] = _trackedmatcher(self)
223 226 return super(lfsrepo, self).commitctx(ctx, error)
224 227
225 228 repo.__class__ = lfsrepo
226 229
227 230 if 'lfs' not in repo.requirements:
228 231 def checkrequireslfs(ui, repo, **kwargs):
229 232 if 'lfs' not in repo.requirements:
230 233 last = kwargs.get(r'node_last')
231 234 _bin = node.bin
232 235 if last:
233 236 s = repo.set('%n:%n', _bin(kwargs[r'node']), _bin(last))
234 237 else:
235 238 s = repo.set('%n', _bin(kwargs[r'node']))
236 239 match = repo.narrowmatch()
237 240 for ctx in s:
238 241 # TODO: is there a way to just walk the files in the commit?
239 242 if any(ctx[f].islfs() for f in ctx.files()
240 243 if f in ctx and match(f)):
241 244 repo.requirements.add('lfs')
242 245 repo._writerequirements()
243 246 repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
244 247 break
245 248
246 249 ui.setconfig('hooks', 'commit.lfs', checkrequireslfs, 'lfs')
247 250 ui.setconfig('hooks', 'pretxnchangegroup.lfs', checkrequireslfs, 'lfs')
248 251 else:
249 252 repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
250 253
251 254 def _trackedmatcher(repo):
252 255 """Return a function (path, size) -> bool indicating whether or not to
253 256 track a given file with lfs."""
254 257 if not repo.wvfs.exists('.hglfs'):
255 258 # No '.hglfs' in wdir. Fallback to config for now.
256 259 trackspec = repo.ui.config('lfs', 'track')
257 260
258 261 # deprecated config: lfs.threshold
259 262 threshold = repo.ui.configbytes('lfs', 'threshold')
260 263 if threshold:
261 264 fileset.parse(trackspec) # make sure syntax errors are confined
262 265 trackspec = "(%s) | size('>%d')" % (trackspec, threshold)
263 266
264 267 return minifileset.compile(trackspec)
265 268
266 269 data = repo.wvfs.tryread('.hglfs')
267 270 if not data:
268 271 return lambda p, s: False
269 272
270 273 # Parse errors here will abort with a message that points to the .hglfs file
271 274 # and line number.
272 275 cfg = config.config()
273 276 cfg.parse('.hglfs', data)
274 277
275 278 try:
276 279 rules = [(minifileset.compile(pattern), minifileset.compile(rule))
277 280 for pattern, rule in cfg.items('track')]
278 281 except error.ParseError as e:
279 282 # The original exception gives no indicator that the error is in the
280 283 # .hglfs file, so add that.
281 284
282 285 # TODO: See if the line number of the file can be made available.
283 286 raise error.Abort(_('parse error in .hglfs: %s') % e)
284 287
285 288 def _match(path, size):
286 289 for pat, rule in rules:
287 290 if pat(path, size):
288 291 return rule(path, size)
289 292
290 293 return False
291 294
292 295 return _match
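To make the first-match-wins evaluation above concrete, here is a standalone sketch with hand-rolled stand-ins for the compiled predicates (the real code compiles both the pattern and the rule with ``minifileset.compile``):

    # Stand-ins for compiled predicates; each is (path, size) -> bool.
    def match_ext(ext):
        return lambda path, size: path.endswith(ext)

    def size_gt(limit):
        return lambda path, size: size > limit

    always = lambda path, size: True

    # Ordered as in an .hglfs [track] section: the first matching pattern wins.
    rules = [
        (match_ext('.py'), lambda p, s: False),    # **.py = none()
        (match_ext('.zip'), lambda p, s: True),    # **.zip = all()
        (always, size_gt(10 * 1024 * 1024)),       # ** = size(">10MB")
    ]

    def tracked(path, size):
        for pat, rule in rules:
            if pat(path, size):
                return rule(path, size)
        return False

    assert not tracked('huge.py', 50 * 1024 * 1024)  # **.py matched first
    assert tracked('dist/app.zip', 1024)
    assert tracked('video.bin', 20 * 1024 * 1024)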
293 296
294 297 def wrapfilelog(filelog):
295 298 wrapfunction = extensions.wrapfunction
296 299
297 300 wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
298 301 wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
299 302 wrapfunction(filelog, 'size', wrapper.filelogsize)
300 303
301 304 def extsetup(ui):
302 305 wrapfilelog(filelog.filelog)
303 306
304 307 wrapfunction = extensions.wrapfunction
305 308
306 309 wrapfunction(cmdutil, '_updatecatformatter', wrapper._updatecatformatter)
307 310 wrapfunction(scmutil, 'wrapconvertsink', wrapper.convertsink)
308 311
309 312 wrapfunction(upgrade, '_finishdatamigration',
310 313 wrapper.upgradefinishdatamigration)
311 314
312 315 wrapfunction(upgrade, 'preservedrequirements',
313 316 wrapper.upgraderequirements)
314 317
315 318 wrapfunction(upgrade, 'supporteddestrequirements',
316 319 wrapper.upgraderequirements)
317 320
318 321 wrapfunction(changegroup,
319 322 'allsupportedversions',
320 323 wrapper.allsupportedversions)
321 324
322 325 wrapfunction(exchange, 'push', wrapper.push)
323 326 wrapfunction(wireproto, '_capabilities', wrapper._capabilities)
324 327 wrapfunction(wireprotoserver, 'handlewsgirequest',
325 328 wireprotolfsserver.handlewsgirequest)
326 329
327 330 wrapfunction(context.basefilectx, 'cmp', wrapper.filectxcmp)
328 331 wrapfunction(context.basefilectx, 'isbinary', wrapper.filectxisbinary)
329 332 context.basefilectx.islfs = wrapper.filectxislfs
330 333
331 334 revlog.addflagprocessor(
332 335 revlog.REVIDX_EXTSTORED,
333 336 (
334 337 wrapper.readfromstore,
335 338 wrapper.writetostore,
336 339 wrapper.bypasscheckhash,
337 340 ),
338 341 )
339 342
340 343 wrapfunction(hg, 'clone', wrapper.hgclone)
341 344 wrapfunction(hg, 'postshare', wrapper.hgpostshare)
342 345
343 346 scmutil.fileprefetchhooks.add('lfs', wrapper._prefetchfiles)
344 347
345 348 # Make bundle choose changegroup3 instead of changegroup2. This affects
346 349 # "hg bundle" command. Note: it does not cover all bundle formats like
347 350 # "packed1". Using "packed1" with lfs will likely cause trouble.
348 351 exchange._bundlespeccontentopts["v2"]["cg.version"] = "03"
349 352
350 353 # bundlerepo uses "vfsmod.readonlyvfs(othervfs)", we need to make sure lfs
351 354 # options and blob stores are passed from othervfs to the new readonlyvfs.
352 355 wrapfunction(vfsmod.readonlyvfs, '__init__', wrapper.vfsinit)
353 356
354 357 # when writing a bundle via "hg bundle" command, upload related LFS blobs
355 358 wrapfunction(bundle2, 'writenewbundle', wrapper.writenewbundle)
356 359
357 360 @filesetpredicate('lfs()', callstatus=True)
358 361 def lfsfileset(mctx, x):
359 362 """File that uses LFS storage."""
360 363 # i18n: "lfs" is a keyword
361 364 fileset.getargs(x, 0, 0, _("lfs takes no arguments"))
362 365 return [f for f in mctx.subset
363 366 if wrapper.pointerfromctx(mctx.ctx, f, removed=True) is not None]
364 367
365 368 @templatekeyword('lfs_files', requires={'ctx'})
366 369 def lfsfiles(context, mapping):
367 370 """List of strings. All files modified, added, or removed by this
368 371 changeset."""
369 372 ctx = context.resource(mapping, 'ctx')
370 373
371 374 pointers = wrapper.pointersfromctx(ctx, removed=True) # {path: pointer}
372 375 files = sorted(pointers.keys())
373 376
374 377 def pointer(v):
375 378 # In the file spec, version is first and the other keys are sorted.
376 379 sortkeyfunc = lambda x: (x[0] != 'version', x)
377 380 items = sorted(pointers[v].iteritems(), key=sortkeyfunc)
378 381 return util.sortdict(items)
379 382
380 383 makemap = lambda v: {
381 384 'file': v,
382 385 'lfsoid': pointers[v].oid() if pointers[v] else None,
383 386 'lfspointer': templateutil.hybriddict(pointer(v)),
384 387 }
385 388
386 389 # TODO: make the separator ', '?
387 390 f = templateutil._showcompatlist(context, mapping, 'lfs_file', files)
388 391 return templateutil.hybrid(f, files, makemap, pycompat.identity)
389 392
390 393 @command('debuglfsupload',
391 394 [('r', 'rev', [], _('upload large files introduced by REV'))])
392 395 def debuglfsupload(ui, repo, **opts):
393 396 """upload lfs blobs added by the working copy parent or given revisions"""
394 397 revs = opts.get(r'rev', [])
395 398 pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
396 399 wrapper.uploadblobs(repo, pointers)
@@ -1,564 +1,562
1 1 # blobstore.py - local and remote (speaking Git-LFS protocol) blob storages
2 2 #
3 3 # Copyright 2017 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import errno
11 11 import hashlib
12 12 import json
13 13 import os
14 14 import re
15 15 import socket
16 16
17 17 from mercurial.i18n import _
18 18
19 19 from mercurial import (
20 20 error,
21 21 pathutil,
22 22 pycompat,
23 23 url as urlmod,
24 24 util,
25 25 vfs as vfsmod,
26 26 worker,
27 27 )
28 28
29 29 from ..largefiles import lfutil
30 30
31 31 # 64 bytes for SHA256
32 32 _lfsre = re.compile(br'\A[a-f0-9]{64}\Z')
33 33
34 34 class lfsvfs(vfsmod.vfs):
35 35 def join(self, path):
36 36 """split the path at first two characters, like: XX/XXXXX..."""
37 37 if not _lfsre.match(path):
38 38 raise error.ProgrammingError('unexpected lfs path: %s' % path)
39 39 return super(lfsvfs, self).join(path[0:2], path[2:])
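As a concrete illustration of the resulting layout (the oid below is fabricated):

    oid = 'ab' + 'c' * 62            # a made-up 64-hex-digit oid
    print(oid[:2] + '/' + oid[2:])   # -> 'ab/ccc...', the two-level layout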
40 40
41 41 def walk(self, path=None, onerror=None):
42 42 """Yield (dirpath, [], oids) tuple for blobs under path
43 43
44 44 Oids only exist in the root of this vfs, so dirpath is always ''.
45 45 """
46 46 root = os.path.normpath(self.base)
47 47 # when dirpath == root, dirpath[prefixlen:] becomes empty
48 48 # because len(dirpath) < prefixlen.
49 49 prefixlen = len(pathutil.normasprefix(root))
50 50 oids = []
51 51
52 52 for dirpath, dirs, files in os.walk(self.reljoin(self.base, path or ''),
53 53 onerror=onerror):
54 54 dirpath = dirpath[prefixlen:]
55 55
56 56 # Silently skip unexpected files and directories
57 57 if len(dirpath) == 2:
58 58 oids.extend([dirpath + f for f in files
59 59 if _lfsre.match(dirpath + f)])
60 60
61 61 yield ('', [], oids)
62 62
63 63 class nullvfs(lfsvfs):
64 64 def __init__(self):
65 65 pass
66 66
67 67 def exists(self, oid):
68 68 return False
69 69
70 70 def read(self, oid):
71 71 # store.read() calls into here if the blob doesn't exist in its
72 72 # self.vfs. Raise the same error as a normal vfs when asked to read a
73 73 # file that doesn't exist. The only difference is the full file path
74 74 # isn't available in the error.
75 75 raise IOError(errno.ENOENT, '%s: No such file or directory' % oid)
76 76
77 77 def walk(self, path=None, onerror=None):
78 78 return ('', [], [])
79 79
80 80 def write(self, oid, data):
81 81 pass
82 82
83 83 class filewithprogress(object):
84 84 """a file-like object that supports __len__ and read.
85 85
86 86 Useful for reporting progress on how many bytes have been read.
87 87 """
88 88
89 89 def __init__(self, fp, callback):
90 90 self._fp = fp
91 91 self._callback = callback # func(readsize)
92 92 fp.seek(0, os.SEEK_END)
93 93 self._len = fp.tell()
94 94 fp.seek(0)
95 95
96 96 def __len__(self):
97 97 return self._len
98 98
99 99 def read(self, size):
100 100 if self._fp is None:
101 101 return b''
102 102 data = self._fp.read(size)
103 103 if data:
104 104 if self._callback:
105 105 self._callback(len(data))
106 106 else:
107 107 self._fp.close()
108 108 self._fp = None
109 109 return data
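A small usage sketch run against the class above (a BytesIO stand-in for a blob file; the callback simply prints each chunk size):

    import io

    fp = filewithprogress(io.BytesIO(b'hello'), lambda n: print('read', n))
    print(len(fp))       # 5
    while fp.read(2):    # reports 2, 2, 1 via the callback, then closes
        pass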
110 110
111 111 class local(object):
112 112 """Local blobstore for large file contents.
113 113
114 114 This blobstore is used both as a cache and as a staging area for large blobs
115 115 to be uploaded to the remote blobstore.
116 116 """
117 117
118 118 def __init__(self, repo):
119 119 fullpath = repo.svfs.join('lfs/objects')
120 120 self.vfs = lfsvfs(fullpath)
121 usercache = util.url(lfutil._usercachedir(repo.ui, 'lfs'))
122 if usercache.scheme in (None, 'file'):
123 self.cachevfs = lfsvfs(usercache.localpath())
124 elif usercache.scheme == 'null':
121
122 if repo.ui.configbool('experimental', 'lfs.disableusercache'):
125 123 self.cachevfs = nullvfs()
126 124 else:
127 raise error.Abort(_('unknown lfs cache scheme: %s')
128 % usercache.scheme)
125 usercache = lfutil._usercachedir(repo.ui, 'lfs')
126 self.cachevfs = lfsvfs(usercache)
129 127 self.ui = repo.ui
130 128
131 129 def open(self, oid):
132 130 """Open a read-only file descriptor to the named blob, in either the
133 131 usercache or the local store."""
134 132 # The usercache is the most likely place to hold the file. Commit will
135 133 # write to both it and the local store, as will anything that downloads
136 134 # the blobs. However, things like clone without an update won't
137 135 # populate the local store. For an init + push of a local clone,
138 136 # the usercache is the only place it _could_ be. If not present, the
139 137 # missing file message here will indicate the local repo, not the usercache.
140 138 if self.cachevfs.exists(oid):
141 139 return self.cachevfs(oid, 'rb')
142 140
143 141 return self.vfs(oid, 'rb')
144 142
145 143 def download(self, oid, src):
146 144 """Read the blob from the remote source in chunks, verify the content,
147 145 and write to this local blobstore."""
148 146 sha256 = hashlib.sha256()
149 147
150 148 with self.vfs(oid, 'wb', atomictemp=True) as fp:
151 149 for chunk in util.filechunkiter(src, size=1048576):
152 150 fp.write(chunk)
153 151 sha256.update(chunk)
154 152
155 153 realoid = sha256.hexdigest()
156 154 if realoid != oid:
157 155 raise error.Abort(_('corrupt remote lfs object: %s') % oid)
158 156
159 157 self._linktousercache(oid)
160 158
161 159 def write(self, oid, data):
162 160 """Write blob to local blobstore.
163 161
164 162 This should only be called from the filelog during a commit or similar.
165 163 As such, there is no need to verify the data. Imports from a remote
166 164 store must use ``download()`` instead."""
167 165 with self.vfs(oid, 'wb', atomictemp=True) as fp:
168 166 fp.write(data)
169 167
170 168 self._linktousercache(oid)
171 169
172 170 def _linktousercache(self, oid):
173 171 # XXX: should we verify the content of the cache, and hardlink back to
174 172 # the local store on success, but truncate, write and link on failure?
175 173 if (not self.cachevfs.exists(oid)
176 174 and not isinstance(self.cachevfs, nullvfs)):
177 175 self.ui.note(_('lfs: adding %s to the usercache\n') % oid)
178 176 lfutil.link(self.vfs.join(oid), self.cachevfs.join(oid))
179 177
180 178 def read(self, oid, verify=True):
181 179 """Read blob from local blobstore."""
182 180 if not self.vfs.exists(oid):
183 181 blob = self._read(self.cachevfs, oid, verify)
184 182
185 183 # Even though revlog will verify the content, it needs to be verified
186 184 # now, before making the hardlink, to avoid propagating corrupt blobs.
187 185 # Don't abort if corruption is detected, because `hg verify` will
188 186 # give more useful info about the corruption; simply don't add the
189 187 # hardlink.
190 188 if verify or hashlib.sha256(blob).hexdigest() == oid:
191 189 self.ui.note(_('lfs: found %s in the usercache\n') % oid)
192 190 lfutil.link(self.cachevfs.join(oid), self.vfs.join(oid))
193 191 else:
194 192 self.ui.note(_('lfs: found %s in the local lfs store\n') % oid)
195 193 blob = self._read(self.vfs, oid, verify)
196 194 return blob
197 195
198 196 def _read(self, vfs, oid, verify):
199 197 """Read blob (after verifying) from the given store"""
200 198 blob = vfs.read(oid)
201 199 if verify:
202 200 _verify(oid, blob)
203 201 return blob
204 202
205 203 def verify(self, oid):
206 204 """Indicate whether or not the hash of the underlying file matches its
207 205 name."""
208 206 sha256 = hashlib.sha256()
209 207
210 208 with self.open(oid) as fp:
211 209 for chunk in util.filechunkiter(fp, size=1048576):
212 210 sha256.update(chunk)
213 211
214 212 return oid == sha256.hexdigest()
215 213
216 214 def has(self, oid):
217 215 """Returns True if the local blobstore contains the requested blob,
218 216 False otherwise."""
219 217 return self.cachevfs.exists(oid) or self.vfs.exists(oid)
220 218
221 219 class _gitlfsremote(object):
222 220
223 221 def __init__(self, repo, url):
224 222 ui = repo.ui
225 223 self.ui = ui
226 224 baseurl, authinfo = url.authinfo()
227 225 self.baseurl = baseurl.rstrip('/')
228 226 useragent = repo.ui.config('experimental', 'lfs.user-agent')
229 227 if not useragent:
230 228 useragent = 'git-lfs/2.3.4 (Mercurial %s)' % util.version()
231 229 self.urlopener = urlmod.opener(ui, authinfo, useragent)
232 230 self.retry = ui.configint('lfs', 'retry')
233 231
234 232 def writebatch(self, pointers, fromstore):
235 233 """Batch upload from local to remote blobstore."""
236 234 self._batch(_deduplicate(pointers), fromstore, 'upload')
237 235
238 236 def readbatch(self, pointers, tostore):
239 237 """Batch download from remote to local blostore."""
240 238 self._batch(_deduplicate(pointers), tostore, 'download')
241 239
242 240 def _batchrequest(self, pointers, action):
243 241 """Get metadata about objects pointed by pointers for given action
244 242
245 243 Return decoded JSON object like {'objects': [{'oid': '', 'size': 1}]}
246 244 See https://github.com/git-lfs/git-lfs/blob/master/docs/api/batch.md
247 245 """
248 246 objects = [{'oid': p.oid(), 'size': p.size()} for p in pointers]
249 247 requestdata = json.dumps({
250 248 'objects': objects,
251 249 'operation': action,
252 250 })
253 251 batchreq = util.urlreq.request('%s/objects/batch' % self.baseurl,
254 252 data=requestdata)
255 253 batchreq.add_header('Accept', 'application/vnd.git-lfs+json')
256 254 batchreq.add_header('Content-Type', 'application/vnd.git-lfs+json')
257 255 try:
258 256 rsp = self.urlopener.open(batchreq)
259 257 rawjson = rsp.read()
260 258 except util.urlerr.httperror as ex:
261 259 raise LfsRemoteError(_('LFS HTTP error: %s (action=%s)')
262 260 % (ex, action))
263 261 try:
264 262 response = json.loads(rawjson)
265 263 except ValueError:
266 264 raise LfsRemoteError(_('LFS server returns invalid JSON: %s')
267 265 % rawjson)
268 266
269 267 if self.ui.debugflag:
270 268 self.ui.debug('Status: %d\n' % rsp.status)
271 269 # lfs-test-server and hg serve return headers in different order
272 270 self.ui.debug('%s\n'
273 271 % '\n'.join(sorted(str(rsp.info()).splitlines())))
274 272
275 273 if 'objects' in response:
276 274 response['objects'] = sorted(response['objects'],
277 275 key=lambda p: p['oid'])
278 276 self.ui.debug('%s\n'
279 277 % json.dumps(response, indent=2,
280 278 separators=('', ': '), sort_keys=True))
281 279
282 280 return response
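For reference, a sketch of the JSON shapes exchanged here (field layout per the Git-LFS batch API document linked above; the oid, size, and href values are made up):

    import json

    request = {
        'operation': 'download',
        'objects': [{'oid': 'ab' * 32, 'size': 12}],
    }
    # A server grants per-object actions, e.g.:
    response = {
        'objects': [{
            'oid': 'ab' * 32,
            'size': 12,
            'actions': {
                'download': {'href': 'https://example.com/objects/' + 'ab' * 32},
            },
        }],
    }
    print(json.dumps(request, sort_keys=True))
    print(json.dumps(response, sort_keys=True))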
283 281
284 282 def _checkforservererror(self, pointers, responses, action):
285 283 """Scans errors from objects
286 284
287 285 Raises LfsRemoteError if any objects have an error"""
288 286 for response in responses:
289 287 # The server should return 404 when objects cannot be found. Some
290 288 # server implementations (e.g. lfs-test-server) do not set "error"
291 289 # but just remove "download" from "actions". Treat that case the
292 290 # same as a 404 error.
293 291 if 'error' not in response:
294 292 if (action == 'download'
295 293 and action not in response.get('actions', [])):
296 294 code = 404
297 295 else:
298 296 continue
299 297 else:
300 298 # An error dict without a code doesn't make much sense, so
301 299 # treat as a server error.
302 300 code = response.get('error').get('code', 500)
303 301
304 302 ptrmap = {p.oid(): p for p in pointers}
305 303 p = ptrmap.get(response['oid'], None)
306 304 if p:
307 305 filename = getattr(p, 'filename', 'unknown')
308 306 errors = {
309 307 404: 'The object does not exist',
310 308 410: 'The object was removed by the owner',
311 309 422: 'Validation error',
312 310 500: 'Internal server error',
313 311 }
314 312 msg = errors.get(code, 'status code %d' % code)
315 313 raise LfsRemoteError(_('LFS server error for "%s": %s')
316 314 % (filename, msg))
317 315 else:
318 316 raise LfsRemoteError(
319 317 _('LFS server error. Unsolicited response for oid %s')
320 318 % response['oid'])
321 319
322 320 def _extractobjects(self, response, pointers, action):
323 321 """extract objects from response of the batch API
324 322
325 323 response: parsed JSON object returned by batch API
326 324 return response['objects'] filtered by action
327 325 raise if any object has an error
328 326 """
329 327 # Scan errors from objects - fail early
330 328 objects = response.get('objects', [])
331 329 self._checkforservererror(pointers, objects, action)
332 330
333 331 # Filter objects with the given action. Practically, this skips uploading
334 332 # objects that already exist on the server.
335 333 filteredobjects = [o for o in objects if action in o.get('actions', [])]
336 334
337 335 return filteredobjects
338 336
339 337 def _basictransfer(self, obj, action, localstore):
340 338 """Download or upload a single object using basic transfer protocol
341 339
342 340 obj: dict, an object description returned by batch API
343 341 action: string, one of ['upload', 'download']
344 342 localstore: blobstore.local
345 343
346 344 See https://github.com/git-lfs/git-lfs/blob/master/docs/api/\
347 345 basic-transfers.md
348 346 """
349 347 oid = pycompat.bytestr(obj['oid'])
350 348
351 349 href = pycompat.bytestr(obj['actions'][action].get('href'))
352 350 headers = obj['actions'][action].get('header', {}).items()
353 351
354 352 request = util.urlreq.request(href)
355 353 if action == 'upload':
356 354 # If uploading blobs, read data from local blobstore.
357 355 if not localstore.verify(oid):
358 356 raise error.Abort(_('detected corrupt lfs object: %s') % oid,
359 357 hint=_('run hg verify'))
360 358 request.data = filewithprogress(localstore.open(oid), None)
361 359 request.get_method = lambda: 'PUT'
362 360 request.add_header('Content-Type', 'application/octet-stream')
363 361
364 362 for k, v in headers:
365 363 request.add_header(k, v)
366 364
367 365 response = b''
368 366 try:
369 367 req = self.urlopener.open(request)
370 368
371 369 if self.ui.debugflag:
372 370 self.ui.debug('Status: %d\n' % req.status)
373 371 # lfs-test-server and hg serve return headers in different order
374 372 self.ui.debug('%s\n'
375 373 % '\n'.join(sorted(str(req.info()).splitlines())))
376 374
377 375 if action == 'download':
378 376 # If downloading blobs, store downloaded data to local blobstore
379 377 localstore.download(oid, req)
380 378 else:
381 379 while True:
382 380 data = req.read(1048576)
383 381 if not data:
384 382 break
385 383 response += data
386 384 if response:
387 385 self.ui.debug('lfs %s response: %s' % (action, response))
388 386 except util.urlerr.httperror as ex:
389 387 if self.ui.debugflag:
390 388 self.ui.debug('%s: %s\n' % (oid, ex.read()))
391 389 raise LfsRemoteError(_('HTTP error: %s (oid=%s, action=%s)')
392 390 % (ex, oid, action))
393 391
394 392 def _batch(self, pointers, localstore, action):
395 393 if action not in ['upload', 'download']:
396 394 raise error.ProgrammingError('invalid Git-LFS action: %s' % action)
397 395
398 396 response = self._batchrequest(pointers, action)
399 397 objects = self._extractobjects(response, pointers, action)
400 398 total = sum(x.get('size', 0) for x in objects)
401 399 sizes = {}
402 400 for obj in objects:
403 401 sizes[obj.get('oid')] = obj.get('size', 0)
404 402 topic = {'upload': _('lfs uploading'),
405 403 'download': _('lfs downloading')}[action]
406 404 if len(objects) > 1:
407 405 self.ui.note(_('lfs: need to transfer %d objects (%s)\n')
408 406 % (len(objects), util.bytecount(total)))
409 407 self.ui.progress(topic, 0, total=total)
410 408 def transfer(chunk):
411 409 for obj in chunk:
412 410 objsize = obj.get('size', 0)
413 411 if self.ui.verbose:
414 412 if action == 'download':
415 413 msg = _('lfs: downloading %s (%s)\n')
416 414 elif action == 'upload':
417 415 msg = _('lfs: uploading %s (%s)\n')
418 416 self.ui.note(msg % (obj.get('oid'),
419 417 util.bytecount(objsize)))
420 418 retry = self.retry
421 419 while True:
422 420 try:
423 421 self._basictransfer(obj, action, localstore)
424 422 yield 1, obj.get('oid')
425 423 break
426 424 except socket.error as ex:
427 425 if retry > 0:
428 426 self.ui.note(
429 427 _('lfs: failed: %r (remaining retry %d)\n')
430 428 % (ex, retry))
431 429 retry -= 1
432 430 continue
433 431 raise
434 432
435 433 # Until https multiplexing gets sorted out
436 434 if self.ui.configbool('experimental', 'lfs.worker-enable'):
437 435 oids = worker.worker(self.ui, 0.1, transfer, (),
438 436 sorted(objects, key=lambda o: o.get('oid')))
439 437 else:
440 438 oids = transfer(sorted(objects, key=lambda o: o.get('oid')))
441 439
442 440 processed = 0
443 441 blobs = 0
444 442 for _one, oid in oids:
445 443 processed += sizes[oid]
446 444 blobs += 1
447 445 self.ui.progress(topic, processed, total=total)
448 446 self.ui.note(_('lfs: processed: %s\n') % oid)
449 447 self.ui.progress(topic, pos=None, total=total)
450 448
451 449 if blobs > 0:
452 450 if action == 'upload':
453 451 self.ui.status(_('lfs: uploaded %d files (%s)\n')
454 452 % (blobs, util.bytecount(processed)))
455 453 # TODO: coalesce the download requests, and comment this in
456 454 #elif action == 'download':
457 455 # self.ui.status(_('lfs: downloaded %d files (%s)\n')
458 456 # % (blobs, util.bytecount(processed)))
459 457
460 458 def __del__(self):
461 459 # copied from mercurial/httppeer.py
462 460 urlopener = getattr(self, 'urlopener', None)
463 461 if urlopener:
464 462 for h in urlopener.handlers:
465 463 h.close()
466 464 getattr(h, "close_all", lambda : None)()
467 465
468 466 class _dummyremote(object):
469 467 """Dummy store storing blobs to temp directory."""
470 468
471 469 def __init__(self, repo, url):
472 470 fullpath = repo.vfs.join('lfs', url.path)
473 471 self.vfs = lfsvfs(fullpath)
474 472
475 473 def writebatch(self, pointers, fromstore):
476 474 for p in _deduplicate(pointers):
477 475 content = fromstore.read(p.oid(), verify=True)
478 476 with self.vfs(p.oid(), 'wb', atomictemp=True) as fp:
479 477 fp.write(content)
480 478
481 479 def readbatch(self, pointers, tostore):
482 480 for p in _deduplicate(pointers):
483 481 with self.vfs(p.oid(), 'rb') as fp:
484 482 tostore.download(p.oid(), fp)
485 483
486 484 class _nullremote(object):
487 485 """Null store storing blobs to /dev/null."""
488 486
489 487 def __init__(self, repo, url):
490 488 pass
491 489
492 490 def writebatch(self, pointers, fromstore):
493 491 pass
494 492
495 493 def readbatch(self, pointers, tostore):
496 494 pass
497 495
498 496 class _promptremote(object):
499 497 """Prompt user to set lfs.url when accessed."""
500 498
501 499 def __init__(self, repo, url):
502 500 pass
503 501
504 502 def writebatch(self, pointers, fromstore, ui=None):
505 503 self._prompt()
506 504
507 505 def readbatch(self, pointers, tostore, ui=None):
508 506 self._prompt()
509 507
510 508 def _prompt(self):
511 509 raise error.Abort(_('lfs.url needs to be configured'))
512 510
513 511 _storemap = {
514 512 'https': _gitlfsremote,
515 513 'http': _gitlfsremote,
516 514 'file': _dummyremote,
517 515 'null': _nullremote,
518 516 None: _promptremote,
519 517 }
520 518
521 519 def _deduplicate(pointers):
522 520 """Remove any duplicate oids that exist in the list"""
523 521 reduced = util.sortdict()
524 522 for p in pointers:
525 523 reduced[p.oid()] = p
526 524 return reduced.values()
527 525
528 526 def _verify(oid, content):
529 527 realoid = hashlib.sha256(content).hexdigest()
530 528 if realoid != oid:
531 529 raise error.Abort(_('detected corrupt lfs object: %s') % oid,
532 530 hint=_('run hg verify'))
533 531
534 532 def remote(repo):
535 533 """remotestore factory. return a store in _storemap depending on config
536 534
537 535 If ``lfs.url`` is specified, use that remote endpoint. Otherwise, try to
538 536 infer the endpoint from the remote repository, using the same path
539 537 adjustments as git. As an extension, 'http' is supported as well so that
540 538 ``hg serve`` works out of the box.
541 539
542 540 https://github.com/git-lfs/git-lfs/blob/master/docs/api/server-discovery.md
543 541 """
544 542 url = util.url(repo.ui.config('lfs', 'url') or '')
545 543 if url.scheme is None:
546 544 # TODO: investigate 'paths.remote:lfsurl' style path customization,
547 545 # and fall back to inferring from 'paths.remote' if unspecified.
548 546 defaulturl = util.url(repo.ui.config('paths', 'default') or b'')
549 547
550 548 # TODO: support local paths as well.
551 549 # TODO: consider the ssh -> https transformation that git applies
552 550 if defaulturl.scheme in (b'http', b'https'):
553 551 defaulturl.path = (defaulturl.path or b'') + b'.git/info/lfs'
554 552
555 553 url = util.url(bytes(defaulturl))
556 554 repo.ui.note(_('lfs: assuming remote store: %s\n') % url)
557 555
558 556 scheme = url.scheme
559 557 if scheme not in _storemap:
560 558 raise error.Abort(_('lfs: unknown url scheme: %s') % scheme)
561 559 return _storemap[scheme](repo, url)
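A standalone model of the discovery above (hypothetical helper name; it ignores authentication and the ssh-to-https transformation noted in the TODO):

    def infer_lfs_endpoint(lfs_url, default_path):
        # An explicit lfs.url always wins.
        if lfs_url:
            return lfs_url
        # Otherwise apply the same suffix git uses for server discovery.
        if default_path.startswith(('http://', 'https://')):
            return default_path + '.git/info/lfs'
        # No usable scheme: remote() falls back to the prompting store.
        return None

    assert (infer_lfs_endpoint('', 'https://example.com/repo')
            == 'https://example.com/repo.git/info/lfs')
    assert infer_lfs_endpoint('file:///tmp/path', '') == 'file:///tmp/path'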
562 560
563 561 class LfsRemoteError(error.RevlogError):
564 562 pass
@@ -1,283 +1,284
1 1 #testcases lfsremote-on lfsremote-off
2 2 #require serve no-reposimplestore
3 3
4 4 This test splits `hg serve` with and without using the extension into separate
5 5 test cases. The tests are broken down as follows, where "LFS"/"No-LFS"
6 6 indicates whether or not there are commits that use an LFS file, and "D"/"E"
7 7 indicates whether or not the extension is loaded. The "X" cases are not tested
8 8 individually, because the lfs requirement causes the process to bail early if
9 9 the extension is disabled.
10 10
11 11 . Server
12 12 .
13 13 . No-LFS LFS
14 14 . +----------------------------+
15 15 . | || D | E | D | E |
16 16 . |---++=======================|
17 17 . C | D || N/A | #1 | X | #4 |
18 18 . l No +---++-----------------------|
19 19 . i LFS | E || #2 | #2 | X | #5 |
20 20 . e +---++-----------------------|
21 21 . n | D || X | X | X | X |
22 22 . t LFS |---++-----------------------|
23 23 . | E || #3 | #3 | X | #6 |
24 24 . |---++-----------------------+
25 25
26 26 $ hg init server
27 27 $ SERVER_REQUIRES="$TESTTMP/server/.hg/requires"
28 28
29 29 Skip the experimental.changegroup3=True config. Failure to agree on this comes
30 30 first, and causes a "ValueError: no common changegroup version" or "abort:
31 31 HTTP Error 500: Internal Server Error", if the extension is only loaded on one
32 32 side. If that *is* enabled, the subsequent failure is "abort: missing processor
33 33 for flag '0x2000'!" if the extension is only loaded on one side (possibly also
34 34 masked by the Internal Server Error message).
35 35 $ cat >> $HGRCPATH <<EOF
36 > [experimental]
37 > lfs.disableusercache = True
36 38 > [lfs]
37 > usercache = null://
38 39 > threshold=10
39 40 > [web]
40 41 > allow_push=*
41 42 > push_ssl=False
42 43 > EOF
43 44
44 45 #if lfsremote-on
45 46 $ hg --config extensions.lfs= -R server \
46 47 > serve -p $HGPORT -d --pid-file=hg.pid --errorlog=$TESTTMP/errors.log
47 48 #else
48 49 $ hg --config extensions.lfs=! -R server \
49 50 > serve -p $HGPORT -d --pid-file=hg.pid --errorlog=$TESTTMP/errors.log
50 51 #endif
51 52
52 53 $ cat hg.pid >> $DAEMON_PIDS
53 54 $ hg clone -q http://localhost:$HGPORT client
54 55 $ grep 'lfs' client/.hg/requires $SERVER_REQUIRES
55 56 [1]
56 57
57 58 --------------------------------------------------------------------------------
58 59 Case #1: client with non-lfs content and the extension disabled; server with
59 60 non-lfs content, and the extension enabled.
60 61
61 62 $ cd client
62 63 $ echo 'non-lfs' > nonlfs.txt
63 64 $ hg ci -Aqm 'non-lfs'
64 65 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
65 66 [1]
66 67
67 68 #if lfsremote-on
68 69
69 70 $ hg push -q
70 71 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
71 72 [1]
72 73
73 74 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client1_clone
74 75 $ grep 'lfs' $TESTTMP/client1_clone/.hg/requires $SERVER_REQUIRES
75 76 [1]
76 77
77 78 $ hg init $TESTTMP/client1_pull
78 79 $ hg -R $TESTTMP/client1_pull pull -q http://localhost:$HGPORT
79 80 $ grep 'lfs' $TESTTMP/client1_pull/.hg/requires $SERVER_REQUIRES
80 81 [1]
81 82
82 83 $ hg identify http://localhost:$HGPORT
83 84 d437e1d24fbd
84 85
85 86 #endif
86 87
87 88 --------------------------------------------------------------------------------
88 89 Case #2: client with non-lfs content and the extension enabled; server with
89 90 non-lfs content, and the extension state controlled by #testcases.
90 91
91 92 $ cat >> $HGRCPATH <<EOF
92 93 > [extensions]
93 94 > lfs =
94 95 > EOF
95 96 $ echo 'non-lfs' > nonlfs2.txt
96 97 $ hg ci -Aqm 'non-lfs file with lfs client'
97 98
98 99 Since no lfs content has been added yet, the push is allowed, even when the
99 100 extension is not enabled remotely.
100 101
101 102 $ hg push -q
102 103 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
103 104 [1]
104 105
105 106 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client2_clone
106 107 $ grep 'lfs' $TESTTMP/client2_clone/.hg/requires $SERVER_REQUIRES
107 108 [1]
108 109
109 110 $ hg init $TESTTMP/client2_pull
110 111 $ hg -R $TESTTMP/client2_pull pull -q http://localhost:$HGPORT
111 112 $ grep 'lfs' $TESTTMP/client2_pull/.hg/requires $SERVER_REQUIRES
112 113 [1]
113 114
114 115 $ hg identify http://localhost:$HGPORT
115 116 1477875038c6
116 117
117 118 --------------------------------------------------------------------------------
118 119 Case #3: client with lfs content and the extension enabled; server with
119 120 non-lfs content, and the extension state controlled by #testcases. The server
120 121 should have an 'lfs' requirement after it picks up its first commit with a blob.
121 122
122 123 $ echo 'this is a big lfs file' > lfs.bin
123 124 $ hg ci -Aqm 'lfs'
124 125 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
125 126 .hg/requires:lfs
126 127
127 128 #if lfsremote-off
128 129 $ hg push -q
129 130 abort: required features are not supported in the destination: lfs
130 131 (enable the lfs extension on the server)
131 132 [255]
132 133 #else
133 134 $ hg push -q
134 135 #endif
135 136 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
136 137 .hg/requires:lfs
137 138 $TESTTMP/server/.hg/requires:lfs (lfsremote-on !)
138 139
139 140 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client3_clone
140 141 $ grep 'lfs' $TESTTMP/client3_clone/.hg/requires $SERVER_REQUIRES || true
141 142 $TESTTMP/client3_clone/.hg/requires:lfs (lfsremote-on !)
142 143 $TESTTMP/server/.hg/requires:lfs (lfsremote-on !)
143 144
144 145 $ hg init $TESTTMP/client3_pull
145 146 $ hg -R $TESTTMP/client3_pull pull -q http://localhost:$HGPORT
146 147 $ grep 'lfs' $TESTTMP/client3_pull/.hg/requires $SERVER_REQUIRES || true
147 148 $TESTTMP/client3_pull/.hg/requires:lfs (lfsremote-on !)
148 149 $TESTTMP/server/.hg/requires:lfs (lfsremote-on !)
149 150
150 151 The difference here is the push failed above when the extension isn't
151 152 enabled on the server.
152 153 $ hg identify http://localhost:$HGPORT
153 154 8374dc4052cb (lfsremote-on !)
154 155 1477875038c6 (lfsremote-off !)
155 156
156 157 Don't bother testing the lfsremote-off cases: the server won't be able
157 158 to launch if there's lfs content and the extension is disabled.
158 159
159 160 #if lfsremote-on
160 161
161 162 --------------------------------------------------------------------------------
162 163 Case #4: client with non-lfs content and the extension disabled; server with
163 164 lfs content, and the extension enabled.
164 165
165 166 $ cat >> $HGRCPATH <<EOF
166 167 > [extensions]
167 168 > lfs = !
168 169 > EOF
169 170
170 171 $ hg init $TESTTMP/client4
171 172 $ cd $TESTTMP/client4
172 173 $ cat >> .hg/hgrc <<EOF
173 174 > [paths]
174 175 > default = http://localhost:$HGPORT
175 176 > EOF
176 177 $ echo 'non-lfs' > nonlfs2.txt
177 178 $ hg ci -Aqm 'non-lfs'
178 179 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
179 180 $TESTTMP/server/.hg/requires:lfs
180 181
181 182 $ hg push -q --force
182 183 warning: repository is unrelated
183 184 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
184 185 $TESTTMP/server/.hg/requires:lfs
185 186
186 187 TODO: fail more gracefully.
187 188
188 189 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client4_clone
189 190 abort: HTTP Error 500: Internal Server Error
190 191 [255]
191 192 $ grep 'lfs' $TESTTMP/client4_clone/.hg/requires $SERVER_REQUIRES
192 193 grep: $TESTTMP/client4_clone/.hg/requires: $ENOENT$
193 194 $TESTTMP/server/.hg/requires:lfs
194 195 [2]
195 196
196 197 TODO: fail more gracefully.
197 198
198 199 $ hg init $TESTTMP/client4_pull
199 200 $ hg -R $TESTTMP/client4_pull pull -q http://localhost:$HGPORT
200 201 abort: HTTP Error 500: Internal Server Error
201 202 [255]
202 203 $ grep 'lfs' $TESTTMP/client4_pull/.hg/requires $SERVER_REQUIRES
203 204 $TESTTMP/server/.hg/requires:lfs
204 205
205 206 $ hg identify http://localhost:$HGPORT
206 207 03b080fa9d93
207 208
208 209 --------------------------------------------------------------------------------
209 210 Case #5: client with non-lfs content and the extension enabled; server with
210 211 lfs content, and the extension enabled.
211 212
212 213 $ cat >> $HGRCPATH <<EOF
213 214 > [extensions]
214 215 > lfs =
215 216 > EOF
216 217 $ echo 'non-lfs' > nonlfs3.txt
217 218 $ hg ci -Aqm 'non-lfs file with lfs client'
218 219
219 220 $ hg push -q
220 221 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
221 222 $TESTTMP/server/.hg/requires:lfs
222 223
223 224 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client5_clone
224 225 $ grep 'lfs' $TESTTMP/client5_clone/.hg/requires $SERVER_REQUIRES
225 226 $TESTTMP/client5_clone/.hg/requires:lfs
226 227 $TESTTMP/server/.hg/requires:lfs
227 228
228 229 $ hg init $TESTTMP/client5_pull
229 230 $ hg -R $TESTTMP/client5_pull pull -q http://localhost:$HGPORT
230 231 $ grep 'lfs' $TESTTMP/client5_pull/.hg/requires $SERVER_REQUIRES
231 232 $TESTTMP/client5_pull/.hg/requires:lfs
232 233 $TESTTMP/server/.hg/requires:lfs
233 234
234 235 $ hg identify http://localhost:$HGPORT
235 236 c729025cc5e3
236 237
237 238 --------------------------------------------------------------------------------
238 239 Case #6: client with lfs content and the extension enabled; server with
239 240 lfs content, and the extension enabled.
240 241
241 242 $ echo 'this is another lfs file' > lfs2.txt
242 243 $ hg ci -Aqm 'lfs file with lfs client'
243 244
244 245 $ hg push -q
245 246 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
246 247 .hg/requires:lfs
247 248 $TESTTMP/server/.hg/requires:lfs
248 249
249 250 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client6_clone
250 251 $ grep 'lfs' $TESTTMP/client6_clone/.hg/requires $SERVER_REQUIRES
251 252 $TESTTMP/client6_clone/.hg/requires:lfs
252 253 $TESTTMP/server/.hg/requires:lfs
253 254
254 255 $ hg init $TESTTMP/client6_pull
255 256 $ hg -R $TESTTMP/client6_pull pull -q http://localhost:$HGPORT
256 257 $ grep 'lfs' $TESTTMP/client6_pull/.hg/requires $SERVER_REQUIRES
257 258 $TESTTMP/client6_pull/.hg/requires:lfs
258 259 $TESTTMP/server/.hg/requires:lfs
259 260
260 261 $ hg identify http://localhost:$HGPORT
261 262 d3b84d50eacb
262 263
263 264 --------------------------------------------------------------------------------
264 265 Misc: process dies early if a requirement exists and the extension is disabled
265 266
266 267 $ hg --config extensions.lfs=! summary
267 268 abort: repository requires features unknown to this Mercurial: lfs!
268 269 (see https://mercurial-scm.org/wiki/MissingRequirement for more information)
269 270 [255]
270 271
271 272 #endif
272 273
273 274 $ $PYTHON $TESTDIR/killdaemons.py $DAEMON_PIDS
274 275
275 276 #if lfsremote-on
276 277 $ cat $TESTTMP/errors.log | grep '^[A-Z]'
277 278 Traceback (most recent call last):
278 279 ValueError: no common changegroup version
279 280 Traceback (most recent call last):
280 281 ValueError: no common changegroup version
281 282 #else
282 283 $ cat $TESTTMP/errors.log
283 284 #endif