lfs: infer the blob store URL from paths.default...
Matt Harbison
r37536:092eff68 default
hgext/lfs/__init__.py
@@ -1,394 +1,396
1 1 # lfs - hash-preserving large file support using Git-LFS protocol
2 2 #
3 3 # Copyright 2017 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 """lfs - large file support (EXPERIMENTAL)
9 9
10 10 This extension allows large files to be tracked outside of the normal
11 11 repository storage and stored on a centralized server, similar to the
12 12 ``largefiles`` extension. The ``git-lfs`` protocol is used when
13 13 communicating with the server, so existing git infrastructure can be
14 14 harnessed. Even though the files are stored outside of the repository,
15 15 they are still integrity checked in the same manner as normal files.
16 16
17 17 The files stored outside of the repository are downloaded on demand,
18 18 which reduces the time to clone, and possibly the local disk usage.
19 19 This changes fundamental workflows in a DVCS, so careful thought
20 20 should be given before deploying it. :hg:`convert` can be used to
21 21 convert LFS repositories to normal repositories that no longer
22 22 require this extension, and do so without changing the commit hashes.
23 23 This allows the extension to be disabled if the centralized workflow
24 24 becomes burdensome. However, the pre- and post-convert clones will
25 25 not be able to communicate with each other unless the extension is
26 26 enabled on both.
27 27
28 28 To start a new repository, or to add LFS files to an existing one, just
29 29 create an ``.hglfs`` file as described below in the root directory of
30 30 the repository. Typically, this file should be put under version
31 31 control, so that the settings will propagate to other repositories with
32 32 push and pull. During any commit, Mercurial will consult this file to
33 33 determine if an added or modified file should be stored externally. The
34 34 type of storage depends on the characteristics of the file at each
35 35 commit. A file that is near a size threshold may switch back and forth
36 36 between LFS and normal storage, as needed.
37 37
38 38 Alternatively, both normal repositories and largefiles-controlled
39 39 repositories can be converted to LFS by using :hg:`convert` and the
40 40 ``lfs.track`` config option described below. The ``.hglfs`` file
41 41 should then be created and added, to control subsequent LFS selection.
42 42 The hashes are also unchanged in this case. The LFS and non-LFS
43 43 repositories can be distinguished because the LFS repository will
44 44 abort any command if this extension is disabled.
45 45
46 46 Committed LFS files are held locally, until the repository is pushed.
47 47 Prior to pushing the normal repository data, the LFS files that are
48 48 tracked by the outgoing commits are automatically uploaded to the
49 49 configured central server. No LFS files are transferred on
50 50 :hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
51 51 demand as they need to be read, if a cached copy cannot be found
52 52 locally. Both committing and downloading an LFS file will link the
53 53 file to a usercache, to speed up future access. See the `usercache`
54 54 config setting described below.
55 55
56 56 .hglfs::
57 57
58 58 The extension reads its configuration from a versioned ``.hglfs``
59 59 configuration file found in the root of the working directory. The
60 60 ``.hglfs`` file uses the same syntax as all other Mercurial
61 61 configuration files. It uses a single section, ``[track]``.
62 62
63 63 The ``[track]`` section specifies which files are stored as LFS (or
64 64 not). Each line is keyed by a file pattern, with a predicate value.
65 65 The first file pattern match is used, so put more specific patterns
66 66 first. The available predicates are ``all()``, ``none()``, and
67 67 ``size()``. See "hg help filesets.size" for the latter.
68 68
69 69 Example versioned ``.hglfs`` file::
70 70
71 71 [track]
72 72 # No Makefile or python file, anywhere, will be LFS
73 73 **Makefile = none()
74 74 **.py = none()
75 75
76 76 **.zip = all()
77 77 **.exe = size(">1MB")
78 78
79 79 # Catchall for everything not matched above
80 80 ** = size(">10MB")
81 81
82 82 Configs::
83 83
84 84 [lfs]
85 85 # Remote endpoint. Multiple protocols are supported:
86 86 # - http(s)://user:pass@example.com/path
87 87 # git-lfs endpoint
88 88 # - file:///tmp/path
89 89 # local filesystem, usually for testing
90 # if unset, lfs will prompt setting this when it must use this value.
90 # if unset, lfs will assume the repository at ``paths.default`` also handles
91 # blob storage for http(s) URLs. Otherwise, lfs will prompt to set this
92 # when it must use this value.
91 93 # (default: unset)
92 94 url = https://example.com/repo.git/info/lfs
93 95
94 96 # Which files to track in LFS. Path tests are "**.extname" for file
95 97 # extensions, and "path:under/some/directory" for path prefix. Both
96 98 # are relative to the repository root.
97 99 # File size can be tested with the "size()" fileset, and tests can be
98 100 # joined with fileset operators. (See "hg help filesets.operators".)
99 101 #
100 102 # Some examples:
101 103 # - all() # everything
102 104 # - none() # nothing
103 105 # - size(">20MB") # larger than 20MB
104 106 # - !**.txt # anything not a *.txt file
105 107 # - **.zip | **.tar.gz | **.7z # some types of compressed files
106 108 # - path:bin # files under "bin" in the project root
107 109 # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
108 110 # | (path:bin & !path:/bin/README) | size(">1GB")
109 111 # (default: none())
110 112 #
111 113 # This is ignored if there is a tracked '.hglfs' file, and this setting
112 114 # will eventually be deprecated and removed.
113 115 track = size(">10M")
114 116
115 117 # how many times to retry before giving up on transferring an object
116 118 retry = 5
117 119
118 120 # the local directory to store lfs files for sharing across local clones.
119 121 # If not set, the cache is located in an OS specific cache location.
120 122 usercache = /path/to/global/cache
121 123 """
122 124
123 125 from __future__ import absolute_import
124 126
125 127 from mercurial.i18n import _
126 128
127 129 from mercurial import (
128 130 bundle2,
129 131 changegroup,
130 132 cmdutil,
131 133 config,
132 134 context,
133 135 error,
134 136 exchange,
135 137 extensions,
136 138 filelog,
137 139 fileset,
138 140 hg,
139 141 localrepo,
140 142 minifileset,
141 143 node,
142 144 pycompat,
143 145 registrar,
144 146 revlog,
145 147 scmutil,
146 148 templateutil,
147 149 upgrade,
148 150 util,
149 151 vfs as vfsmod,
150 152 wireproto,
151 153 wireprotoserver,
152 154 )
153 155
154 156 from . import (
155 157 blobstore,
156 158 wireprotolfsserver,
157 159 wrapper,
158 160 )
159 161
160 162 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
161 163 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
162 164 # be specifying the version(s) of Mercurial they are tested with, or
163 165 # leave the attribute unspecified.
164 166 testedwith = 'ships-with-hg-core'
165 167
166 168 configtable = {}
167 169 configitem = registrar.configitem(configtable)
168 170
169 171 configitem('experimental', 'lfs.serve',
170 172 default=True,
171 173 )
172 174 configitem('experimental', 'lfs.user-agent',
173 175 default=None,
174 176 )
175 177 configitem('experimental', 'lfs.worker-enable',
176 178 default=False,
177 179 )
178 180
179 181 configitem('lfs', 'url',
180 182 default=None,
181 183 )
182 184 configitem('lfs', 'usercache',
183 185 default=None,
184 186 )
185 187 # Deprecated
186 188 configitem('lfs', 'threshold',
187 189 default=None,
188 190 )
189 191 configitem('lfs', 'track',
190 192 default='none()',
191 193 )
192 194 configitem('lfs', 'retry',
193 195 default=5,
194 196 )
195 197
196 198 cmdtable = {}
197 199 command = registrar.command(cmdtable)
198 200
199 201 templatekeyword = registrar.templatekeyword()
200 202 filesetpredicate = registrar.filesetpredicate()
201 203
202 204 def featuresetup(ui, supported):
203 205 # don't die on seeing a repo with the lfs requirement
204 206 supported |= {'lfs'}
205 207
206 208 def uisetup(ui):
207 209 localrepo.featuresetupfuncs.add(featuresetup)
208 210
209 211 def reposetup(ui, repo):
210 212 # Nothing to do with a remote repo
211 213 if not repo.local():
212 214 return
213 215
214 216 repo.svfs.lfslocalblobstore = blobstore.local(repo)
215 217 repo.svfs.lfsremoteblobstore = blobstore.remote(repo)
216 218
217 219 class lfsrepo(repo.__class__):
218 220 @localrepo.unfilteredmethod
219 221 def commitctx(self, ctx, error=False):
220 222 repo.svfs.options['lfstrack'] = _trackedmatcher(self)
221 223 return super(lfsrepo, self).commitctx(ctx, error)
222 224
223 225 repo.__class__ = lfsrepo
224 226
225 227 if 'lfs' not in repo.requirements:
226 228 def checkrequireslfs(ui, repo, **kwargs):
227 229 if 'lfs' not in repo.requirements:
228 230 last = kwargs.get(r'node_last')
229 231 _bin = node.bin
230 232 if last:
231 233 s = repo.set('%n:%n', _bin(kwargs[r'node']), _bin(last))
232 234 else:
233 235 s = repo.set('%n', _bin(kwargs[r'node']))
234 236 match = repo.narrowmatch()
235 237 for ctx in s:
236 238 # TODO: is there a way to just walk the files in the commit?
237 239 if any(ctx[f].islfs() for f in ctx.files()
238 240 if f in ctx and match(f)):
239 241 repo.requirements.add('lfs')
240 242 repo._writerequirements()
241 243 repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
242 244 break
243 245
244 246 ui.setconfig('hooks', 'commit.lfs', checkrequireslfs, 'lfs')
245 247 ui.setconfig('hooks', 'pretxnchangegroup.lfs', checkrequireslfs, 'lfs')
246 248 else:
247 249 repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
248 250
249 251 def _trackedmatcher(repo):
250 252 """Return a function (path, size) -> bool indicating whether or not to
251 253 track a given file with lfs."""
252 254 if not repo.wvfs.exists('.hglfs'):
253 255 # No '.hglfs' in wdir. Fall back to config for now.
254 256 trackspec = repo.ui.config('lfs', 'track')
255 257
256 258 # deprecated config: lfs.threshold
257 259 threshold = repo.ui.configbytes('lfs', 'threshold')
258 260 if threshold:
259 261 fileset.parse(trackspec) # make sure syntax errors are confined
260 262 trackspec = "(%s) | size('>%d')" % (trackspec, threshold)
261 263
262 264 return minifileset.compile(trackspec)
263 265
264 266 data = repo.wvfs.tryread('.hglfs')
265 267 if not data:
266 268 return lambda p, s: False
267 269
268 270 # Parse errors here will abort with a message that points to the .hglfs file
269 271 # and line number.
270 272 cfg = config.config()
271 273 cfg.parse('.hglfs', data)
272 274
273 275 try:
274 276 rules = [(minifileset.compile(pattern), minifileset.compile(rule))
275 277 for pattern, rule in cfg.items('track')]
276 278 except error.ParseError as e:
277 279 # The original exception gives no indicator that the error is in the
278 280 # .hglfs file, so add that.
279 281
280 282 # TODO: See if the line number of the file can be made available.
281 283 raise error.Abort(_('parse error in .hglfs: %s') % e)
282 284
283 285 def _match(path, size):
284 286 for pat, rule in rules:
285 287 if pat(path, size):
286 288 return rule(path, size)
287 289
288 290 return False
289 291
290 292 return _match
291 293
292 294 def wrapfilelog(filelog):
293 295 wrapfunction = extensions.wrapfunction
294 296
295 297 wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
296 298 wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
297 299 wrapfunction(filelog, 'size', wrapper.filelogsize)
298 300
299 301 def extsetup(ui):
300 302 wrapfilelog(filelog.filelog)
301 303
302 304 wrapfunction = extensions.wrapfunction
303 305
304 306 wrapfunction(cmdutil, '_updatecatformatter', wrapper._updatecatformatter)
305 307 wrapfunction(scmutil, 'wrapconvertsink', wrapper.convertsink)
306 308
307 309 wrapfunction(upgrade, '_finishdatamigration',
308 310 wrapper.upgradefinishdatamigration)
309 311
310 312 wrapfunction(upgrade, 'preservedrequirements',
311 313 wrapper.upgraderequirements)
312 314
313 315 wrapfunction(upgrade, 'supporteddestrequirements',
314 316 wrapper.upgraderequirements)
315 317
316 318 wrapfunction(changegroup,
317 319 'allsupportedversions',
318 320 wrapper.allsupportedversions)
319 321
320 322 wrapfunction(exchange, 'push', wrapper.push)
321 323 wrapfunction(wireproto, '_capabilities', wrapper._capabilities)
322 324 wrapfunction(wireprotoserver, 'handlewsgirequest',
323 325 wireprotolfsserver.handlewsgirequest)
324 326
325 327 wrapfunction(context.basefilectx, 'cmp', wrapper.filectxcmp)
326 328 wrapfunction(context.basefilectx, 'isbinary', wrapper.filectxisbinary)
327 329 context.basefilectx.islfs = wrapper.filectxislfs
328 330
329 331 revlog.addflagprocessor(
330 332 revlog.REVIDX_EXTSTORED,
331 333 (
332 334 wrapper.readfromstore,
333 335 wrapper.writetostore,
334 336 wrapper.bypasscheckhash,
335 337 ),
336 338 )
337 339
338 340 wrapfunction(hg, 'clone', wrapper.hgclone)
339 341 wrapfunction(hg, 'postshare', wrapper.hgpostshare)
340 342
341 343 scmutil.fileprefetchhooks.add('lfs', wrapper._prefetchfiles)
342 344
343 345 # Make bundle choose changegroup3 instead of changegroup2. This affects
344 346 # "hg bundle" command. Note: it does not cover all bundle formats like
345 347 # "packed1". Using "packed1" with lfs will likely cause trouble.
346 348 exchange._bundlespeccontentopts["v2"]["cg.version"] = "03"
347 349
348 350 # bundlerepo uses "vfsmod.readonlyvfs(othervfs)", we need to make sure lfs
349 351 # options and blob stores are passed from othervfs to the new readonlyvfs.
350 352 wrapfunction(vfsmod.readonlyvfs, '__init__', wrapper.vfsinit)
351 353
352 354 # when writing a bundle via "hg bundle" command, upload related LFS blobs
353 355 wrapfunction(bundle2, 'writenewbundle', wrapper.writenewbundle)
354 356
355 357 @filesetpredicate('lfs()', callstatus=True)
356 358 def lfsfileset(mctx, x):
357 359 """File that uses LFS storage."""
358 360 # i18n: "lfs" is a keyword
359 361 fileset.getargs(x, 0, 0, _("lfs takes no arguments"))
360 362 return [f for f in mctx.subset
361 363 if wrapper.pointerfromctx(mctx.ctx, f, removed=True) is not None]
362 364
363 365 @templatekeyword('lfs_files', requires={'ctx'})
364 366 def lfsfiles(context, mapping):
365 367 """List of strings. All files modified, added, or removed by this
366 368 changeset."""
367 369 ctx = context.resource(mapping, 'ctx')
368 370
369 371 pointers = wrapper.pointersfromctx(ctx, removed=True) # {path: pointer}
370 372 files = sorted(pointers.keys())
371 373
372 374 def pointer(v):
373 375 # In the file spec, version is first and the other keys are sorted.
374 376 sortkeyfunc = lambda x: (x[0] != 'version', x)
375 377 items = sorted(pointers[v].iteritems(), key=sortkeyfunc)
376 378 return util.sortdict(items)
377 379
378 380 makemap = lambda v: {
379 381 'file': v,
380 382 'lfsoid': pointers[v].oid() if pointers[v] else None,
381 383 'lfspointer': templateutil.hybriddict(pointer(v)),
382 384 }
383 385
384 386 # TODO: make the separator ', '?
385 387 f = templateutil._showcompatlist(context, mapping, 'lfs_file', files)
386 388 return templateutil.hybrid(f, files, makemap, pycompat.identity)
387 389
388 390 @command('debuglfsupload',
389 391 [('r', 'rev', [], _('upload large files introduced by REV'))])
390 392 def debuglfsupload(ui, repo, **opts):
391 393 """upload lfs blobs added by the working copy parent or given revisions"""
392 394 revs = opts.get(r'rev', [])
393 395 pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
394 396 wrapper.uploadblobs(repo, pointers)
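
To illustrate the first-match semantics of the ``[track]`` rules documented
above, here is a small standalone sketch. The helper names are hypothetical
and ``fnmatch`` stands in for the real pattern engine; the extension itself
compiles both patterns and predicates with ``mercurial.minifileset``, as
``_trackedmatcher()`` shows:

    import fnmatch

    def compilerules(items):
        # items: ordered (pattern, predicate) pairs, as read from [track]
        def predicate(spec):
            if spec == 'all()':
                return lambda path, size: True
            if spec == 'none()':
                return lambda path, size: False
            if spec.startswith('size(">') and spec.endswith('MB")'):
                limit = int(spec[7:-4]) * 1024 * 1024
                return lambda path, size: size > limit
            raise ValueError('unsupported predicate: %s' % spec)
        rules = [(pat, predicate(spec)) for pat, spec in items]

        def match(path, size):
            for pat, rule in rules:
                if fnmatch.fnmatch(path, pat):  # first matching pattern wins
                    return rule(path, size)
            return False
        return match

    track = compilerules([
        ('**Makefile', 'none()'),
        ('**.py', 'none()'),
        ('**.zip', 'all()'),
        ('**.exe', 'size(">1MB")'),
        ('**', 'size(">10MB")'),
    ])
    assert track('src/big.zip', 10)                # all(): any size
    assert not track('generator.py', 10 ** 9)      # none() matched first
    assert not track('data.bin', 5 * 1024 * 1024)  # catchall: under 10MB
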
hgext/lfs/blobstore.py
@@ -1,543 +1,564
1 1 # blobstore.py - local and remote (speaking Git-LFS protocol) blob storages
2 2 #
3 3 # Copyright 2017 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import errno
11 11 import hashlib
12 12 import json
13 13 import os
14 14 import re
15 15 import socket
16 16
17 17 from mercurial.i18n import _
18 18
19 19 from mercurial import (
20 20 error,
21 21 pathutil,
22 22 pycompat,
23 23 url as urlmod,
24 24 util,
25 25 vfs as vfsmod,
26 26 worker,
27 27 )
28 28
29 29 from ..largefiles import lfutil
30 30
31 31 # 64 hex bytes for a SHA256 digest
32 32 _lfsre = re.compile(br'\A[a-f0-9]{64}\Z')
33 33
34 34 class lfsvfs(vfsmod.vfs):
35 35 def join(self, path):
36 36 """split the path at the first two characters, like: XX/XXXXX..."""
37 37 if not _lfsre.match(path):
38 38 raise error.ProgrammingError('unexpected lfs path: %s' % path)
39 39 return super(lfsvfs, self).join(path[0:2], path[2:])
40 40
41 41 def walk(self, path=None, onerror=None):
42 42 """Yield (dirpath, [], oids) tuple for blobs under path
43 43
44 44 Oids only exist in the root of this vfs, so dirpath is always ''.
45 45 """
46 46 root = os.path.normpath(self.base)
47 47 # when dirpath == root, dirpath[prefixlen:] becomes empty
48 48 # because len(dirpath) < prefixlen.
49 49 prefixlen = len(pathutil.normasprefix(root))
50 50 oids = []
51 51
52 52 for dirpath, dirs, files in os.walk(self.reljoin(self.base, path or ''),
53 53 onerror=onerror):
54 54 dirpath = dirpath[prefixlen:]
55 55
56 56 # Silently skip unexpected files and directories
57 57 if len(dirpath) == 2:
58 58 oids.extend([dirpath + f for f in files
59 59 if _lfsre.match(dirpath + f)])
60 60
61 61 yield ('', [], oids)
62 62
63 63 class nullvfs(lfsvfs):
64 64 def __init__(self):
65 65 pass
66 66
67 67 def exists(self, oid):
68 68 return False
69 69
70 70 def read(self, oid):
71 71 # store.read() calls into here if the blob doesn't exist in its
72 72 # self.vfs. Raise the same error as a normal vfs when asked to read a
73 73 # file that doesn't exist. The only difference is the full file path
74 74 # isn't available in the error.
75 75 raise IOError(errno.ENOENT, '%s: No such file or directory' % oid)
76 76
77 77 def walk(self, path=None, onerror=None):
78 78 return ('', [], [])
79 79
80 80 def write(self, oid, data):
81 81 pass
82 82
83 83 class filewithprogress(object):
84 84 """a file-like object that supports __len__ and read.
85 85
86 86 Useful to provide progress information for how many bytes are read.
87 87 """
88 88
89 89 def __init__(self, fp, callback):
90 90 self._fp = fp
91 91 self._callback = callback # func(readsize)
92 92 fp.seek(0, os.SEEK_END)
93 93 self._len = fp.tell()
94 94 fp.seek(0)
95 95
96 96 def __len__(self):
97 97 return self._len
98 98
99 99 def read(self, size):
100 100 if self._fp is None:
101 101 return b''
102 102 data = self._fp.read(size)
103 103 if data:
104 104 if self._callback:
105 105 self._callback(len(data))
106 106 else:
107 107 self._fp.close()
108 108 self._fp = None
109 109 return data
110 110
111 111 class local(object):
112 112 """Local blobstore for large file contents.
113 113
114 114 This blobstore is used both as a cache and as a staging area for large blobs
115 115 to be uploaded to the remote blobstore.
116 116 """
117 117
118 118 def __init__(self, repo):
119 119 fullpath = repo.svfs.join('lfs/objects')
120 120 self.vfs = lfsvfs(fullpath)
121 121 usercache = util.url(lfutil._usercachedir(repo.ui, 'lfs'))
122 122 if usercache.scheme in (None, 'file'):
123 123 self.cachevfs = lfsvfs(usercache.localpath())
124 124 elif usercache.scheme == 'null':
125 125 self.cachevfs = nullvfs()
126 126 else:
127 127 raise error.Abort(_('unknown lfs cache scheme: %s')
128 128 % usercache.scheme)
129 129 self.ui = repo.ui
130 130
131 131 def open(self, oid):
132 132 """Open a read-only file descriptor to the named blob, in either the
133 133 usercache or the local store."""
134 134 # The usercache is the most likely place to hold the file. Commit will
135 135 # write to both it and the local store, as will anything that downloads
136 136 # the blobs. However, things like clone without an update won't
137 137 # populate the local store. For an init + push of a local clone,
138 138 # the usercache is the only place it _could_ be. If not present, the
139 139 # missing file message here will indicate the local repo, not the usercache.
140 140 if self.cachevfs.exists(oid):
141 141 return self.cachevfs(oid, 'rb')
142 142
143 143 return self.vfs(oid, 'rb')
144 144
145 145 def download(self, oid, src):
146 146 """Read the blob from the remote source in chunks, verify the content,
147 147 and write to this local blobstore."""
148 148 sha256 = hashlib.sha256()
149 149
150 150 with self.vfs(oid, 'wb', atomictemp=True) as fp:
151 151 for chunk in util.filechunkiter(src, size=1048576):
152 152 fp.write(chunk)
153 153 sha256.update(chunk)
154 154
155 155 realoid = sha256.hexdigest()
156 156 if realoid != oid:
157 157 raise error.Abort(_('corrupt remote lfs object: %s') % oid)
158 158
159 159 self._linktousercache(oid)
160 160
161 161 def write(self, oid, data):
162 162 """Write blob to local blobstore.
163 163
164 164 This should only be called from the filelog during a commit or similar.
165 165 As such, there is no need to verify the data. Imports from a remote
166 166 store must use ``download()`` instead."""
167 167 with self.vfs(oid, 'wb', atomictemp=True) as fp:
168 168 fp.write(data)
169 169
170 170 self._linktousercache(oid)
171 171
172 172 def _linktousercache(self, oid):
173 173 # XXX: should we verify the content of the cache, and hardlink back to
174 174 # the local store on success, but truncate, write and link on failure?
175 175 if (not self.cachevfs.exists(oid)
176 176 and not isinstance(self.cachevfs, nullvfs)):
177 177 self.ui.note(_('lfs: adding %s to the usercache\n') % oid)
178 178 lfutil.link(self.vfs.join(oid), self.cachevfs.join(oid))
179 179
180 180 def read(self, oid, verify=True):
181 181 """Read blob from local blobstore."""
182 182 if not self.vfs.exists(oid):
183 183 blob = self._read(self.cachevfs, oid, verify)
184 184
185 185 # Even if revlog will verify the content, it needs to be verified
186 186 # now before making the hardlink to avoid propagating corrupt blobs.
187 187 # Don't abort if corruption is detected, because `hg verify` will
188 188 # give more useful info about the corruption; simply don't add the
189 189 # hardlink.
190 190 if verify or hashlib.sha256(blob).hexdigest() == oid:
191 191 self.ui.note(_('lfs: found %s in the usercache\n') % oid)
192 192 lfutil.link(self.cachevfs.join(oid), self.vfs.join(oid))
193 193 else:
194 194 self.ui.note(_('lfs: found %s in the local lfs store\n') % oid)
195 195 blob = self._read(self.vfs, oid, verify)
196 196 return blob
197 197
198 198 def _read(self, vfs, oid, verify):
199 199 """Read blob (after verifying) from the given store"""
200 200 blob = vfs.read(oid)
201 201 if verify:
202 202 _verify(oid, blob)
203 203 return blob
204 204
205 205 def verify(self, oid):
206 206 """Indicate whether or not the hash of the underlying file matches its
207 207 name."""
208 208 sha256 = hashlib.sha256()
209 209
210 210 with self.open(oid) as fp:
211 211 for chunk in util.filechunkiter(fp, size=1048576):
212 212 sha256.update(chunk)
213 213
214 214 return oid == sha256.hexdigest()
215 215
216 216 def has(self, oid):
217 217 """Returns True if the local blobstore contains the requested blob,
218 218 False otherwise."""
219 219 return self.cachevfs.exists(oid) or self.vfs.exists(oid)
220 220
221 221 class _gitlfsremote(object):
222 222
223 223 def __init__(self, repo, url):
224 224 ui = repo.ui
225 225 self.ui = ui
226 226 baseurl, authinfo = url.authinfo()
227 227 self.baseurl = baseurl.rstrip('/')
228 228 useragent = repo.ui.config('experimental', 'lfs.user-agent')
229 229 if not useragent:
230 230 useragent = 'git-lfs/2.3.4 (Mercurial %s)' % util.version()
231 231 self.urlopener = urlmod.opener(ui, authinfo, useragent)
232 232 self.retry = ui.configint('lfs', 'retry')
233 233
234 234 def writebatch(self, pointers, fromstore):
235 235 """Batch upload from local to remote blobstore."""
236 236 self._batch(_deduplicate(pointers), fromstore, 'upload')
237 237
238 238 def readbatch(self, pointers, tostore):
239 239 """Batch download from remote to local blobstore."""
240 240 self._batch(_deduplicate(pointers), tostore, 'download')
241 241
242 242 def _batchrequest(self, pointers, action):
243 243 """Get metadata about objects pointed to by pointers for the given action
244 244
245 245 Return decoded JSON object like {'objects': [{'oid': '', 'size': 1}]}
246 246 See https://github.com/git-lfs/git-lfs/blob/master/docs/api/batch.md
247 247 """
248 248 objects = [{'oid': p.oid(), 'size': p.size()} for p in pointers]
249 249 requestdata = json.dumps({
250 250 'objects': objects,
251 251 'operation': action,
252 252 })
253 253 batchreq = util.urlreq.request('%s/objects/batch' % self.baseurl,
254 254 data=requestdata)
255 255 batchreq.add_header('Accept', 'application/vnd.git-lfs+json')
256 256 batchreq.add_header('Content-Type', 'application/vnd.git-lfs+json')
257 257 try:
258 258 rsp = self.urlopener.open(batchreq)
259 259 rawjson = rsp.read()
260 260 except util.urlerr.httperror as ex:
261 261 raise LfsRemoteError(_('LFS HTTP error: %s (action=%s)')
262 262 % (ex, action))
263 263 try:
264 264 response = json.loads(rawjson)
265 265 except ValueError:
266 266 raise LfsRemoteError(_('LFS server returns invalid JSON: %s')
267 267 % rawjson)
268 268
269 269 if self.ui.debugflag:
270 270 self.ui.debug('Status: %d\n' % rsp.status)
271 271 # lfs-test-server and hg serve return headers in different order
272 272 self.ui.debug('%s\n'
273 273 % '\n'.join(sorted(str(rsp.info()).splitlines())))
274 274
275 275 if 'objects' in response:
276 276 response['objects'] = sorted(response['objects'],
277 277 key=lambda p: p['oid'])
278 278 self.ui.debug('%s\n'
279 279 % json.dumps(response, indent=2,
280 280 separators=('', ': '), sort_keys=True))
281 281
282 282 return response
283 283
284 284 def _checkforservererror(self, pointers, responses, action):
285 285 """Scans errors from objects
286 286
287 287 Raises LfsRemoteError if any objects have an error"""
288 288 for response in responses:
289 289 # The server should return 404 when objects cannot be found. Some
290 290 # server implementations (ex. lfs-test-server) do not set "error"
291 291 # but just remove "download" from "actions". Treat that case
292 292 # as the same as 404 error.
293 293 if 'error' not in response:
294 294 if (action == 'download'
295 295 and action not in response.get('actions', [])):
296 296 code = 404
297 297 else:
298 298 continue
299 299 else:
300 300 # An error dict without a code doesn't make much sense, so
301 301 # treat as a server error.
302 302 code = response.get('error').get('code', 500)
303 303
304 304 ptrmap = {p.oid(): p for p in pointers}
305 305 p = ptrmap.get(response['oid'], None)
306 306 if p:
307 307 filename = getattr(p, 'filename', 'unknown')
308 308 errors = {
309 309 404: 'The object does not exist',
310 310 410: 'The object was removed by the owner',
311 311 422: 'Validation error',
312 312 500: 'Internal server error',
313 313 }
314 314 msg = errors.get(code, 'status code %d' % code)
315 315 raise LfsRemoteError(_('LFS server error for "%s": %s')
316 316 % (filename, msg))
317 317 else:
318 318 raise LfsRemoteError(
319 319 _('LFS server error. Unsolicited response for oid %s')
320 320 % response['oid'])
321 321
322 322 def _extractobjects(self, response, pointers, action):
323 323 """extract objects from response of the batch API
324 324
325 325 response: parsed JSON object returned by batch API
326 326 return response['objects'] filtered by action
327 327 raise if any object has an error
328 328 """
329 329 # Scan errors from objects - fail early
330 330 objects = response.get('objects', [])
331 331 self._checkforservererror(pointers, objects, action)
332 332
333 333 # Filter objects with given action. Practically, this skips uploading
334 334 # objects which already exist on the server.
335 335 filteredobjects = [o for o in objects if action in o.get('actions', [])]
336 336
337 337 return filteredobjects
338 338
339 339 def _basictransfer(self, obj, action, localstore):
340 340 """Download or upload a single object using basic transfer protocol
341 341
342 342 obj: dict, an object description returned by batch API
343 343 action: string, one of ['upload', 'download']
344 344 localstore: blobstore.local
345 345
346 346 See https://github.com/git-lfs/git-lfs/blob/master/docs/api/\
347 347 basic-transfers.md
348 348 """
349 349 oid = pycompat.bytestr(obj['oid'])
350 350
351 351 href = pycompat.bytestr(obj['actions'][action].get('href'))
352 352 headers = obj['actions'][action].get('header', {}).items()
353 353
354 354 request = util.urlreq.request(href)
355 355 if action == 'upload':
356 356 # If uploading blobs, read data from local blobstore.
357 357 if not localstore.verify(oid):
358 358 raise error.Abort(_('detected corrupt lfs object: %s') % oid,
359 359 hint=_('run hg verify'))
360 360 request.data = filewithprogress(localstore.open(oid), None)
361 361 request.get_method = lambda: 'PUT'
362 362 request.add_header('Content-Type', 'application/octet-stream')
363 363
364 364 for k, v in headers:
365 365 request.add_header(k, v)
366 366
367 367 response = b''
368 368 try:
369 369 req = self.urlopener.open(request)
370 370
371 371 if self.ui.debugflag:
372 372 self.ui.debug('Status: %d\n' % req.status)
373 373 # lfs-test-server and hg serve return headers in different order
374 374 self.ui.debug('%s\n'
375 375 % '\n'.join(sorted(str(req.info()).splitlines())))
376 376
377 377 if action == 'download':
378 378 # If downloading blobs, store downloaded data to local blobstore
379 379 localstore.download(oid, req)
380 380 else:
381 381 while True:
382 382 data = req.read(1048576)
383 383 if not data:
384 384 break
385 385 response += data
386 386 if response:
387 387 self.ui.debug('lfs %s response: %s' % (action, response))
388 388 except util.urlerr.httperror as ex:
389 389 if self.ui.debugflag:
390 390 self.ui.debug('%s: %s\n' % (oid, ex.read()))
391 391 raise LfsRemoteError(_('HTTP error: %s (oid=%s, action=%s)')
392 392 % (ex, oid, action))
393 393
394 394 def _batch(self, pointers, localstore, action):
395 395 if action not in ['upload', 'download']:
396 396 raise error.ProgrammingError('invalid Git-LFS action: %s' % action)
397 397
398 398 response = self._batchrequest(pointers, action)
399 399 objects = self._extractobjects(response, pointers, action)
400 400 total = sum(x.get('size', 0) for x in objects)
401 401 sizes = {}
402 402 for obj in objects:
403 403 sizes[obj.get('oid')] = obj.get('size', 0)
404 404 topic = {'upload': _('lfs uploading'),
405 405 'download': _('lfs downloading')}[action]
406 406 if len(objects) > 1:
407 407 self.ui.note(_('lfs: need to transfer %d objects (%s)\n')
408 408 % (len(objects), util.bytecount(total)))
409 409 self.ui.progress(topic, 0, total=total)
410 410 def transfer(chunk):
411 411 for obj in chunk:
412 412 objsize = obj.get('size', 0)
413 413 if self.ui.verbose:
414 414 if action == 'download':
415 415 msg = _('lfs: downloading %s (%s)\n')
416 416 elif action == 'upload':
417 417 msg = _('lfs: uploading %s (%s)\n')
418 418 self.ui.note(msg % (obj.get('oid'),
419 419 util.bytecount(objsize)))
420 420 retry = self.retry
421 421 while True:
422 422 try:
423 423 self._basictransfer(obj, action, localstore)
424 424 yield 1, obj.get('oid')
425 425 break
426 426 except socket.error as ex:
427 427 if retry > 0:
428 428 self.ui.note(
429 429 _('lfs: failed: %r (remaining retry %d)\n')
430 430 % (ex, retry))
431 431 retry -= 1
432 432 continue
433 433 raise
434 434
435 435 # Until https multiplexing gets sorted out
436 436 if self.ui.configbool('experimental', 'lfs.worker-enable'):
437 437 oids = worker.worker(self.ui, 0.1, transfer, (),
438 438 sorted(objects, key=lambda o: o.get('oid')))
439 439 else:
440 440 oids = transfer(sorted(objects, key=lambda o: o.get('oid')))
441 441
442 442 processed = 0
443 443 blobs = 0
444 444 for _one, oid in oids:
445 445 processed += sizes[oid]
446 446 blobs += 1
447 447 self.ui.progress(topic, processed, total=total)
448 448 self.ui.note(_('lfs: processed: %s\n') % oid)
449 449 self.ui.progress(topic, pos=None, total=total)
450 450
451 451 if blobs > 0:
452 452 if action == 'upload':
453 453 self.ui.status(_('lfs: uploaded %d files (%s)\n')
454 454 % (blobs, util.bytecount(processed)))
455 455 # TODO: coalesce the download requests, and comment this in
456 456 #elif action == 'download':
457 457 # self.ui.status(_('lfs: downloaded %d files (%s)\n')
458 458 # % (blobs, util.bytecount(processed)))
459 459
460 460 def __del__(self):
461 461 # copied from mercurial/httppeer.py
462 462 urlopener = getattr(self, 'urlopener', None)
463 463 if urlopener:
464 464 for h in urlopener.handlers:
465 465 h.close()
466 466 getattr(h, "close_all", lambda : None)()
467 467
468 468 class _dummyremote(object):
469 469 """Dummy store storing blobs to temp directory."""
470 470
471 471 def __init__(self, repo, url):
472 472 fullpath = repo.vfs.join('lfs', url.path)
473 473 self.vfs = lfsvfs(fullpath)
474 474
475 475 def writebatch(self, pointers, fromstore):
476 476 for p in _deduplicate(pointers):
477 477 content = fromstore.read(p.oid(), verify=True)
478 478 with self.vfs(p.oid(), 'wb', atomictemp=True) as fp:
479 479 fp.write(content)
480 480
481 481 def readbatch(self, pointers, tostore):
482 482 for p in _deduplicate(pointers):
483 483 with self.vfs(p.oid(), 'rb') as fp:
484 484 tostore.download(p.oid(), fp)
485 485
486 486 class _nullremote(object):
487 487 """Null store storing blobs to /dev/null."""
488 488
489 489 def __init__(self, repo, url):
490 490 pass
491 491
492 492 def writebatch(self, pointers, fromstore):
493 493 pass
494 494
495 495 def readbatch(self, pointers, tostore):
496 496 pass
497 497
498 498 class _promptremote(object):
499 499 """Prompt user to set lfs.url when accessed."""
500 500
501 501 def __init__(self, repo, url):
502 502 pass
503 503
504 504 def writebatch(self, pointers, fromstore, ui=None):
505 505 self._prompt()
506 506
507 507 def readbatch(self, pointers, tostore, ui=None):
508 508 self._prompt()
509 509
510 510 def _prompt(self):
511 511 raise error.Abort(_('lfs.url needs to be configured'))
512 512
513 513 _storemap = {
514 514 'https': _gitlfsremote,
515 515 'http': _gitlfsremote,
516 516 'file': _dummyremote,
517 517 'null': _nullremote,
518 518 None: _promptremote,
519 519 }
520 520
521 521 def _deduplicate(pointers):
522 522 """Remove any duplicate oids that exist in the list"""
523 523 reduced = util.sortdict()
524 524 for p in pointers:
525 525 reduced[p.oid()] = p
526 526 return reduced.values()
527 527
528 528 def _verify(oid, content):
529 529 realoid = hashlib.sha256(content).hexdigest()
530 530 if realoid != oid:
531 531 raise error.Abort(_('detected corrupt lfs object: %s') % oid,
532 532 hint=_('run hg verify'))
533 533
534 534 def remote(repo):
535 """remotestore factory. return a store in _storemap depending on config"""
535 """remotestore factory. return a store in _storemap depending on config
536
537 If ``lfs.url`` is specified, use that remote endpoint. Otherwise, try to
538 infer the endpoint based on the remote repository, using the same path
539 adjustments as git. As an extension, 'http' is supported as well so that
540 ``hg serve`` works out of the box.
541
542 https://github.com/git-lfs/git-lfs/blob/master/docs/api/server-discovery.md
543 """
536 544 url = util.url(repo.ui.config('lfs', 'url') or '')
545 if url.scheme is None:
546 # TODO: investigate 'paths.remote:lfsurl' style path customization,
547 # and fall back to inferring from 'paths.remote' if unspecified.
548 defaulturl = util.url(repo.ui.config('paths', 'default') or b'')
549
550 # TODO: support local paths as well.
551 # TODO: consider the ssh -> https transformation that git applies
552 if defaulturl.scheme in (b'http', b'https'):
553 defaulturl.path = (defaulturl.path or b'') + b'.git/info/lfs'
554
555 url = util.url(bytes(defaulturl))
556 repo.ui.note(_('lfs: assuming remote store: %s\n') % url)
557
537 558 scheme = url.scheme
538 559 if scheme not in _storemap:
539 560 raise error.Abort(_('lfs: unknown url scheme: %s') % scheme)
540 561 return _storemap[scheme](repo, url)
541 562
542 563 class LfsRemoteError(error.RevlogError):
543 564 pass
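
The inference added to ``remote()`` above boils down to a simple rule: an
explicit ``lfs.url`` always wins; failing that, an http(s) ``paths.default``
is assumed to serve blobs under the same ``.git/info/lfs`` suffix that
git-lfs server discovery uses. A rough standalone sketch of that decision on
plain strings (the real code works through ``mercurial.util.url`` and bytes,
and a ``None`` scheme selects the prompting store):

    def inferlfsurl(lfsurl, default):
        # an explicitly configured lfs.url always wins
        if lfsurl:
            return lfsurl
        # otherwise assume the paths.default repo also serves the blobs,
        # under the '<remote>.git/info/lfs' suffix that git-lfs uses
        if default.startswith(('http://', 'https://')):
            return default + '.git/info/lfs'
        # no usable scheme: stay unset, so the prompting store is chosen
        return None

    assert (inferlfsurl(None, 'https://example.com/repo')
            == 'https://example.com/repo.git/info/lfs')
    assert (inferlfsurl('file:///tmp/path', 'https://example.com/repo')
            == 'file:///tmp/path')
    assert inferlfsurl(None, 'ssh://example.com/repo') is None
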
hgext/lfs/wrapper.py
@@ -1,386 +1,388
1 1 # wrapper.py - methods wrapping core mercurial logic
2 2 #
3 3 # Copyright 2017 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import hashlib
11 11
12 12 from mercurial.i18n import _
13 13 from mercurial.node import bin, hex, nullid, short
14 14
15 15 from mercurial import (
16 16 error,
17 17 revlog,
18 18 util,
19 19 )
20 20
21 21 from mercurial.utils import (
22 22 stringutil,
23 23 )
24 24
25 25 from ..largefiles import lfutil
26 26
27 27 from . import (
28 28 blobstore,
29 29 pointer,
30 30 )
31 31
32 32 def allsupportedversions(orig, ui):
33 33 versions = orig(ui)
34 34 versions.add('03')
35 35 return versions
36 36
37 37 def _capabilities(orig, repo, proto):
38 38 '''Wrap server command to announce lfs server capability'''
39 39 caps = orig(repo, proto)
40 40 # XXX: change to 'lfs=serve' when separate git server isn't required?
41 41 caps.append('lfs')
42 42 return caps
43 43
44 44 def bypasscheckhash(self, text):
45 45 return False
46 46
47 47 def readfromstore(self, text):
48 48 """Read filelog content from local blobstore transform for flagprocessor.
49 49
50 50 Default transform for flagprocessor, returning contents from blobstore.
51 51 Returns a 2-tuple (text, validatehash) where validatehash is True as the
52 52 contents of the blobstore should be checked using checkhash.
53 53 """
54 54 p = pointer.deserialize(text)
55 55 oid = p.oid()
56 56 store = self.opener.lfslocalblobstore
57 57 if not store.has(oid):
58 58 p.filename = self.filename
59 59 self.opener.lfsremoteblobstore.readbatch([p], store)
60 60
61 61 # The caller will validate the content
62 62 text = store.read(oid, verify=False)
63 63
64 64 # pack hg filelog metadata
65 65 hgmeta = {}
66 66 for k in p.keys():
67 67 if k.startswith('x-hg-'):
68 68 name = k[len('x-hg-'):]
69 69 hgmeta[name] = p[k]
70 70 if hgmeta or text.startswith('\1\n'):
71 71 text = revlog.packmeta(hgmeta, text)
72 72
73 73 return (text, True)
74 74
75 75 def writetostore(self, text):
76 76 # hg filelog metadata (includes rename, etc)
77 77 hgmeta, offset = revlog.parsemeta(text)
78 78 if offset and offset > 0:
79 79 # lfs blob does not contain hg filelog metadata
80 80 text = text[offset:]
81 81
82 82 # git-lfs only supports sha256
83 83 oid = hex(hashlib.sha256(text).digest())
84 84 self.opener.lfslocalblobstore.write(oid, text)
85 85
86 86 # replace contents with metadata
87 87 longoid = 'sha256:%s' % oid
88 88 metadata = pointer.gitlfspointer(oid=longoid, size='%d' % len(text))
89 89
90 90 # by default, we expect the content to be binary. however, LFS could also
91 91 # be used for non-binary content. add a special entry for non-binary data.
92 92 # this will be used by filectx.isbinary().
93 93 if not stringutil.binary(text):
94 94 # not hg filelog metadata (affecting commit hash), no "x-hg-" prefix
95 95 metadata['x-is-binary'] = '0'
96 96
97 97 # translate hg filelog metadata to lfs metadata with "x-hg-" prefix
98 98 if hgmeta is not None:
99 99 for k, v in hgmeta.iteritems():
100 100 metadata['x-hg-%s' % k] = v
101 101
102 102 rawtext = metadata.serialize()
103 103 return (rawtext, False)
104 104
105 105 def _islfs(rlog, node=None, rev=None):
106 106 if rev is None:
107 107 if node is None:
108 108 # both None - likely working copy content where node is not ready
109 109 return False
110 110 rev = rlog.rev(node)
111 111 else:
112 112 node = rlog.node(rev)
113 113 if node == nullid:
114 114 return False
115 115 flags = rlog.flags(rev)
116 116 return bool(flags & revlog.REVIDX_EXTSTORED)
117 117
118 118 def filelogaddrevision(orig, self, text, transaction, link, p1, p2,
119 119 cachedelta=None, node=None,
120 120 flags=revlog.REVIDX_DEFAULT_FLAGS, **kwds):
121 121 textlen = len(text)
122 122 # exclude hg rename meta from file size
123 123 meta, offset = revlog.parsemeta(text)
124 124 if offset:
125 125 textlen -= offset
126 126
127 127 lfstrack = self.opener.options['lfstrack']
128 128
129 129 if lfstrack(self.filename, textlen):
130 130 flags |= revlog.REVIDX_EXTSTORED
131 131
132 132 return orig(self, text, transaction, link, p1, p2, cachedelta=cachedelta,
133 133 node=node, flags=flags, **kwds)
134 134
135 135 def filelogrenamed(orig, self, node):
136 136 if _islfs(self, node):
137 137 rawtext = self.revision(node, raw=True)
138 138 if not rawtext:
139 139 return False
140 140 metadata = pointer.deserialize(rawtext)
141 141 if 'x-hg-copy' in metadata and 'x-hg-copyrev' in metadata:
142 142 return metadata['x-hg-copy'], bin(metadata['x-hg-copyrev'])
143 143 else:
144 144 return False
145 145 return orig(self, node)
146 146
147 147 def filelogsize(orig, self, rev):
148 148 if _islfs(self, rev=rev):
149 149 # fast path: use lfs metadata to answer size
150 150 rawtext = self.revision(rev, raw=True)
151 151 metadata = pointer.deserialize(rawtext)
152 152 return int(metadata['size'])
153 153 return orig(self, rev)
154 154
155 155 def filectxcmp(orig, self, fctx):
156 156 """returns True if text is different than fctx"""
157 157 # some fctx (ex. hg-git) are not based on basefilectx and do not have islfs
158 158 if self.islfs() and getattr(fctx, 'islfs', lambda: False)():
159 159 # fast path: check LFS oid
160 160 p1 = pointer.deserialize(self.rawdata())
161 161 p2 = pointer.deserialize(fctx.rawdata())
162 162 return p1.oid() != p2.oid()
163 163 return orig(self, fctx)
164 164
165 165 def filectxisbinary(orig, self):
166 166 if self.islfs():
167 167 # fast path: use lfs metadata to answer isbinary
168 168 metadata = pointer.deserialize(self.rawdata())
169 169 # if lfs metadata says nothing, assume it's binary by default
170 170 return bool(int(metadata.get('x-is-binary', 1)))
171 171 return orig(self)
172 172
173 173 def filectxislfs(self):
174 174 return _islfs(self.filelog(), self.filenode())
175 175
176 176 def _updatecatformatter(orig, fm, ctx, matcher, path, decode):
177 177 orig(fm, ctx, matcher, path, decode)
178 178 fm.data(rawdata=ctx[path].rawdata())
179 179
180 180 def convertsink(orig, sink):
181 181 sink = orig(sink)
182 182 if sink.repotype == 'hg':
183 183 class lfssink(sink.__class__):
184 184 def putcommit(self, files, copies, parents, commit, source, revmap,
185 185 full, cleanp2):
186 186 pc = super(lfssink, self).putcommit
187 187 node = pc(files, copies, parents, commit, source, revmap, full,
188 188 cleanp2)
189 189
190 190 if 'lfs' not in self.repo.requirements:
191 191 ctx = self.repo[node]
192 192
193 193 # The file list may contain removed files, so check for
194 194 # membership before assuming it is in the context.
195 195 if any(f in ctx and ctx[f].islfs() for f, n in files):
196 196 self.repo.requirements.add('lfs')
197 197 self.repo._writerequirements()
198 198
199 199 # Permanently enable lfs locally
200 200 self.repo.vfs.append(
201 201 'hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))
202 202
203 203 return node
204 204
205 205 sink.__class__ = lfssink
206 206
207 207 return sink
208 208
209 209 def vfsinit(orig, self, othervfs):
210 210 orig(self, othervfs)
211 211 # copy lfs related options
212 212 for k, v in othervfs.options.items():
213 213 if k.startswith('lfs'):
214 214 self.options[k] = v
215 215 # also copy lfs blobstores. note: this can run before reposetup, so lfs
216 216 # blobstore attributes are not always ready at this time.
217 217 for name in ['lfslocalblobstore', 'lfsremoteblobstore']:
218 218 if util.safehasattr(othervfs, name):
219 219 setattr(self, name, getattr(othervfs, name))
220 220
221 221 def hgclone(orig, ui, opts, *args, **kwargs):
222 222 result = orig(ui, opts, *args, **kwargs)
223 223
224 224 if result is not None:
225 225 sourcerepo, destrepo = result
226 226 repo = destrepo.local()
227 227
228 228 # When cloning to a remote repo (like through SSH), no repo is available
229 229 # from the peer. Therefore the hgrc can't be updated.
230 230 if not repo:
231 231 return result
232 232
233 233 # If lfs is required for this repo, permanently enable it locally
234 234 if 'lfs' in repo.requirements:
235 235 repo.vfs.append('hgrc',
236 236 util.tonativeeol('\n[extensions]\nlfs=\n'))
237 237
238 238 return result
239 239
240 240 def hgpostshare(orig, sourcerepo, destrepo, bookmarks=True, defaultpath=None):
241 241 orig(sourcerepo, destrepo, bookmarks, defaultpath)
242 242
243 243 # If lfs is required for this repo, permanently enable it locally
244 244 if 'lfs' in destrepo.requirements:
245 245 destrepo.vfs.append('hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))
246 246
247 247 def _prefetchfiles(repo, ctx, files):
248 248 """Ensure that required LFS blobs are present, fetching them as a group if
249 249 needed."""
250 250 pointers = []
251 251 localstore = repo.svfs.lfslocalblobstore
252 252
253 253 for f in files:
254 254 p = pointerfromctx(ctx, f)
255 255 if p and not localstore.has(p.oid()):
256 256 p.filename = f
257 257 pointers.append(p)
258 258
259 259 if pointers:
260 repo.svfs.lfsremoteblobstore.readbatch(pointers, localstore)
260 # Recalculating the remote store here allows the 'paths.default' set
261 # on the repo by a clone command to be used for the update.
262 blobstore.remote(repo).readbatch(pointers, localstore)
261 263
262 264 def _canskipupload(repo):
263 265 # if remotestore is a null store, upload is a no-op and can be skipped
264 266 return isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
265 267
266 268 def candownload(repo):
267 269 # if remotestore is a null store, downloads will lead to nothing
268 270 return not isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
269 271
270 272 def uploadblobsfromrevs(repo, revs):
271 273 '''upload lfs blobs introduced by revs
272 274
273 275 Note: also used by other extensions, e.g. infinitepush. Avoid renaming.
274 276 '''
275 277 if _canskipupload(repo):
276 278 return
277 279 pointers = extractpointers(repo, revs)
278 280 uploadblobs(repo, pointers)
279 281
280 282 def prepush(pushop):
281 283 """Prepush hook.
282 284
283 285 Read through the revisions to push, looking for filelog entries that can be
284 286 deserialized into metadata so that we can block the push on their upload to
285 287 the remote blobstore.
286 288 """
287 289 return uploadblobsfromrevs(pushop.repo, pushop.outgoing.missing)
288 290
289 291 def push(orig, repo, remote, *args, **kwargs):
290 292 """bail on push if the extension isn't enabled on remote when needed"""
291 293 if 'lfs' in repo.requirements:
292 294 # If the remote peer is for a local repo, the requirement tests in the
293 295 # base class method enforce lfs support. Otherwise, some revisions in
294 296 # this repo use lfs, and the remote repo needs the extension loaded.
295 297 if not remote.local() and not remote.capable('lfs'):
296 298 # This is a copy of the message in exchange.push() when requirements
297 299 # are missing between local repos.
298 300 m = _("required features are not supported in the destination: %s")
299 301 raise error.Abort(m % 'lfs',
300 302 hint=_('enable the lfs extension on the server'))
301 303 return orig(repo, remote, *args, **kwargs)
302 304
303 305 def writenewbundle(orig, ui, repo, source, filename, bundletype, outgoing,
304 306 *args, **kwargs):
305 307 """upload LFS blobs added by outgoing revisions on 'hg bundle'"""
306 308 uploadblobsfromrevs(repo, outgoing.missing)
307 309 return orig(ui, repo, source, filename, bundletype, outgoing, *args,
308 310 **kwargs)
309 311
310 312 def extractpointers(repo, revs):
311 313 """return a list of lfs pointers added by given revs"""
312 314 repo.ui.debug('lfs: computing set of blobs to upload\n')
313 315 pointers = {}
314 316 for r in revs:
315 317 ctx = repo[r]
316 318 for p in pointersfromctx(ctx).values():
317 319 pointers[p.oid()] = p
318 320 return sorted(pointers.values())
319 321
320 322 def pointerfromctx(ctx, f, removed=False):
321 323 """return a pointer for the named file from the given changectx, or None if
322 324 the file isn't LFS.
323 325
324 326 Optionally, the pointer for a file deleted from the context can be returned.
325 327 Since no such pointer is actually stored, and to distinguish from a non-LFS
326 328 file, this pointer is represented by an empty dict.
327 329 """
328 330 _ctx = ctx
329 331 if f not in ctx:
330 332 if not removed:
331 333 return None
332 334 if f in ctx.p1():
333 335 _ctx = ctx.p1()
334 336 elif f in ctx.p2():
335 337 _ctx = ctx.p2()
336 338 else:
337 339 return None
338 340 fctx = _ctx[f]
339 341 if not _islfs(fctx.filelog(), fctx.filenode()):
340 342 return None
341 343 try:
342 344 p = pointer.deserialize(fctx.rawdata())
343 345 if ctx == _ctx:
344 346 return p
345 347 return {}
346 348 except pointer.InvalidPointer as ex:
347 349 raise error.Abort(_('lfs: corrupted pointer (%s@%s): %s\n')
348 350 % (f, short(_ctx.node()), ex))
349 351
350 352 def pointersfromctx(ctx, removed=False):
351 353 """return a dict {path: pointer} for given single changectx.
352 354
353 355 If ``removed`` == True and the LFS file was removed from ``ctx``, the value
354 356 stored for the path is an empty dict.
355 357 """
356 358 result = {}
357 359 for f in ctx.files():
358 360 p = pointerfromctx(ctx, f, removed=removed)
359 361 if p is not None:
360 362 result[f] = p
361 363 return result
362 364
363 365 def uploadblobs(repo, pointers):
364 366 """upload given pointers from local blobstore"""
365 367 if not pointers:
366 368 return
367 369
368 370 remoteblob = repo.svfs.lfsremoteblobstore
369 371 remoteblob.writebatch(pointers, repo.svfs.lfslocalblobstore)
370 372
371 373 def upgradefinishdatamigration(orig, ui, srcrepo, dstrepo, requirements):
372 374 orig(ui, srcrepo, dstrepo, requirements)
373 375
374 376 srclfsvfs = srcrepo.svfs.lfslocalblobstore.vfs
375 377 dstlfsvfs = dstrepo.svfs.lfslocalblobstore.vfs
376 378
377 379 for dirpath, dirs, files in srclfsvfs.walk():
378 380 for oid in files:
379 381 ui.write(_('copying lfs blob %s\n') % oid)
380 382 lfutil.link(srclfsvfs.join(oid), dstlfsvfs.join(oid))
381 383
382 384 def upgraderequirements(orig, repo):
383 385 reqs = orig(repo)
384 386 if 'lfs' in repo.requirements:
385 387 reqs.add('lfs')
386 388 return reqs
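
For concreteness, the payload that ``writetostore()`` above stores in the
filelog in place of the blob is a git-lfs pointer: ``key value`` lines with
``version`` first and the remaining keys sorted. A minimal sketch of building
one (hypothetical helper; the extension does this via
``hgext.lfs.pointer.gitlfspointer``):

    import hashlib

    def makepointer(data):
        # git-lfs only supports sha256 oids
        oid = hashlib.sha256(data).hexdigest()
        items = [
            ('version', 'https://git-lfs.github.com/spec/v1'),
            ('oid', 'sha256:%s' % oid),
            ('size', '%d' % len(data)),
            # non-binary payloads also carry 'x-is-binary 0', and hg
            # filelog metadata (e.g. renames) rides along as 'x-hg-*' keys
        ]
        return ''.join('%s %s\n' % (k, v) for k, v in items)

    print(makepointer(b'this is a big lfs file\n'))
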
tests/test-lfs-serve.t
@@ -1,284 +1,283
1 1 #testcases lfsremote-on lfsremote-off
2 2 #require serve no-reposimplestore
3 3
4 4 This test splits `hg serve` with and without using the extension into separate
5 5 test cases. The tests are broken down as follows, where "LFS"/"No-LFS"
6 6 indicates whether or not there are commits that use an LFS file, and "D"/"E"
7 7 indicates whether or not the extension is loaded. The "X" cases are not tested
8 8 individually, because the lfs requirement causes the process to bail early if
9 9 the extension is disabled.
10 10
11 11 . Server
12 12 .
13 13 . No-LFS LFS
14 14 . +----------------------------+
15 15 . | || D | E | D | E |
16 16 . |---++=======================|
17 17 . C | D || N/A | #1 | X | #4 |
18 18 . l No +---++-----------------------|
19 19 . i LFS | E || #2 | #2 | X | #5 |
20 20 . e +---++-----------------------|
21 21 . n | D || X | X | X | X |
22 22 . t LFS |---++-----------------------|
23 23 . | E || #3 | #3 | X | #6 |
24 24 . |---++-----------------------+
25 25
26 26 $ hg init server
27 27 $ SERVER_REQUIRES="$TESTTMP/server/.hg/requires"
28 28
29 29 Skip the experimental.changegroup3=True config. Failure to agree on this comes
30 30 first, and causes a "ValueError: no common changegroup version" or "abort:
31 31 HTTP Error 500: Internal Server Error", if the extension is only loaded on one
32 32 side. If that *is* enabled, the subsequent failure is "abort: missing processor
33 33 for flag '0x2000'!" if the extension is only loaded on one side (possibly also
34 34 masked by the Internal Server Error message).
35 35 $ cat >> $HGRCPATH <<EOF
36 36 > [lfs]
37 > url=file:$TESTTMP/dummy-remote/
38 37 > usercache = null://
39 38 > threshold=10
40 39 > [web]
41 40 > allow_push=*
42 41 > push_ssl=False
43 42 > EOF
44 43
45 44 #if lfsremote-on
46 45 $ hg --config extensions.lfs= -R server \
47 46 > serve -p $HGPORT -d --pid-file=hg.pid --errorlog=$TESTTMP/errors.log
48 47 #else
49 48 $ hg --config extensions.lfs=! -R server \
50 49 > serve -p $HGPORT -d --pid-file=hg.pid --errorlog=$TESTTMP/errors.log
51 50 #endif
52 51
53 52 $ cat hg.pid >> $DAEMON_PIDS
54 53 $ hg clone -q http://localhost:$HGPORT client
55 54 $ grep 'lfs' client/.hg/requires $SERVER_REQUIRES
56 55 [1]
57 56
58 57 --------------------------------------------------------------------------------
59 58 Case #1: client with non-lfs content and the extension disabled; server with
60 59 non-lfs content, and the extension enabled.
61 60
62 61 $ cd client
63 62 $ echo 'non-lfs' > nonlfs.txt
64 63 $ hg ci -Aqm 'non-lfs'
65 64 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
66 65 [1]
67 66
68 67 #if lfsremote-on
69 68
70 69 $ hg push -q
71 70 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
72 71 [1]
73 72
74 73 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client1_clone
75 74 $ grep 'lfs' $TESTTMP/client1_clone/.hg/requires $SERVER_REQUIRES
76 75 [1]
77 76
78 77 $ hg init $TESTTMP/client1_pull
79 78 $ hg -R $TESTTMP/client1_pull pull -q http://localhost:$HGPORT
80 79 $ grep 'lfs' $TESTTMP/client1_pull/.hg/requires $SERVER_REQUIRES
81 80 [1]
82 81
83 82 $ hg identify http://localhost:$HGPORT
84 83 d437e1d24fbd
85 84
86 85 #endif
87 86
88 87 --------------------------------------------------------------------------------
89 88 Case #2: client with non-lfs content and the extension enabled; server with
90 89 non-lfs content, and the extension state controlled by #testcases.
91 90
92 91 $ cat >> $HGRCPATH <<EOF
93 92 > [extensions]
94 93 > lfs =
95 94 > EOF
96 95 $ echo 'non-lfs' > nonlfs2.txt
97 96 $ hg ci -Aqm 'non-lfs file with lfs client'
98 97
99 98 Since no lfs content has been added yet, the push is allowed, even when the
100 99 extension is not enabled remotely.
101 100
102 101 $ hg push -q
103 102 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
104 103 [1]
105 104
106 105 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client2_clone
107 106 $ grep 'lfs' $TESTTMP/client2_clone/.hg/requires $SERVER_REQUIRES
108 107 [1]
109 108
110 109 $ hg init $TESTTMP/client2_pull
111 110 $ hg -R $TESTTMP/client2_pull pull -q http://localhost:$HGPORT
112 111 $ grep 'lfs' $TESTTMP/client2_pull/.hg/requires $SERVER_REQUIRES
113 112 [1]
114 113
115 114 $ hg identify http://localhost:$HGPORT
116 115 1477875038c6
117 116
118 117 --------------------------------------------------------------------------------
119 118 Case #3: client with lfs content and the extension enabled; server with
120 119 non-lfs content, and the extension state controlled by #testcases. The server
121 120 should have an 'lfs' requirement after it picks up its first commit with a blob.
122 121
123 122 $ echo 'this is a big lfs file' > lfs.bin
124 123 $ hg ci -Aqm 'lfs'
125 124 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
126 125 .hg/requires:lfs
127 126
128 127 #if lfsremote-off
129 128 $ hg push -q
130 129 abort: required features are not supported in the destination: lfs
131 130 (enable the lfs extension on the server)
132 131 [255]
133 132 #else
134 133 $ hg push -q
135 134 #endif
136 135 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
137 136 .hg/requires:lfs
138 137 $TESTTMP/server/.hg/requires:lfs (lfsremote-on !)
139 138
140 139 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client3_clone
141 140 $ grep 'lfs' $TESTTMP/client3_clone/.hg/requires $SERVER_REQUIRES || true
142 141 $TESTTMP/client3_clone/.hg/requires:lfs (lfsremote-on !)
143 142 $TESTTMP/server/.hg/requires:lfs (lfsremote-on !)
144 143
145 144 $ hg init $TESTTMP/client3_pull
146 145 $ hg -R $TESTTMP/client3_pull pull -q http://localhost:$HGPORT
147 146 $ grep 'lfs' $TESTTMP/client3_pull/.hg/requires $SERVER_REQUIRES || true
148 147 $TESTTMP/client3_pull/.hg/requires:lfs (lfsremote-on !)
149 148 $TESTTMP/server/.hg/requires:lfs (lfsremote-on !)
150 149
151 150 The difference here is the push failed above when the extension isn't
152 151 enabled on the server.
153 152 $ hg identify http://localhost:$HGPORT
154 153 8374dc4052cb (lfsremote-on !)
155 154 1477875038c6 (lfsremote-off !)
156 155
157 156 Don't bother testing the lfsremote-off cases; the server won't be able
158 157 to launch if there's lfs content and the extension is disabled.
159 158
160 159 #if lfsremote-on
161 160
162 161 --------------------------------------------------------------------------------
163 162 Case #4: client with non-lfs content and the extension disabled; server with
164 163 lfs content, and the extension enabled.
165 164
166 165 $ cat >> $HGRCPATH <<EOF
167 166 > [extensions]
168 167 > lfs = !
169 168 > EOF
170 169
171 170 $ hg init $TESTTMP/client4
172 171 $ cd $TESTTMP/client4
173 172 $ cat >> .hg/hgrc <<EOF
174 173 > [paths]
175 174 > default = http://localhost:$HGPORT
176 175 > EOF
177 176 $ echo 'non-lfs' > nonlfs2.txt
178 177 $ hg ci -Aqm 'non-lfs'
179 178 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
180 179 $TESTTMP/server/.hg/requires:lfs
181 180
182 181 $ hg push -q --force
183 182 warning: repository is unrelated
184 183 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
185 184 $TESTTMP/server/.hg/requires:lfs
186 185
187 186 TODO: fail more gracefully.
188 187
189 188 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client4_clone
190 189 abort: HTTP Error 500: Internal Server Error
191 190 [255]
192 191 $ grep 'lfs' $TESTTMP/client4_clone/.hg/requires $SERVER_REQUIRES
193 192 grep: $TESTTMP/client4_clone/.hg/requires: $ENOENT$
194 193 $TESTTMP/server/.hg/requires:lfs
195 194 [2]
196 195
197 196 TODO: fail more gracefully.
198 197
199 198 $ hg init $TESTTMP/client4_pull
200 199 $ hg -R $TESTTMP/client4_pull pull -q http://localhost:$HGPORT
201 200 abort: HTTP Error 500: Internal Server Error
202 201 [255]
203 202 $ grep 'lfs' $TESTTMP/client4_pull/.hg/requires $SERVER_REQUIRES
204 203 $TESTTMP/server/.hg/requires:lfs
205 204
206 205 $ hg identify http://localhost:$HGPORT
207 206 03b080fa9d93
208 207
209 208 --------------------------------------------------------------------------------
210 209 Case #5: client with non-lfs content and the extension enabled; server with
211 210 lfs content, and the extension enabled.
212 211
213 212 $ cat >> $HGRCPATH <<EOF
214 213 > [extensions]
215 214 > lfs =
216 215 > EOF
217 216 $ echo 'non-lfs' > nonlfs3.txt
218 217 $ hg ci -Aqm 'non-lfs file with lfs client'
219 218
220 219 $ hg push -q
221 220 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
222 221 $TESTTMP/server/.hg/requires:lfs
223 222
224 223 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client5_clone
225 224 $ grep 'lfs' $TESTTMP/client5_clone/.hg/requires $SERVER_REQUIRES
226 225 $TESTTMP/client5_clone/.hg/requires:lfs
227 226 $TESTTMP/server/.hg/requires:lfs
228 227
229 228 $ hg init $TESTTMP/client5_pull
230 229 $ hg -R $TESTTMP/client5_pull pull -q http://localhost:$HGPORT
231 230 $ grep 'lfs' $TESTTMP/client5_pull/.hg/requires $SERVER_REQUIRES
232 231 $TESTTMP/client5_pull/.hg/requires:lfs
233 232 $TESTTMP/server/.hg/requires:lfs
234 233
235 234 $ hg identify http://localhost:$HGPORT
236 235 c729025cc5e3
237 236
238 237 --------------------------------------------------------------------------------
239 238 Case #6: client with lfs content and the extension enabled; server with
240 239 lfs content, and the extension enabled.
241 240
242 241 $ echo 'this is another lfs file' > lfs2.txt
243 242 $ hg ci -Aqm 'lfs file with lfs client'
244 243
245 244 $ hg push -q
246 245 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
247 246 .hg/requires:lfs
248 247 $TESTTMP/server/.hg/requires:lfs
249 248
250 249 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client6_clone
251 250 $ grep 'lfs' $TESTTMP/client6_clone/.hg/requires $SERVER_REQUIRES
252 251 $TESTTMP/client6_clone/.hg/requires:lfs
253 252 $TESTTMP/server/.hg/requires:lfs
254 253
255 254 $ hg init $TESTTMP/client6_pull
256 255 $ hg -R $TESTTMP/client6_pull pull -q http://localhost:$HGPORT
257 256 $ grep 'lfs' $TESTTMP/client6_pull/.hg/requires $SERVER_REQUIRES
258 257 $TESTTMP/client6_pull/.hg/requires:lfs
259 258 $TESTTMP/server/.hg/requires:lfs
260 259
261 260 $ hg identify http://localhost:$HGPORT
262 261 d3b84d50eacb
263 262
264 263 --------------------------------------------------------------------------------
265 264 Misc: process dies early if a requirement exists and the extension is disabled
266 265
267 266 $ hg --config extensions.lfs=! summary
268 267 abort: repository requires features unknown to this Mercurial: lfs!
269 268 (see https://mercurial-scm.org/wiki/MissingRequirement for more information)
270 269 [255]
271 270
272 271 #endif
273 272
274 273 $ $PYTHON $TESTDIR/killdaemons.py $DAEMON_PIDS
275 274
276 275 #if lfsremote-on
277 276 $ cat $TESTTMP/errors.log | grep '^[A-Z]'
278 277 Traceback (most recent call last):
279 278 ValueError: no common changegroup version
280 279 Traceback (most recent call last):
281 280 ValueError: no common changegroup version
282 281 #else
283 282 $ cat $TESTTMP/errors.log
284 283 #endif
@@ -1,925 +1,933
1 1 #require no-reposimplestore
2 2 #testcases git-server hg-server
3 3
4 4 #if git-server
5 5 #require lfs-test-server
6 6 #else
7 7 #require serve
8 8 #endif
9 9
10 10 #if git-server
11 11 $ LFS_LISTEN="tcp://:$HGPORT"
12 12 $ LFS_HOST="localhost:$HGPORT"
13 13 $ LFS_PUBLIC=1
14 14 $ export LFS_LISTEN LFS_HOST LFS_PUBLIC
15 15 #else
16 16 $ LFS_HOST="localhost:$HGPORT/.git/info/lfs"
17 17 #endif
18 18
19 19 #if no-windows git-server
20 20 $ lfs-test-server &> lfs-server.log &
21 21 $ echo $! >> $DAEMON_PIDS
22 22 #endif
23 23
24 24 #if windows git-server
25 25 $ cat >> $TESTTMP/spawn.py <<EOF
26 26 > import os
27 27 > import subprocess
28 28 > import sys
29 29 >
30 30 > for path in os.environ["PATH"].split(os.pathsep):
31 31 > exe = os.path.join(path, 'lfs-test-server.exe')
32 32 > if os.path.exists(exe):
33 33 > with open('lfs-server.log', 'wb') as out:
34 34 > p = subprocess.Popen(exe, stdout=out, stderr=out)
35 35 > sys.stdout.write('%s\n' % p.pid)
36 36 > sys.exit(0)
37 37 > sys.exit(1)
38 38 > EOF
39 39 $ $PYTHON $TESTTMP/spawn.py >> $DAEMON_PIDS
40 40 #endif
41 41
42 42 $ cat >> $HGRCPATH <<EOF
43 43 > [extensions]
44 44 > lfs=
45 45 > [lfs]
46 46 > url=http://foo:bar@$LFS_HOST
47 47 > track=all()
48 48 > [web]
49 49 > push_ssl = False
50 50 > allow-push = *
51 51 > EOF
52 52
53 53 Use a separate usercache; otherwise the server sees what the client commits and
54 54 never requests a transfer.
55 55
56 56 #if hg-server
57 57 $ hg init server
58 58 $ hg --config "lfs.usercache=$TESTTMP/servercache" -R server serve -d \
59 59 > -p $HGPORT --pid-file=hg.pid -A $TESTTMP/access.log -E $TESTTMP/errors.log
60 60 $ cat hg.pid >> $DAEMON_PIDS
61 61 #endif
62 62
63 63 $ hg init repo1
64 64 $ cd repo1
65 65 $ echo THIS-IS-LFS > a
66 66 $ hg commit -m a -A a
67 67
68 68 A push can be serviced directly from the usercache if the blob isn't in the
69 69 local store.
70 70
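Roughly, blob lookup tries the repo-local store first and falls back to the
shared usercache, as in this sketch. Paths follow the objects/<two hex
chars>/<rest> layout that appears later in this test; copying stands in for
the hardlinking real code can do.

    import os, shutil

    def findblob(localstore, usercache, oid):
        # Try .hg/store/lfs/objects/<xx>/<rest> first, then the usercache.
        rel = os.path.join(oid[:2], oid[2:])
        local = os.path.join(localstore, rel)
        if os.path.exists(local):
            return local
        cached = os.path.join(usercache, rel)
        if os.path.exists(cached):
            # Link the cached blob into the local store so the operation
            # can read it without a download.
            os.makedirs(os.path.dirname(local), exist_ok=True)
            shutil.copy(cached, local)
            return local
        return None  # not present anywhere; must be fetched remotely
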
71 71 $ hg init ../repo2
72 72 $ mv .hg/store/lfs .hg/store/lfs_
73 73 $ hg push ../repo2 --debug
74 74 http auth: user foo, password ***
75 75 pushing to ../repo2
76 76 http auth: user foo, password ***
77 77 query 1; heads
78 78 searching for changes
79 79 1 total queries in *s (glob)
80 80 listing keys for "phases"
81 81 checking for updated bookmarks
82 82 listing keys for "bookmarks"
83 83 lfs: computing set of blobs to upload
84 84 Status: 200
85 85 Content-Length: 309 (git-server !)
86 86 Content-Length: 350 (hg-server !)
87 87 Content-Type: application/vnd.git-lfs+json
88 88 Date: $HTTP_DATE$
89 89 Server: testing stub value (hg-server !)
90 90 {
91 91 "objects": [
92 92 {
93 93 "actions": {
94 94 "upload": {
95 95 "expires_at": "$ISO_8601_DATE_TIME$"
96 96 "header": {
97 97 "Accept": "application/vnd.git-lfs"
98 98 }
99 99 "href": "http://localhost:$HGPORT/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (git-server !)
100 100 "href": "http://localhost:$HGPORT/.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (hg-server !)
101 101 }
102 102 }
103 103 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
104 104 "size": 12
105 105 }
106 106 ]
107 107 "transfer": "basic" (hg-server !)
108 108 }
109 109 lfs: uploading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
110 110 Status: 200 (git-server !)
111 111 Status: 201 (hg-server !)
112 112 Content-Length: 0
113 113 Content-Type: text/plain; charset=utf-8
114 114 Date: $HTTP_DATE$
115 115 Server: testing stub value (hg-server !)
116 116 lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
117 117 lfs: uploaded 1 files (12 bytes)
118 118 1 changesets found
119 119 list of changesets:
120 120 99a7098854a3984a5c9eab0fc7a2906697b7cb5c
121 121 bundle2-output-bundle: "HG20", 4 parts total
122 122 bundle2-output-part: "replycaps" * bytes payload (glob)
123 123 bundle2-output-part: "check:heads" streamed payload
124 124 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
125 125 bundle2-output-part: "phase-heads" 24 bytes payload
126 126 bundle2-input-bundle: with-transaction
127 127 bundle2-input-part: "replycaps" supported
128 128 bundle2-input-part: total payload size * (glob)
129 129 bundle2-input-part: "check:heads" supported
130 130 bundle2-input-part: total payload size 20
131 131 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
132 132 adding changesets
133 133 add changeset 99a7098854a3
134 134 adding manifests
135 135 adding file changes
136 136 adding a revisions
137 137 added 1 changesets with 1 changes to 1 files
138 138 calling hook pretxnchangegroup.lfs: hgext.lfs.checkrequireslfs
139 139 bundle2-input-part: total payload size 617
140 140 bundle2-input-part: "phase-heads" supported
141 141 bundle2-input-part: total payload size 24
142 142 bundle2-input-bundle: 3 parts total
143 143 updating the branch cache
144 144 bundle2-output-bundle: "HG20", 1 parts total
145 145 bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
146 146 bundle2-input-bundle: no-transaction
147 147 bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported
148 148 bundle2-input-bundle: 0 parts total
149 149 listing keys for "phases"
150 150 $ mv .hg/store/lfs_ .hg/store/lfs
151 151
152 152 Clear the cache to force a download
153 153 $ rm -rf `hg config lfs.usercache`
154 154 $ cd ../repo2
155 155 $ hg update tip --debug
156 156 http auth: user foo, password ***
157 157 resolving manifests
158 158 branchmerge: False, force: False, partial: False
159 159 ancestor: 000000000000, local: 000000000000+, remote: 99a7098854a3
160 http auth: user foo, password ***
160 161 Status: 200
161 162 Content-Length: 311 (git-server !)
162 163 Content-Length: 352 (hg-server !)
163 164 Content-Type: application/vnd.git-lfs+json
164 165 Date: $HTTP_DATE$
165 166 Server: testing stub value (hg-server !)
166 167 {
167 168 "objects": [
168 169 {
169 170 "actions": {
170 171 "download": {
171 172 "expires_at": "$ISO_8601_DATE_TIME$"
172 173 "header": {
173 174 "Accept": "application/vnd.git-lfs"
174 175 }
175 176 "href": "http://localhost:$HGPORT/*/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (glob)
176 177 }
177 178 }
178 179 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
179 180 "size": 12
180 181 }
181 182 ]
182 183 "transfer": "basic" (hg-server !)
183 184 }
184 185 lfs: downloading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
185 186 Status: 200
186 187 Content-Length: 12
187 188 Content-Type: text/plain; charset=utf-8 (git-server !)
188 189 Content-Type: application/octet-stream (hg-server !)
189 190 Date: $HTTP_DATE$
190 191 Server: testing stub value (hg-server !)
191 192 lfs: adding 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b to the usercache
192 193 lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
193 194 a: remote created -> g
194 195 getting a
195 196 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
196 197 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
197 198
198 199 When the server already has some blobs, `hg serve` doesn't offer to upload
199 200 blobs that it already knows about. Note that lfs-test-server simply toggles
200 201 the action to 'download'. The Batch API spec says it should omit the actions
201 202 property completely.
202 203
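The distinction can be sketched as follows, with the per-object response built
as a Python dict for brevity; the function name and parameters are invented
for the sketch.

    def batchresponseobject(oid, size, serverhasblob, uploadurl):
        # Spec-compliant behaviour: omit "actions" entirely when the
        # server already has the blob, so the client skips the upload.
        # lfs-test-server instead answers with a "download" action.
        obj = {"oid": oid, "size": size}
        if not serverhasblob:
            obj["actions"] = {"upload": {"href": uploadurl}}
        return obj
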
203 204 $ hg mv a b
204 205 $ echo ANOTHER-LARGE-FILE > c
205 206 $ echo ANOTHER-LARGE-FILE2 > d
206 207 $ hg commit -m b-and-c -A b c d
207 208 $ hg push ../repo1 --debug
208 209 http auth: user foo, password ***
209 210 pushing to ../repo1
210 211 http auth: user foo, password ***
211 212 query 1; heads
212 213 searching for changes
213 214 all remote heads known locally
214 215 listing keys for "phases"
215 216 checking for updated bookmarks
216 217 listing keys for "bookmarks"
217 218 listing keys for "bookmarks"
218 219 lfs: computing set of blobs to upload
219 220 Status: 200
220 221 Content-Length: 901 (git-server !)
221 222 Content-Length: 755 (hg-server !)
222 223 Content-Type: application/vnd.git-lfs+json
223 224 Date: $HTTP_DATE$
224 225 Server: testing stub value (hg-server !)
225 226 {
226 227 "objects": [
227 228 {
228 229 "actions": { (git-server !)
229 230 "download": { (git-server !)
230 231 "expires_at": "$ISO_8601_DATE_TIME$" (git-server !)
231 232 "header": { (git-server !)
232 233 "Accept": "application/vnd.git-lfs" (git-server !)
233 234 } (git-server !)
234 235 "href": "http://localhost:$HGPORT/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (git-server !)
235 236 } (git-server !)
236 237 } (git-server !)
237 238 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
238 239 "size": 12
239 240 }
240 241 {
241 242 "actions": {
242 243 "upload": {
243 244 "expires_at": "$ISO_8601_DATE_TIME$"
244 245 "header": {
245 246 "Accept": "application/vnd.git-lfs"
246 247 }
247 248 "href": "http://localhost:$HGPORT/*/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19" (glob)
248 249 }
249 250 }
250 251 "oid": "37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19"
251 252 "size": 20
252 253 }
253 254 {
254 255 "actions": {
255 256 "upload": {
256 257 "expires_at": "$ISO_8601_DATE_TIME$"
257 258 "header": {
258 259 "Accept": "application/vnd.git-lfs"
259 260 }
260 261 "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
261 262 }
262 263 }
263 264 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
264 265 "size": 19
265 266 }
266 267 ]
267 268 "transfer": "basic" (hg-server !)
268 269 }
269 270 lfs: need to transfer 2 objects (39 bytes)
270 271 lfs: uploading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
271 272 Status: 200 (git-server !)
272 273 Status: 201 (hg-server !)
273 274 Content-Length: 0
274 275 Content-Type: text/plain; charset=utf-8
275 276 Date: $HTTP_DATE$
276 277 Server: testing stub value (hg-server !)
277 278 lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
278 279 lfs: uploading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
279 280 Status: 200 (git-server !)
280 281 Status: 201 (hg-server !)
281 282 Content-Length: 0
282 283 Content-Type: text/plain; charset=utf-8
283 284 Date: $HTTP_DATE$
284 285 Server: testing stub value (hg-server !)
285 286 lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
286 287 lfs: uploaded 2 files (39 bytes)
287 288 1 changesets found
288 289 list of changesets:
289 290 dfca2c9e2ef24996aa61ba2abd99277d884b3d63
290 291 bundle2-output-bundle: "HG20", 5 parts total
291 292 bundle2-output-part: "replycaps" * bytes payload (glob)
292 293 bundle2-output-part: "check:phases" 24 bytes payload
293 294 bundle2-output-part: "check:heads" streamed payload
294 295 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
295 296 bundle2-output-part: "phase-heads" 24 bytes payload
296 297 bundle2-input-bundle: with-transaction
297 298 bundle2-input-part: "replycaps" supported
298 299 bundle2-input-part: total payload size * (glob)
299 300 bundle2-input-part: "check:phases" supported
300 301 bundle2-input-part: total payload size 24
301 302 bundle2-input-part: "check:heads" supported
302 303 bundle2-input-part: total payload size 20
303 304 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
304 305 adding changesets
305 306 add changeset dfca2c9e2ef2
306 307 adding manifests
307 308 adding file changes
308 309 adding b revisions
309 310 adding c revisions
310 311 adding d revisions
311 312 added 1 changesets with 3 changes to 3 files
312 313 bundle2-input-part: total payload size 1315
313 314 bundle2-input-part: "phase-heads" supported
314 315 bundle2-input-part: total payload size 24
315 316 bundle2-input-bundle: 4 parts total
316 317 updating the branch cache
317 318 bundle2-output-bundle: "HG20", 1 parts total
318 319 bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
319 320 bundle2-input-bundle: no-transaction
320 321 bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported
321 322 bundle2-input-bundle: 0 parts total
322 323 listing keys for "phases"
323 324
324 325 Clear the cache to force a download
325 326 $ rm -rf `hg config lfs.usercache`
326 327 $ hg --repo ../repo1 update tip --debug
327 328 http auth: user foo, password ***
328 329 resolving manifests
329 330 branchmerge: False, force: False, partial: False
330 331 ancestor: 99a7098854a3, local: 99a7098854a3+, remote: dfca2c9e2ef2
332 http auth: user foo, password ***
331 333 Status: 200
332 334 Content-Length: 608 (git-server !)
333 335 Content-Length: 670 (hg-server !)
334 336 Content-Type: application/vnd.git-lfs+json
335 337 Date: $HTTP_DATE$
336 338 Server: testing stub value (hg-server !)
337 339 {
338 340 "objects": [
339 341 {
340 342 "actions": {
341 343 "download": {
342 344 "expires_at": "$ISO_8601_DATE_TIME$"
343 345 "header": {
344 346 "Accept": "application/vnd.git-lfs"
345 347 }
346 348 "href": "http://localhost:$HGPORT/*/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19" (glob)
347 349 }
348 350 }
349 351 "oid": "37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19"
350 352 "size": 20
351 353 }
352 354 {
353 355 "actions": {
354 356 "download": {
355 357 "expires_at": "$ISO_8601_DATE_TIME$"
356 358 "header": {
357 359 "Accept": "application/vnd.git-lfs"
358 360 }
359 361 "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
360 362 }
361 363 }
362 364 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
363 365 "size": 19
364 366 }
365 367 ]
366 368 "transfer": "basic" (hg-server !)
367 369 }
368 370 lfs: need to transfer 2 objects (39 bytes)
369 371 lfs: downloading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
370 372 Status: 200
371 373 Content-Length: 20
372 374 Content-Type: text/plain; charset=utf-8 (git-server !)
373 375 Content-Type: application/octet-stream (hg-server !)
374 376 Date: $HTTP_DATE$
375 377 Server: testing stub value (hg-server !)
376 378 lfs: adding 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 to the usercache
377 379 lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
378 380 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
379 381 Status: 200
380 382 Content-Length: 19
381 383 Content-Type: text/plain; charset=utf-8 (git-server !)
382 384 Content-Type: application/octet-stream (hg-server !)
383 385 Date: $HTTP_DATE$
384 386 Server: testing stub value (hg-server !)
385 387 lfs: adding d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 to the usercache
386 388 lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
387 389 b: remote created -> g
388 390 getting b
389 391 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
390 392 c: remote created -> g
391 393 getting c
392 394 lfs: found d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 in the local lfs store
393 395 d: remote created -> g
394 396 getting d
395 397 lfs: found 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 in the local lfs store
396 398 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
397 399
398 400 Test a corrupt file download, but clear the cache first to force a download.
399 401 `hg serve` indicates a corrupt file without transferring it, unlike
400 402 lfs-test-server.
401 403
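Corruption is detectable because the oid is simply the SHA-256 of the blob
content, so either side can recheck it. A minimal sketch:

    import hashlib

    def verifyblob(oid, data):
        # The oid in the pointer is the SHA-256 digest of the content;
        # recomputing it catches blobs damaged at rest or in transit.
        return hashlib.sha256(data).hexdigest() == oid
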
402 404 $ rm -rf `hg config lfs.usercache`
403 405 #if git-server
404 406 $ cp $TESTTMP/lfs-content/d1/1e/1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 blob
405 407 $ echo 'damage' > $TESTTMP/lfs-content/d1/1e/1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
406 408 #else
407 409 $ cp $TESTTMP/server/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 blob
408 410 $ echo 'damage' > $TESTTMP/server/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
409 411 #endif
410 412 $ rm ../repo1/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
411 413 $ rm ../repo1/*
412 414
413 415 TODO: give the proper error indication from `hg serve`
414 416
415 417 $ hg --repo ../repo1 update -C tip --debug
416 418 http auth: user foo, password ***
417 419 resolving manifests
418 420 branchmerge: False, force: True, partial: False
419 421 ancestor: dfca2c9e2ef2+, local: dfca2c9e2ef2+, remote: dfca2c9e2ef2
422 http auth: user foo, password ***
420 423 Status: 200
421 424 Content-Length: 311 (git-server !)
422 425 Content-Length: 183 (hg-server !)
423 426 Content-Type: application/vnd.git-lfs+json
424 427 Date: $HTTP_DATE$
425 428 Server: testing stub value (hg-server !)
426 429 {
427 430 "objects": [
428 431 {
429 432 "actions": { (git-server !)
430 433 "download": { (git-server !)
431 434 "expires_at": "$ISO_8601_DATE_TIME$" (git-server !)
432 435 "header": { (git-server !)
433 436 "Accept": "application/vnd.git-lfs" (git-server !)
434 437 } (git-server !)
435 438 "href": "http://localhost:$HGPORT/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (git-server !)
436 439 } (git-server !)
437 440 "error": { (hg-server !)
438 441 "code": 422 (hg-server !)
439 442 "message": "The object is corrupt" (hg-server !)
440 443 }
441 444 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
442 445 "size": 19
443 446 }
444 447 ]
445 448 "transfer": "basic" (hg-server !)
446 449 }
447 450 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes) (git-server !)
448 451 Status: 200 (git-server !)
449 452 Content-Length: 7 (git-server !)
450 453 Content-Type: text/plain; charset=utf-8 (git-server !)
451 454 Date: $HTTP_DATE$ (git-server !)
452 455 abort: corrupt remote lfs object: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (git-server !)
453 456 abort: LFS server error for "c": Validation error! (hg-server !)
454 457 [255]
455 458
456 459 The corrupted blob is not added to the usercache or local store
457 460
458 461 $ test -f ../repo1/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
459 462 [1]
460 463 $ test -f `hg config lfs.usercache`/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
461 464 [1]
462 465 #if git-server
463 466 $ cp blob $TESTTMP/lfs-content/d1/1e/1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
464 467 #else
465 468 $ cp blob $TESTTMP/server/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
466 469 #endif
467 470
468 471 Test a corrupted file upload
469 472
470 473 $ echo 'another lfs blob' > b
471 474 $ hg ci -m 'another blob'
472 475 $ echo 'damage' > .hg/store/lfs/objects/e6/59058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0
473 476 $ hg push --debug ../repo1
474 477 http auth: user foo, password ***
475 478 pushing to ../repo1
476 479 http auth: user foo, password ***
477 480 query 1; heads
478 481 searching for changes
479 482 all remote heads known locally
480 483 listing keys for "phases"
481 484 checking for updated bookmarks
482 485 listing keys for "bookmarks"
483 486 listing keys for "bookmarks"
484 487 lfs: computing set of blobs to upload
485 488 Status: 200
486 489 Content-Length: 309 (git-server !)
487 490 Content-Length: 350 (hg-server !)
488 491 Content-Type: application/vnd.git-lfs+json
489 492 Date: $HTTP_DATE$
490 493 Server: testing stub value (hg-server !)
491 494 {
492 495 "objects": [
493 496 {
494 497 "actions": {
495 498 "upload": {
496 499 "expires_at": "$ISO_8601_DATE_TIME$"
497 500 "header": {
498 501 "Accept": "application/vnd.git-lfs"
499 502 }
500 503 "href": "http://localhost:$HGPORT/*/e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0" (glob)
501 504 }
502 505 }
503 506 "oid": "e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0"
504 507 "size": 17
505 508 }
506 509 ]
507 510 "transfer": "basic" (hg-server !)
508 511 }
509 512 lfs: uploading e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0 (17 bytes)
510 513 abort: detected corrupt lfs object: e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0
511 514 (run hg verify)
512 515 [255]
513 516
514 517 Archive will prefetch blobs in a group
515 518
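Prefetching here means collecting every pointer the operation will touch and
issuing a single Batch API request for the lot, rather than one round trip per
file. A sketch with made-up helper names (store.has and remote.download are
hypothetical):

    def prefetch(pointers, store, remote):
        # Filter out blobs already present, then download the remainder
        # in one batch instead of one request per file.
        missing = [p for p in pointers if not store.has(p['oid'])]
        if missing:
            remote.download(missing)  # one Batch API round trip
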
516 519 $ rm -rf .hg/store/lfs `hg config lfs.usercache`
517 520 $ hg archive --debug -r 1 ../archive
518 521 http auth: user foo, password ***
522 http auth: user foo, password ***
519 523 Status: 200
520 524 Content-Length: 905 (git-server !)
521 525 Content-Length: 988 (hg-server !)
522 526 Content-Type: application/vnd.git-lfs+json
523 527 Date: $HTTP_DATE$
524 528 Server: testing stub value (hg-server !)
525 529 {
526 530 "objects": [
527 531 {
528 532 "actions": {
529 533 "download": {
530 534 "expires_at": "$ISO_8601_DATE_TIME$"
531 535 "header": {
532 536 "Accept": "application/vnd.git-lfs"
533 537 }
534 538 "href": "http://localhost:$HGPORT/*/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (glob)
535 539 }
536 540 }
537 541 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
538 542 "size": 12
539 543 }
540 544 {
541 545 "actions": {
542 546 "download": {
543 547 "expires_at": "$ISO_8601_DATE_TIME$"
544 548 "header": {
545 549 "Accept": "application/vnd.git-lfs"
546 550 }
547 551 "href": "http://localhost:$HGPORT/*/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19" (glob)
548 552 }
549 553 }
550 554 "oid": "37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19"
551 555 "size": 20
552 556 }
553 557 {
554 558 "actions": {
555 559 "download": {
556 560 "expires_at": "$ISO_8601_DATE_TIME$"
557 561 "header": {
558 562 "Accept": "application/vnd.git-lfs"
559 563 }
560 564 "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
561 565 }
562 566 }
563 567 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
564 568 "size": 19
565 569 }
566 570 ]
567 571 "transfer": "basic" (hg-server !)
568 572 }
569 573 lfs: need to transfer 3 objects (51 bytes)
570 574 lfs: downloading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
571 575 Status: 200
572 576 Content-Length: 12
573 577 Content-Type: text/plain; charset=utf-8 (git-server !)
574 578 Content-Type: application/octet-stream (hg-server !)
575 579 Date: $HTTP_DATE$
576 580 Server: testing stub value (hg-server !)
577 581 lfs: adding 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b to the usercache
578 582 lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
579 583 lfs: downloading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
580 584 Status: 200
581 585 Content-Length: 20
582 586 Content-Type: text/plain; charset=utf-8 (git-server !)
583 587 Content-Type: application/octet-stream (hg-server !)
584 588 Date: $HTTP_DATE$
585 589 Server: testing stub value (hg-server !)
586 590 lfs: adding 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 to the usercache
587 591 lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
588 592 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
589 593 Status: 200
590 594 Content-Length: 19
591 595 Content-Type: text/plain; charset=utf-8 (git-server !)
592 596 Content-Type: application/octet-stream (hg-server !)
593 597 Date: $HTTP_DATE$
594 598 Server: testing stub value (hg-server !)
595 599 lfs: adding d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 to the usercache
596 600 lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
597 601 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
598 602 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
599 603 lfs: found d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 in the local lfs store
600 604 lfs: found 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 in the local lfs store
601 605 $ find ../archive | sort
602 606 ../archive
603 607 ../archive/.hg_archival.txt
604 608 ../archive/a
605 609 ../archive/b
606 610 ../archive/c
607 611 ../archive/d
608 612
609 613 Cat will prefetch blobs in a group
610 614
611 615 $ rm -rf .hg/store/lfs `hg config lfs.usercache`
612 616 $ hg cat --debug -r 1 a b c
613 617 http auth: user foo, password ***
618 http auth: user foo, password ***
614 619 Status: 200
615 620 Content-Length: 608 (git-server !)
616 621 Content-Length: 670 (hg-server !)
617 622 Content-Type: application/vnd.git-lfs+json
618 623 Date: $HTTP_DATE$
619 624 Server: testing stub value (hg-server !)
620 625 {
621 626 "objects": [
622 627 {
623 628 "actions": {
624 629 "download": {
625 630 "expires_at": "$ISO_8601_DATE_TIME$"
626 631 "header": {
627 632 "Accept": "application/vnd.git-lfs"
628 633 }
629 634 "href": "http://localhost:$HGPORT/*/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (glob)
630 635 }
631 636 }
632 637 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
633 638 "size": 12
634 639 }
635 640 {
636 641 "actions": {
637 642 "download": {
638 643 "expires_at": "$ISO_8601_DATE_TIME$"
639 644 "header": {
640 645 "Accept": "application/vnd.git-lfs"
641 646 }
642 647 "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
643 648 }
644 649 }
645 650 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
646 651 "size": 19
647 652 }
648 653 ]
649 654 "transfer": "basic" (hg-server !)
650 655 }
651 656 lfs: need to transfer 2 objects (31 bytes)
652 657 lfs: downloading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
653 658 Status: 200
654 659 Content-Length: 12
655 660 Content-Type: text/plain; charset=utf-8 (git-server !)
656 661 Content-Type: application/octet-stream (hg-server !)
657 662 Date: $HTTP_DATE$
658 663 Server: testing stub value (hg-server !)
659 664 lfs: adding 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b to the usercache
660 665 lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
661 666 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
662 667 Status: 200
663 668 Content-Length: 19
664 669 Content-Type: text/plain; charset=utf-8 (git-server !)
665 670 Content-Type: application/octet-stream (hg-server !)
666 671 Date: $HTTP_DATE$
667 672 Server: testing stub value (hg-server !)
668 673 lfs: adding d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 to the usercache
669 674 lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
670 675 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
671 676 THIS-IS-LFS
672 677 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
673 678 THIS-IS-LFS
674 679 lfs: found d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 in the local lfs store
675 680 ANOTHER-LARGE-FILE
676 681
677 682 Revert will prefetch blobs in a group
678 683
679 684 $ rm -rf .hg/store/lfs
680 685 $ rm -rf `hg config lfs.usercache`
681 686 $ rm *
682 687 $ hg revert --all -r 1 --debug
683 688 http auth: user foo, password ***
684 689 adding a
685 690 reverting b
686 691 reverting c
687 692 reverting d
693 http auth: user foo, password ***
688 694 Status: 200
689 695 Content-Length: 905 (git-server !)
690 696 Content-Length: 988 (hg-server !)
691 697 Content-Type: application/vnd.git-lfs+json
692 698 Date: $HTTP_DATE$
693 699 Server: testing stub value (hg-server !)
694 700 {
695 701 "objects": [
696 702 {
697 703 "actions": {
698 704 "download": {
699 705 "expires_at": "$ISO_8601_DATE_TIME$"
700 706 "header": {
701 707 "Accept": "application/vnd.git-lfs"
702 708 }
703 709 "href": "http://localhost:$HGPORT/*/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (glob)
704 710 }
705 711 }
706 712 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
707 713 "size": 12
708 714 }
709 715 {
710 716 "actions": {
711 717 "download": {
712 718 "expires_at": "$ISO_8601_DATE_TIME$"
713 719 "header": {
714 720 "Accept": "application/vnd.git-lfs"
715 721 }
716 722 "href": "http://localhost:$HGPORT/*/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19" (glob)
717 723 }
718 724 }
719 725 "oid": "37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19"
720 726 "size": 20
721 727 }
722 728 {
723 729 "actions": {
724 730 "download": {
725 731 "expires_at": "$ISO_8601_DATE_TIME$"
726 732 "header": {
727 733 "Accept": "application/vnd.git-lfs"
728 734 }
729 735 "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
730 736 }
731 737 }
732 738 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
733 739 "size": 19
734 740 }
735 741 ]
736 742 "transfer": "basic" (hg-server !)
737 743 }
738 744 lfs: need to transfer 3 objects (51 bytes)
739 745 lfs: downloading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
740 746 Status: 200
741 747 Content-Length: 12
742 748 Content-Type: text/plain; charset=utf-8 (git-server !)
743 749 Content-Type: application/octet-stream (hg-server !)
744 750 Date: $HTTP_DATE$
745 751 Server: testing stub value (hg-server !)
746 752 lfs: adding 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b to the usercache
747 753 lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
748 754 lfs: downloading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
749 755 Status: 200
750 756 Content-Length: 20
751 757 Content-Type: text/plain; charset=utf-8 (git-server !)
752 758 Content-Type: application/octet-stream (hg-server !)
753 759 Date: $HTTP_DATE$
754 760 Server: testing stub value (hg-server !)
755 761 lfs: adding 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 to the usercache
756 762 lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
757 763 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
758 764 Status: 200
759 765 Content-Length: 19
760 766 Content-Type: text/plain; charset=utf-8 (git-server !)
761 767 Content-Type: application/octet-stream (hg-server !)
762 768 Date: $HTTP_DATE$
763 769 Server: testing stub value (hg-server !)
764 770 lfs: adding d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 to the usercache
765 771 lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
766 772 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
767 773 lfs: found d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 in the local lfs store
768 774 lfs: found 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 in the local lfs store
769 775 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
770 776
771 777 Check the error message when the remote is missing a blob:
772 778
773 779 $ echo FFFFF > b
774 780 $ hg commit -m b -A b
775 781 $ echo FFFFF >> b
776 782 $ hg commit -m b b
777 783 $ rm -rf .hg/store/lfs
778 784 $ rm -rf `hg config lfs.usercache`
779 785 $ hg update -C '.^' --debug
780 786 http auth: user foo, password ***
781 787 resolving manifests
782 788 branchmerge: False, force: True, partial: False
783 789 ancestor: 62fdbaf221c6+, local: 62fdbaf221c6+, remote: ef0564edf47e
790 http auth: user foo, password ***
784 791 Status: 200
785 792 Content-Length: 308 (git-server !)
786 793 Content-Length: 186 (hg-server !)
787 794 Content-Type: application/vnd.git-lfs+json
788 795 Date: $HTTP_DATE$
789 796 Server: testing stub value (hg-server !)
790 797 {
791 798 "objects": [
792 799 {
793 800 "actions": { (git-server !)
794 801 "upload": { (git-server !)
795 802 "expires_at": "$ISO_8601_DATE_TIME$" (git-server !)
796 803 "header": { (git-server !)
797 804 "Accept": "application/vnd.git-lfs" (git-server !)
798 805 } (git-server !)
799 806 "href": "http://localhost:$HGPORT/objects/8e6ea5f6c066b44a0efa43bcce86aea73f17e6e23f0663df0251e7524e140a13" (git-server !)
800 807 } (git-server !)
801 808 "error": { (hg-server !)
802 809 "code": 404 (hg-server !)
803 810 "message": "The object does not exist" (hg-server !)
804 811 }
805 812 "oid": "8e6ea5f6c066b44a0efa43bcce86aea73f17e6e23f0663df0251e7524e140a13"
806 813 "size": 6
807 814 }
808 815 ]
809 816 "transfer": "basic" (hg-server !)
810 817 }
811 818 abort: LFS server error for "b": The object does not exist!
812 819 [255]
813 820
814 821 Check the error message when the object does not exist:
815 822
816 823 $ cd $TESTTMP
817 824 $ hg init test && cd test
818 825 $ echo "[extensions]" >> .hg/hgrc
819 826 $ echo "lfs=" >> .hg/hgrc
820 827 $ echo "[lfs]" >> .hg/hgrc
821 828 $ echo "threshold=1" >> .hg/hgrc
822 829 $ echo a > a
823 830 $ hg add a
824 831 $ hg commit -m 'test'
825 832 $ echo aaaaa > a
826 833 $ hg commit -m 'largefile'
827 834 $ hg debugdata a 1 # verify this is not the file content but the LFS "pointer", which includes "oid".
828 835 version https://git-lfs.github.com/spec/v1
829 836 oid sha256:bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a
830 837 size 6
831 838 x-is-binary 0
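The pointer shown above is just key/value lines separated by single spaces, so
parsing it is trivial. A sketch using the exact blob from this test:

    def parsepointer(data):
        # Each line is "<key> <value>"; keys include version, oid, size.
        fields = {}
        for line in data.decode('utf-8').splitlines():
            key, _, value = line.partition(' ')
            fields[key] = value
        return fields

    p = parsepointer(b'version https://git-lfs.github.com/spec/v1\n'
                     b'oid sha256:bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a\n'
                     b'size 6\n')
    assert p['oid'] == 'sha256:bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a'
    assert p['size'] == '6'
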
832 839 $ cd ..
833 840 $ rm -rf `hg config lfs.usercache`
834 841
835 842 (Restart the server in a different location so it no longer has the content)
836 843
837 844 $ $PYTHON $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
838 845
839 846 #if hg-server
840 847 $ cat $TESTTMP/access.log $TESTTMP/errors.log
841 848 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
842 849 $LOCALIP - - [$LOGDATE$] "PUT /.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b HTTP/1.1" 201 - (glob)
843 850 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
844 851 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b HTTP/1.1" 200 - (glob)
845 852 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
846 853 $LOCALIP - - [$LOGDATE$] "PUT /.hg/lfs/objects/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 HTTP/1.1" 201 - (glob)
847 854 $LOCALIP - - [$LOGDATE$] "PUT /.hg/lfs/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 HTTP/1.1" 201 - (glob)
848 855 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
849 856 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 HTTP/1.1" 200 - (glob)
850 857 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 HTTP/1.1" 200 - (glob)
851 858 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
852 859 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
853 860 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
854 861 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b HTTP/1.1" 200 - (glob)
855 862 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 HTTP/1.1" 200 - (glob)
856 863 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 HTTP/1.1" 200 - (glob)
857 864 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
858 865 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b HTTP/1.1" 200 - (glob)
859 866 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 HTTP/1.1" 200 - (glob)
860 867 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
861 868 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b HTTP/1.1" 200 - (glob)
862 869 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 HTTP/1.1" 200 - (glob)
863 870 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 HTTP/1.1" 200 - (glob)
864 871 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
865 872 #endif
866 873
867 874 $ rm $DAEMON_PIDS
868 875 $ mkdir $TESTTMP/lfs-server2
869 876 $ cd $TESTTMP/lfs-server2
870 877 #if no-windows git-server
871 878 $ lfs-test-server &> lfs-server.log &
872 879 $ echo $! >> $DAEMON_PIDS
873 880 #endif
874 881
875 882 #if windows git-server
876 883 $ $PYTHON $TESTTMP/spawn.py >> $DAEMON_PIDS
877 884 #endif
878 885
879 886 #if hg-server
880 887 $ hg init server2
881 888 $ hg --config "lfs.usercache=$TESTTMP/servercache2" -R server2 serve -d \
882 889 > -p $HGPORT --pid-file=hg.pid -A $TESTTMP/access.log -E $TESTTMP/errors.log
883 890 $ cat hg.pid >> $DAEMON_PIDS
884 891 #endif
885 892
886 893 $ cd $TESTTMP
887 894 $ hg --debug clone test test2
888 895 http auth: user foo, password ***
889 896 linked 6 files
890 897 http auth: user foo, password ***
891 898 updating to branch default
892 899 resolving manifests
893 900 branchmerge: False, force: False, partial: False
894 901 ancestor: 000000000000, local: 000000000000+, remote: d2a338f184a8
902 http auth: user foo, password ***
895 903 Status: 200
896 904 Content-Length: 308 (git-server !)
897 905 Content-Length: 186 (hg-server !)
898 906 Content-Type: application/vnd.git-lfs+json
899 907 Date: $HTTP_DATE$
900 908 Server: testing stub value (hg-server !)
901 909 {
902 910 "objects": [
903 911 {
904 912 "actions": { (git-server !)
905 913 "upload": { (git-server !)
906 914 "expires_at": "$ISO_8601_DATE_TIME$" (git-server !)
907 915 "header": { (git-server !)
908 916 "Accept": "application/vnd.git-lfs" (git-server !)
909 917 } (git-server !)
910 918 "href": "http://localhost:$HGPORT/objects/bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a" (git-server !)
911 919 } (git-server !)
912 920 "error": { (hg-server !)
913 921 "code": 404 (hg-server !)
914 922 "message": "The object does not exist" (hg-server !)
915 923 }
916 924 "oid": "bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a"
917 925 "size": 6
918 926 }
919 927 ]
920 928 "transfer": "basic" (hg-server !)
921 929 }
922 930 abort: LFS server error for "a": The object does not exist!
923 931 [255]
924 932
925 933 $ $PYTHON $RUNTESTDIR/killdaemons.py $DAEMON_PIDS