lfs: don't skip locally available blobs when verifying...
Matt Harbison
r44529:1a6dd50c default
@@ -1,425 +1,426 b''
1 1 # lfs - hash-preserving large file support using Git-LFS protocol
2 2 #
3 3 # Copyright 2017 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 """lfs - large file support (EXPERIMENTAL)
9 9
10 10 This extension allows large files to be tracked outside of the normal
11 11 repository storage and stored on a centralized server, similar to the
12 12 ``largefiles`` extension. The ``git-lfs`` protocol is used when
13 13 communicating with the server, so existing git infrastructure can be
14 14 harnessed. Even though the files are stored outside of the repository,
15 15 they are still integrity checked in the same manner as normal files.
16 16
17 17 The files stored outside of the repository are downloaded on demand,
18 18 which reduces the time to clone, and possibly the local disk usage.
19 19 This changes fundamental workflows in a DVCS, so careful thought
20 20 should be given before deploying it. :hg:`convert` can be used to
21 21 convert LFS repositories to normal repositories that no longer
22 22 require this extension, and do so without changing the commit hashes.
23 23 This allows the extension to be disabled if the centralized workflow
24 24 becomes burdensome. However, the pre- and post-convert clones will
25 25 not be able to communicate with each other unless the extension is
26 26 enabled on both.
27 27
28 28 To start a new repository, or to add LFS files to an existing one, just
29 29 create an ``.hglfs`` file as described below in the root directory of
30 30 the repository. Typically, this file should be put under version
31 31 control, so that the settings will propagate to other repositories with
32 32 push and pull. During any commit, Mercurial will consult this file to
33 33 determine if an added or modified file should be stored externally. The
34 34 type of storage depends on the characteristics of the file at each
35 35 commit. A file that is near a size threshold may switch back and forth
36 36 between LFS and normal storage, as needed.
37 37
38 38 Alternately, both normal repositories and largefile controlled
39 39 repositories can be converted to LFS by using :hg:`convert` and the
40 40 ``lfs.track`` config option described below. The ``.hglfs`` file
41 41 should then be created and added, to control subsequent LFS selection.
42 42 The hashes are also unchanged in this case. The LFS and non-LFS
43 43 repositories can be distinguished because the LFS repository will
44 44 abort any command if this extension is disabled.
45 45
46 46 Committed LFS files are held locally, until the repository is pushed.
47 47 Prior to pushing the normal repository data, the LFS files that are
48 48 tracked by the outgoing commits are automatically uploaded to the
49 49 configured central server. No LFS files are transferred on
50 50 :hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
51 51 demand as they need to be read, if a cached copy cannot be found
52 52 locally. Both committing and downloading an LFS file will link the
53 53 file to a usercache, to speed up future access. See the `usercache`
54 54 config setting described below.
55 55
56 56 The extension reads its configuration from a versioned ``.hglfs``
57 57 configuration file found in the root of the working directory. The
58 58 ``.hglfs`` file uses the same syntax as all other Mercurial
59 59 configuration files. It uses a single section, ``[track]``.
60 60
61 61 The ``[track]`` section specifies which files are stored as LFS (or
62 62 not). Each line is keyed by a file pattern, with a predicate value.
63 63 The first file pattern match is used, so put more specific patterns
64 64 first. The available predicates are ``all()``, ``none()``, and
65 65 ``size()``. See "hg help filesets.size" for the latter.
66 66
67 67 Example versioned ``.hglfs`` file::
68 68
69 69 [track]
70 70 # No Makefile or python file, anywhere, will be LFS
71 71 **Makefile = none()
72 72 **.py = none()
73 73
74 74 **.zip = all()
75 75 **.exe = size(">1MB")
76 76
77 77 # Catchall for everything not matched above
78 78 ** = size(">10MB")
79 79
80 80 Configs::
81 81
82 82 [lfs]
83 83 # Remote endpoint. Multiple protocols are supported:
84 84 # - http(s)://user:pass@example.com/path
85 85 # git-lfs endpoint
86 86 # - file:///tmp/path
87 87 # local filesystem, usually for testing
88 88 # if unset, lfs will assume the remote repository also handles blob storage
89 89 # for http(s) URLs. Otherwise, lfs will prompt to set this when it must
90 90 # use this value.
91 91 # (default: unset)
92 92 url = https://example.com/repo.git/info/lfs
93 93
94 94 # Which files to track in LFS. Path tests are "**.extname" for file
95 95 # extensions, and "path:under/some/directory" for path prefix. Both
96 96 # are relative to the repository root.
97 97 # File size can be tested with the "size()" fileset, and tests can be
98 98 # joined with fileset operators. (See "hg help filesets.operators".)
99 99 #
100 100 # Some examples:
101 101 # - all() # everything
102 102 # - none() # nothing
103 103 # - size(">20MB") # larger than 20MB
104 104 # - !**.txt # anything not a *.txt file
105 105 # - **.zip | **.tar.gz | **.7z # some types of compressed files
106 106 # - path:bin # files under "bin" in the project root
107 107 # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
108 108 # | (path:bin & !path:/bin/README) | size(">1GB")
109 109 # (default: none())
110 110 #
111 111 # This is ignored if there is a tracked '.hglfs' file, and this setting
112 112 # will eventually be deprecated and removed.
113 113 track = size(">10M")
114 114
115 115 # how many times to retry before giving up on transferring an object
116 116 retry = 5
117 117
118 118 # the local directory to store lfs files for sharing across local clones.
119 119 # If not set, the cache is located in an OS specific cache location.
120 120 usercache = /path/to/global/cache
121 121 """
122 122
123 123 from __future__ import absolute_import
124 124
125 125 import sys
126 126
127 127 from mercurial.i18n import _
128 128
129 129 from mercurial import (
130 130 config,
131 131 context,
132 132 error,
133 133 exchange,
134 134 extensions,
135 135 exthelper,
136 136 filelog,
137 137 filesetlang,
138 138 localrepo,
139 139 minifileset,
140 140 node,
141 141 pycompat,
142 142 revlog,
143 143 scmutil,
144 144 templateutil,
145 145 util,
146 146 )
147 147
148 148 from mercurial.interfaces import repository
149 149
150 150 from . import (
151 151 blobstore,
152 152 wireprotolfsserver,
153 153 wrapper,
154 154 )
155 155
156 156 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
157 157 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
158 158 # be specifying the version(s) of Mercurial they are tested with, or
159 159 # leave the attribute unspecified.
160 160 testedwith = b'ships-with-hg-core'
161 161
162 162 eh = exthelper.exthelper()
163 163 eh.merge(wrapper.eh)
164 164 eh.merge(wireprotolfsserver.eh)
165 165
166 166 cmdtable = eh.cmdtable
167 167 configtable = eh.configtable
168 168 extsetup = eh.finalextsetup
169 169 uisetup = eh.finaluisetup
170 170 filesetpredicate = eh.filesetpredicate
171 171 reposetup = eh.finalreposetup
172 172 templatekeyword = eh.templatekeyword
173 173
174 174 eh.configitem(
175 175 b'experimental', b'lfs.serve', default=True,
176 176 )
177 177 eh.configitem(
178 178 b'experimental', b'lfs.user-agent', default=None,
179 179 )
180 180 eh.configitem(
181 181 b'experimental', b'lfs.disableusercache', default=False,
182 182 )
183 183 eh.configitem(
184 184 b'experimental', b'lfs.worker-enable', default=False,
185 185 )
186 186
187 187 eh.configitem(
188 188 b'lfs', b'url', default=None,
189 189 )
190 190 eh.configitem(
191 191 b'lfs', b'usercache', default=None,
192 192 )
193 193 # Deprecated
194 194 eh.configitem(
195 195 b'lfs', b'threshold', default=None,
196 196 )
197 197 eh.configitem(
198 198 b'lfs', b'track', default=b'none()',
199 199 )
200 200 eh.configitem(
201 201 b'lfs', b'retry', default=5,
202 202 )
203 203
204 204 lfsprocessor = (
205 205 wrapper.readfromstore,
206 206 wrapper.writetostore,
207 207 wrapper.bypasscheckhash,
208 208 )
209 209
210 210
211 211 def featuresetup(ui, supported):
212 212 # don't die on seeing a repo with the lfs requirement
213 213 supported |= {b'lfs'}
214 214
215 215
216 216 @eh.uisetup
217 217 def _uisetup(ui):
218 218 localrepo.featuresetupfuncs.add(featuresetup)
219 219
220 220
221 221 @eh.reposetup
222 222 def _reposetup(ui, repo):
223 223 # Nothing to do with a remote repo
224 224 if not repo.local():
225 225 return
226 226
227 227 repo.svfs.lfslocalblobstore = blobstore.local(repo)
228 228 repo.svfs.lfsremoteblobstore = blobstore.remote(repo)
229 229
230 230 class lfsrepo(repo.__class__):
231 231 @localrepo.unfilteredmethod
232 232 def commitctx(self, ctx, error=False, origctx=None):
233 233 repo.svfs.options[b'lfstrack'] = _trackedmatcher(self)
234 234 return super(lfsrepo, self).commitctx(ctx, error, origctx=origctx)
235 235
236 236 repo.__class__ = lfsrepo
237 237
238 238 if b'lfs' not in repo.requirements:
239 239
240 240 def checkrequireslfs(ui, repo, **kwargs):
241 241 if b'lfs' in repo.requirements:
242 242 return 0
243 243
244 244 last = kwargs.get('node_last')
245 245 _bin = node.bin
246 246 if last:
247 247 s = repo.set(b'%n:%n', _bin(kwargs['node']), _bin(last))
248 248 else:
249 249 s = repo.set(b'%n', _bin(kwargs['node']))
250 250 match = repo._storenarrowmatch
251 251 for ctx in s:
252 252 # TODO: is there a way to just walk the files in the commit?
253 253 if any(
254 254 ctx[f].islfs() for f in ctx.files() if f in ctx and match(f)
255 255 ):
256 256 repo.requirements.add(b'lfs')
257 257 repo.features.add(repository.REPO_FEATURE_LFS)
258 258 repo._writerequirements()
259 259 repo.prepushoutgoinghooks.add(b'lfs', wrapper.prepush)
260 260 break
261 261
262 262 ui.setconfig(b'hooks', b'commit.lfs', checkrequireslfs, b'lfs')
263 263 ui.setconfig(
264 264 b'hooks', b'pretxnchangegroup.lfs', checkrequireslfs, b'lfs'
265 265 )
266 266 else:
267 267 repo.prepushoutgoinghooks.add(b'lfs', wrapper.prepush)
268 268
269 269
270 270 def _trackedmatcher(repo):
271 271 """Return a function (path, size) -> bool indicating whether or not to
272 272 track a given file with lfs."""
273 273 if not repo.wvfs.exists(b'.hglfs'):
274 274 # No '.hglfs' in wdir. Fallback to config for now.
275 275 trackspec = repo.ui.config(b'lfs', b'track')
276 276
277 277 # deprecated config: lfs.threshold
278 278 threshold = repo.ui.configbytes(b'lfs', b'threshold')
279 279 if threshold:
280 280 filesetlang.parse(trackspec) # make sure syntax errors are confined
281 281 trackspec = b"(%s) | size('>%d')" % (trackspec, threshold)
282 282
283 283 return minifileset.compile(trackspec)
284 284
285 285 data = repo.wvfs.tryread(b'.hglfs')
286 286 if not data:
287 287 return lambda p, s: False
288 288
289 289 # Parse errors here will abort with a message that points to the .hglfs file
290 290 # and line number.
291 291 cfg = config.config()
292 292 cfg.parse(b'.hglfs', data)
293 293
294 294 try:
295 295 rules = [
296 296 (minifileset.compile(pattern), minifileset.compile(rule))
297 297 for pattern, rule in cfg.items(b'track')
298 298 ]
299 299 except error.ParseError as e:
300 300 # The original exception gives no indicator that the error is in the
301 301 # .hglfs file, so add that.
302 302
303 303 # TODO: See if the line number of the file can be made available.
304 304 raise error.Abort(_(b'parse error in .hglfs: %s') % e)
305 305
306 306 def _match(path, size):
307 307 for pat, rule in rules:
308 308 if pat(path, size):
309 309 return rule(path, size)
310 310
311 311 return False
312 312
313 313 return _match
314 314
315 315
316 316 # Called by remotefilelog
317 317 def wrapfilelog(filelog):
318 318 wrapfunction = extensions.wrapfunction
319 319
320 320 wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
321 321 wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
322 322 wrapfunction(filelog, 'size', wrapper.filelogsize)
323 323
324 324
325 325 @eh.wrapfunction(localrepo, b'resolverevlogstorevfsoptions')
326 326 def _resolverevlogstorevfsoptions(orig, ui, requirements, features):
327 327 opts = orig(ui, requirements, features)
328 328 for name, module in extensions.extensions(ui):
329 329 if module is sys.modules[__name__]:
330 330 if revlog.REVIDX_EXTSTORED in opts[b'flagprocessors']:
331 331 msg = (
332 332 _(b"cannot register multiple processors on flag '%#x'.")
333 333 % revlog.REVIDX_EXTSTORED
334 334 )
335 335 raise error.Abort(msg)
336 336
337 337 opts[b'flagprocessors'][revlog.REVIDX_EXTSTORED] = lfsprocessor
338 338 break
339 339
340 340 return opts
341 341
342 342
343 343 @eh.extsetup
344 344 def _extsetup(ui):
345 345 wrapfilelog(filelog.filelog)
346 346
347 347 context.basefilectx.islfs = wrapper.filectxislfs
348 348
349 349 scmutil.fileprefetchhooks.add(b'lfs', wrapper._prefetchfiles)
350 350
351 351 # Make bundle choose changegroup3 instead of changegroup2. This affects
352 352 # "hg bundle" command. Note: it does not cover all bundle formats like
353 353 # "packed1". Using "packed1" with lfs will likely cause trouble.
354 354 exchange._bundlespeccontentopts[b"v2"][b"cg.version"] = b"03"
355 355
356 356
357 357 @eh.filesetpredicate(b'lfs()')
358 358 def lfsfileset(mctx, x):
359 359 """File that uses LFS storage."""
360 360 # i18n: "lfs" is a keyword
361 361 filesetlang.getargs(x, 0, 0, _(b"lfs takes no arguments"))
362 362 ctx = mctx.ctx
363 363
364 364 def lfsfilep(f):
365 365 return wrapper.pointerfromctx(ctx, f, removed=True) is not None
366 366
367 367 return mctx.predicate(lfsfilep, predrepr=b'<lfs>')
368 368
369 369
370 370 @eh.templatekeyword(b'lfs_files', requires={b'ctx'})
371 371 def lfsfiles(context, mapping):
372 372 """List of strings. All files modified, added, or removed by this
373 373 changeset."""
374 374 ctx = context.resource(mapping, b'ctx')
375 375
376 376 pointers = wrapper.pointersfromctx(ctx, removed=True) # {path: pointer}
377 377 files = sorted(pointers.keys())
378 378
379 379 def pointer(v):
380 380 # In the file spec, version is first and the other keys are sorted.
381 381 sortkeyfunc = lambda x: (x[0] != b'version', x)
382 382 items = sorted(pycompat.iteritems(pointers[v]), key=sortkeyfunc)
383 383 return util.sortdict(items)
384 384
385 385 makemap = lambda v: {
386 386 b'file': v,
387 387 b'lfsoid': pointers[v].oid() if pointers[v] else None,
388 388 b'lfspointer': templateutil.hybriddict(pointer(v)),
389 389 }
390 390
391 391 # TODO: make the separator ', '?
392 392 f = templateutil._showcompatlist(context, mapping, b'lfs_file', files)
393 393 return templateutil.hybrid(f, files, makemap, pycompat.identity)
394 394
395 395
396 396 @eh.command(
397 397 b'debuglfsupload',
398 398 [(b'r', b'rev', [], _(b'upload large files introduced by REV'))],
399 399 )
400 400 def debuglfsupload(ui, repo, **opts):
401 401 """upload lfs blobs added by the working copy parent or given revisions"""
402 402 revs = opts.get('rev', [])
403 403 pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
404 404 wrapper.uploadblobs(repo, pointers)
405 405
406 406
407 407 @eh.wrapcommand(
408 b'verify', opts=[(b'', b'no-lfs', None, _(b'skip all lfs blob content'))]
408 b'verify',
409 opts=[(b'', b'no-lfs', None, _(b'skip missing lfs blob content'))],
409 410 )
410 411 def verify(orig, ui, repo, **opts):
411 412 skipflags = repo.ui.configint(b'verify', b'skipflags')
412 413 no_lfs = opts.pop('no_lfs')
413 414
414 415 if skipflags:
415 416 # --lfs overrides the config bit, if set.
416 417 if no_lfs is False:
417 418 skipflags &= ~repository.REVISION_FLAG_EXTSTORED
418 419 else:
419 420 skipflags = 0
420 421
421 422 if no_lfs is True:
422 423 skipflags |= repository.REVISION_FLAG_EXTSTORED
423 424
424 425 with ui.configoverride({(b'verify', b'skipflags'): skipflags}):
425 426 return orig(ui, repo, **opts)
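The ``verify`` wrapper above is the command-line half of this change: ``--no-lfs`` now only asks to skip blob content, while the new ``_verify_revision`` hook in wrapper.py (below) clears the skip bit again for any blob that is present in the local store, since local verification is cheap. A minimal standalone sketch of the skip-flag resolution, with an assumed numeric flag value and a hypothetical helper name::

    # REVISION_FLAG_EXTSTORED marks filelog revisions whose payload lives in
    # the LFS blobstore; the value below is assumed for illustration only.
    REVISION_FLAG_EXTSTORED = 1 << 13

    def resolve_skipflags(configured, no_lfs):
        # ``configured`` mirrors the ``verify.skipflags`` config value (or None);
        # ``no_lfs`` is True, False, or None depending on how --no-lfs was given.
        skipflags = configured or 0
        if configured and no_lfs is False:
            # an explicit negation overrides the config bit
            # (the "--lfs overrides the config bit" case in verify() above)
            skipflags &= ~REVISION_FLAG_EXTSTORED
        if no_lfs is True:
            skipflags |= REVISION_FLAG_EXTSTORED
        return skipflags

Either way, ``_verify_revision`` can still force verification of a blob it finds in ``lfslocalblobstore``, because there is no other way to check the raw data stored in the revlog.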
@@ -1,527 +1,542 b''
1 1 # wrapper.py - methods wrapping core mercurial logic
2 2 #
3 3 # Copyright 2017 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import hashlib
11 11
12 12 from mercurial.i18n import _
13 13 from mercurial.node import bin, hex, nullid, short
14 14 from mercurial.pycompat import (
15 15 getattr,
16 16 setattr,
17 17 )
18 18
19 19 from mercurial import (
20 20 bundle2,
21 21 changegroup,
22 22 cmdutil,
23 23 context,
24 24 error,
25 25 exchange,
26 26 exthelper,
27 27 localrepo,
28 28 pycompat,
29 29 revlog,
30 30 scmutil,
31 31 upgrade,
32 32 util,
33 33 vfs as vfsmod,
34 34 wireprotov1server,
35 35 )
36 36
37 37 from mercurial.interfaces import repository
38 38
39 39 from mercurial.utils import (
40 40 storageutil,
41 41 stringutil,
42 42 )
43 43
44 44 from ..largefiles import lfutil
45 45
46 46 from . import (
47 47 blobstore,
48 48 pointer,
49 49 )
50 50
51 51 eh = exthelper.exthelper()
52 52
53 53
54 54 @eh.wrapfunction(localrepo, b'makefilestorage')
55 55 def localrepomakefilestorage(orig, requirements, features, **kwargs):
56 56 if b'lfs' in requirements:
57 57 features.add(repository.REPO_FEATURE_LFS)
58 58
59 59 return orig(requirements=requirements, features=features, **kwargs)
60 60
61 61
62 62 @eh.wrapfunction(changegroup, b'allsupportedversions')
63 63 def allsupportedversions(orig, ui):
64 64 versions = orig(ui)
65 65 versions.add(b'03')
66 66 return versions
67 67
68 68
69 69 @eh.wrapfunction(wireprotov1server, b'_capabilities')
70 70 def _capabilities(orig, repo, proto):
71 71 '''Wrap server command to announce lfs server capability'''
72 72 caps = orig(repo, proto)
73 73 if util.safehasattr(repo.svfs, b'lfslocalblobstore'):
74 74 # Advertise a slightly different capability when lfs is *required*, so
75 75 # that the client knows it MUST load the extension. If lfs is not
76 76 # required on the server, there's no reason to autoload the extension
77 77 # on the client.
78 78 if b'lfs' in repo.requirements:
79 79 caps.append(b'lfs-serve')
80 80
81 81 caps.append(b'lfs')
82 82 return caps
83 83
84 84
85 85 def bypasscheckhash(self, text):
86 86 return False
87 87
88 88
89 89 def readfromstore(self, text):
90 90 """Read filelog content from local blobstore transform for flagprocessor.
91 91
92 92 Default transform for flagprocessor, returning contents from blobstore.
93 93 Returns a 2-tuple (text, validatehash) where validatehash is True, as the
94 94 contents of the blobstore should be checked using checkhash.
95 95 """
96 96 p = pointer.deserialize(text)
97 97 oid = p.oid()
98 98 store = self.opener.lfslocalblobstore
99 99 if not store.has(oid):
100 100 p.filename = self.filename
101 101 self.opener.lfsremoteblobstore.readbatch([p], store)
102 102
103 103 # The caller will validate the content
104 104 text = store.read(oid, verify=False)
105 105
106 106 # pack hg filelog metadata
107 107 hgmeta = {}
108 108 for k in p.keys():
109 109 if k.startswith(b'x-hg-'):
110 110 name = k[len(b'x-hg-') :]
111 111 hgmeta[name] = p[k]
112 112 if hgmeta or text.startswith(b'\1\n'):
113 113 text = storageutil.packmeta(hgmeta, text)
114 114
115 115 return (text, True, {})
116 116
117 117
118 118 def writetostore(self, text, sidedata):
119 119 # hg filelog metadata (includes rename, etc)
120 120 hgmeta, offset = storageutil.parsemeta(text)
121 121 if offset and offset > 0:
122 122 # lfs blob does not contain hg filelog metadata
123 123 text = text[offset:]
124 124
125 125 # git-lfs only supports sha256
126 126 oid = hex(hashlib.sha256(text).digest())
127 127 self.opener.lfslocalblobstore.write(oid, text)
128 128
129 129 # replace contents with metadata
130 130 longoid = b'sha256:%s' % oid
131 131 metadata = pointer.gitlfspointer(oid=longoid, size=b'%d' % len(text))
132 132
133 133 # by default, we expect the content to be binary. however, LFS could also
134 134 # be used for non-binary content. add a special entry for non-binary data.
135 135 # this will be used by filectx.isbinary().
136 136 if not stringutil.binary(text):
137 137 # not hg filelog metadata (affecting commit hash), no "x-hg-" prefix
138 138 metadata[b'x-is-binary'] = b'0'
139 139
140 140 # translate hg filelog metadata to lfs metadata with "x-hg-" prefix
141 141 if hgmeta is not None:
142 142 for k, v in pycompat.iteritems(hgmeta):
143 143 metadata[b'x-hg-%s' % k] = v
144 144
145 145 rawtext = metadata.serialize()
146 146 return (rawtext, False)
147 147
148 148
149 149 def _islfs(rlog, node=None, rev=None):
150 150 if rev is None:
151 151 if node is None:
152 152 # both None - likely working copy content where node is not ready
153 153 return False
154 154 rev = rlog.rev(node)
155 155 else:
156 156 node = rlog.node(rev)
157 157 if node == nullid:
158 158 return False
159 159 flags = rlog.flags(rev)
160 160 return bool(flags & revlog.REVIDX_EXTSTORED)
161 161
162 162
163 163 # Wrapping may also be applied by remotefilelog
164 164 def filelogaddrevision(
165 165 orig,
166 166 self,
167 167 text,
168 168 transaction,
169 169 link,
170 170 p1,
171 171 p2,
172 172 cachedelta=None,
173 173 node=None,
174 174 flags=revlog.REVIDX_DEFAULT_FLAGS,
175 175 **kwds
176 176 ):
177 177 # The matcher isn't available if reposetup() wasn't called.
178 178 lfstrack = self._revlog.opener.options.get(b'lfstrack')
179 179
180 180 if lfstrack:
181 181 textlen = len(text)
182 182 # exclude hg rename meta from file size
183 183 meta, offset = storageutil.parsemeta(text)
184 184 if offset:
185 185 textlen -= offset
186 186
187 187 if lfstrack(self._revlog.filename, textlen):
188 188 flags |= revlog.REVIDX_EXTSTORED
189 189
190 190 return orig(
191 191 self,
192 192 text,
193 193 transaction,
194 194 link,
195 195 p1,
196 196 p2,
197 197 cachedelta=cachedelta,
198 198 node=node,
199 199 flags=flags,
200 200 **kwds
201 201 )
202 202
203 203
204 204 # Wrapping may also be applied by remotefilelog
205 205 def filelogrenamed(orig, self, node):
206 206 if _islfs(self._revlog, node):
207 207 rawtext = self._revlog.rawdata(node)
208 208 if not rawtext:
209 209 return False
210 210 metadata = pointer.deserialize(rawtext)
211 211 if b'x-hg-copy' in metadata and b'x-hg-copyrev' in metadata:
212 212 return metadata[b'x-hg-copy'], bin(metadata[b'x-hg-copyrev'])
213 213 else:
214 214 return False
215 215 return orig(self, node)
216 216
217 217
218 218 # Wrapping may also be applied by remotefilelog
219 219 def filelogsize(orig, self, rev):
220 220 if _islfs(self._revlog, rev=rev):
221 221 # fast path: use lfs metadata to answer size
222 222 rawtext = self._revlog.rawdata(rev)
223 223 metadata = pointer.deserialize(rawtext)
224 224 return int(metadata[b'size'])
225 225 return orig(self, rev)
226 226
227 227
228 @eh.wrapfunction(revlog, b'_verify_revision')
229 def _verify_revision(orig, rl, skipflags, state, node):
230 if _islfs(rl, node=node):
231 rawtext = rl.rawdata(node)
232 metadata = pointer.deserialize(rawtext)
233
234 # Don't skip blobs that are stored locally, as local verification is
235 # relatively cheap and there's no other way to verify the raw data in
236 # the revlog.
237 if rl.opener.lfslocalblobstore.has(metadata.oid()):
238 skipflags &= ~revlog.REVIDX_EXTSTORED
239
240 orig(rl, skipflags, state, node)
241
242
228 243 @eh.wrapfunction(context.basefilectx, b'cmp')
229 244 def filectxcmp(orig, self, fctx):
230 245 """returns True if text is different than fctx"""
231 246 # some fctx (e.g. hg-git) are not based on basefilectx and do not have islfs
232 247 if self.islfs() and getattr(fctx, 'islfs', lambda: False)():
233 248 # fast path: check LFS oid
234 249 p1 = pointer.deserialize(self.rawdata())
235 250 p2 = pointer.deserialize(fctx.rawdata())
236 251 return p1.oid() != p2.oid()
237 252 return orig(self, fctx)
238 253
239 254
240 255 @eh.wrapfunction(context.basefilectx, b'isbinary')
241 256 def filectxisbinary(orig, self):
242 257 if self.islfs():
243 258 # fast path: use lfs metadata to answer isbinary
244 259 metadata = pointer.deserialize(self.rawdata())
245 260 # if lfs metadata says nothing, assume it's binary by default
246 261 return bool(int(metadata.get(b'x-is-binary', 1)))
247 262 return orig(self)
248 263
249 264
250 265 def filectxislfs(self):
251 266 return _islfs(self.filelog()._revlog, self.filenode())
252 267
253 268
254 269 @eh.wrapfunction(cmdutil, b'_updatecatformatter')
255 270 def _updatecatformatter(orig, fm, ctx, matcher, path, decode):
256 271 orig(fm, ctx, matcher, path, decode)
257 272 fm.data(rawdata=ctx[path].rawdata())
258 273
259 274
260 275 @eh.wrapfunction(scmutil, b'wrapconvertsink')
261 276 def convertsink(orig, sink):
262 277 sink = orig(sink)
263 278 if sink.repotype == b'hg':
264 279
265 280 class lfssink(sink.__class__):
266 281 def putcommit(
267 282 self,
268 283 files,
269 284 copies,
270 285 parents,
271 286 commit,
272 287 source,
273 288 revmap,
274 289 full,
275 290 cleanp2,
276 291 ):
277 292 pc = super(lfssink, self).putcommit
278 293 node = pc(
279 294 files,
280 295 copies,
281 296 parents,
282 297 commit,
283 298 source,
284 299 revmap,
285 300 full,
286 301 cleanp2,
287 302 )
288 303
289 304 if b'lfs' not in self.repo.requirements:
290 305 ctx = self.repo[node]
291 306
292 307 # The file list may contain removed files, so check for
293 308 # membership before assuming it is in the context.
294 309 if any(f in ctx and ctx[f].islfs() for f, n in files):
295 310 self.repo.requirements.add(b'lfs')
296 311 self.repo._writerequirements()
297 312
298 313 return node
299 314
300 315 sink.__class__ = lfssink
301 316
302 317 return sink
303 318
304 319
305 320 # bundlerepo uses "vfsmod.readonlyvfs(othervfs)", we need to make sure lfs
306 321 # options and blob stores are passed from othervfs to the new readonlyvfs.
307 322 @eh.wrapfunction(vfsmod.readonlyvfs, b'__init__')
308 323 def vfsinit(orig, self, othervfs):
309 324 orig(self, othervfs)
310 325 # copy lfs related options
311 326 for k, v in othervfs.options.items():
312 327 if k.startswith(b'lfs'):
313 328 self.options[k] = v
314 329 # also copy lfs blobstores. note: this can run before reposetup, so lfs
315 330 # blobstore attributes are not always ready at this time.
316 331 for name in [b'lfslocalblobstore', b'lfsremoteblobstore']:
317 332 if util.safehasattr(othervfs, name):
318 333 setattr(self, name, getattr(othervfs, name))
319 334
320 335
321 336 def _prefetchfiles(repo, revs, match):
322 337 """Ensure that required LFS blobs are present, fetching them as a group if
323 338 needed."""
324 339 if not util.safehasattr(repo.svfs, b'lfslocalblobstore'):
325 340 return
326 341
327 342 pointers = []
328 343 oids = set()
329 344 localstore = repo.svfs.lfslocalblobstore
330 345
331 346 for rev in revs:
332 347 ctx = repo[rev]
333 348 for f in ctx.walk(match):
334 349 p = pointerfromctx(ctx, f)
335 350 if p and p.oid() not in oids and not localstore.has(p.oid()):
336 351 p.filename = f
337 352 pointers.append(p)
338 353 oids.add(p.oid())
339 354
340 355 if pointers:
341 356 # Recalculating the repo store here allows 'paths.default' that is set
342 357 # on the repo by a clone command to be used for the update.
343 358 blobstore.remote(repo).readbatch(pointers, localstore)
344 359
345 360
346 361 def _canskipupload(repo):
347 362 # Skip if this hasn't been passed to reposetup()
348 363 if not util.safehasattr(repo.svfs, b'lfsremoteblobstore'):
349 364 return True
350 365
351 366 # if remotestore is a null store, upload is a no-op and can be skipped
352 367 return isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
353 368
354 369
355 370 def candownload(repo):
356 371 # Skip if this hasn't been passed to reposetup()
357 372 if not util.safehasattr(repo.svfs, b'lfsremoteblobstore'):
358 373 return False
359 374
360 375 # if remotestore is a null store, downloads will lead to nothing
361 376 return not isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
362 377
363 378
364 379 def uploadblobsfromrevs(repo, revs):
365 380 '''upload lfs blobs introduced by revs
366 381
367 382 Note: also used by other extensions, e.g. infinitepush. Avoid renaming.
368 383 '''
369 384 if _canskipupload(repo):
370 385 return
371 386 pointers = extractpointers(repo, revs)
372 387 uploadblobs(repo, pointers)
373 388
374 389
375 390 def prepush(pushop):
376 391 """Prepush hook.
377 392
378 393 Read through the revisions to push, looking for filelog entries that can be
379 394 deserialized into metadata so that we can block the push on their upload to
380 395 the remote blobstore.
381 396 """
382 397 return uploadblobsfromrevs(pushop.repo, pushop.outgoing.missing)
383 398
384 399
385 400 @eh.wrapfunction(exchange, b'push')
386 401 def push(orig, repo, remote, *args, **kwargs):
387 402 """bail on push if the extension isn't enabled on remote when needed, and
388 403 update the remote store based on the destination path."""
389 404 if b'lfs' in repo.requirements:
390 405 # If the remote peer is for a local repo, the requirement tests in the
391 406 # base class method enforce lfs support. Otherwise, some revisions in
392 407 # this repo use lfs, and the remote repo needs the extension loaded.
393 408 if not remote.local() and not remote.capable(b'lfs'):
394 409 # This is a copy of the message in exchange.push() when requirements
395 410 # are missing between local repos.
396 411 m = _(b"required features are not supported in the destination: %s")
397 412 raise error.Abort(
398 413 m % b'lfs', hint=_(b'enable the lfs extension on the server')
399 414 )
400 415
401 416 # Repositories where this extension is disabled won't have the field.
402 417 # But if there's a requirement, then the extension must be loaded AND
403 418 # there may be blobs to push.
404 419 remotestore = repo.svfs.lfsremoteblobstore
405 420 try:
406 421 repo.svfs.lfsremoteblobstore = blobstore.remote(repo, remote.url())
407 422 return orig(repo, remote, *args, **kwargs)
408 423 finally:
409 424 repo.svfs.lfsremoteblobstore = remotestore
410 425 else:
411 426 return orig(repo, remote, *args, **kwargs)
412 427
413 428
414 429 # when writing a bundle via "hg bundle" command, upload related LFS blobs
415 430 @eh.wrapfunction(bundle2, b'writenewbundle')
416 431 def writenewbundle(
417 432 orig, ui, repo, source, filename, bundletype, outgoing, *args, **kwargs
418 433 ):
419 434 """upload LFS blobs added by outgoing revisions on 'hg bundle'"""
420 435 uploadblobsfromrevs(repo, outgoing.missing)
421 436 return orig(
422 437 ui, repo, source, filename, bundletype, outgoing, *args, **kwargs
423 438 )
424 439
425 440
426 441 def extractpointers(repo, revs):
427 442 """return a list of lfs pointers added by given revs"""
428 443 repo.ui.debug(b'lfs: computing set of blobs to upload\n')
429 444 pointers = {}
430 445
431 446 makeprogress = repo.ui.makeprogress
432 447 with makeprogress(
433 448 _(b'lfs search'), _(b'changesets'), len(revs)
434 449 ) as progress:
435 450 for r in revs:
436 451 ctx = repo[r]
437 452 for p in pointersfromctx(ctx).values():
438 453 pointers[p.oid()] = p
439 454 progress.increment()
440 455 return sorted(pointers.values(), key=lambda p: p.oid())
441 456
442 457
443 458 def pointerfromctx(ctx, f, removed=False):
444 459 """return a pointer for the named file from the given changectx, or None if
445 460 the file isn't LFS.
446 461
447 462 Optionally, the pointer for a file deleted from the context can be returned.
448 463 Since no such pointer is actually stored, and to distinguish from a non LFS
449 464 file, this pointer is represented by an empty dict.
450 465 """
451 466 _ctx = ctx
452 467 if f not in ctx:
453 468 if not removed:
454 469 return None
455 470 if f in ctx.p1():
456 471 _ctx = ctx.p1()
457 472 elif f in ctx.p2():
458 473 _ctx = ctx.p2()
459 474 else:
460 475 return None
461 476 fctx = _ctx[f]
462 477 if not _islfs(fctx.filelog()._revlog, fctx.filenode()):
463 478 return None
464 479 try:
465 480 p = pointer.deserialize(fctx.rawdata())
466 481 if ctx == _ctx:
467 482 return p
468 483 return {}
469 484 except pointer.InvalidPointer as ex:
470 485 raise error.Abort(
471 486 _(b'lfs: corrupted pointer (%s@%s): %s\n')
472 487 % (f, short(_ctx.node()), ex)
473 488 )
474 489
475 490
476 491 def pointersfromctx(ctx, removed=False):
477 492 """return a dict {path: pointer} for given single changectx.
478 493
479 494 If ``removed`` == True and the LFS file was removed from ``ctx``, the value
480 495 stored for the path is an empty dict.
481 496 """
482 497 result = {}
483 498 m = ctx.repo().narrowmatch()
484 499
485 500 # TODO: consider manifest.fastread() instead
486 501 for f in ctx.files():
487 502 if not m(f):
488 503 continue
489 504 p = pointerfromctx(ctx, f, removed=removed)
490 505 if p is not None:
491 506 result[f] = p
492 507 return result
493 508
494 509
495 510 def uploadblobs(repo, pointers):
496 511 """upload given pointers from local blobstore"""
497 512 if not pointers:
498 513 return
499 514
500 515 remoteblob = repo.svfs.lfsremoteblobstore
501 516 remoteblob.writebatch(pointers, repo.svfs.lfslocalblobstore)
502 517
503 518
504 519 @eh.wrapfunction(upgrade, b'_finishdatamigration')
505 520 def upgradefinishdatamigration(orig, ui, srcrepo, dstrepo, requirements):
506 521 orig(ui, srcrepo, dstrepo, requirements)
507 522
508 523 # Skip if this hasn't been passed to reposetup()
509 524 if util.safehasattr(
510 525 srcrepo.svfs, b'lfslocalblobstore'
511 526 ) and util.safehasattr(dstrepo.svfs, b'lfslocalblobstore'):
512 527 srclfsvfs = srcrepo.svfs.lfslocalblobstore.vfs
513 528 dstlfsvfs = dstrepo.svfs.lfslocalblobstore.vfs
514 529
515 530 for dirpath, dirs, files in srclfsvfs.walk():
516 531 for oid in files:
517 532 ui.write(_(b'copying lfs blob %s\n') % oid)
518 533 lfutil.link(srclfsvfs.join(oid), dstlfsvfs.join(oid))
519 534
520 535
521 536 @eh.wrapfunction(upgrade, b'preservedrequirements')
522 537 @eh.wrapfunction(upgrade, b'supporteddestrequirements')
523 538 def upgraderequirements(orig, repo):
524 539 reqs = orig(repo)
525 540 if b'lfs' in repo.requirements:
526 541 reqs.add(b'lfs')
527 542 return reqs
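A recurring detail in ``writetostore()`` / ``readfromstore()`` above is that hg filelog metadata (rename/copy information) is tunnelled through the LFS pointer under keys with an ``x-hg-`` prefix, which is how commit hashes stay unchanged when content moves to the blobstore. A minimal sketch of that round trip, using plain dicts and hypothetical helper names (the sample values come from the test output below)::

    def pack_hgmeta(hgmeta, pointer):
        # store each filelog metadata key under an "x-hg-" prefixed pointer key
        for k, v in hgmeta.items():
            pointer[b'x-hg-%s' % k] = v
        return pointer

    def unpack_hgmeta(pointer):
        # recover the filelog metadata by stripping the prefix again
        prefix = b'x-hg-'
        return {k[len(prefix):]: v for k, v in pointer.items()
                if k.startswith(prefix)}

    # rename metadata survives the round trip unchanged
    meta = {b'copy': b'a1',
            b'copyrev': b'be23af27908a582af43e5cda209a5a9b319de8d4'}
    ptr = pack_hgmeta(meta, {b'oid': b'sha256:...', b'size': b'29'})
    assert unpack_hgmeta(ptr) == meta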
@@ -1,1190 +1,1192 b''
1 1 #require no-reposimplestore no-chg
2 2
3 3 $ hg init requirements
4 4 $ cd requirements
5 5
6 6 # LFS not loaded by default.
7 7
8 8 $ hg config extensions
9 9 [1]
10 10
11 11 # Adding lfs to requires file will auto-load lfs extension.
12 12
13 13 $ echo lfs >> .hg/requires
14 14 $ hg config extensions
15 15 extensions.lfs=
16 16
17 17 # But only if there is no config entry for the extension already.
18 18
19 19 $ cat > .hg/hgrc << EOF
20 20 > [extensions]
21 21 > lfs=!
22 22 > EOF
23 23
24 24 $ hg config extensions
25 25 abort: repository requires features unknown to this Mercurial: lfs!
26 26 (see https://mercurial-scm.org/wiki/MissingRequirement for more information)
27 27 [255]
28 28
29 29 $ cat > .hg/hgrc << EOF
30 30 > [extensions]
31 31 > lfs=
32 32 > EOF
33 33
34 34 $ hg config extensions
35 35 extensions.lfs=
36 36
37 37 $ cat > .hg/hgrc << EOF
38 38 > [extensions]
39 39 > lfs = missing.py
40 40 > EOF
41 41
42 42 $ hg config extensions
43 43 \*\*\* failed to import extension lfs from missing.py: [Errno *] $ENOENT$: 'missing.py' (glob)
44 44 abort: repository requires features unknown to this Mercurial: lfs!
45 45 (see https://mercurial-scm.org/wiki/MissingRequirement for more information)
46 46 [255]
47 47
48 48 $ cd ..
49 49
50 50 # Initial setup
51 51
52 52 $ cat >> $HGRCPATH << EOF
53 53 > [extensions]
54 54 > lfs=
55 55 > [lfs]
56 56 > # Test deprecated config
57 57 > threshold=1000B
58 58 > EOF
59 59
60 60 $ LONG=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
61 61
62 62 # Prepare server and enable extension
63 63 $ hg init server
64 64 $ hg clone -q server client
65 65 $ cd client
66 66
67 67 # Commit small file
68 68 $ echo s > smallfile
69 69 $ echo '**.py = LF' > .hgeol
70 70 $ hg --config lfs.track='"size(\">1000B\")"' commit -Aqm "add small file"
71 71 hg: parse error: unsupported file pattern: size(">1000B")
72 72 (paths must be prefixed with "path:")
73 73 [255]
74 74 $ hg --config lfs.track='size(">1000B")' commit -Aqm "add small file"
75 75
76 76 # Commit large file
77 77 $ echo $LONG > largefile
78 78 $ grep lfs .hg/requires
79 79 [1]
80 80 $ hg commit --traceback -Aqm "add large file"
81 81 $ grep lfs .hg/requires
82 82 lfs
83 83
84 84 # Ensure metadata is stored
85 85 $ hg debugdata largefile 0
86 86 version https://git-lfs.github.com/spec/v1
87 87 oid sha256:f11e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b
88 88 size 1501
89 89 x-is-binary 0
90 90
91 91 # Check the blobstore is populated
92 92 $ find .hg/store/lfs/objects | sort
93 93 .hg/store/lfs/objects
94 94 .hg/store/lfs/objects/f1
95 95 .hg/store/lfs/objects/f1/1e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b
96 96
97 97 # Check the blob stored contains the actual contents of the file
98 98 $ cat .hg/store/lfs/objects/f1/1e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b
99 99 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
100 100
101 101 # Push changes to the server
102 102
103 103 $ hg push
104 104 pushing to $TESTTMP/server
105 105 searching for changes
106 106 abort: lfs.url needs to be configured
107 107 [255]
108 108
109 109 $ cat >> $HGRCPATH << EOF
110 110 > [lfs]
111 111 > url=file:$TESTTMP/dummy-remote/
112 112 > EOF
113 113
114 114 A push to a local non-lfs repo with the extension enabled will add the
115 115 lfs requirement
116 116
117 117 $ grep lfs $TESTTMP/server/.hg/requires
118 118 [1]
119 119 $ hg push -v | egrep -v '^(uncompressed| )'
120 120 pushing to $TESTTMP/server
121 121 searching for changes
122 122 lfs: found f11e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b in the local lfs store
123 123 2 changesets found
124 124 adding changesets
125 125 adding manifests
126 126 adding file changes
127 127 calling hook pretxnchangegroup.lfs: hgext.lfs.checkrequireslfs
128 128 added 2 changesets with 3 changes to 3 files
129 129 $ grep lfs $TESTTMP/server/.hg/requires
130 130 lfs
131 131
132 132 # Unknown URL scheme
133 133
134 134 $ hg push --config lfs.url=ftp://foobar
135 135 abort: lfs: unknown url scheme: ftp
136 136 [255]
137 137
138 138 $ cd ../
139 139
140 140 # Initialize new client (not cloning) and setup extension
141 141 $ hg init client2
142 142 $ cd client2
143 143 $ cat >> .hg/hgrc <<EOF
144 144 > [paths]
145 145 > default = $TESTTMP/server
146 146 > EOF
147 147
148 148 # Pull from server
149 149
150 150 Pulling a local lfs repo into a local non-lfs repo with the extension
151 151 enabled adds the lfs requirement
152 152
153 153 $ grep lfs .hg/requires $TESTTMP/server/.hg/requires
154 154 $TESTTMP/server/.hg/requires:lfs
155 155 $ hg pull default
156 156 pulling from $TESTTMP/server
157 157 requesting all changes
158 158 adding changesets
159 159 adding manifests
160 160 adding file changes
161 161 added 2 changesets with 3 changes to 3 files
162 162 new changesets 0ead593177f7:b88141481348
163 163 (run 'hg update' to get a working copy)
164 164 $ grep lfs .hg/requires $TESTTMP/server/.hg/requires
165 165 .hg/requires:lfs
166 166 $TESTTMP/server/.hg/requires:lfs
167 167
168 168 # Check the blobstore is not yet populated
169 169 $ [ -d .hg/store/lfs/objects ]
170 170 [1]
171 171
172 172 # Update to the last revision containing the large file
173 173 $ hg update
174 174 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
175 175
176 176 # Check the blobstore has been populated on update
177 177 $ find .hg/store/lfs/objects | sort
178 178 .hg/store/lfs/objects
179 179 .hg/store/lfs/objects/f1
180 180 .hg/store/lfs/objects/f1/1e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b
181 181
182 182 # Check the contents of the file are fetched from blobstore when requested
183 183 $ hg cat -r . largefile
184 184 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
185 185
186 186 # Check the file has been copied in the working copy
187 187 $ cat largefile
188 188 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
189 189
190 190 $ cd ..
191 191
192 192 # Check rename, and switch between large and small files
193 193
194 194 $ hg init repo3
195 195 $ cd repo3
196 196 $ cat >> .hg/hgrc << EOF
197 197 > [lfs]
198 198 > track=size(">10B")
199 199 > EOF
200 200
201 201 $ echo LONGER-THAN-TEN-BYTES-WILL-TRIGGER-LFS > large
202 202 $ echo SHORTER > small
203 203 $ hg add . -q
204 204 $ hg commit -m 'commit with lfs content'
205 205
206 206 $ hg files -r . 'set:added()'
207 207 large
208 208 small
209 209 $ hg files -r . 'set:added() & lfs()'
210 210 large
211 211
212 212 $ hg mv large l
213 213 $ hg mv small s
214 214 $ hg status 'set:removed()'
215 215 R large
216 216 R small
217 217 $ hg status 'set:removed() & lfs()'
218 218 R large
219 219 $ hg commit -m 'renames'
220 220
221 221 $ hg files -r . 'set:copied()'
222 222 l
223 223 s
224 224 $ hg files -r . 'set:copied() & lfs()'
225 225 l
226 226 $ hg status --change . 'set:removed()'
227 227 R large
228 228 R small
229 229 $ hg status --change . 'set:removed() & lfs()'
230 230 R large
231 231
232 232 $ echo SHORT > l
233 233 $ echo BECOME-LARGER-FROM-SHORTER > s
234 234 $ hg commit -m 'large to small, small to large'
235 235
236 236 $ echo 1 >> l
237 237 $ echo 2 >> s
238 238 $ hg commit -m 'random modifications'
239 239
240 240 $ echo RESTORE-TO-BE-LARGE > l
241 241 $ echo SHORTER > s
242 242 $ hg commit -m 'switch large and small again'
243 243
244 244 # Test lfs_files template
245 245
246 246 $ hg log -r 'all()' -T '{rev} {join(lfs_files, ", ")}\n'
247 247 0 large
248 248 1 l, large
249 249 2 s
250 250 3 s
251 251 4 l
252 252
253 253 # Push and pull the above repo
254 254
255 255 $ hg --cwd .. init repo4
256 256 $ hg push ../repo4
257 257 pushing to ../repo4
258 258 searching for changes
259 259 adding changesets
260 260 adding manifests
261 261 adding file changes
262 262 added 5 changesets with 10 changes to 4 files
263 263
264 264 $ hg --cwd .. init repo5
265 265 $ hg --cwd ../repo5 pull ../repo3
266 266 pulling from ../repo3
267 267 requesting all changes
268 268 adding changesets
269 269 adding manifests
270 270 adding file changes
271 271 added 5 changesets with 10 changes to 4 files
272 272 new changesets fd47a419c4f7:5adf850972b9
273 273 (run 'hg update' to get a working copy)
274 274
275 275 $ cd ..
276 276
277 277 # Test clone
278 278
279 279 $ hg init repo6
280 280 $ cd repo6
281 281 $ cat >> .hg/hgrc << EOF
282 282 > [lfs]
283 283 > track=size(">30B")
284 284 > EOF
285 285
286 286 $ echo LARGE-BECAUSE-IT-IS-MORE-THAN-30-BYTES > large
287 287 $ echo SMALL > small
288 288 $ hg commit -Aqm 'create a lfs file' large small
289 289 $ hg debuglfsupload -r 'all()' -v
290 290 lfs: found 8e92251415339ae9b148c8da89ed5ec665905166a1ab11b09dca8fad83344738 in the local lfs store
291 291
292 292 $ cd ..
293 293
294 294 $ hg clone repo6 repo7
295 295 updating to branch default
296 296 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
297 297 $ cd repo7
298 298 $ cat large
299 299 LARGE-BECAUSE-IT-IS-MORE-THAN-30-BYTES
300 300 $ cat small
301 301 SMALL
302 302
303 303 $ cd ..
304 304
305 305 $ hg --config extensions.share= share repo7 sharedrepo
306 306 updating working directory
307 307 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
308 308 $ grep lfs sharedrepo/.hg/requires
309 309 lfs
310 310
311 311 # Test rename and status
312 312
313 313 $ hg init repo8
314 314 $ cd repo8
315 315 $ cat >> .hg/hgrc << EOF
316 316 > [lfs]
317 317 > track=size(">10B")
318 318 > EOF
319 319
320 320 $ echo THIS-IS-LFS-BECAUSE-10-BYTES > a1
321 321 $ echo SMALL > a2
322 322 $ hg commit -m a -A a1 a2
323 323 $ hg status
324 324 $ hg mv a1 b1
325 325 $ hg mv a2 a1
326 326 $ hg mv b1 a2
327 327 $ hg commit -m b
328 328 $ hg status
329 329 >>> with open('a2', 'wb') as f:
330 330 ... f.write(b'\1\nSTART-WITH-HG-FILELOG-METADATA') and None
331 331 >>> with open('a1', 'wb') as f:
332 332 ... f.write(b'\1\nMETA\n') and None
333 333 $ hg commit -m meta
334 334 $ hg status
335 335 $ hg log -T '{rev}: {file_copies} | {file_dels} | {file_adds}\n'
336 336 2: | |
337 337 1: a1 (a2)a2 (a1) | |
338 338 0: | | a1 a2
339 339
340 340 $ for n in a1 a2; do
341 341 > for r in 0 1 2; do
342 342 > printf '\n%s @ %s\n' $n $r
343 343 > hg debugdata $n $r
344 344 > done
345 345 > done
346 346
347 347 a1 @ 0
348 348 version https://git-lfs.github.com/spec/v1
349 349 oid sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
350 350 size 29
351 351 x-is-binary 0
352 352
353 353 a1 @ 1
354 354 \x01 (esc)
355 355 copy: a2
356 356 copyrev: 50470ad23cf937b1f4b9f80bfe54df38e65b50d9
357 357 \x01 (esc)
358 358 SMALL
359 359
360 360 a1 @ 2
361 361 \x01 (esc)
362 362 \x01 (esc)
363 363 \x01 (esc)
364 364 META
365 365
366 366 a2 @ 0
367 367 SMALL
368 368
369 369 a2 @ 1
370 370 version https://git-lfs.github.com/spec/v1
371 371 oid sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
372 372 size 29
373 373 x-hg-copy a1
374 374 x-hg-copyrev be23af27908a582af43e5cda209a5a9b319de8d4
375 375 x-is-binary 0
376 376
377 377 a2 @ 2
378 378 version https://git-lfs.github.com/spec/v1
379 379 oid sha256:876dadc86a8542f9798048f2c47f51dbf8e4359aed883e8ec80c5db825f0d943
380 380 size 32
381 381 x-is-binary 0
382 382
383 383 # Verify commit hashes include rename metadata
384 384
385 385 $ hg log -T '{rev}:{node|short} {desc}\n'
386 386 2:0fae949de7fa meta
387 387 1:9cd6bdffdac0 b
388 388 0:7f96794915f7 a
389 389
390 390 $ cd ..
391 391
392 392 # Test bundle
393 393
394 394 $ hg init repo9
395 395 $ cd repo9
396 396 $ cat >> .hg/hgrc << EOF
397 397 > [lfs]
398 398 > track=size(">10B")
399 399 > [diff]
400 400 > git=1
401 401 > EOF
402 402
403 403 $ for i in 0 single two three 4; do
404 404 > echo 'THIS-IS-LFS-'$i > a
405 405 > hg commit -m a-$i -A a
406 406 > done
407 407
408 408 $ hg update 2 -q
409 409 $ echo 'THIS-IS-LFS-2-CHILD' > a
410 410 $ hg commit -m branching -q
411 411
412 412 $ hg bundle --base 1 bundle.hg -v
413 413 lfs: found 5ab7a3739a5feec94a562d070a14f36dba7cad17e5484a4a89eea8e5f3166888 in the local lfs store
414 414 lfs: found a9c7d1cd6ce2b9bbdf46ed9a862845228717b921c089d0d42e3bcaed29eb612e in the local lfs store
415 415 lfs: found f693890c49c409ec33673b71e53f297681f76c1166daf33b2ad7ebf8b1d3237e in the local lfs store
416 416 lfs: found fda198fea753eb66a252e9856915e1f5cddbe41723bd4b695ece2604ad3c9f75 in the local lfs store
417 417 4 changesets found
418 418 uncompressed size of bundle content:
419 419 * (changelog) (glob)
420 420 * (manifests) (glob)
421 421 * a (glob)
422 422 $ hg --config extensions.strip= strip -r 2 --no-backup --force -q
423 423 $ hg -R bundle.hg log -p -T '{rev} {desc}\n' a
424 424 5 branching
425 425 diff --git a/a b/a
426 426 --- a/a
427 427 +++ b/a
428 428 @@ -1,1 +1,1 @@
429 429 -THIS-IS-LFS-two
430 430 +THIS-IS-LFS-2-CHILD
431 431
432 432 4 a-4
433 433 diff --git a/a b/a
434 434 --- a/a
435 435 +++ b/a
436 436 @@ -1,1 +1,1 @@
437 437 -THIS-IS-LFS-three
438 438 +THIS-IS-LFS-4
439 439
440 440 3 a-three
441 441 diff --git a/a b/a
442 442 --- a/a
443 443 +++ b/a
444 444 @@ -1,1 +1,1 @@
445 445 -THIS-IS-LFS-two
446 446 +THIS-IS-LFS-three
447 447
448 448 2 a-two
449 449 diff --git a/a b/a
450 450 --- a/a
451 451 +++ b/a
452 452 @@ -1,1 +1,1 @@
453 453 -THIS-IS-LFS-single
454 454 +THIS-IS-LFS-two
455 455
456 456 1 a-single
457 457 diff --git a/a b/a
458 458 --- a/a
459 459 +++ b/a
460 460 @@ -1,1 +1,1 @@
461 461 -THIS-IS-LFS-0
462 462 +THIS-IS-LFS-single
463 463
464 464 0 a-0
465 465 diff --git a/a b/a
466 466 new file mode 100644
467 467 --- /dev/null
468 468 +++ b/a
469 469 @@ -0,0 +1,1 @@
470 470 +THIS-IS-LFS-0
471 471
472 472 $ hg bundle -R bundle.hg --base 1 bundle-again.hg -q
473 473 $ hg -R bundle-again.hg log -p -T '{rev} {desc}\n' a
474 474 5 branching
475 475 diff --git a/a b/a
476 476 --- a/a
477 477 +++ b/a
478 478 @@ -1,1 +1,1 @@
479 479 -THIS-IS-LFS-two
480 480 +THIS-IS-LFS-2-CHILD
481 481
482 482 4 a-4
483 483 diff --git a/a b/a
484 484 --- a/a
485 485 +++ b/a
486 486 @@ -1,1 +1,1 @@
487 487 -THIS-IS-LFS-three
488 488 +THIS-IS-LFS-4
489 489
490 490 3 a-three
491 491 diff --git a/a b/a
492 492 --- a/a
493 493 +++ b/a
494 494 @@ -1,1 +1,1 @@
495 495 -THIS-IS-LFS-two
496 496 +THIS-IS-LFS-three
497 497
498 498 2 a-two
499 499 diff --git a/a b/a
500 500 --- a/a
501 501 +++ b/a
502 502 @@ -1,1 +1,1 @@
503 503 -THIS-IS-LFS-single
504 504 +THIS-IS-LFS-two
505 505
506 506 1 a-single
507 507 diff --git a/a b/a
508 508 --- a/a
509 509 +++ b/a
510 510 @@ -1,1 +1,1 @@
511 511 -THIS-IS-LFS-0
512 512 +THIS-IS-LFS-single
513 513
514 514 0 a-0
515 515 diff --git a/a b/a
516 516 new file mode 100644
517 517 --- /dev/null
518 518 +++ b/a
519 519 @@ -0,0 +1,1 @@
520 520 +THIS-IS-LFS-0
521 521
522 522 $ cd ..
523 523
524 524 # Test isbinary
525 525
526 526 $ hg init repo10
527 527 $ cd repo10
528 528 $ cat >> .hg/hgrc << EOF
529 529 > [extensions]
530 530 > lfs=
531 531 > [lfs]
532 532 > track=all()
533 533 > EOF
534 534 $ "$PYTHON" <<'EOF'
535 535 > def write(path, content):
536 536 > with open(path, 'wb') as f:
537 537 > f.write(content)
538 538 > write('a', b'\0\0')
539 539 > write('b', b'\1\n')
540 540 > write('c', b'\1\n\0')
541 541 > write('d', b'xx')
542 542 > EOF
543 543 $ hg add a b c d
544 544 $ hg diff --stat
545 545 a | Bin
546 546 b | 1 +
547 547 c | Bin
548 548 d | 1 +
549 549 4 files changed, 2 insertions(+), 0 deletions(-)
550 550 $ hg commit -m binarytest
551 551 $ cat > $TESTTMP/dumpbinary.py << EOF
552 552 > from mercurial.utils import (
553 553 > stringutil,
554 554 > )
555 555 > def reposetup(ui, repo):
556 556 > for n in (b'a', b'b', b'c', b'd'):
557 557 > ui.write((b'%s: binary=%s\n')
558 558 > % (n, stringutil.pprint(repo[b'.'][n].isbinary())))
559 559 > EOF
560 560 $ hg --config extensions.dumpbinary=$TESTTMP/dumpbinary.py id --trace
561 561 a: binary=True
562 562 b: binary=False
563 563 c: binary=True
564 564 d: binary=False
565 565 b55353847f02 tip
566 566
567 567 Binary blobs don't need to be present to be skipped in filesets. (And their
568 568 absence doesn't cause an abort.)
569 569
570 570 $ rm .hg/store/lfs/objects/96/a296d224f285c67bee93c30f8a309157f0daa35dc5b87e410b78630a09cfc7
571 571 $ rm .hg/store/lfs/objects/92/f76135a4baf4faccb8586a60faf830c2bdfce147cefa188aaf4b790bd01b7e
572 572
573 573 $ hg files --debug -r . 'set:eol("unix")' --config 'experimental.lfs.disableusercache=True'
574 574 lfs: found c04b5bb1a5b2eb3e9cd4805420dba5a9d133da5b7adeeafb5474c4adae9faa80 in the local lfs store
575 575 2 b
576 576 lfs: found 5dde896887f6754c9b15bfe3a441ae4806df2fde94001311e08bf110622e0bbe in the local lfs store
577 577
578 578 $ hg files --debug -r . 'set:binary()' --config 'experimental.lfs.disableusercache=True'
579 579 2 a
580 580 3 c
581 581
582 582 $ cd ..
583 583
584 584 # Test fctx.cmp fastpath - diff without LFS blobs
585 585
586 586 $ hg init repo12
587 587 $ cd repo12
588 588 $ cat >> .hg/hgrc <<EOF
589 589 > [lfs]
590 590 > threshold=1
591 591 > EOF
592 592 $ cat > ../patch.diff <<EOF
593 593 > # HG changeset patch
594 594 > 2
595 595 >
596 596 > diff --git a/a b/a
597 597 > old mode 100644
598 598 > new mode 100755
599 599 > EOF
600 600
601 601 $ for i in 1 2 3; do
602 602 > cp ../repo10/a a
603 603 > if [ $i = 3 ]; then
604 604 > # make a content-only change
605 605 > hg import -q --bypass ../patch.diff
606 606 > hg update -q
607 607 > rm ../patch.diff
608 608 > else
609 609 > echo $i >> a
610 610 > hg commit -m $i -A a
611 611 > fi
612 612 > done
613 613 $ [ -d .hg/store/lfs/objects ]
614 614
615 615 $ cd ..
616 616
617 617 $ hg clone repo12 repo13 --noupdate
618 618 $ cd repo13
619 619 $ hg log --removed -p a -T '{desc}\n' --config diff.nobinary=1 --git
620 620 2
621 621 diff --git a/a b/a
622 622 old mode 100644
623 623 new mode 100755
624 624
625 625 2
626 626 diff --git a/a b/a
627 627 Binary file a has changed
628 628
629 629 1
630 630 diff --git a/a b/a
631 631 new file mode 100644
632 632 Binary file a has changed
633 633
634 634 $ [ -d .hg/store/lfs/objects ]
635 635 [1]
636 636
637 637 $ cd ..
638 638
639 639 # Test filter
640 640
641 641 $ hg init repo11
642 642 $ cd repo11
643 643 $ cat >> .hg/hgrc << EOF
644 644 > [lfs]
645 645 > track=(**.a & size(">5B")) | (**.b & !size(">5B"))
646 646 > | (**.c & "path:d" & !"path:d/c.c") | size(">10B")
647 647 > EOF
648 648
649 649 $ mkdir a
650 650 $ echo aaaaaa > a/1.a
651 651 $ echo a > a/2.a
652 652 $ echo aaaaaa > 1.b
653 653 $ echo a > 2.b
654 654 $ echo a > 1.c
655 655 $ mkdir d
656 656 $ echo a > d/c.c
657 657 $ echo a > d/d.c
658 658 $ echo aaaaaaaaaaaa > x
659 659 $ hg add . -q
660 660 $ hg commit -m files
661 661
662 662 $ for p in a/1.a a/2.a 1.b 2.b 1.c d/c.c d/d.c x; do
663 663 > if hg debugdata $p 0 2>&1 | grep git-lfs >/dev/null; then
664 664 > echo "${p}: is lfs"
665 665 > else
666 666 > echo "${p}: not lfs"
667 667 > fi
668 668 > done
669 669 a/1.a: is lfs
670 670 a/2.a: not lfs
671 671 1.b: not lfs
672 672 2.b: is lfs
673 673 1.c: not lfs
674 674 d/c.c: not lfs
675 675 d/d.c: is lfs
676 676 x: is lfs
677 677
678 678 $ cd ..
679 679
680 680 # Verify the repos
681 681
682 682 $ cat > $TESTTMP/dumpflog.py << EOF
683 683 > # print raw revision sizes, flags, and hashes for certain files
684 684 > import hashlib
685 685 > from mercurial.node import short
686 686 > from mercurial import (
687 687 > pycompat,
688 688 > revlog,
689 689 > )
690 690 > from mercurial.utils import (
691 691 > stringutil,
692 692 > )
693 693 > def hash(rawtext):
694 694 > h = hashlib.sha512()
695 695 > h.update(rawtext)
696 696 > return pycompat.sysbytes(h.hexdigest()[:4])
697 697 > def reposetup(ui, repo):
698 698 > # these 2 files are interesting
699 699 > for name in [b'l', b's']:
700 700 > fl = repo.file(name)
701 701 > if len(fl) == 0:
702 702 > continue
703 703 > sizes = [fl._revlog.rawsize(i) for i in fl]
704 704 > texts = [fl.rawdata(i) for i in fl]
705 705 > flags = [int(fl._revlog.flags(i)) for i in fl]
706 706 > hashes = [hash(t) for t in texts]
707 707 > pycompat.stdout.write(b' %s: rawsizes=%r flags=%r hashes=%s\n'
708 708 > % (name, sizes, flags, stringutil.pprint(hashes)))
709 709 > EOF
710 710
711 711 $ for i in client client2 server repo3 repo4 repo5 repo6 repo7 repo8 repo9 \
712 712 > repo10; do
713 713 > echo 'repo:' $i
714 714 > hg --cwd $i verify --config extensions.dumpflog=$TESTTMP/dumpflog.py -q
715 715 > done
716 716 repo: client
717 717 repo: client2
718 718 repo: server
719 719 repo: repo3
720 720 l: rawsizes=[211, 6, 8, 141] flags=[8192, 0, 0, 8192] hashes=['d2b8', '948c', 'cc88', '724d']
721 721 s: rawsizes=[74, 141, 141, 8] flags=[0, 8192, 8192, 0] hashes=['3c80', 'fce0', '874a', '826b']
722 722 repo: repo4
723 723 l: rawsizes=[211, 6, 8, 141] flags=[8192, 0, 0, 8192] hashes=['d2b8', '948c', 'cc88', '724d']
724 724 s: rawsizes=[74, 141, 141, 8] flags=[0, 8192, 8192, 0] hashes=['3c80', 'fce0', '874a', '826b']
725 725 repo: repo5
726 726 l: rawsizes=[211, 6, 8, 141] flags=[8192, 0, 0, 8192] hashes=['d2b8', '948c', 'cc88', '724d']
727 727 s: rawsizes=[74, 141, 141, 8] flags=[0, 8192, 8192, 0] hashes=['3c80', 'fce0', '874a', '826b']
728 728 repo: repo6
729 729 repo: repo7
730 730 repo: repo8
731 731 repo: repo9
732 732 repo: repo10
733 733
734 734 repo13 doesn't have any cached lfs files, and its source never pushed its
735 735 files, so the files don't exist in the remote store. The copies in the user
736 736 cache are used instead.
737 737
738 738 $ test -d $TESTTMP/repo13/.hg/store/lfs/objects
739 739 [1]
740 740
741 741 $ hg --config extensions.share= share repo13 repo14
742 742 updating working directory
743 743 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
744 744 $ hg -R repo14 -q verify
745 745
746 746 $ hg clone repo13 repo15
747 747 updating to branch default
748 748 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
749 749 $ hg -R repo15 -q verify
750 750
751 751 If the source repo doesn't have the blob (maybe it was pulled or cloned with
752 752 --noupdate), the blob is still accessible via the global cache to send to the
753 753 remote store.
754 754
755 755 $ rm -rf $TESTTMP/repo15/.hg/store/lfs
756 756 $ hg init repo16
757 757 $ hg -R repo15 push repo16
758 758 pushing to repo16
759 759 searching for changes
760 760 adding changesets
761 761 adding manifests
762 762 adding file changes
763 763 added 3 changesets with 2 changes to 1 files
764 764 $ hg -R repo15 -q verify
765 765
766 766 Test damaged file scenarios. (This also damages the usercache because of the
767 767 hardlinks.)
768 768
769 769 $ echo 'damage' >> repo5/.hg/store/lfs/objects/66/100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
770 770
771 771 A repo with damaged lfs objects in any revision will fail verification.
772 772
773 773 $ hg -R repo5 verify
774 774 checking changesets
775 775 checking manifests
776 776 crosschecking files in changesets and manifests
777 777 checking files
778 778 l@1: unpacking 46a2f24864bc: integrity check failed on data/l.i:0
779 779 large@0: unpacking 2c531e0992ff: integrity check failed on data/large.i:0
780 780 checked 5 changesets with 10 changes to 4 files
781 781 2 integrity errors encountered!
782 782 (first damaged changeset appears to be 0)
783 783 [1]
784 784
785 785 Updates work after cloning a damaged repo, if the damaged lfs objects aren't in
786 786 the update destination. Those objects won't be added to the new repo's store
787 787 because they aren't accessed.
788 788
789 789 $ hg clone -v repo5 fromcorrupt
790 790 updating to branch default
791 791 resolving manifests
792 792 getting l
793 793 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the usercache
794 794 getting s
795 795 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
796 796 $ test -f fromcorrupt/.hg/store/lfs/objects/66/100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
797 797 [1]
798 798
799 799 Verify will not try to download lfs blobs if told not to process lfs content
800 800
801 801 $ hg -R fromcorrupt --config lfs.usercache=emptycache verify -v --no-lfs
802 802 repository uses revlog format 1
803 803 checking changesets
804 804 checking manifests
805 805 crosschecking files in changesets and manifests
806 806 checking files
807 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
807 808 checked 5 changesets with 10 changes to 4 files
808 809
809 810 Verify will not try to download lfs blobs if told not to by the config option
810 811
811 812 $ hg -R fromcorrupt --config lfs.usercache=emptycache verify -v \
812 813 > --config verify.skipflags=8192
813 814 repository uses revlog format 1
814 815 checking changesets
815 816 checking manifests
816 817 crosschecking files in changesets and manifests
817 818 checking files
819 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
818 820 checked 5 changesets with 10 changes to 4 files
819 821
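The 8192 passed to verify.skipflags is the revlog flag value that dumpflog reported for lfs revisions earlier in this file (flags=[8192, ...]); revisions whose flags intersect the mask are skipped by content verification. A minimal illustration of that masking, not Mercurial's verify code:

    EXTSTORED = 0x2000  # 8192, the flag seen on lfs revisions in the dumpflog output

    def content_checked(revflags, skipflags):
        # skip full content verification when any masked flag is set on the revision
        return not (revflags & skipflags)

    for flags in (0, 0x2000):
        print(hex(flags), content_checked(flags, skipflags=EXTSTORED))
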
820 822 Verify will copy/link all lfs objects into the local store that aren't already
821 823 present. Bypass the corrupted usercache to show that verify works when fed by
822 824 the (uncorrupted) remote store.
823 825
824 826 $ hg -R fromcorrupt --config lfs.usercache=emptycache verify -v
825 827 repository uses revlog format 1
826 828 checking changesets
827 829 checking manifests
828 830 crosschecking files in changesets and manifests
829 831 checking files
830 832 lfs: adding 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e to the usercache
831 833 lfs: found 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e in the local lfs store
832 834 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
833 835 lfs: found 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e in the local lfs store
834 836 lfs: adding 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 to the usercache
835 837 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
836 838 lfs: adding b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c to the usercache
837 839 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
838 840 checked 5 changesets with 10 changes to 4 files
839 841
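The usercache and the local store share blobs via hardlinks (which is why damaging a blob above also damaged the usercache), and a verify run fills in whichever side is missing. A rough sketch of a link-with-copy-fallback helper; the copy fallback is an assumption here, and this is not the extension's code:

    import os
    import shutil

    def link_or_copy(src, dst):
        os.makedirs(os.path.dirname(dst) or '.', exist_ok=True)
        try:
            os.link(src, dst)           # hardlink when both paths share a filesystem
        except OSError:
            shutil.copyfile(src, dst)   # assumed fallback: make a real copy
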
840 842 Verify will not copy/link a corrupted file from the usercache into the local
841 843 store, which would poison it. (The verify with a good remote now works.)
842 844
843 845 $ rm -r fromcorrupt/.hg/store/lfs/objects/66/100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
844 846 $ hg -R fromcorrupt verify -v
845 847 repository uses revlog format 1
846 848 checking changesets
847 849 checking manifests
848 850 crosschecking files in changesets and manifests
849 851 checking files
850 852 l@1: unpacking 46a2f24864bc: integrity check failed on data/l.i:0
851 853 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
852 854 large@0: unpacking 2c531e0992ff: integrity check failed on data/large.i:0
853 855 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
854 856 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
855 857 checked 5 changesets with 10 changes to 4 files
856 858 2 integrity errors encountered!
857 859 (first damaged changeset appears to be 0)
858 860 [1]
859 861 $ hg -R fromcorrupt --config lfs.usercache=emptycache verify -v
860 862 repository uses revlog format 1
861 863 checking changesets
862 864 checking manifests
863 865 crosschecking files in changesets and manifests
864 866 checking files
865 867 lfs: found 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e in the usercache
866 868 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
867 869 lfs: found 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e in the local lfs store
868 870 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
869 871 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
870 872 checked 5 changesets with 10 changes to 4 files
871 873
872 874 Damaging a file required by the update destination fails the update.
873 875
874 876 $ echo 'damage' >> $TESTTMP/dummy-remote/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
875 877 $ hg --config lfs.usercache=emptycache clone -v repo5 fromcorrupt2
876 878 updating to branch default
877 879 resolving manifests
878 880 abort: corrupt remote lfs object: 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
879 881 [255]
880 882
881 883 A corrupted lfs blob is not transferred from a file://remotestore to the
882 884 usercache or local store.
883 885
884 886 $ test -f emptycache/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
885 887 [1]
886 888 $ test -f fromcorrupt2/.hg/store/lfs/objects/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
887 889 [1]
888 890
889 891 $ hg -R fromcorrupt2 verify
890 892 checking changesets
891 893 checking manifests
892 894 crosschecking files in changesets and manifests
893 895 checking files
894 896 l@1: unpacking 46a2f24864bc: integrity check failed on data/l.i:0
895 897 large@0: unpacking 2c531e0992ff: integrity check failed on data/large.i:0
896 898 checked 5 changesets with 10 changes to 4 files
897 899 2 integrity errors encountered!
898 900 (first damaged changeset appears to be 0)
899 901 [1]
900 902
901 903 Corrupt local files are not sent upstream. (The alternate dummy remote
902 904 avoids the corrupt lfs object in the original remote.)
903 905
904 906 $ mkdir $TESTTMP/dummy-remote2
905 907 $ hg init dest
906 908 $ hg -R fromcorrupt2 --config lfs.url=file:///$TESTTMP/dummy-remote2 push -v dest
907 909 pushing to dest
908 910 searching for changes
909 911 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
910 912 abort: detected corrupt lfs object: 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
911 913 (run hg verify)
912 914 [255]
913 915
914 916 $ hg -R fromcorrupt2 --config lfs.url=file:///$TESTTMP/dummy-remote2 verify -v
915 917 repository uses revlog format 1
916 918 checking changesets
917 919 checking manifests
918 920 crosschecking files in changesets and manifests
919 921 checking files
920 922 l@1: unpacking 46a2f24864bc: integrity check failed on data/l.i:0
921 923 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
922 924 large@0: unpacking 2c531e0992ff: integrity check failed on data/large.i:0
923 925 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
924 926 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
925 927 checked 5 changesets with 10 changes to 4 files
926 928 2 integrity errors encountered!
927 929 (first damaged changeset appears to be 0)
928 930 [1]
929 931
930 932 $ cat $TESTTMP/dummy-remote2/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b | $TESTDIR/f --sha256
931 933 sha256=22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
932 934 $ cat fromcorrupt2/.hg/store/lfs/objects/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b | $TESTDIR/f --sha256
933 935 sha256=22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
934 936 $ test -f $TESTTMP/dummy-remote2/66/100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
935 937 [1]
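The intact blob (22f66a...) made it to the alternate remote above, but the damaged one (66100b...) was rejected because its content no longer hashes to its oid. The corruption check amounts to recomputing the sha256 of the local blob and comparing it with the expected oid; a small sketch with a hypothetical helper name:

    import hashlib

    def blob_matches_oid(path, expected_oid):
        # the blob is valid only if its sha256 equals the oid recorded in the pointer
        with open(path, 'rb') as f:
            return hashlib.sha256(f.read()).hexdigest() == expected_oid
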
936 938
937 939 Accessing a corrupt file will complain
938 940
939 941 $ hg --cwd fromcorrupt2 cat -r 0 large
940 942 abort: integrity check failed on data/large.i:0!
941 943 [255]
942 944
943 945 lfs -> normal -> lfs round trip conversions are possible. The 'none()'
944 946 predicate on the command line will override whatever is configured globally and
946 948 locally, and ensure that everything converts to a regular file. For lfs -> normal,
946 948 there's no 'lfs' destination repo requirement. For normal -> lfs, there is.
947 949
948 950 $ hg --config extensions.convert= --config 'lfs.track=none()' \
949 951 > convert repo8 convert_normal
950 952 initializing destination convert_normal repository
951 953 scanning source...
952 954 sorting...
953 955 converting...
954 956 2 a
955 957 1 b
956 958 0 meta
957 959 $ grep 'lfs' convert_normal/.hg/requires
958 960 [1]
959 961 $ hg --cwd convert_normal cat a1 -r 0 -T '{rawdata}'
960 962 THIS-IS-LFS-BECAUSE-10-BYTES
961 963
962 964 $ hg --config extensions.convert= --config lfs.threshold=10B \
963 965 > convert convert_normal convert_lfs
964 966 initializing destination convert_lfs repository
965 967 scanning source...
966 968 sorting...
967 969 converting...
968 970 2 a
969 971 1 b
970 972 0 meta
971 973
972 974 $ hg --cwd convert_lfs cat -r 0 a1 -T '{rawdata}'
973 975 version https://git-lfs.github.com/spec/v1
974 976 oid sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
975 977 size 29
976 978 x-is-binary 0
977 979 $ hg --cwd convert_lfs debugdata a1 0
978 980 version https://git-lfs.github.com/spec/v1
979 981 oid sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
980 982 size 29
981 983 x-is-binary 0
982 984 $ hg --cwd convert_lfs log -r 0 -T "{lfs_files % '{lfspointer % '{key}={value}\n'}'}"
983 985 version=https://git-lfs.github.com/spec/v1
984 986 oid=sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
985 987 size=29
986 988 x-is-binary=0
987 989 $ hg --cwd convert_lfs log -r 0 \
988 990 > -T '{lfs_files % "{get(lfspointer, "oid")}\n"}{lfs_files % "{lfspointer.oid}\n"}'
989 991 sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
990 992 sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
991 993 $ hg --cwd convert_lfs log -r 0 -T '{lfs_files % "{lfspointer}\n"}'
992 994 version=https://git-lfs.github.com/spec/v1 oid=sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024 size=29 x-is-binary=0
993 995 $ hg --cwd convert_lfs \
994 996 > log -r 'all()' -T '{rev}: {lfs_files % "{file}: {lfsoid}\n"}'
995 997 0: a1: 5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
996 998 1: a2: 5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
997 999 2: a2: 876dadc86a8542f9798048f2c47f51dbf8e4359aed883e8ec80c5db825f0d943
998 1000
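The pointer fields shown above can be recomputed directly: the oid is the sha256 of the raw file content and size is its length in bytes. A quick check, assuming the content is exactly b'THIS-IS-LFS-BECAUSE-10-BYTES\n' (consistent with size 29); both printed lines should match the pointer:

    import hashlib

    content = b'THIS-IS-LFS-BECAUSE-10-BYTES\n'
    print('oid sha256:%s' % hashlib.sha256(content).hexdigest())
    print('size %d' % len(content))
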
999 1001 $ grep 'lfs' convert_lfs/.hg/requires
1000 1002 lfs
1001 1003
1002 1004 The hashes in all stages of the conversion are unchanged.
1003 1005
1004 1006 $ hg -R repo8 log -T '{node|short}\n'
1005 1007 0fae949de7fa
1006 1008 9cd6bdffdac0
1007 1009 7f96794915f7
1008 1010 $ hg -R convert_normal log -T '{node|short}\n'
1009 1011 0fae949de7fa
1010 1012 9cd6bdffdac0
1011 1013 7f96794915f7
1012 1014 $ hg -R convert_lfs log -T '{node|short}\n'
1013 1015 0fae949de7fa
1014 1016 9cd6bdffdac0
1015 1017 7f96794915f7
1016 1018
1017 1019 This convert is trickier, because it contains deleted files (via `hg mv`)
1018 1020
1019 1021 $ hg --config extensions.convert= --config lfs.threshold=1000M \
1020 1022 > convert repo3 convert_normal2
1021 1023 initializing destination convert_normal2 repository
1022 1024 scanning source...
1023 1025 sorting...
1024 1026 converting...
1025 1027 4 commit with lfs content
1026 1028 3 renames
1027 1029 2 large to small, small to large
1028 1030 1 random modifications
1029 1031 0 switch large and small again
1030 1032 $ grep 'lfs' convert_normal2/.hg/requires
1031 1033 [1]
1032 1034 $ hg --cwd convert_normal2 debugdata large 0
1033 1035 LONGER-THAN-TEN-BYTES-WILL-TRIGGER-LFS
1034 1036
1035 1037 $ hg --config extensions.convert= --config lfs.threshold=10B \
1036 1038 > convert convert_normal2 convert_lfs2
1037 1039 initializing destination convert_lfs2 repository
1038 1040 scanning source...
1039 1041 sorting...
1040 1042 converting...
1041 1043 4 commit with lfs content
1042 1044 3 renames
1043 1045 2 large to small, small to large
1044 1046 1 random modifications
1045 1047 0 switch large and small again
1046 1048 $ grep 'lfs' convert_lfs2/.hg/requires
1047 1049 lfs
1048 1050 $ hg --cwd convert_lfs2 debugdata large 0
1049 1051 version https://git-lfs.github.com/spec/v1
1050 1052 oid sha256:66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
1051 1053 size 39
1052 1054 x-is-binary 0
1053 1055
1054 1056 Committing deleted files works:
1055 1057
1056 1058 $ hg init $TESTTMP/repo-del
1057 1059 $ cd $TESTTMP/repo-del
1058 1060 $ echo 1 > A
1059 1061 $ hg commit -m 'add A' -A A
1060 1062 $ hg rm A
1061 1063 $ hg commit -m 'rm A'
1062 1064
1063 1065 Bad .hglfs files will block the commit with a useful message
1064 1066
1065 1067 $ cat > .hglfs << EOF
1066 1068 > [track]
1067 1069 > **.test = size(">5B")
1068 1070 > bad file ... no commit
1069 1071 > EOF
1070 1072
1071 1073 $ echo x > file.txt
1072 1074 $ hg ci -Aqm 'should fail'
1073 1075 hg: parse error at .hglfs:3: bad file ... no commit
1074 1076 [255]
1075 1077
1076 1078 $ cat > .hglfs << EOF
1077 1079 > [track]
1078 1080 > **.test = size(">5B")
1079 1081 > ** = nonexistent()
1080 1082 > EOF
1081 1083
1082 1084 $ hg ci -Aqm 'should fail'
1083 1085 abort: parse error in .hglfs: unknown identifier: nonexistent
1084 1086 [255]
1085 1087
1086 1088 '**' works out to mean all files.
1087 1089
1088 1090 $ cat > .hglfs << EOF
1089 1091 > [track]
1090 1092 > path:.hglfs = none()
1091 1093 > **.test = size(">5B")
1092 1094 > **.exclude = none()
1093 1095 > ** = size(">10B")
1094 1096 > EOF
1095 1097
1096 1098 The LFS policy takes effect without tracking the .hglfs file
1097 1099
1098 1100 $ echo 'largefile' > lfs.test
1099 1101 $ echo '012345678901234567890' > nolfs.exclude
1100 1102 $ echo '01234567890123456' > lfs.catchall
1101 1103 $ hg add *
1102 1104 $ hg ci -qm 'before add .hglfs'
1103 1105 $ hg log -r . -T '{rev}: {lfs_files % "{file}: {lfsoid}\n"}\n'
1104 1106 2: lfs.catchall: d4ec46c2869ba22eceb42a729377432052d9dd75d82fc40390ebaadecee87ee9
1105 1107 lfs.test: 5489e6ced8c36a7b267292bde9fd5242a5f80a7482e8f23fa0477393dfaa4d6c
1106 1108
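The selection above is consistent with the [track] rules being tried top to bottom, the first pattern that matches a file deciding whether it is tracked: lfs.test (10 bytes) and lfs.catchall (18 bytes) pass their size tests, while nolfs.exclude is caught by its none() rule before the catch-all '**'. A simplified sketch of that evaluation (sizes include echo's trailing newline); it illustrates the observed behaviour rather than the extension's .hglfs parser:

    import fnmatch

    rules = [('.hglfs', lambda size: False),     # path:.hglfs = none()
             ('*.test', lambda size: size > 5),
             ('*.exclude', lambda size: False),  # none()
             ('*', lambda size: size > 10)]

    def tracked(path, size):
        for pattern, predicate in rules:
            if fnmatch.fnmatch(path, pattern):
                return predicate(size)
        return False

    for path, size in [('lfs.test', 10), ('nolfs.exclude', 22), ('lfs.catchall', 18)]:
        print(path, tracked(path, size))
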
1107 1109 The .hglfs file works when tracked
1108 1110
1109 1111 $ echo 'largefile2' > lfs.test
1110 1112 $ echo '012345678901234567890a' > nolfs.exclude
1111 1113 $ echo '01234567890123456a' > lfs.catchall
1112 1114 $ hg ci -Aqm 'after adding .hglfs'
1113 1115 $ hg log -r . -T '{rev}: {lfs_files % "{file}: {lfsoid}\n"}\n'
1114 1116 3: lfs.catchall: 31f43b9c62b540126b0ad5884dc013d21a61c9329b77de1fceeae2fc58511573
1115 1117 lfs.test: 8acd23467967bc7b8cc5a280056589b0ba0b17ff21dbd88a7b6474d6290378a6
1116 1118
1117 1119 The LFS policy stops when the .hglfs is gone
1118 1120
1119 1121 $ mv .hglfs .hglfs_
1120 1122 $ echo 'largefile3' > lfs.test
1121 1123 $ echo '012345678901234567890abc' > nolfs.exclude
1122 1124 $ echo '01234567890123456abc' > lfs.catchall
1123 1125 $ hg ci -qm 'file test' -X .hglfs
1124 1126 $ hg log -r . -T '{rev}: {lfs_files % "{file}: {lfsoid}\n"}\n'
1125 1127 4:
1126 1128
1127 1129 $ mv .hglfs_ .hglfs
1128 1130 $ echo '012345678901234567890abc' > lfs.test
1129 1131 $ hg ci -m 'back to lfs'
1130 1132 $ hg rm lfs.test
1131 1133 $ hg ci -qm 'remove lfs'
1132 1134
1133 1135 {lfs_files} will list deleted files too
1134 1136
1135 1137 $ hg log -T "{lfs_files % '{rev} {file}: {lfspointer.oid}\n'}"
1136 1138 6 lfs.test:
1137 1139 5 lfs.test: sha256:43f8f41171b6f62a6b61ba4ce98a8a6c1649240a47ebafd43120aa215ac9e7f6
1138 1140 3 lfs.catchall: sha256:31f43b9c62b540126b0ad5884dc013d21a61c9329b77de1fceeae2fc58511573
1139 1141 3 lfs.test: sha256:8acd23467967bc7b8cc5a280056589b0ba0b17ff21dbd88a7b6474d6290378a6
1140 1142 2 lfs.catchall: sha256:d4ec46c2869ba22eceb42a729377432052d9dd75d82fc40390ebaadecee87ee9
1141 1143 2 lfs.test: sha256:5489e6ced8c36a7b267292bde9fd5242a5f80a7482e8f23fa0477393dfaa4d6c
1142 1144
1143 1145 $ hg log -r 'file("set:lfs()")' -T '{rev} {join(lfs_files, ", ")}\n'
1144 1146 2 lfs.catchall, lfs.test
1145 1147 3 lfs.catchall, lfs.test
1146 1148 5 lfs.test
1147 1149 6 lfs.test
1148 1150
1149 1151 $ cd ..
1150 1152
1151 1153 Unbundling adds a requirement to a non-lfs repo, if necessary.
1152 1154
1153 1155 $ hg bundle -R $TESTTMP/repo-del -qr 0 --base null nolfs.hg
1154 1156 $ hg bundle -R convert_lfs2 -qr tip --base null lfs.hg
1155 1157 $ hg init unbundle
1156 1158 $ hg pull -R unbundle -q nolfs.hg
1157 1159 $ grep lfs unbundle/.hg/requires
1158 1160 [1]
1159 1161 $ hg pull -R unbundle -q lfs.hg
1160 1162 $ grep lfs unbundle/.hg/requires
1161 1163 lfs
1162 1164
1163 1165 $ hg init no_lfs
1164 1166 $ cat >> no_lfs/.hg/hgrc <<EOF
1165 1167 > [experimental]
1166 1168 > changegroup3 = True
1167 1169 > [extensions]
1168 1170 > lfs=!
1169 1171 > EOF
1170 1172 $ cp -R no_lfs no_lfs2
1171 1173
1172 1174 Pushing from a local lfs repo to a local repo without an lfs requirement and
1173 1175 with lfs disabled, fails.
1174 1176
1175 1177 $ hg push -R convert_lfs2 no_lfs
1176 1178 pushing to no_lfs
1177 1179 abort: required features are not supported in the destination: lfs
1178 1180 [255]
1179 1181 $ grep lfs no_lfs/.hg/requires
1180 1182 [1]
1181 1183
1182 1184 Pulling from a local lfs repo to a local repo without an lfs requirement and
1183 1185 with lfs disabled, fails.
1184 1186
1185 1187 $ hg pull -R no_lfs2 convert_lfs2
1186 1188 pulling from convert_lfs2
1187 1189 abort: required features are not supported in the destination: lfs
1188 1190 [255]
1189 1191 $ grep lfs no_lfs2/.hg/requires
1190 1192 [1]