lfs: prefetch lfs blobs when applying merge updates...
Matt Harbison
r35940:0b79f99f default
@@ -1,378 +1,381 @@
 # lfs - hash-preserving large file support using Git-LFS protocol
 #
 # Copyright 2017 Facebook, Inc.
 #
 # This software may be used and distributed according to the terms of the
 # GNU General Public License version 2 or any later version.

 """lfs - large file support (EXPERIMENTAL)

 This extension allows large files to be tracked outside of the normal
 repository storage and stored on a centralized server, similar to the
 ``largefiles`` extension. The ``git-lfs`` protocol is used when
 communicating with the server, so existing git infrastructure can be
 harnessed. Even though the files are stored outside of the repository,
 they are still integrity checked in the same manner as normal files.

 The files stored outside of the repository are downloaded on demand,
 which reduces the time to clone, and possibly the local disk usage.
 This changes fundamental workflows in a DVCS, so careful thought
 should be given before deploying it. :hg:`convert` can be used to
 convert LFS repositories to normal repositories that no longer
 require this extension, and do so without changing the commit hashes.
 This allows the extension to be disabled if the centralized workflow
 becomes burdensome. However, the pre and post convert clones will
 not be able to communicate with each other unless the extension is
 enabled on both.

 To start a new repository, or to add LFS files to an existing one, just
 create an ``.hglfs`` file as described below in the root directory of
 the repository. Typically, this file should be put under version
 control, so that the settings will propagate to other repositories with
 push and pull. During any commit, Mercurial will consult this file to
 determine if an added or modified file should be stored externally. The
 type of storage depends on the characteristics of the file at each
 commit. A file that is near a size threshold may switch back and forth
 between LFS and normal storage, as needed.

 Alternately, both normal repositories and largefile controlled
 repositories can be converted to LFS by using :hg:`convert` and the
 ``lfs.track`` config option described below. The ``.hglfs`` file
 should then be created and added, to control subsequent LFS selection.
 The hashes are also unchanged in this case. The LFS and non-LFS
 repositories can be distinguished because the LFS repository will
 abort any command if this extension is disabled.

 Committed LFS files are held locally, until the repository is pushed.
 Prior to pushing the normal repository data, the LFS files that are
 tracked by the outgoing commits are automatically uploaded to the
 configured central server. No LFS files are transferred on
 :hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
 demand as they need to be read, if a cached copy cannot be found
 locally. Both committing and downloading an LFS file will link the
 file to a usercache, to speed up future access. See the `usercache`
 config setting described below.

 .hglfs::

     The extension reads its configuration from a versioned ``.hglfs``
     configuration file found in the root of the working directory. The
     ``.hglfs`` file uses the same syntax as all other Mercurial
     configuration files. It uses a single section, ``[track]``.

     The ``[track]`` section specifies which files are stored as LFS (or
     not). Each line is keyed by a file pattern, with a predicate value.
     The first file pattern match is used, so put more specific patterns
     first. The available predicates are ``all()``, ``none()``, and
     ``size()``. See "hg help filesets.size" for the latter.

     Example versioned ``.hglfs`` file::

       [track]
       # No Makefile or python file, anywhere, will be LFS
       **Makefile = none()
       **.py = none()

       **.zip = all()
       **.exe = size(">1MB")

       # Catchall for everything not matched above
       ** = size(">10MB")

 Configs::

     [lfs]
     # Remote endpoint. Multiple protocols are supported:
     # - http(s)://user:pass@example.com/path
     #   git-lfs endpoint
     # - file:///tmp/path
     #   local filesystem, usually for testing
     # if unset, lfs will prompt setting this when it must use this value.
     # (default: unset)
     url = https://example.com/repo.git/info/lfs

     # Which files to track in LFS. Path tests are "**.extname" for file
     # extensions, and "path:under/some/directory" for path prefix. Both
     # are relative to the repository root.
     # File size can be tested with the "size()" fileset, and tests can be
     # joined with fileset operators. (See "hg help filesets.operators".)
     #
     # Some examples:
     # - all()                       # everything
     # - none()                      # nothing
     # - size(">20MB")               # larger than 20MB
     # - !**.txt                     # anything not a *.txt file
     # - **.zip | **.tar.gz | **.7z  # some types of compressed files
     # - path:bin                    # files under "bin" in the project root
     # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
     #     | (path:bin & !path:/bin/README) | size(">1GB")
     # (default: none())
     #
     # This is ignored if there is a tracked '.hglfs' file, and this setting
     # will eventually be deprecated and removed.
     track = size(">10M")

     # how many times to retry before giving up on transferring an object
     retry = 5

     # the local directory to store lfs files for sharing across local clones.
     # If not set, the cache is located in an OS specific cache location.
     usercache = /path/to/global/cache
 """

 from __future__ import absolute_import

 from mercurial.i18n import _

 from mercurial import (
     bundle2,
     changegroup,
     cmdutil,
     config,
     context,
     error,
     exchange,
     extensions,
     filelog,
     fileset,
     hg,
     localrepo,
+    merge,
     minifileset,
     node,
     pycompat,
     registrar,
     revlog,
     scmutil,
     templatekw,
     upgrade,
     util,
     vfs as vfsmod,
     wireproto,
 )

 from . import (
     blobstore,
     wrapper,
 )

 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
 testedwith = 'ships-with-hg-core'

 configtable = {}
 configitem = registrar.configitem(configtable)

 configitem('experimental', 'lfs.user-agent',
     default=None,
 )
 configitem('experimental', 'lfs.worker-enable',
     default=False,
 )

 configitem('lfs', 'url',
     default=None,
 )
 configitem('lfs', 'usercache',
     default=None,
 )
 # Deprecated
 configitem('lfs', 'threshold',
     default=None,
 )
 configitem('lfs', 'track',
     default='none()',
 )
 configitem('lfs', 'retry',
     default=5,
 )

 cmdtable = {}
 command = registrar.command(cmdtable)

 templatekeyword = registrar.templatekeyword()

 def featuresetup(ui, supported):
     # don't die on seeing a repo with the lfs requirement
     supported |= {'lfs'}

 def uisetup(ui):
     localrepo.localrepository.featuresetupfuncs.add(featuresetup)

 def reposetup(ui, repo):
     # Nothing to do with a remote repo
     if not repo.local():
         return

     repo.svfs.lfslocalblobstore = blobstore.local(repo)
     repo.svfs.lfsremoteblobstore = blobstore.remote(repo)

     class lfsrepo(repo.__class__):
         @localrepo.unfilteredmethod
         def commitctx(self, ctx, error=False):
             repo.svfs.options['lfstrack'] = _trackedmatcher(self)
             return super(lfsrepo, self).commitctx(ctx, error)

     repo.__class__ = lfsrepo

     if 'lfs' not in repo.requirements:
         def checkrequireslfs(ui, repo, **kwargs):
             if 'lfs' not in repo.requirements:
                 last = kwargs.get('node_last')
                 _bin = node.bin
                 if last:
                     s = repo.set('%n:%n', _bin(kwargs['node']), _bin(last))
                 else:
                     s = repo.set('%n', _bin(kwargs['node']))
                 for ctx in s:
                     # TODO: is there a way to just walk the files in the commit?
                     if any(ctx[f].islfs() for f in ctx.files() if f in ctx):
                         repo.requirements.add('lfs')
                         repo._writerequirements()
                         repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
                         break

         ui.setconfig('hooks', 'commit.lfs', checkrequireslfs, 'lfs')
         ui.setconfig('hooks', 'pretxnchangegroup.lfs', checkrequireslfs, 'lfs')
     else:
         repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)

 def _trackedmatcher(repo):
     """Return a function (path, size) -> bool indicating whether or not to
     track a given file with lfs."""
     if not repo.wvfs.exists('.hglfs'):
         # No '.hglfs' in wdir. Fallback to config for now.
         trackspec = repo.ui.config('lfs', 'track')

         # deprecated config: lfs.threshold
         threshold = repo.ui.configbytes('lfs', 'threshold')
         if threshold:
             fileset.parse(trackspec)  # make sure syntax errors are confined
             trackspec = "(%s) | size('>%d')" % (trackspec, threshold)

         return minifileset.compile(trackspec)

     data = repo.wvfs.tryread('.hglfs')
     if not data:
         return lambda p, s: False

     # Parse errors here will abort with a message that points to the .hglfs file
     # and line number.
     cfg = config.config()
     cfg.parse('.hglfs', data)

     try:
         rules = [(minifileset.compile(pattern), minifileset.compile(rule))
                  for pattern, rule in cfg.items('track')]
     except error.ParseError as e:
         # The original exception gives no indicator that the error is in the
         # .hglfs file, so add that.

         # TODO: See if the line number of the file can be made available.
         raise error.Abort(_('parse error in .hglfs: %s') % e)

     def _match(path, size):
         for pat, rule in rules:
             if pat(path, size):
                 return rule(path, size)

         return False

     return _match

 def wrapfilelog(filelog):
     wrapfunction = extensions.wrapfunction

     wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
     wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
     wrapfunction(filelog, 'size', wrapper.filelogsize)

 def extsetup(ui):
     wrapfilelog(filelog.filelog)

     wrapfunction = extensions.wrapfunction

     wrapfunction(cmdutil, '_updatecatformatter', wrapper._updatecatformatter)
     wrapfunction(scmutil, 'wrapconvertsink', wrapper.convertsink)

     wrapfunction(upgrade, '_finishdatamigration',
                  wrapper.upgradefinishdatamigration)

     wrapfunction(upgrade, 'preservedrequirements',
                  wrapper.upgraderequirements)

     wrapfunction(upgrade, 'supporteddestrequirements',
                  wrapper.upgraderequirements)

     wrapfunction(changegroup,
                  'supportedoutgoingversions',
                  wrapper.supportedoutgoingversions)
     wrapfunction(changegroup,
                  'allsupportedversions',
                  wrapper.allsupportedversions)

     wrapfunction(exchange, 'push', wrapper.push)
     wrapfunction(wireproto, '_capabilities', wrapper._capabilities)

     wrapfunction(context.basefilectx, 'cmp', wrapper.filectxcmp)
     wrapfunction(context.basefilectx, 'isbinary', wrapper.filectxisbinary)
     context.basefilectx.islfs = wrapper.filectxislfs

     revlog.addflagprocessor(
         revlog.REVIDX_EXTSTORED,
         (
             wrapper.readfromstore,
             wrapper.writetostore,
             wrapper.bypasscheckhash,
         ),
     )

     wrapfunction(hg, 'clone', wrapper.hgclone)
     wrapfunction(hg, 'postshare', wrapper.hgpostshare)

+    wrapfunction(merge, 'applyupdates', wrapper.mergemodapplyupdates)
+
     # Make bundle choose changegroup3 instead of changegroup2. This affects
     # "hg bundle" command. Note: it does not cover all bundle formats like
     # "packed1". Using "packed1" with lfs will likely cause trouble.
     names = [k for k, v in exchange._bundlespeccgversions.items() if v == '02']
     for k in names:
         exchange._bundlespeccgversions[k] = '03'

     # bundlerepo uses "vfsmod.readonlyvfs(othervfs)", we need to make sure lfs
     # options and blob stores are passed from othervfs to the new readonlyvfs.
     wrapfunction(vfsmod.readonlyvfs, '__init__', wrapper.vfsinit)

     # when writing a bundle via "hg bundle" command, upload related LFS blobs
     wrapfunction(bundle2, 'writenewbundle', wrapper.writenewbundle)

 @templatekeyword('lfs_files')
 def lfsfiles(repo, ctx, **args):
     """List of strings. LFS files added or modified by the changeset."""
     args = pycompat.byteskwargs(args)

     pointers = wrapper.pointersfromctx(ctx) # {path: pointer}
     files = sorted(pointers.keys())

     def pointer(v):
         # In the file spec, version is first and the other keys are sorted.
         sortkeyfunc = lambda x: (x[0] != 'version', x)
         items = sorted(pointers[v].iteritems(), key=sortkeyfunc)
         return util.sortdict(items)

     makemap = lambda v: {
         'file': v,
         'lfsoid': pointers[v].oid(),
         'lfspointer': templatekw.hybriddict(pointer(v)),
     }

     # TODO: make the separator ', '?
     f = templatekw._showlist('lfs_file', files, args)
     return templatekw._hybrid(f, files, makemap, pycompat.identity)

 @command('debuglfsupload',
          [('r', 'rev', [], _('upload large files introduced by REV'))])
 def debuglfsupload(ui, repo, **opts):
     """upload lfs blobs added by the working copy parent or given revisions"""
     revs = opts.get('rev', [])
     pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
     wrapper.uploadblobs(repo, pointers)
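The hook-up above relies on Mercurial's extensions.wrapfunction() convention: the replacement receives the original callable as its first argument and decides when to delegate to it. Below is a minimal sketch of that pattern; the wrapper name applyupdateswrapper is invented for illustration, whereas the real change installs wrapper.mergemodapplyupdates (shown in the wrapper.py hunk that follows).

    # Illustrative sketch of the wrapfunction convention used in extsetup() above.
    # 'applyupdateswrapper' is a made-up name, not part of this change.
    from mercurial import extensions, merge

    def applyupdateswrapper(orig, repo, actions, wctx, mctx, overwrite, labels=None):
        # prefetch or otherwise prepare before the working directory is touched
        return orig(repo, actions, wctx, mctx, overwrite, labels)

    def extsetup(ui):
        # The wrapped function keeps its name and callers; only its body is extended.
        extensions.wrapfunction(merge, 'applyupdates', applyupdateswrapper)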
@@ -1,356 +1,392 @@
 # wrapper.py - methods wrapping core mercurial logic
 #
 # Copyright 2017 Facebook, Inc.
 #
 # This software may be used and distributed according to the terms of the
 # GNU General Public License version 2 or any later version.

 from __future__ import absolute_import

 import hashlib

 from mercurial.i18n import _
 from mercurial.node import bin, nullid, short

 from mercurial import (
     error,
     filelog,
     revlog,
     util,
 )

 from ..largefiles import lfutil

 from . import (
     blobstore,
     pointer,
 )

 def supportedoutgoingversions(orig, repo):
     versions = orig(repo)
     if 'lfs' in repo.requirements:
         versions.discard('01')
         versions.discard('02')
         versions.add('03')
     return versions

 def allsupportedversions(orig, ui):
     versions = orig(ui)
     versions.add('03')
     return versions

 def _capabilities(orig, repo, proto):
     '''Wrap server command to announce lfs server capability'''
     caps = orig(repo, proto)
     # XXX: change to 'lfs=serve' when separate git server isn't required?
     caps.append('lfs')
     return caps

 def bypasscheckhash(self, text):
     return False

 def readfromstore(self, text):
     """Read filelog content from local blobstore transform for flagprocessor.

     Default transform for flagprocessor, returning contents from blobstore.
     Returns a 2-tuple (text, validatehash) where validatehash is True as the
     contents of the blobstore should be checked using checkhash.
     """
     p = pointer.deserialize(text)
     oid = p.oid()
     store = self.opener.lfslocalblobstore
     if not store.has(oid):
         p.filename = self.filename
         self.opener.lfsremoteblobstore.readbatch([p], store)

     # The caller will validate the content
     text = store.read(oid, verify=False)

     # pack hg filelog metadata
     hgmeta = {}
     for k in p.keys():
         if k.startswith('x-hg-'):
             name = k[len('x-hg-'):]
             hgmeta[name] = p[k]
     if hgmeta or text.startswith('\1\n'):
         text = filelog.packmeta(hgmeta, text)

     return (text, True)

 def writetostore(self, text):
     # hg filelog metadata (includes rename, etc)
     hgmeta, offset = filelog.parsemeta(text)
     if offset and offset > 0:
         # lfs blob does not contain hg filelog metadata
         text = text[offset:]

     # git-lfs only supports sha256
     oid = hashlib.sha256(text).hexdigest()
     self.opener.lfslocalblobstore.write(oid, text)

     # replace contents with metadata
     longoid = 'sha256:%s' % oid
     metadata = pointer.gitlfspointer(oid=longoid, size=str(len(text)))

     # by default, we expect the content to be binary. however, LFS could also
     # be used for non-binary content. add a special entry for non-binary data.
     # this will be used by filectx.isbinary().
     if not util.binary(text):
         # not hg filelog metadata (affecting commit hash), no "x-hg-" prefix
         metadata['x-is-binary'] = '0'

     # translate hg filelog metadata to lfs metadata with "x-hg-" prefix
     if hgmeta is not None:
         for k, v in hgmeta.iteritems():
             metadata['x-hg-%s' % k] = v

     rawtext = metadata.serialize()
     return (rawtext, False)

 def _islfs(rlog, node=None, rev=None):
     if rev is None:
         if node is None:
             # both None - likely working copy content where node is not ready
             return False
         rev = rlog.rev(node)
     else:
         node = rlog.node(rev)
     if node == nullid:
         return False
     flags = rlog.flags(rev)
     return bool(flags & revlog.REVIDX_EXTSTORED)

 def filelogaddrevision(orig, self, text, transaction, link, p1, p2,
                        cachedelta=None, node=None,
                        flags=revlog.REVIDX_DEFAULT_FLAGS, **kwds):
     textlen = len(text)
     # exclude hg rename meta from file size
     meta, offset = filelog.parsemeta(text)
     if offset:
         textlen -= offset

     lfstrack = self.opener.options['lfstrack']

     if lfstrack(self.filename, textlen):
         flags |= revlog.REVIDX_EXTSTORED

     return orig(self, text, transaction, link, p1, p2, cachedelta=cachedelta,
                 node=node, flags=flags, **kwds)

 def filelogrenamed(orig, self, node):
     if _islfs(self, node):
         rawtext = self.revision(node, raw=True)
         if not rawtext:
             return False
         metadata = pointer.deserialize(rawtext)
         if 'x-hg-copy' in metadata and 'x-hg-copyrev' in metadata:
             return metadata['x-hg-copy'], bin(metadata['x-hg-copyrev'])
         else:
             return False
     return orig(self, node)

 def filelogsize(orig, self, rev):
     if _islfs(self, rev=rev):
         # fast path: use lfs metadata to answer size
         rawtext = self.revision(rev, raw=True)
         metadata = pointer.deserialize(rawtext)
         return int(metadata['size'])
     return orig(self, rev)

 def filectxcmp(orig, self, fctx):
     """returns True if text is different than fctx"""
     # some fctx (ex. hg-git) is not based on basefilectx and do not have islfs
     if self.islfs() and getattr(fctx, 'islfs', lambda: False)():
         # fast path: check LFS oid
         p1 = pointer.deserialize(self.rawdata())
         p2 = pointer.deserialize(fctx.rawdata())
         return p1.oid() != p2.oid()
     return orig(self, fctx)

 def filectxisbinary(orig, self):
     if self.islfs():
         # fast path: use lfs metadata to answer isbinary
         metadata = pointer.deserialize(self.rawdata())
         # if lfs metadata says nothing, assume it's binary by default
         return bool(int(metadata.get('x-is-binary', 1)))
     return orig(self)

 def filectxislfs(self):
     return _islfs(self.filelog(), self.filenode())

 def _updatecatformatter(orig, fm, ctx, matcher, path, decode):
     orig(fm, ctx, matcher, path, decode)
     fm.data(rawdata=ctx[path].rawdata())

 def convertsink(orig, sink):
     sink = orig(sink)
     if sink.repotype == 'hg':
         class lfssink(sink.__class__):
             def putcommit(self, files, copies, parents, commit, source, revmap,
                           full, cleanp2):
                 pc = super(lfssink, self).putcommit
                 node = pc(files, copies, parents, commit, source, revmap, full,
                           cleanp2)

                 if 'lfs' not in self.repo.requirements:
                     ctx = self.repo[node]

                     # The file list may contain removed files, so check for
                     # membership before assuming it is in the context.
                     if any(f in ctx and ctx[f].islfs() for f, n in files):
                         self.repo.requirements.add('lfs')
                         self.repo._writerequirements()

                         # Permanently enable lfs locally
                         self.repo.vfs.append(
                             'hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))

                 return node

         sink.__class__ = lfssink

     return sink

 def vfsinit(orig, self, othervfs):
     orig(self, othervfs)
     # copy lfs related options
     for k, v in othervfs.options.items():
         if k.startswith('lfs'):
             self.options[k] = v
     # also copy lfs blobstores. note: this can run before reposetup, so lfs
     # blobstore attributes are not always ready at this time.
     for name in ['lfslocalblobstore', 'lfsremoteblobstore']:
         if util.safehasattr(othervfs, name):
             setattr(self, name, getattr(othervfs, name))

 def hgclone(orig, ui, opts, *args, **kwargs):
     result = orig(ui, opts, *args, **kwargs)

     if result is not None:
         sourcerepo, destrepo = result
         repo = destrepo.local()

         # When cloning to a remote repo (like through SSH), no repo is available
         # from the peer. Therefore the hgrc can't be updated.
         if not repo:
             return result

         # If lfs is required for this repo, permanently enable it locally
         if 'lfs' in repo.requirements:
             repo.vfs.append('hgrc',
                             util.tonativeeol('\n[extensions]\nlfs=\n'))

     return result

 def hgpostshare(orig, sourcerepo, destrepo, bookmarks=True, defaultpath=None):
     orig(sourcerepo, destrepo, bookmarks, defaultpath)

     # If lfs is required for this repo, permanently enable it locally
     if 'lfs' in destrepo.requirements:
         destrepo.vfs.append('hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))

+def _prefetchfiles(repo, ctx, files):
+    """Ensure that required LFS blobs are present, fetching them as a group if
+    needed.
+
+    This is centralized logic for various prefetch hooks."""
+    pointers = []
+    localstore = repo.svfs.lfslocalblobstore
+
+    for f in files:
+        p = pointerfromctx(ctx, f)
+        if p and not localstore.has(p.oid()):
+            p.filename = f
+            pointers.append(p)
+
+    if pointers:
+        repo.svfs.lfsremoteblobstore.readbatch(pointers, localstore)
+
+def mergemodapplyupdates(orig, repo, actions, wctx, mctx, overwrite,
+                         labels=None):
+    """Ensure that the required LFS blobs are present before applying updates,
+    fetching them as a group if needed.
+
+    This has the effect of ensuring all necessary LFS blobs are present before
+    making working directory changes during an update (including after clone and
+    share) or merge."""
+
+    # Skipping 'a', 'am', 'f', 'r', 'dm', 'e', 'k', 'p' and 'pr', because they
+    # don't touch mctx. 'cd' is skipped, because changed/deleted never resolves
+    # to something from the remote side.
+    oplist = [actions[a] for a in 'g dc dg m'.split()]
+
+    _prefetchfiles(repo, mctx,
+                   [f for sublist in oplist for f, args, msg in sublist])
+
+    return orig(repo, actions, wctx, mctx, overwrite, labels)
+
 def _canskipupload(repo):
     # if remotestore is a null store, upload is a no-op and can be skipped
     return isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)

 def candownload(repo):
     # if remotestore is a null store, downloads will lead to nothing
     return not isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)

 def uploadblobsfromrevs(repo, revs):
     '''upload lfs blobs introduced by revs

     Note: also used by other extensions e.g. infinitepush. avoid renaming.
     '''
     if _canskipupload(repo):
         return
     pointers = extractpointers(repo, revs)
     uploadblobs(repo, pointers)

 def prepush(pushop):
     """Prepush hook.

     Read through the revisions to push, looking for filelog entries that can be
     deserialized into metadata so that we can block the push on their upload to
     the remote blobstore.
     """
     return uploadblobsfromrevs(pushop.repo, pushop.outgoing.missing)

 def push(orig, repo, remote, *args, **kwargs):
     """bail on push if the extension isn't enabled on remote when needed"""
     if 'lfs' in repo.requirements:
         # If the remote peer is for a local repo, the requirement tests in the
         # base class method enforce lfs support. Otherwise, some revisions in
         # this repo use lfs, and the remote repo needs the extension loaded.
         if not remote.local() and not remote.capable('lfs'):
             # This is a copy of the message in exchange.push() when requirements
             # are missing between local repos.
             m = _("required features are not supported in the destination: %s")
             raise error.Abort(m % 'lfs',
                               hint=_('enable the lfs extension on the server'))
     return orig(repo, remote, *args, **kwargs)

 def writenewbundle(orig, ui, repo, source, filename, bundletype, outgoing,
                    *args, **kwargs):
     """upload LFS blobs added by outgoing revisions on 'hg bundle'"""
     uploadblobsfromrevs(repo, outgoing.missing)
     return orig(ui, repo, source, filename, bundletype, outgoing, *args,
                 **kwargs)

 def extractpointers(repo, revs):
     """return a list of lfs pointers added by given revs"""
     repo.ui.debug('lfs: computing set of blobs to upload\n')
     pointers = {}
     for r in revs:
         ctx = repo[r]
         for p in pointersfromctx(ctx).values():
             pointers[p.oid()] = p
     return sorted(pointers.values())

 def pointerfromctx(ctx, f):
     """return a pointer for the named file from the given changectx, or None if
     the file isn't LFS."""
     if f not in ctx:
         return None
     fctx = ctx[f]
     if not _islfs(fctx.filelog(), fctx.filenode()):
         return None
     try:
         return pointer.deserialize(fctx.rawdata())
     except pointer.InvalidPointer as ex:
         raise error.Abort(_('lfs: corrupted pointer (%s@%s): %s\n')
                           % (f, short(ctx.node()), ex))

 def pointersfromctx(ctx):
     """return a dict {path: pointer} for given single changectx"""
     result = {}
     for f in ctx.files():
         p = pointerfromctx(ctx, f)
         if p:
             result[f] = p
     return result

 def uploadblobs(repo, pointers):
     """upload given pointers from local blobstore"""
     if not pointers:
         return

     remoteblob = repo.svfs.lfsremoteblobstore
     remoteblob.writebatch(pointers, repo.svfs.lfslocalblobstore)

 def upgradefinishdatamigration(orig, ui, srcrepo, dstrepo, requirements):
     orig(ui, srcrepo, dstrepo, requirements)

     srclfsvfs = srcrepo.svfs.lfslocalblobstore.vfs
     dstlfsvfs = dstrepo.svfs.lfslocalblobstore.vfs

     for dirpath, dirs, files in srclfsvfs.walk():
         for oid in files:
             ui.write(_('copying lfs blob %s\n') % oid)
             lfutil.link(srclfsvfs.join(oid), dstlfsvfs.join(oid))

 def upgraderequirements(orig, repo):
     reqs = orig(repo)
     if 'lfs' in repo.requirements:
         reqs.add('lfs')
     return reqs
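In mergemodapplyupdates() above, actions is the dict that merge.applyupdates() receives: each action code maps to a list of (file, args, msg) tuples, and only the codes that pull content from the remote context ('g', 'dc', 'dg', 'm') are prefetched. A toy, self-contained illustration of the flattening step follows; the file names, args, and messages here are invented.

    # Toy data shaped like merge.applyupdates()'s 'actions' argument, showing how
    # the two list comprehensions in mergemodapplyupdates() collect file names.
    actions = {
        'g': [('large.bin', ('flags',), 'remote created'),
              ('pic.jpg', ('flags',), 'remote is newer')],
        'dc': [],
        'dg': [],
        'm': [('model.dat', (None,), 'versions differ')],
        'k': [('README.txt', (), 'keep')],   # skipped: does not read from mctx
    }

    oplist = [actions[a] for a in 'g dc dg m'.split()]
    files = [f for sublist in oplist for f, args, msg in sublist]
    assert files == ['large.bin', 'pic.jpg', 'model.dat']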
@@ -1,208 +1,204 @@
 #require lfs-test-server

   $ LFS_LISTEN="tcp://:$HGPORT"
   $ LFS_HOST="localhost:$HGPORT"
   $ LFS_PUBLIC=1
   $ export LFS_LISTEN LFS_HOST LFS_PUBLIC
 #if no-windows
   $ lfs-test-server &> lfs-server.log &
   $ echo $! >> $DAEMON_PIDS
 #else
   $ cat >> $TESTTMP/spawn.py <<EOF
   > import os
   > import subprocess
   > import sys
   >
   > for path in os.environ["PATH"].split(os.pathsep):
   >     exe = os.path.join(path, 'lfs-test-server.exe')
   >     if os.path.exists(exe):
   >         with open('lfs-server.log', 'wb') as out:
   >             p = subprocess.Popen(exe, stdout=out, stderr=out)
   >             sys.stdout.write('%s\n' % p.pid)
   >             sys.exit(0)
   > sys.exit(1)
   > EOF
   $ $PYTHON $TESTTMP/spawn.py >> $DAEMON_PIDS
 #endif

   $ cat >> $HGRCPATH <<EOF
   > [extensions]
   > lfs=
   > [lfs]
   > url=http://foo:bar@$LFS_HOST/
   > track=all()
   > EOF

   $ hg init repo1
   $ cd repo1
   $ echo THIS-IS-LFS > a
   $ hg commit -m a -A a

 A push can be serviced directly from the usercache if it isn't in the local
 store.

   $ hg init ../repo2
   $ mv .hg/store/lfs .hg/store/lfs_
   $ hg push ../repo2 -v
   pushing to ../repo2
   searching for changes
   lfs: uploading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
   lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
   lfs: uploaded 1 files (12 bytes)
   1 changesets found
   uncompressed size of bundle content:
   * (changelog) (glob)
   * (manifests) (glob)
   * a (glob)
   adding changesets
   adding manifests
   adding file changes
   added 1 changesets with 1 changes to 1 files
   calling hook pretxnchangegroup.lfs: hgext.lfs.checkrequireslfs
   $ mv .hg/store/lfs_ .hg/store/lfs

 Clear the cache to force a download
   $ rm -rf `hg config lfs.usercache`
   $ cd ../repo2
   $ hg update tip -v
   resolving manifests
-  getting a
   lfs: downloading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
   lfs: adding 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b to the usercache
   lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
+  getting a
   lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
   1 files updated, 0 files merged, 0 files removed, 0 files unresolved

 When the server has some blobs already

   $ hg mv a b
   $ echo ANOTHER-LARGE-FILE > c
   $ echo ANOTHER-LARGE-FILE2 > d
   $ hg commit -m b-and-c -A b c d
   $ hg push ../repo1 -v | grep -v '^ '
   pushing to ../repo1
   searching for changes
   lfs: need to transfer 2 objects (39 bytes)
   lfs: uploading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
   lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
   lfs: uploading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
   lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
   lfs: uploaded 2 files (39 bytes)
   1 changesets found
   uncompressed size of bundle content:
   adding changesets
   adding manifests
   adding file changes
   added 1 changesets with 3 changes to 3 files

 Clear the cache to force a download
   $ rm -rf `hg config lfs.usercache`
   $ hg --repo ../repo1 update tip -v
   resolving manifests
-  getting b
-  lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
-  getting c
+  lfs: need to transfer 2 objects (39 bytes)
+  lfs: downloading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
+  lfs: adding 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 to the usercache
+  lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
   lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
   lfs: adding d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 to the usercache
   lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
+  getting b
+  lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
+  getting c
   lfs: found d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 in the local lfs store
   getting d
-  lfs: downloading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
-  lfs: adding 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 to the usercache
-  lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
   lfs: found 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 in the local lfs store
   3 files updated, 0 files merged, 0 files removed, 0 files unresolved
115 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
115
116
116 Test a corrupt file download, but clear the cache first to force a download.
117 Test a corrupt file download, but clear the cache first to force a download.
117
118
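(Illustrative aside, not part of the test: the aborts below come from an integrity check that recomputes the sha256 of the transferred bytes and compares it with the oid, so a damaged blob is rejected instead of being written to the stores. A minimal sketch; `verifyblob` is a hypothetical name.)

    import hashlib

    def verifyblob(oid, data):
        # reject any blob whose content hash does not match the expected oid
        if hashlib.sha256(data).hexdigest() != oid:
            raise ValueError('corrupt lfs object: %s' % oid)
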
118 $ rm -rf `hg config lfs.usercache`
119 $ rm -rf `hg config lfs.usercache`
119 $ cp $TESTTMP/lfs-content/d1/1e/1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 blob
120 $ cp $TESTTMP/lfs-content/d1/1e/1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 blob
120 $ echo 'damage' > $TESTTMP/lfs-content/d1/1e/1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
121 $ echo 'damage' > $TESTTMP/lfs-content/d1/1e/1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
121 $ rm ../repo1/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
122 $ rm ../repo1/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
122 $ rm ../repo1/*
123 $ rm ../repo1/*
123
124
124 $ hg --repo ../repo1 update -C tip -v
125 $ hg --repo ../repo1 update -C tip -v
125 resolving manifests
126 resolving manifests
126 getting a
127 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
128 getting b
129 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
130 getting c
131 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
127 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
132 abort: corrupt remote lfs object: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
128 abort: corrupt remote lfs object: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
133 [255]
129 [255]
134
130
135 The corrupted blob is not added to the usercache or local store
131 The corrupted blob is not added to the usercache or local store
136
132
137 $ test -f ../repo1/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
133 $ test -f ../repo1/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
138 [1]
134 [1]
139 $ test -f `hg config lfs.usercache`/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
135 $ test -f `hg config lfs.usercache`/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
140 [1]
136 [1]
141 $ cp blob $TESTTMP/lfs-content/d1/1e/1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
137 $ cp blob $TESTTMP/lfs-content/d1/1e/1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
142
138
143 Test a corrupted file upload
139 Test a corrupted file upload
144
140
145 $ echo 'another lfs blob' > b
141 $ echo 'another lfs blob' > b
146 $ hg ci -m 'another blob'
142 $ hg ci -m 'another blob'
147 $ echo 'damage' > .hg/store/lfs/objects/e6/59058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0
143 $ echo 'damage' > .hg/store/lfs/objects/e6/59058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0
148 $ hg push -v ../repo1
144 $ hg push -v ../repo1
149 pushing to ../repo1
145 pushing to ../repo1
150 searching for changes
146 searching for changes
151 lfs: uploading e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0 (17 bytes)
147 lfs: uploading e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0 (17 bytes)
152 abort: detected corrupt lfs object: e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0
148 abort: detected corrupt lfs object: e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0
153 (run hg verify)
149 (run hg verify)
154 [255]
150 [255]
155
151
156 Check error message when the remote is missing a blob:
152 Check error message when the remote is missing a blob:
157
153
158 $ echo FFFFF > b
154 $ echo FFFFF > b
159 $ hg commit -m b -A b
155 $ hg commit -m b -A b
160 $ echo FFFFF >> b
156 $ echo FFFFF >> b
161 $ hg commit -m b b
157 $ hg commit -m b b
162 $ rm -rf .hg/store/lfs
158 $ rm -rf .hg/store/lfs
163 $ rm -rf `hg config lfs.usercache`
159 $ rm -rf `hg config lfs.usercache`
164 $ hg update -C '.^'
160 $ hg update -C '.^'
165 abort: LFS server error. Remote object for "b" not found:(.*)! (re)
161 abort: LFS server error. Remote object for "b" not found:(.*)! (re)
166 [255]
162 [255]
167
163
168 Check error message when object does not exist:
164 Check error message when object does not exist:
169
165
170 $ cd $TESTTMP
166 $ cd $TESTTMP
171 $ hg init test && cd test
167 $ hg init test && cd test
172 $ echo "[extensions]" >> .hg/hgrc
168 $ echo "[extensions]" >> .hg/hgrc
173 $ echo "lfs=" >> .hg/hgrc
169 $ echo "lfs=" >> .hg/hgrc
174 $ echo "[lfs]" >> .hg/hgrc
170 $ echo "[lfs]" >> .hg/hgrc
175 $ echo "threshold=1" >> .hg/hgrc
171 $ echo "threshold=1" >> .hg/hgrc
176 $ echo a > a
172 $ echo a > a
177 $ hg add a
173 $ hg add a
178 $ hg commit -m 'test'
174 $ hg commit -m 'test'
179 $ echo aaaaa > a
175 $ echo aaaaa > a
180 $ hg commit -m 'largefile'
176 $ hg commit -m 'largefile'
181 $ hg debugdata .hg/store/data/a.i 1 # verify this is not the file content but the LFS "pointer", which includes the "oid".
177 $ hg debugdata .hg/store/data/a.i 1 # verify this is not the file content but the LFS "pointer", which includes the "oid".
182 version https://git-lfs.github.com/spec/v1
178 version https://git-lfs.github.com/spec/v1
183 oid sha256:bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a
179 oid sha256:bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a
184 size 6
180 size 6
185 x-is-binary 0
181 x-is-binary 0
186 $ cd ..
182 $ cd ..
187 $ rm -rf `hg config lfs.usercache`
183 $ rm -rf `hg config lfs.usercache`
188
184
189 (Restart the server in a different location so it no longer has the content)
185 (Restart the server in a different location so it no longer has the content)
190
186
191 $ $PYTHON $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
187 $ $PYTHON $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
192 $ rm $DAEMON_PIDS
188 $ rm $DAEMON_PIDS
193 $ mkdir $TESTTMP/lfs-server2
189 $ mkdir $TESTTMP/lfs-server2
194 $ cd $TESTTMP/lfs-server2
190 $ cd $TESTTMP/lfs-server2
195 #if no-windows
191 #if no-windows
196 $ lfs-test-server &> lfs-server.log &
192 $ lfs-test-server &> lfs-server.log &
197 $ echo $! >> $DAEMON_PIDS
193 $ echo $! >> $DAEMON_PIDS
198 #else
194 #else
199 $ $PYTHON $TESTTMP/spawn.py >> $DAEMON_PIDS
195 $ $PYTHON $TESTTMP/spawn.py >> $DAEMON_PIDS
200 #endif
196 #endif
201
197
202 $ cd $TESTTMP
198 $ cd $TESTTMP
203 $ hg clone test test2
199 $ hg clone test test2
204 updating to branch default
200 updating to branch default
205 abort: LFS server error. Remote object for "a" not found:(.*)! (re)
201 abort: LFS server error. Remote object for "a" not found:(.*)! (re)
206 [255]
202 [255]
207
203
208 $ $PYTHON $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
204 $ $PYTHON $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
@@ -1,1059 +1,1058 b''
1 # Initial setup
1 # Initial setup
2
2
3 $ cat >> $HGRCPATH << EOF
3 $ cat >> $HGRCPATH << EOF
4 > [extensions]
4 > [extensions]
5 > lfs=
5 > lfs=
6 > [lfs]
6 > [lfs]
7 > # Test deprecated config
7 > # Test deprecated config
8 > threshold=1000B
8 > threshold=1000B
9 > EOF
9 > EOF
10
10
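(Aside, not part of the test: `lfs.threshold` is the deprecated spelling exercised here; the newer `lfs.track` fileset used later in these tests expresses roughly the same policy, though the exact boundary semantics of the two options may differ.)

    [lfs]
    track=size(">1000B")
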
11 $ LONG=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
11 $ LONG=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
12
12
13 # Prepare server and enable extension
13 # Prepare server and enable extension
14 $ hg init server
14 $ hg init server
15 $ hg clone -q server client
15 $ hg clone -q server client
16 $ cd client
16 $ cd client
17
17
18 # Commit small file
18 # Commit small file
19 $ echo s > smallfile
19 $ echo s > smallfile
20 $ echo '**.py = LF' > .hgeol
20 $ echo '**.py = LF' > .hgeol
21 $ hg --config lfs.track='"size(\">1000B\")"' commit -Aqm "add small file"
21 $ hg --config lfs.track='"size(\">1000B\")"' commit -Aqm "add small file"
22 hg: parse error: unsupported file pattern: size(">1000B")
22 hg: parse error: unsupported file pattern: size(">1000B")
23 (paths must be prefixed with "path:")
23 (paths must be prefixed with "path:")
24 [255]
24 [255]
25 $ hg --config lfs.track='size(">1000B")' commit -Aqm "add small file"
25 $ hg --config lfs.track='size(">1000B")' commit -Aqm "add small file"
26
26
27 # Commit large file
27 # Commit large file
28 $ echo $LONG > largefile
28 $ echo $LONG > largefile
29 $ grep lfs .hg/requires
29 $ grep lfs .hg/requires
30 [1]
30 [1]
31 $ hg commit --traceback -Aqm "add large file"
31 $ hg commit --traceback -Aqm "add large file"
32 $ grep lfs .hg/requires
32 $ grep lfs .hg/requires
33 lfs
33 lfs
34
34
35 # Ensure metadata is stored
35 # Ensure metadata is stored
36 $ hg debugdata largefile 0
36 $ hg debugdata largefile 0
37 version https://git-lfs.github.com/spec/v1
37 version https://git-lfs.github.com/spec/v1
38 oid sha256:f11e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b
38 oid sha256:f11e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b
39 size 1501
39 size 1501
40 x-is-binary 0
40 x-is-binary 0
41
41
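(Illustrative aside, not part of the test: the pointer printed above is plain `key value` lines, so it can be read with something like the sketch below; `parsepointer` and `pointertext` are hypothetical names, not the extension's API.)

    def parsepointer(pointertext):
        # each pointer line is "<key> <value>"; hg adds x-* keys such as
        # x-is-binary (and x-hg-copy/x-hg-copyrev for renames, seen later)
        fields = dict(line.split(' ', 1) for line in pointertext.splitlines())
        oid = fields['oid'].split(':', 1)[1]   # drop the "sha256:" prefix
        return oid, int(fields['size'])
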
42 # Check the blobstore is populated
42 # Check the blobstore is populated
43 $ find .hg/store/lfs/objects | sort
43 $ find .hg/store/lfs/objects | sort
44 .hg/store/lfs/objects
44 .hg/store/lfs/objects
45 .hg/store/lfs/objects/f1
45 .hg/store/lfs/objects/f1
46 .hg/store/lfs/objects/f1/1e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b
46 .hg/store/lfs/objects/f1/1e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b
47
47
48 # Check the blob stored contains the actual contents of the file
48 # Check the blob stored contains the actual contents of the file
49 $ cat .hg/store/lfs/objects/f1/1e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b
49 $ cat .hg/store/lfs/objects/f1/1e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b
50 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
50 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
51
51
52 # Push changes to the server
52 # Push changes to the server
53
53
54 $ hg push
54 $ hg push
55 pushing to $TESTTMP/server
55 pushing to $TESTTMP/server
56 searching for changes
56 searching for changes
57 abort: lfs.url needs to be configured
57 abort: lfs.url needs to be configured
58 [255]
58 [255]
59
59
60 $ cat >> $HGRCPATH << EOF
60 $ cat >> $HGRCPATH << EOF
61 > [lfs]
61 > [lfs]
62 > url=file:$TESTTMP/dummy-remote/
62 > url=file:$TESTTMP/dummy-remote/
63 > EOF
63 > EOF
64
64
65 Pushing to a local non-lfs repo with the extension enabled will add the
65 Pushing to a local non-lfs repo with the extension enabled will add the
66 lfs requirement
66 lfs requirement
67
67
68 $ grep lfs $TESTTMP/server/.hg/requires
68 $ grep lfs $TESTTMP/server/.hg/requires
69 [1]
69 [1]
70 $ hg push -v | egrep -v '^(uncompressed| )'
70 $ hg push -v | egrep -v '^(uncompressed| )'
71 pushing to $TESTTMP/server
71 pushing to $TESTTMP/server
72 searching for changes
72 searching for changes
73 lfs: found f11e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b in the local lfs store
73 lfs: found f11e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b in the local lfs store
74 2 changesets found
74 2 changesets found
75 adding changesets
75 adding changesets
76 adding manifests
76 adding manifests
77 adding file changes
77 adding file changes
78 added 2 changesets with 3 changes to 3 files
78 added 2 changesets with 3 changes to 3 files
79 calling hook pretxnchangegroup.lfs: hgext.lfs.checkrequireslfs
79 calling hook pretxnchangegroup.lfs: hgext.lfs.checkrequireslfs
80 $ grep lfs $TESTTMP/server/.hg/requires
80 $ grep lfs $TESTTMP/server/.hg/requires
81 lfs
81 lfs
82
82
83 # Unknown URL scheme
83 # Unknown URL scheme
84
84
85 $ hg push --config lfs.url=ftp://foobar
85 $ hg push --config lfs.url=ftp://foobar
86 abort: lfs: unknown url scheme: ftp
86 abort: lfs: unknown url scheme: ftp
87 [255]
87 [255]
88
88
89 $ cd ../
89 $ cd ../
90
90
91 # Initialize new client (not cloning) and setup extension
91 # Initialize new client (not cloning) and setup extension
92 $ hg init client2
92 $ hg init client2
93 $ cd client2
93 $ cd client2
94 $ cat >> .hg/hgrc <<EOF
94 $ cat >> .hg/hgrc <<EOF
95 > [paths]
95 > [paths]
96 > default = $TESTTMP/server
96 > default = $TESTTMP/server
97 > EOF
97 > EOF
98
98
99 # Pull from server
99 # Pull from server
100
100
101 Pulling a local lfs repo into a local non-lfs repo with the extension
101 Pulling a local lfs repo into a local non-lfs repo with the extension
102 enabled adds the lfs requirement
102 enabled adds the lfs requirement
103
103
104 $ grep lfs .hg/requires $TESTTMP/server/.hg/requires
104 $ grep lfs .hg/requires $TESTTMP/server/.hg/requires
105 $TESTTMP/server/.hg/requires:lfs
105 $TESTTMP/server/.hg/requires:lfs
106 $ hg pull default
106 $ hg pull default
107 pulling from $TESTTMP/server
107 pulling from $TESTTMP/server
108 requesting all changes
108 requesting all changes
109 adding changesets
109 adding changesets
110 adding manifests
110 adding manifests
111 adding file changes
111 adding file changes
112 added 2 changesets with 3 changes to 3 files
112 added 2 changesets with 3 changes to 3 files
113 new changesets 0ead593177f7:b88141481348
113 new changesets 0ead593177f7:b88141481348
114 (run 'hg update' to get a working copy)
114 (run 'hg update' to get a working copy)
115 $ grep lfs .hg/requires $TESTTMP/server/.hg/requires
115 $ grep lfs .hg/requires $TESTTMP/server/.hg/requires
116 .hg/requires:lfs
116 .hg/requires:lfs
117 $TESTTMP/server/.hg/requires:lfs
117 $TESTTMP/server/.hg/requires:lfs
118
118
119 # Check the blobstore is not yet populated
119 # Check the blobstore is not yet populated
120 $ [ -d .hg/store/lfs/objects ]
120 $ [ -d .hg/store/lfs/objects ]
121 [1]
121 [1]
122
122
123 # Update to the last revision containing the large file
123 # Update to the last revision containing the large file
124 $ hg update
124 $ hg update
125 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
125 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
126
126
127 # Check the blobstore has been populated on update
127 # Check the blobstore has been populated on update
128 $ find .hg/store/lfs/objects | sort
128 $ find .hg/store/lfs/objects | sort
129 .hg/store/lfs/objects
129 .hg/store/lfs/objects
130 .hg/store/lfs/objects/f1
130 .hg/store/lfs/objects/f1
131 .hg/store/lfs/objects/f1/1e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b
131 .hg/store/lfs/objects/f1/1e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b
132
132
133 # Check the contents of the file are fetched from blobstore when requested
133 # Check the contents of the file are fetched from blobstore when requested
134 $ hg cat -r . largefile
134 $ hg cat -r . largefile
135 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
135 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
136
136
137 # Check the file has been copied in the working copy
137 # Check the file has been copied in the working copy
138 $ cat largefile
138 $ cat largefile
139 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
139 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
140
140
141 $ cd ..
141 $ cd ..
142
142
143 # Check rename, and switch between large and small files
143 # Check rename, and switch between large and small files
144
144
145 $ hg init repo3
145 $ hg init repo3
146 $ cd repo3
146 $ cd repo3
147 $ cat >> .hg/hgrc << EOF
147 $ cat >> .hg/hgrc << EOF
148 > [lfs]
148 > [lfs]
149 > track=size(">10B")
149 > track=size(">10B")
150 > EOF
150 > EOF
151
151
152 $ echo LONGER-THAN-TEN-BYTES-WILL-TRIGGER-LFS > large
152 $ echo LONGER-THAN-TEN-BYTES-WILL-TRIGGER-LFS > large
153 $ echo SHORTER > small
153 $ echo SHORTER > small
154 $ hg add . -q
154 $ hg add . -q
155 $ hg commit -m 'commit with lfs content'
155 $ hg commit -m 'commit with lfs content'
156
156
157 $ hg mv large l
157 $ hg mv large l
158 $ hg mv small s
158 $ hg mv small s
159 $ hg commit -m 'renames'
159 $ hg commit -m 'renames'
160
160
161 $ echo SHORT > l
161 $ echo SHORT > l
162 $ echo BECOME-LARGER-FROM-SHORTER > s
162 $ echo BECOME-LARGER-FROM-SHORTER > s
163 $ hg commit -m 'large to small, small to large'
163 $ hg commit -m 'large to small, small to large'
164
164
165 $ echo 1 >> l
165 $ echo 1 >> l
166 $ echo 2 >> s
166 $ echo 2 >> s
167 $ hg commit -m 'random modifications'
167 $ hg commit -m 'random modifications'
168
168
169 $ echo RESTORE-TO-BE-LARGE > l
169 $ echo RESTORE-TO-BE-LARGE > l
170 $ echo SHORTER > s
170 $ echo SHORTER > s
171 $ hg commit -m 'switch large and small again'
171 $ hg commit -m 'switch large and small again'
172
172
173 # Test lfs_files template
173 # Test lfs_files template
174
174
175 $ hg log -r 'all()' -T '{rev} {join(lfs_files, ", ")}\n'
175 $ hg log -r 'all()' -T '{rev} {join(lfs_files, ", ")}\n'
176 0 large
176 0 large
177 1 l
177 1 l
178 2 s
178 2 s
179 3 s
179 3 s
180 4 l
180 4 l
181
181
182 # Push and pull the above repo
182 # Push and pull the above repo
183
183
184 $ hg --cwd .. init repo4
184 $ hg --cwd .. init repo4
185 $ hg push ../repo4
185 $ hg push ../repo4
186 pushing to ../repo4
186 pushing to ../repo4
187 searching for changes
187 searching for changes
188 adding changesets
188 adding changesets
189 adding manifests
189 adding manifests
190 adding file changes
190 adding file changes
191 added 5 changesets with 10 changes to 4 files
191 added 5 changesets with 10 changes to 4 files
192
192
193 $ hg --cwd .. init repo5
193 $ hg --cwd .. init repo5
194 $ hg --cwd ../repo5 pull ../repo3
194 $ hg --cwd ../repo5 pull ../repo3
195 pulling from ../repo3
195 pulling from ../repo3
196 requesting all changes
196 requesting all changes
197 adding changesets
197 adding changesets
198 adding manifests
198 adding manifests
199 adding file changes
199 adding file changes
200 added 5 changesets with 10 changes to 4 files
200 added 5 changesets with 10 changes to 4 files
201 new changesets fd47a419c4f7:5adf850972b9
201 new changesets fd47a419c4f7:5adf850972b9
202 (run 'hg update' to get a working copy)
202 (run 'hg update' to get a working copy)
203
203
204 $ cd ..
204 $ cd ..
205
205
206 # Test clone
206 # Test clone
207
207
208 $ hg init repo6
208 $ hg init repo6
209 $ cd repo6
209 $ cd repo6
210 $ cat >> .hg/hgrc << EOF
210 $ cat >> .hg/hgrc << EOF
211 > [lfs]
211 > [lfs]
212 > track=size(">30B")
212 > track=size(">30B")
213 > EOF
213 > EOF
214
214
215 $ echo LARGE-BECAUSE-IT-IS-MORE-THAN-30-BYTES > large
215 $ echo LARGE-BECAUSE-IT-IS-MORE-THAN-30-BYTES > large
216 $ echo SMALL > small
216 $ echo SMALL > small
217 $ hg commit -Aqm 'create a lfs file' large small
217 $ hg commit -Aqm 'create a lfs file' large small
218 $ hg debuglfsupload -r 'all()' -v
218 $ hg debuglfsupload -r 'all()' -v
219 lfs: found 8e92251415339ae9b148c8da89ed5ec665905166a1ab11b09dca8fad83344738 in the local lfs store
219 lfs: found 8e92251415339ae9b148c8da89ed5ec665905166a1ab11b09dca8fad83344738 in the local lfs store
220
220
221 $ cd ..
221 $ cd ..
222
222
223 $ hg clone repo6 repo7
223 $ hg clone repo6 repo7
224 updating to branch default
224 updating to branch default
225 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
225 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
226 $ cd repo7
226 $ cd repo7
227 $ hg config extensions --debug | grep lfs
227 $ hg config extensions --debug | grep lfs
228 $TESTTMP/repo7/.hg/hgrc:*: extensions.lfs= (glob)
228 $TESTTMP/repo7/.hg/hgrc:*: extensions.lfs= (glob)
229 $ cat large
229 $ cat large
230 LARGE-BECAUSE-IT-IS-MORE-THAN-30-BYTES
230 LARGE-BECAUSE-IT-IS-MORE-THAN-30-BYTES
231 $ cat small
231 $ cat small
232 SMALL
232 SMALL
233
233
234 $ cd ..
234 $ cd ..
235
235
236 $ hg --config extensions.share= share repo7 sharedrepo
236 $ hg --config extensions.share= share repo7 sharedrepo
237 updating working directory
237 updating working directory
238 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
238 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
239 $ hg -R sharedrepo config extensions --debug | grep lfs
239 $ hg -R sharedrepo config extensions --debug | grep lfs
240 $TESTTMP/sharedrepo/.hg/hgrc:*: extensions.lfs= (glob)
240 $TESTTMP/sharedrepo/.hg/hgrc:*: extensions.lfs= (glob)
241
241
242 # Test rename and status
242 # Test rename and status
243
243
244 $ hg init repo8
244 $ hg init repo8
245 $ cd repo8
245 $ cd repo8
246 $ cat >> .hg/hgrc << EOF
246 $ cat >> .hg/hgrc << EOF
247 > [lfs]
247 > [lfs]
248 > track=size(">10B")
248 > track=size(">10B")
249 > EOF
249 > EOF
250
250
251 $ echo THIS-IS-LFS-BECAUSE-10-BYTES > a1
251 $ echo THIS-IS-LFS-BECAUSE-10-BYTES > a1
252 $ echo SMALL > a2
252 $ echo SMALL > a2
253 $ hg commit -m a -A a1 a2
253 $ hg commit -m a -A a1 a2
254 $ hg status
254 $ hg status
255 $ hg mv a1 b1
255 $ hg mv a1 b1
256 $ hg mv a2 a1
256 $ hg mv a2 a1
257 $ hg mv b1 a2
257 $ hg mv b1 a2
258 $ hg commit -m b
258 $ hg commit -m b
259 $ hg status
259 $ hg status
260 >>> with open('a2', 'wb') as f:
260 >>> with open('a2', 'wb') as f:
261 ... f.write(b'\1\nSTART-WITH-HG-FILELOG-METADATA')
261 ... f.write(b'\1\nSTART-WITH-HG-FILELOG-METADATA')
262 >>> with open('a1', 'wb') as f:
262 >>> with open('a1', 'wb') as f:
263 ... f.write(b'\1\nMETA\n')
263 ... f.write(b'\1\nMETA\n')
264 $ hg commit -m meta
264 $ hg commit -m meta
265 $ hg status
265 $ hg status
266 $ hg log -T '{rev}: {file_copies} | {file_dels} | {file_adds}\n'
266 $ hg log -T '{rev}: {file_copies} | {file_dels} | {file_adds}\n'
267 2: | |
267 2: | |
268 1: a1 (a2)a2 (a1) | |
268 1: a1 (a2)a2 (a1) | |
269 0: | | a1 a2
269 0: | | a1 a2
270
270
271 $ for n in a1 a2; do
271 $ for n in a1 a2; do
272 > for r in 0 1 2; do
272 > for r in 0 1 2; do
273 > printf '\n%s @ %s\n' $n $r
273 > printf '\n%s @ %s\n' $n $r
274 > hg debugdata $n $r
274 > hg debugdata $n $r
275 > done
275 > done
276 > done
276 > done
277
277
278 a1 @ 0
278 a1 @ 0
279 version https://git-lfs.github.com/spec/v1
279 version https://git-lfs.github.com/spec/v1
280 oid sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
280 oid sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
281 size 29
281 size 29
282 x-is-binary 0
282 x-is-binary 0
283
283
284 a1 @ 1
284 a1 @ 1
285 \x01 (esc)
285 \x01 (esc)
286 copy: a2
286 copy: a2
287 copyrev: 50470ad23cf937b1f4b9f80bfe54df38e65b50d9
287 copyrev: 50470ad23cf937b1f4b9f80bfe54df38e65b50d9
288 \x01 (esc)
288 \x01 (esc)
289 SMALL
289 SMALL
290
290
291 a1 @ 2
291 a1 @ 2
292 \x01 (esc)
292 \x01 (esc)
293 \x01 (esc)
293 \x01 (esc)
294 \x01 (esc)
294 \x01 (esc)
295 META
295 META
296
296
297 a2 @ 0
297 a2 @ 0
298 SMALL
298 SMALL
299
299
300 a2 @ 1
300 a2 @ 1
301 version https://git-lfs.github.com/spec/v1
301 version https://git-lfs.github.com/spec/v1
302 oid sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
302 oid sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
303 size 29
303 size 29
304 x-hg-copy a1
304 x-hg-copy a1
305 x-hg-copyrev be23af27908a582af43e5cda209a5a9b319de8d4
305 x-hg-copyrev be23af27908a582af43e5cda209a5a9b319de8d4
306 x-is-binary 0
306 x-is-binary 0
307
307
308 a2 @ 2
308 a2 @ 2
309 version https://git-lfs.github.com/spec/v1
309 version https://git-lfs.github.com/spec/v1
310 oid sha256:876dadc86a8542f9798048f2c47f51dbf8e4359aed883e8ec80c5db825f0d943
310 oid sha256:876dadc86a8542f9798048f2c47f51dbf8e4359aed883e8ec80c5db825f0d943
311 size 32
311 size 32
312 x-is-binary 0
312 x-is-binary 0
313
313
314 # Verify commit hashes include rename metadata
314 # Verify commit hashes include rename metadata
315
315
316 $ hg log -T '{rev}:{node|short} {desc}\n'
316 $ hg log -T '{rev}:{node|short} {desc}\n'
317 2:0fae949de7fa meta
317 2:0fae949de7fa meta
318 1:9cd6bdffdac0 b
318 1:9cd6bdffdac0 b
319 0:7f96794915f7 a
319 0:7f96794915f7 a
320
320
321 $ cd ..
321 $ cd ..
322
322
323 # Test bundle
323 # Test bundle
324
324
325 $ hg init repo9
325 $ hg init repo9
326 $ cd repo9
326 $ cd repo9
327 $ cat >> .hg/hgrc << EOF
327 $ cat >> .hg/hgrc << EOF
328 > [lfs]
328 > [lfs]
329 > track=size(">10B")
329 > track=size(">10B")
330 > [diff]
330 > [diff]
331 > git=1
331 > git=1
332 > EOF
332 > EOF
333
333
334 $ for i in 0 single two three 4; do
334 $ for i in 0 single two three 4; do
335 > echo 'THIS-IS-LFS-'$i > a
335 > echo 'THIS-IS-LFS-'$i > a
336 > hg commit -m a-$i -A a
336 > hg commit -m a-$i -A a
337 > done
337 > done
338
338
339 $ hg update 2 -q
339 $ hg update 2 -q
340 $ echo 'THIS-IS-LFS-2-CHILD' > a
340 $ echo 'THIS-IS-LFS-2-CHILD' > a
341 $ hg commit -m branching -q
341 $ hg commit -m branching -q
342
342
343 $ hg bundle --base 1 bundle.hg -v
343 $ hg bundle --base 1 bundle.hg -v
344 lfs: found 5ab7a3739a5feec94a562d070a14f36dba7cad17e5484a4a89eea8e5f3166888 in the local lfs store
344 lfs: found 5ab7a3739a5feec94a562d070a14f36dba7cad17e5484a4a89eea8e5f3166888 in the local lfs store
345 lfs: found a9c7d1cd6ce2b9bbdf46ed9a862845228717b921c089d0d42e3bcaed29eb612e in the local lfs store
345 lfs: found a9c7d1cd6ce2b9bbdf46ed9a862845228717b921c089d0d42e3bcaed29eb612e in the local lfs store
346 lfs: found f693890c49c409ec33673b71e53f297681f76c1166daf33b2ad7ebf8b1d3237e in the local lfs store
346 lfs: found f693890c49c409ec33673b71e53f297681f76c1166daf33b2ad7ebf8b1d3237e in the local lfs store
347 lfs: found fda198fea753eb66a252e9856915e1f5cddbe41723bd4b695ece2604ad3c9f75 in the local lfs store
347 lfs: found fda198fea753eb66a252e9856915e1f5cddbe41723bd4b695ece2604ad3c9f75 in the local lfs store
348 4 changesets found
348 4 changesets found
349 uncompressed size of bundle content:
349 uncompressed size of bundle content:
350 * (changelog) (glob)
350 * (changelog) (glob)
351 * (manifests) (glob)
351 * (manifests) (glob)
352 * a (glob)
352 * a (glob)
353 $ hg --config extensions.strip= strip -r 2 --no-backup --force -q
353 $ hg --config extensions.strip= strip -r 2 --no-backup --force -q
354 $ hg -R bundle.hg log -p -T '{rev} {desc}\n' a
354 $ hg -R bundle.hg log -p -T '{rev} {desc}\n' a
355 5 branching
355 5 branching
356 diff --git a/a b/a
356 diff --git a/a b/a
357 --- a/a
357 --- a/a
358 +++ b/a
358 +++ b/a
359 @@ -1,1 +1,1 @@
359 @@ -1,1 +1,1 @@
360 -THIS-IS-LFS-two
360 -THIS-IS-LFS-two
361 +THIS-IS-LFS-2-CHILD
361 +THIS-IS-LFS-2-CHILD
362
362
363 4 a-4
363 4 a-4
364 diff --git a/a b/a
364 diff --git a/a b/a
365 --- a/a
365 --- a/a
366 +++ b/a
366 +++ b/a
367 @@ -1,1 +1,1 @@
367 @@ -1,1 +1,1 @@
368 -THIS-IS-LFS-three
368 -THIS-IS-LFS-three
369 +THIS-IS-LFS-4
369 +THIS-IS-LFS-4
370
370
371 3 a-three
371 3 a-three
372 diff --git a/a b/a
372 diff --git a/a b/a
373 --- a/a
373 --- a/a
374 +++ b/a
374 +++ b/a
375 @@ -1,1 +1,1 @@
375 @@ -1,1 +1,1 @@
376 -THIS-IS-LFS-two
376 -THIS-IS-LFS-two
377 +THIS-IS-LFS-three
377 +THIS-IS-LFS-three
378
378
379 2 a-two
379 2 a-two
380 diff --git a/a b/a
380 diff --git a/a b/a
381 --- a/a
381 --- a/a
382 +++ b/a
382 +++ b/a
383 @@ -1,1 +1,1 @@
383 @@ -1,1 +1,1 @@
384 -THIS-IS-LFS-single
384 -THIS-IS-LFS-single
385 +THIS-IS-LFS-two
385 +THIS-IS-LFS-two
386
386
387 1 a-single
387 1 a-single
388 diff --git a/a b/a
388 diff --git a/a b/a
389 --- a/a
389 --- a/a
390 +++ b/a
390 +++ b/a
391 @@ -1,1 +1,1 @@
391 @@ -1,1 +1,1 @@
392 -THIS-IS-LFS-0
392 -THIS-IS-LFS-0
393 +THIS-IS-LFS-single
393 +THIS-IS-LFS-single
394
394
395 0 a-0
395 0 a-0
396 diff --git a/a b/a
396 diff --git a/a b/a
397 new file mode 100644
397 new file mode 100644
398 --- /dev/null
398 --- /dev/null
399 +++ b/a
399 +++ b/a
400 @@ -0,0 +1,1 @@
400 @@ -0,0 +1,1 @@
401 +THIS-IS-LFS-0
401 +THIS-IS-LFS-0
402
402
403 $ hg bundle -R bundle.hg --base 1 bundle-again.hg -q
403 $ hg bundle -R bundle.hg --base 1 bundle-again.hg -q
404 $ hg -R bundle-again.hg log -p -T '{rev} {desc}\n' a
404 $ hg -R bundle-again.hg log -p -T '{rev} {desc}\n' a
405 5 branching
405 5 branching
406 diff --git a/a b/a
406 diff --git a/a b/a
407 --- a/a
407 --- a/a
408 +++ b/a
408 +++ b/a
409 @@ -1,1 +1,1 @@
409 @@ -1,1 +1,1 @@
410 -THIS-IS-LFS-two
410 -THIS-IS-LFS-two
411 +THIS-IS-LFS-2-CHILD
411 +THIS-IS-LFS-2-CHILD
412
412
413 4 a-4
413 4 a-4
414 diff --git a/a b/a
414 diff --git a/a b/a
415 --- a/a
415 --- a/a
416 +++ b/a
416 +++ b/a
417 @@ -1,1 +1,1 @@
417 @@ -1,1 +1,1 @@
418 -THIS-IS-LFS-three
418 -THIS-IS-LFS-three
419 +THIS-IS-LFS-4
419 +THIS-IS-LFS-4
420
420
421 3 a-three
421 3 a-three
422 diff --git a/a b/a
422 diff --git a/a b/a
423 --- a/a
423 --- a/a
424 +++ b/a
424 +++ b/a
425 @@ -1,1 +1,1 @@
425 @@ -1,1 +1,1 @@
426 -THIS-IS-LFS-two
426 -THIS-IS-LFS-two
427 +THIS-IS-LFS-three
427 +THIS-IS-LFS-three
428
428
429 2 a-two
429 2 a-two
430 diff --git a/a b/a
430 diff --git a/a b/a
431 --- a/a
431 --- a/a
432 +++ b/a
432 +++ b/a
433 @@ -1,1 +1,1 @@
433 @@ -1,1 +1,1 @@
434 -THIS-IS-LFS-single
434 -THIS-IS-LFS-single
435 +THIS-IS-LFS-two
435 +THIS-IS-LFS-two
436
436
437 1 a-single
437 1 a-single
438 diff --git a/a b/a
438 diff --git a/a b/a
439 --- a/a
439 --- a/a
440 +++ b/a
440 +++ b/a
441 @@ -1,1 +1,1 @@
441 @@ -1,1 +1,1 @@
442 -THIS-IS-LFS-0
442 -THIS-IS-LFS-0
443 +THIS-IS-LFS-single
443 +THIS-IS-LFS-single
444
444
445 0 a-0
445 0 a-0
446 diff --git a/a b/a
446 diff --git a/a b/a
447 new file mode 100644
447 new file mode 100644
448 --- /dev/null
448 --- /dev/null
449 +++ b/a
449 +++ b/a
450 @@ -0,0 +1,1 @@
450 @@ -0,0 +1,1 @@
451 +THIS-IS-LFS-0
451 +THIS-IS-LFS-0
452
452
453 $ cd ..
453 $ cd ..
454
454
455 # Test isbinary
455 # Test isbinary
456
456
457 $ hg init repo10
457 $ hg init repo10
458 $ cd repo10
458 $ cd repo10
459 $ cat >> .hg/hgrc << EOF
459 $ cat >> .hg/hgrc << EOF
460 > [extensions]
460 > [extensions]
461 > lfs=
461 > lfs=
462 > [lfs]
462 > [lfs]
463 > track=all()
463 > track=all()
464 > EOF
464 > EOF
465 $ $PYTHON <<'EOF'
465 $ $PYTHON <<'EOF'
466 > def write(path, content):
466 > def write(path, content):
467 > with open(path, 'wb') as f:
467 > with open(path, 'wb') as f:
468 > f.write(content)
468 > f.write(content)
469 > write('a', b'\0\0')
469 > write('a', b'\0\0')
470 > write('b', b'\1\n')
470 > write('b', b'\1\n')
471 > write('c', b'\1\n\0')
471 > write('c', b'\1\n\0')
472 > write('d', b'xx')
472 > write('d', b'xx')
473 > EOF
473 > EOF
474 $ hg add a b c d
474 $ hg add a b c d
475 $ hg diff --stat
475 $ hg diff --stat
476 a | Bin
476 a | Bin
477 b | 1 +
477 b | 1 +
478 c | Bin
478 c | Bin
479 d | 1 +
479 d | 1 +
480 4 files changed, 2 insertions(+), 0 deletions(-)
480 4 files changed, 2 insertions(+), 0 deletions(-)
481 $ hg commit -m binarytest
481 $ hg commit -m binarytest
482 $ cat > $TESTTMP/dumpbinary.py << EOF
482 $ cat > $TESTTMP/dumpbinary.py << EOF
483 > def reposetup(ui, repo):
483 > def reposetup(ui, repo):
484 > for n in 'abcd':
484 > for n in 'abcd':
485 > ui.write(('%s: binary=%s\n') % (n, repo['.'][n].isbinary()))
485 > ui.write(('%s: binary=%s\n') % (n, repo['.'][n].isbinary()))
486 > EOF
486 > EOF
487 $ hg --config extensions.dumpbinary=$TESTTMP/dumpbinary.py id --trace
487 $ hg --config extensions.dumpbinary=$TESTTMP/dumpbinary.py id --trace
488 a: binary=True
488 a: binary=True
489 b: binary=False
489 b: binary=False
490 c: binary=True
490 c: binary=True
491 d: binary=False
491 d: binary=False
492 b55353847f02 tip
492 b55353847f02 tip
493
493
494 $ cd ..
494 $ cd ..
495
495
496 # Test fctx.cmp fastpath - diff without LFS blobs
496 # Test fctx.cmp fastpath - diff without LFS blobs
497
497
498 $ hg init repo12
498 $ hg init repo12
499 $ cd repo12
499 $ cd repo12
500 $ cat >> .hg/hgrc <<EOF
500 $ cat >> .hg/hgrc <<EOF
501 > [lfs]
501 > [lfs]
502 > threshold=1
502 > threshold=1
503 > EOF
503 > EOF
504 $ cat > ../patch.diff <<EOF
504 $ cat > ../patch.diff <<EOF
505 > # HG changeset patch
505 > # HG changeset patch
506 > 2
506 > 2
507 >
507 >
508 > diff --git a/a b/a
508 > diff --git a/a b/a
509 > old mode 100644
509 > old mode 100644
510 > new mode 100755
510 > new mode 100755
511 > EOF
511 > EOF
512
512
513 $ for i in 1 2 3; do
513 $ for i in 1 2 3; do
514 > cp ../repo10/a a
514 > cp ../repo10/a a
515 > if [ $i = 3 ]; then
515 > if [ $i = 3 ]; then
516 > # make a content-only change
516 > # make a content-only change
517 > hg import -q --bypass ../patch.diff
517 > hg import -q --bypass ../patch.diff
518 > hg update -q
518 > hg update -q
519 > rm ../patch.diff
519 > rm ../patch.diff
520 > else
520 > else
521 > echo $i >> a
521 > echo $i >> a
522 > hg commit -m $i -A a
522 > hg commit -m $i -A a
523 > fi
523 > fi
524 > done
524 > done
525 $ [ -d .hg/store/lfs/objects ]
525 $ [ -d .hg/store/lfs/objects ]
526
526
527 $ cd ..
527 $ cd ..
528
528
529 $ hg clone repo12 repo13 --noupdate
529 $ hg clone repo12 repo13 --noupdate
530 $ cd repo13
530 $ cd repo13
531 $ hg log --removed -p a -T '{desc}\n' --config diff.nobinary=1 --git
531 $ hg log --removed -p a -T '{desc}\n' --config diff.nobinary=1 --git
532 2
532 2
533 diff --git a/a b/a
533 diff --git a/a b/a
534 old mode 100644
534 old mode 100644
535 new mode 100755
535 new mode 100755
536
536
537 2
537 2
538 diff --git a/a b/a
538 diff --git a/a b/a
539 Binary file a has changed
539 Binary file a has changed
540
540
541 1
541 1
542 diff --git a/a b/a
542 diff --git a/a b/a
543 new file mode 100644
543 new file mode 100644
544 Binary file a has changed
544 Binary file a has changed
545
545
546 $ [ -d .hg/store/lfs/objects ]
546 $ [ -d .hg/store/lfs/objects ]
547 [1]
547 [1]
548
548
549 $ cd ..
549 $ cd ..
550
550
551 # Test filter
551 # Test filter
552
552
553 $ hg init repo11
553 $ hg init repo11
554 $ cd repo11
554 $ cd repo11
555 $ cat >> .hg/hgrc << EOF
555 $ cat >> .hg/hgrc << EOF
556 > [lfs]
556 > [lfs]
557 > track=(**.a & size(">5B")) | (**.b & !size(">5B"))
557 > track=(**.a & size(">5B")) | (**.b & !size(">5B"))
558 > | (**.c & "path:d" & !"path:d/c.c") | size(">10B")
558 > | (**.c & "path:d" & !"path:d/c.c") | size(">10B")
559 > EOF
559 > EOF
560
560
561 $ mkdir a
561 $ mkdir a
562 $ echo aaaaaa > a/1.a
562 $ echo aaaaaa > a/1.a
563 $ echo a > a/2.a
563 $ echo a > a/2.a
564 $ echo aaaaaa > 1.b
564 $ echo aaaaaa > 1.b
565 $ echo a > 2.b
565 $ echo a > 2.b
566 $ echo a > 1.c
566 $ echo a > 1.c
567 $ mkdir d
567 $ mkdir d
568 $ echo a > d/c.c
568 $ echo a > d/c.c
569 $ echo a > d/d.c
569 $ echo a > d/d.c
570 $ echo aaaaaaaaaaaa > x
570 $ echo aaaaaaaaaaaa > x
571 $ hg add . -q
571 $ hg add . -q
572 $ hg commit -m files
572 $ hg commit -m files
573
573
574 $ for p in a/1.a a/2.a 1.b 2.b 1.c d/c.c d/d.c x; do
574 $ for p in a/1.a a/2.a 1.b 2.b 1.c d/c.c d/d.c x; do
575 > if hg debugdata $p 0 2>&1 | grep git-lfs >/dev/null; then
575 > if hg debugdata $p 0 2>&1 | grep git-lfs >/dev/null; then
576 > echo "${p}: is lfs"
576 > echo "${p}: is lfs"
577 > else
577 > else
578 > echo "${p}: not lfs"
578 > echo "${p}: not lfs"
579 > fi
579 > fi
580 > done
580 > done
581 a/1.a: is lfs
581 a/1.a: is lfs
582 a/2.a: not lfs
582 a/2.a: not lfs
583 1.b: not lfs
583 1.b: not lfs
584 2.b: is lfs
584 2.b: is lfs
585 1.c: not lfs
585 1.c: not lfs
586 d/c.c: not lfs
586 d/c.c: not lfs
587 d/d.c: is lfs
587 d/d.c: is lfs
588 x: is lfs
588 x: is lfs
589
589
590 $ cd ..
590 $ cd ..
591
591
592 # Verify the repos
592 # Verify the repos
593
593
594 $ cat > $TESTTMP/dumpflog.py << EOF
594 $ cat > $TESTTMP/dumpflog.py << EOF
595 > # print raw revision sizes, flags, and hashes for certain files
595 > # print raw revision sizes, flags, and hashes for certain files
596 > import hashlib
596 > import hashlib
597 > from mercurial import revlog
597 > from mercurial import revlog
598 > from mercurial.node import short
598 > from mercurial.node import short
599 > def hash(rawtext):
599 > def hash(rawtext):
600 > h = hashlib.sha512()
600 > h = hashlib.sha512()
601 > h.update(rawtext)
601 > h.update(rawtext)
602 > return h.hexdigest()[:4]
602 > return h.hexdigest()[:4]
603 > def reposetup(ui, repo):
603 > def reposetup(ui, repo):
604 > # these 2 files are interesting
604 > # these 2 files are interesting
605 > for name in ['l', 's']:
605 > for name in ['l', 's']:
606 > fl = repo.file(name)
606 > fl = repo.file(name)
607 > if len(fl) == 0:
607 > if len(fl) == 0:
608 > continue
608 > continue
609 > sizes = [revlog.revlog.rawsize(fl, i) for i in fl]
609 > sizes = [revlog.revlog.rawsize(fl, i) for i in fl]
610 > texts = [fl.revision(i, raw=True) for i in fl]
610 > texts = [fl.revision(i, raw=True) for i in fl]
611 > flags = [int(fl.flags(i)) for i in fl]
611 > flags = [int(fl.flags(i)) for i in fl]
612 > hashes = [hash(t) for t in texts]
612 > hashes = [hash(t) for t in texts]
613 > print(' %s: rawsizes=%r flags=%r hashes=%r'
613 > print(' %s: rawsizes=%r flags=%r hashes=%r'
614 > % (name, sizes, flags, hashes))
614 > % (name, sizes, flags, hashes))
615 > EOF
615 > EOF
616
616
617 $ for i in client client2 server repo3 repo4 repo5 repo6 repo7 repo8 repo9 \
617 $ for i in client client2 server repo3 repo4 repo5 repo6 repo7 repo8 repo9 \
618 > repo10; do
618 > repo10; do
619 > echo 'repo:' $i
619 > echo 'repo:' $i
620 > hg --cwd $i verify --config extensions.dumpflog=$TESTTMP/dumpflog.py -q
620 > hg --cwd $i verify --config extensions.dumpflog=$TESTTMP/dumpflog.py -q
621 > done
621 > done
622 repo: client
622 repo: client
623 repo: client2
623 repo: client2
624 repo: server
624 repo: server
625 repo: repo3
625 repo: repo3
626 l: rawsizes=[211, 6, 8, 141] flags=[8192, 0, 0, 8192] hashes=['d2b8', '948c', 'cc88', '724d']
626 l: rawsizes=[211, 6, 8, 141] flags=[8192, 0, 0, 8192] hashes=['d2b8', '948c', 'cc88', '724d']
627 s: rawsizes=[74, 141, 141, 8] flags=[0, 8192, 8192, 0] hashes=['3c80', 'fce0', '874a', '826b']
627 s: rawsizes=[74, 141, 141, 8] flags=[0, 8192, 8192, 0] hashes=['3c80', 'fce0', '874a', '826b']
628 repo: repo4
628 repo: repo4
629 l: rawsizes=[211, 6, 8, 141] flags=[8192, 0, 0, 8192] hashes=['d2b8', '948c', 'cc88', '724d']
629 l: rawsizes=[211, 6, 8, 141] flags=[8192, 0, 0, 8192] hashes=['d2b8', '948c', 'cc88', '724d']
630 s: rawsizes=[74, 141, 141, 8] flags=[0, 8192, 8192, 0] hashes=['3c80', 'fce0', '874a', '826b']
630 s: rawsizes=[74, 141, 141, 8] flags=[0, 8192, 8192, 0] hashes=['3c80', 'fce0', '874a', '826b']
631 repo: repo5
631 repo: repo5
632 l: rawsizes=[211, 6, 8, 141] flags=[8192, 0, 0, 8192] hashes=['d2b8', '948c', 'cc88', '724d']
632 l: rawsizes=[211, 6, 8, 141] flags=[8192, 0, 0, 8192] hashes=['d2b8', '948c', 'cc88', '724d']
633 s: rawsizes=[74, 141, 141, 8] flags=[0, 8192, 8192, 0] hashes=['3c80', 'fce0', '874a', '826b']
633 s: rawsizes=[74, 141, 141, 8] flags=[0, 8192, 8192, 0] hashes=['3c80', 'fce0', '874a', '826b']
634 repo: repo6
634 repo: repo6
635 repo: repo7
635 repo: repo7
636 repo: repo8
636 repo: repo8
637 repo: repo9
637 repo: repo9
638 repo: repo10
638 repo: repo10
639
639
640 repo13 doesn't have any cached lfs files and its source never pushed its
640 repo13 doesn't have any cached lfs files and its source never pushed its
641 files. Therefore, the files don't exist in the remote store. Use the files in
641 files. Therefore, the files don't exist in the remote store. Use the files in
642 the user cache.
642 the user cache.
643
643
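(Illustrative aside, not part of the test: conceptually the lookup order exercised here is repo-local store, then usercache, then the configured remote; repo13 has to rely on the usercache because neither its own store nor the remote has the blobs. The objects and methods below are hypothetical names, not the extension's API.)

    def readblob(oid, localstore, usercache, remote):
        # try the repo-local store first, then the shared usercache, and only
        # contact the remote store when neither has the blob
        for store in (localstore, usercache):
            if store.has(oid):
                return store.read(oid)
        return remote.download(oid)
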
644 $ test -d $TESTTMP/repo13/.hg/store/lfs/objects
644 $ test -d $TESTTMP/repo13/.hg/store/lfs/objects
645 [1]
645 [1]
646
646
647 $ hg --config extensions.share= share repo13 repo14
647 $ hg --config extensions.share= share repo13 repo14
648 updating working directory
648 updating working directory
649 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
649 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
650 $ hg -R repo14 -q verify
650 $ hg -R repo14 -q verify
651
651
652 $ hg clone repo13 repo15
652 $ hg clone repo13 repo15
653 updating to branch default
653 updating to branch default
654 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
654 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
655 $ hg -R repo15 -q verify
655 $ hg -R repo15 -q verify
656
656
657 If the source repo doesn't have the blob (maybe it was pulled or cloned with
657 If the source repo doesn't have the blob (maybe it was pulled or cloned with
658 --noupdate), the blob is still accessible via the global cache to send to the
658 --noupdate), the blob is still accessible via the global cache to send to the
659 remote store.
659 remote store.
660
660
661 $ rm -rf $TESTTMP/repo15/.hg/store/lfs
661 $ rm -rf $TESTTMP/repo15/.hg/store/lfs
662 $ hg init repo16
662 $ hg init repo16
663 $ hg -R repo15 push repo16
663 $ hg -R repo15 push repo16
664 pushing to repo16
664 pushing to repo16
665 searching for changes
665 searching for changes
666 adding changesets
666 adding changesets
667 adding manifests
667 adding manifests
668 adding file changes
668 adding file changes
669 added 3 changesets with 2 changes to 1 files
669 added 3 changesets with 2 changes to 1 files
670 $ hg -R repo15 -q verify
670 $ hg -R repo15 -q verify
671
671
672 Test damaged file scenarios. (This also damages the usercache because of the
672 Test damaged file scenarios. (This also damages the usercache because of the
673 hardlinks.)
673 hardlinks.)
674
674
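(Illustrative aside, not part of the test: the local store blob and the usercache blob are hardlinks to the same inode, so appending 'damage' to one path corrupts both copies. A minimal check; `samehardlink` is a hypothetical helper.)

    import os

    def samehardlink(storeblob, cacheblob):
        # hardlinked copies share an inode, so damaging either path damages both
        sa, sb = os.stat(storeblob), os.stat(cacheblob)
        return sa.st_ino == sb.st_ino and sa.st_dev == sb.st_dev
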
675 $ echo 'damage' >> repo5/.hg/store/lfs/objects/66/100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
675 $ echo 'damage' >> repo5/.hg/store/lfs/objects/66/100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
676
676
677 Repo with damaged lfs objects in any revision will fail verification.
677 Repo with damaged lfs objects in any revision will fail verification.
678
678
679 $ hg -R repo5 verify
679 $ hg -R repo5 verify
680 checking changesets
680 checking changesets
681 checking manifests
681 checking manifests
682 crosschecking files in changesets and manifests
682 crosschecking files in changesets and manifests
683 checking files
683 checking files
684 l@1: unpacking 46a2f24864bc: integrity check failed on data/l.i:0
684 l@1: unpacking 46a2f24864bc: integrity check failed on data/l.i:0
685 large@0: unpacking 2c531e0992ff: integrity check failed on data/large.i:0
685 large@0: unpacking 2c531e0992ff: integrity check failed on data/large.i:0
686 4 files, 5 changesets, 10 total revisions
686 4 files, 5 changesets, 10 total revisions
687 2 integrity errors encountered!
687 2 integrity errors encountered!
688 (first damaged changeset appears to be 0)
688 (first damaged changeset appears to be 0)
689 [1]
689 [1]
690
690
691 Updates work after cloning a damaged repo, if the damaged lfs objects aren't in
691 Updates work after cloning a damaged repo, if the damaged lfs objects aren't in
692 the update destination. Those objects won't be added to the new repo's store
692 the update destination. Those objects won't be added to the new repo's store
693 because they aren't accessed.
693 because they aren't accessed.
694
694
695 $ hg clone -v repo5 fromcorrupt
695 $ hg clone -v repo5 fromcorrupt
696 updating to branch default
696 updating to branch default
697 resolving manifests
697 resolving manifests
698 getting l
698 getting l
699 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the usercache
699 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the usercache
700 getting s
700 getting s
701 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
701 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
702 $ test -f fromcorrupt/.hg/store/lfs/objects/66/100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
702 $ test -f fromcorrupt/.hg/store/lfs/objects/66/100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
703 [1]
703 [1]
704
704
705 Verify will copy/link all lfs objects into the local store that aren't already
705 Verify will copy/link all lfs objects into the local store that aren't already
706 present. Bypass the corrupted usercache to show that verify works when fed by
706 present. Bypass the corrupted usercache to show that verify works when fed by
707 the (uncorrupted) remote store.
707 the (uncorrupted) remote store.
708
708
709 $ hg -R fromcorrupt --config lfs.usercache=emptycache verify -v
709 $ hg -R fromcorrupt --config lfs.usercache=emptycache verify -v
710 repository uses revlog format 1
710 repository uses revlog format 1
711 checking changesets
711 checking changesets
712 checking manifests
712 checking manifests
713 crosschecking files in changesets and manifests
713 crosschecking files in changesets and manifests
714 checking files
714 checking files
715 lfs: adding 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e to the usercache
715 lfs: adding 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e to the usercache
716 lfs: found 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e in the local lfs store
716 lfs: found 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e in the local lfs store
717 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
717 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
718 lfs: found 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e in the local lfs store
718 lfs: found 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e in the local lfs store
719 lfs: adding 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 to the usercache
719 lfs: adding 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 to the usercache
720 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
720 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
721 lfs: adding b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c to the usercache
721 lfs: adding b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c to the usercache
722 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
722 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
723 4 files, 5 changesets, 10 total revisions
723 4 files, 5 changesets, 10 total revisions
724
724
725 Verify will not copy/link a corrupted file from the usercache into the local
725 Verify will not copy/link a corrupted file from the usercache into the local
726 store, which would poison it. (The verify with a good remote now works.)
726 store, which would poison it. (The verify with a good remote now works.)
727
727
728 $ rm -r fromcorrupt/.hg/store/lfs/objects/66/100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
728 $ rm -r fromcorrupt/.hg/store/lfs/objects/66/100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
729 $ hg -R fromcorrupt verify -v
729 $ hg -R fromcorrupt verify -v
730 repository uses revlog format 1
730 repository uses revlog format 1
731 checking changesets
731 checking changesets
732 checking manifests
732 checking manifests
733 crosschecking files in changesets and manifests
733 crosschecking files in changesets and manifests
734 checking files
734 checking files
735 l@1: unpacking 46a2f24864bc: integrity check failed on data/l.i:0
735 l@1: unpacking 46a2f24864bc: integrity check failed on data/l.i:0
736 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
736 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
737 large@0: unpacking 2c531e0992ff: integrity check failed on data/large.i:0
737 large@0: unpacking 2c531e0992ff: integrity check failed on data/large.i:0
738 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
738 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
739 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
739 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
740 4 files, 5 changesets, 10 total revisions
740 4 files, 5 changesets, 10 total revisions
741 2 integrity errors encountered!
741 2 integrity errors encountered!
742 (first damaged changeset appears to be 0)
742 (first damaged changeset appears to be 0)
743 [1]
743 [1]
744 $ hg -R fromcorrupt --config lfs.usercache=emptycache verify -v
744 $ hg -R fromcorrupt --config lfs.usercache=emptycache verify -v
745 repository uses revlog format 1
745 repository uses revlog format 1
746 checking changesets
746 checking changesets
747 checking manifests
747 checking manifests
748 crosschecking files in changesets and manifests
748 crosschecking files in changesets and manifests
749 checking files
749 checking files
750 lfs: found 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e in the usercache
750 lfs: found 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e in the usercache
751 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
751 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
752 lfs: found 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e in the local lfs store
752 lfs: found 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e in the local lfs store
753 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
753 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
754 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
754 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
755 4 files, 5 changesets, 10 total revisions
755 4 files, 5 changesets, 10 total revisions
756
756
757 Damaging a file required by the update destination fails the update.
757 Damaging a file required by the update destination fails the update.
758
758
759 $ echo 'damage' >> $TESTTMP/dummy-remote/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
759 $ echo 'damage' >> $TESTTMP/dummy-remote/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
760 $ hg --config lfs.usercache=emptycache clone -v repo5 fromcorrupt2
760 $ hg --config lfs.usercache=emptycache clone -v repo5 fromcorrupt2
761 updating to branch default
761 updating to branch default
762 resolving manifests
762 resolving manifests
763 getting l
764 abort: corrupt remote lfs object: 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
763 abort: corrupt remote lfs object: 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
765 [255]
764 [255]
766
765
767 A corrupted lfs blob is not transferred from a file:// remote store to the
766 A corrupted lfs blob is not transferred from a file:// remote store to the
768 usercache or local store.
767 usercache or local store.
769
768
770 $ test -f emptycache/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
769 $ test -f emptycache/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
771 [1]
770 [1]
772 $ test -f fromcorrupt2/.hg/store/lfs/objects/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
771 $ test -f fromcorrupt2/.hg/store/lfs/objects/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
773 [1]
772 [1]
774
773
775 $ hg -R fromcorrupt2 verify
774 $ hg -R fromcorrupt2 verify
776 checking changesets
775 checking changesets
777 checking manifests
776 checking manifests
778 crosschecking files in changesets and manifests
777 crosschecking files in changesets and manifests
779 checking files
778 checking files
780 l@1: unpacking 46a2f24864bc: integrity check failed on data/l.i:0
779 l@1: unpacking 46a2f24864bc: integrity check failed on data/l.i:0
781 large@0: unpacking 2c531e0992ff: integrity check failed on data/large.i:0
780 large@0: unpacking 2c531e0992ff: integrity check failed on data/large.i:0
782 4 files, 5 changesets, 10 total revisions
781 4 files, 5 changesets, 10 total revisions
783 2 integrity errors encountered!
782 2 integrity errors encountered!
784 (first damaged changeset appears to be 0)
783 (first damaged changeset appears to be 0)
785 [1]
784 [1]
786
785
787 Corrupt local files are not sent upstream. (The alternate dummy remote
786 Corrupt local files are not sent upstream. (The alternate dummy remote
788 avoids the corrupt lfs object in the original remote.)
787 avoids the corrupt lfs object in the original remote.)
789
788
790 $ mkdir $TESTTMP/dummy-remote2
789 $ mkdir $TESTTMP/dummy-remote2
791 $ hg init dest
790 $ hg init dest
792 $ hg -R fromcorrupt2 --config lfs.url=file:///$TESTTMP/dummy-remote2 push -v dest
791 $ hg -R fromcorrupt2 --config lfs.url=file:///$TESTTMP/dummy-remote2 push -v dest
793 pushing to dest
792 pushing to dest
794 searching for changes
793 searching for changes
795 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
794 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
796 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
795 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
797 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
796 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
798 abort: detected corrupt lfs object: 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
797 abort: detected corrupt lfs object: 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
799 (run hg verify)
798 (run hg verify)
800 [255]
799 [255]
801
800
802 $ hg -R fromcorrupt2 --config lfs.url=file:///$TESTTMP/dummy-remote2 verify -v
801 $ hg -R fromcorrupt2 --config lfs.url=file:///$TESTTMP/dummy-remote2 verify -v
803 repository uses revlog format 1
802 repository uses revlog format 1
804 checking changesets
803 checking changesets
805 checking manifests
804 checking manifests
806 crosschecking files in changesets and manifests
805 crosschecking files in changesets and manifests
807 checking files
806 checking files
808 l@1: unpacking 46a2f24864bc: integrity check failed on data/l.i:0
807 l@1: unpacking 46a2f24864bc: integrity check failed on data/l.i:0
809 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
808 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
810 large@0: unpacking 2c531e0992ff: integrity check failed on data/large.i:0
809 large@0: unpacking 2c531e0992ff: integrity check failed on data/large.i:0
811 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
810 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
812 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
811 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
813 4 files, 5 changesets, 10 total revisions
812 4 files, 5 changesets, 10 total revisions
814 2 integrity errors encountered!
813 2 integrity errors encountered!
815 (first damaged changeset appears to be 0)
814 (first damaged changeset appears to be 0)
816 [1]
815 [1]
817
816
818 $ cat $TESTTMP/dummy-remote2/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b | $TESTDIR/f --sha256
817 $ cat $TESTTMP/dummy-remote2/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b | $TESTDIR/f --sha256
819 sha256=22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
818 sha256=22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
820 $ cat fromcorrupt2/.hg/store/lfs/objects/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b | $TESTDIR/f --sha256
819 $ cat fromcorrupt2/.hg/store/lfs/objects/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b | $TESTDIR/f --sha256
821 sha256=22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
820 sha256=22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
822 $ test -f $TESTTMP/dummy-remote2/66/100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
821 $ test -f $TESTTMP/dummy-remote2/66/100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
823 [1]
822 [1]
824
823
825 Accessing a corrupt file aborts with an integrity error
824 Accessing a corrupt file aborts with an integrity error
826
825
827 $ hg --cwd fromcorrupt2 cat -r 0 large
826 $ hg --cwd fromcorrupt2 cat -r 0 large
828 abort: integrity check failed on data/large.i:0!
827 abort: integrity check failed on data/large.i:0!
829 [255]
828 [255]
830
829
831 lfs -> normal -> lfs round trip conversions are possible. The 'none()'
830 lfs -> normal -> lfs round trip conversions are possible. The 'none()'
832 predicate on the command line will override whatever is configured globally and
831 predicate on the command line will override whatever is configured globally and
833 locally, and ensures everything converts to a regular file. For lfs -> normal,
832 locally, and ensures everything converts to a regular file. For lfs -> normal,
834 there's no 'lfs' destination repo requirement. For normal -> lfs, there is.
833 there's no 'lfs' destination repo requirement. For normal -> lfs, there is.
835
834
836 $ hg --config extensions.convert= --config 'lfs.track=none()' \
835 $ hg --config extensions.convert= --config 'lfs.track=none()' \
837 > convert repo8 convert_normal
836 > convert repo8 convert_normal
838 initializing destination convert_normal repository
837 initializing destination convert_normal repository
839 scanning source...
838 scanning source...
840 sorting...
839 sorting...
841 converting...
840 converting...
842 2 a
841 2 a
843 1 b
842 1 b
844 0 meta
843 0 meta
845 $ grep 'lfs' convert_normal/.hg/requires
844 $ grep 'lfs' convert_normal/.hg/requires
846 [1]
845 [1]
847 $ hg --cwd convert_normal cat a1 -r 0 -T '{rawdata}'
846 $ hg --cwd convert_normal cat a1 -r 0 -T '{rawdata}'
848 THIS-IS-LFS-BECAUSE-10-BYTES
847 THIS-IS-LFS-BECAUSE-10-BYTES
849
848
850 $ hg --config extensions.convert= --config lfs.threshold=10B \
849 $ hg --config extensions.convert= --config lfs.threshold=10B \
851 > convert convert_normal convert_lfs
850 > convert convert_normal convert_lfs
852 initializing destination convert_lfs repository
851 initializing destination convert_lfs repository
853 scanning source...
852 scanning source...
854 sorting...
853 sorting...
855 converting...
854 converting...
856 2 a
855 2 a
857 1 b
856 1 b
858 0 meta
857 0 meta
859
858
860 $ hg --cwd convert_lfs cat -r 0 a1 -T '{rawdata}'
859 $ hg --cwd convert_lfs cat -r 0 a1 -T '{rawdata}'
861 version https://git-lfs.github.com/spec/v1
860 version https://git-lfs.github.com/spec/v1
862 oid sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
861 oid sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
863 size 29
862 size 29
864 x-is-binary 0
863 x-is-binary 0
865 $ hg --cwd convert_lfs debugdata a1 0
864 $ hg --cwd convert_lfs debugdata a1 0
866 version https://git-lfs.github.com/spec/v1
865 version https://git-lfs.github.com/spec/v1
867 oid sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
866 oid sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
868 size 29
867 size 29
869 x-is-binary 0
868 x-is-binary 0
870 $ hg --cwd convert_lfs log -r 0 -T "{lfs_files % '{lfspointer % '{key}={value}\n'}'}"
869 $ hg --cwd convert_lfs log -r 0 -T "{lfs_files % '{lfspointer % '{key}={value}\n'}'}"
871 version=https://git-lfs.github.com/spec/v1
870 version=https://git-lfs.github.com/spec/v1
872 oid=sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
871 oid=sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
873 size=29
872 size=29
874 x-is-binary=0
873 x-is-binary=0
875 $ hg --cwd convert_lfs log -r 0 \
874 $ hg --cwd convert_lfs log -r 0 \
876 > -T '{lfs_files % "{get(lfspointer, "oid")}\n"}{lfs_files % "{lfspointer.oid}\n"}'
875 > -T '{lfs_files % "{get(lfspointer, "oid")}\n"}{lfs_files % "{lfspointer.oid}\n"}'
877 sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
876 sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
878 sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
877 sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
879 $ hg --cwd convert_lfs log -r 0 -T '{lfs_files % "{lfspointer}\n"}'
878 $ hg --cwd convert_lfs log -r 0 -T '{lfs_files % "{lfspointer}\n"}'
880 version=https://git-lfs.github.com/spec/v1 oid=sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024 size=29 x-is-binary=0
879 version=https://git-lfs.github.com/spec/v1 oid=sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024 size=29 x-is-binary=0
881 $ hg --cwd convert_lfs \
880 $ hg --cwd convert_lfs \
882 > log -r 'all()' -T '{rev}: {lfs_files % "{file}: {lfsoid}\n"}'
881 > log -r 'all()' -T '{rev}: {lfs_files % "{file}: {lfsoid}\n"}'
883 0: a1: 5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
882 0: a1: 5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
884 1: a2: 5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
883 1: a2: 5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
885 2: a2: 876dadc86a8542f9798048f2c47f51dbf8e4359aed883e8ec80c5db825f0d943
884 2: a2: 876dadc86a8542f9798048f2c47f51dbf8e4359aed883e8ec80c5db825f0d943
886
885
887 $ grep 'lfs' convert_lfs/.hg/requires
886 $ grep 'lfs' convert_lfs/.hg/requires
888 lfs
887 lfs
889
888
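The pointer payloads printed above are plain 'key value' lines, which is why the {lfspointer % '{key}={value}'} template can enumerate them. A minimal sketch of reading one into a mapping (hypothetical helper, not the templater's implementation):

    def parse_pointer(text):
        """Split a git-lfs pointer ('version ...', 'oid sha256:...', 'size N') into a dict."""
        fields = {}
        for line in text.splitlines():
            key, _, value = line.partition(' ')
            fields[key] = value
        return fields

    # parse_pointer(pointer_text)['oid'] -> 'sha256:5bb8341bee63...'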
890 The hashes in all stages of the conversion are unchanged.
889 The hashes in all stages of the conversion are unchanged.
891
890
892 $ hg -R repo8 log -T '{node|short}\n'
891 $ hg -R repo8 log -T '{node|short}\n'
893 0fae949de7fa
892 0fae949de7fa
894 9cd6bdffdac0
893 9cd6bdffdac0
895 7f96794915f7
894 7f96794915f7
896 $ hg -R convert_normal log -T '{node|short}\n'
895 $ hg -R convert_normal log -T '{node|short}\n'
897 0fae949de7fa
896 0fae949de7fa
898 9cd6bdffdac0
897 9cd6bdffdac0
899 7f96794915f7
898 7f96794915f7
900 $ hg -R convert_lfs log -T '{node|short}\n'
899 $ hg -R convert_lfs log -T '{node|short}\n'
901 0fae949de7fa
900 0fae949de7fa
902 9cd6bdffdac0
901 9cd6bdffdac0
903 7f96794915f7
902 7f96794915f7
904
903
905 This convert is trickier because it contains files deleted via `hg mv`.
904 This convert is trickier because it contains files deleted via `hg mv`.
906
905
907 $ hg --config extensions.convert= --config lfs.threshold=1000M \
906 $ hg --config extensions.convert= --config lfs.threshold=1000M \
908 > convert repo3 convert_normal2
907 > convert repo3 convert_normal2
909 initializing destination convert_normal2 repository
908 initializing destination convert_normal2 repository
910 scanning source...
909 scanning source...
911 sorting...
910 sorting...
912 converting...
911 converting...
913 4 commit with lfs content
912 4 commit with lfs content
914 3 renames
913 3 renames
915 2 large to small, small to large
914 2 large to small, small to large
916 1 random modifications
915 1 random modifications
917 0 switch large and small again
916 0 switch large and small again
918 $ grep 'lfs' convert_normal2/.hg/requires
917 $ grep 'lfs' convert_normal2/.hg/requires
919 [1]
918 [1]
920 $ hg --cwd convert_normal2 debugdata large 0
919 $ hg --cwd convert_normal2 debugdata large 0
921 LONGER-THAN-TEN-BYTES-WILL-TRIGGER-LFS
920 LONGER-THAN-TEN-BYTES-WILL-TRIGGER-LFS
922
921
923 $ hg --config extensions.convert= --config lfs.threshold=10B \
922 $ hg --config extensions.convert= --config lfs.threshold=10B \
924 > convert convert_normal2 convert_lfs2
923 > convert convert_normal2 convert_lfs2
925 initializing destination convert_lfs2 repository
924 initializing destination convert_lfs2 repository
926 scanning source...
925 scanning source...
927 sorting...
926 sorting...
928 converting...
927 converting...
929 4 commit with lfs content
928 4 commit with lfs content
930 3 renames
929 3 renames
931 2 large to small, small to large
930 2 large to small, small to large
932 1 random modifications
931 1 random modifications
933 0 switch large and small again
932 0 switch large and small again
934 $ grep 'lfs' convert_lfs2/.hg/requires
933 $ grep 'lfs' convert_lfs2/.hg/requires
935 lfs
934 lfs
936 $ hg --cwd convert_lfs2 debugdata large 0
935 $ hg --cwd convert_lfs2 debugdata large 0
937 version https://git-lfs.github.com/spec/v1
936 version https://git-lfs.github.com/spec/v1
938 oid sha256:66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
937 oid sha256:66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
939 size 39
938 size 39
940 x-is-binary 0
939 x-is-binary 0
941
940
942 $ hg -R convert_lfs2 config --debug extensions | grep lfs
941 $ hg -R convert_lfs2 config --debug extensions | grep lfs
943 $TESTTMP/convert_lfs2/.hg/hgrc:*: extensions.lfs= (glob)
942 $TESTTMP/convert_lfs2/.hg/hgrc:*: extensions.lfs= (glob)
944
943
945 Committing deleted files works:
944 Committing deleted files works:
946
945
947 $ hg init $TESTTMP/repo-del
946 $ hg init $TESTTMP/repo-del
948 $ cd $TESTTMP/repo-del
947 $ cd $TESTTMP/repo-del
949 $ echo 1 > A
948 $ echo 1 > A
950 $ hg commit -m 'add A' -A A
949 $ hg commit -m 'add A' -A A
951 $ hg rm A
950 $ hg rm A
952 $ hg commit -m 'rm A'
951 $ hg commit -m 'rm A'
953
952
954 Bad .hglfs files will block the commit with a useful message
953 Bad .hglfs files will block the commit with a useful message
955
954
956 $ cat > .hglfs << EOF
955 $ cat > .hglfs << EOF
957 > [track]
956 > [track]
958 > **.test = size(">5B")
957 > **.test = size(">5B")
959 > bad file ... no commit
958 > bad file ... no commit
960 > EOF
959 > EOF
961
960
962 $ echo x > file.txt
961 $ echo x > file.txt
963 $ hg ci -Aqm 'should fail'
962 $ hg ci -Aqm 'should fail'
964 hg: parse error at .hglfs:3: bad file ... no commit
963 hg: parse error at .hglfs:3: bad file ... no commit
965 [255]
964 [255]
966
965
967 $ cat > .hglfs << EOF
966 $ cat > .hglfs << EOF
968 > [track]
967 > [track]
969 > **.test = size(">5B")
968 > **.test = size(">5B")
970 > ** = nonexistent()
969 > ** = nonexistent()
971 > EOF
970 > EOF
972
971
973 $ hg ci -Aqm 'should fail'
972 $ hg ci -Aqm 'should fail'
974 abort: parse error in .hglfs: unknown identifier: nonexistent
973 abort: parse error in .hglfs: unknown identifier: nonexistent
975 [255]
974 [255]
976
975
977 '**' matches all files, so it acts as a catch-all rule.
976 '**' matches all files, so it acts as a catch-all rule.
978
977
979 $ cat > .hglfs << EOF
978 $ cat > .hglfs << EOF
980 > [track]
979 > [track]
981 > path:.hglfs = none()
980 > path:.hglfs = none()
982 > **.test = size(">5B")
981 > **.test = size(">5B")
983 > **.exclude = none()
982 > **.exclude = none()
984 > ** = size(">10B")
983 > ** = size(">10B")
985 > EOF
984 > EOF
986
985
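Assuming the first matching [track] pattern decides whether a file goes to lfs (which is consistent with nolfs.exclude staying out of lfs below even though the '**' catch-all would otherwise take it), the rules above behave roughly like this sketch; the helper and the simplified size-only predicates are hypothetical, not the extension's real matcher:

    import fnmatch

    # Ordered rules mirroring the .hglfs above: glob pattern -> minimum size
    # in bytes for lfs, or None for none() (never store the file as an lfs blob).
    RULES = [
        ('.hglfs', None),
        ('*.test', 5),
        ('*.exclude', None),
        ('*', 10),
    ]

    def uses_lfs(filename, size):
        """First matching rule decides; unmatched files stay regular."""
        for pattern, threshold in RULES:
            if fnmatch.fnmatch(filename, pattern):
                return threshold is not None and size > threshold
        return False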
987 The LFS policy takes effect without tracking the .hglfs file
986 The LFS policy takes effect without tracking the .hglfs file
988
987
989 $ echo 'largefile' > lfs.test
988 $ echo 'largefile' > lfs.test
990 $ echo '012345678901234567890' > nolfs.exclude
989 $ echo '012345678901234567890' > nolfs.exclude
991 $ echo '01234567890123456' > lfs.catchall
990 $ echo '01234567890123456' > lfs.catchall
992 $ hg add *
991 $ hg add *
993 $ hg ci -qm 'before add .hglfs'
992 $ hg ci -qm 'before add .hglfs'
994 $ hg log -r . -T '{rev}: {lfs_files % "{file}: {lfsoid}\n"}\n'
993 $ hg log -r . -T '{rev}: {lfs_files % "{file}: {lfsoid}\n"}\n'
995 2: lfs.catchall: d4ec46c2869ba22eceb42a729377432052d9dd75d82fc40390ebaadecee87ee9
994 2: lfs.catchall: d4ec46c2869ba22eceb42a729377432052d9dd75d82fc40390ebaadecee87ee9
996 lfs.test: 5489e6ced8c36a7b267292bde9fd5242a5f80a7482e8f23fa0477393dfaa4d6c
995 lfs.test: 5489e6ced8c36a7b267292bde9fd5242a5f80a7482e8f23fa0477393dfaa4d6c
997
996
998 The .hglfs file works when tracked
997 The .hglfs file works when tracked
999
998
1000 $ echo 'largefile2' > lfs.test
999 $ echo 'largefile2' > lfs.test
1001 $ echo '012345678901234567890a' > nolfs.exclude
1000 $ echo '012345678901234567890a' > nolfs.exclude
1002 $ echo '01234567890123456a' > lfs.catchall
1001 $ echo '01234567890123456a' > lfs.catchall
1003 $ hg ci -Aqm 'after adding .hglfs'
1002 $ hg ci -Aqm 'after adding .hglfs'
1004 $ hg log -r . -T '{rev}: {lfs_files % "{file}: {lfsoid}\n"}\n'
1003 $ hg log -r . -T '{rev}: {lfs_files % "{file}: {lfsoid}\n"}\n'
1005 3: lfs.catchall: 31f43b9c62b540126b0ad5884dc013d21a61c9329b77de1fceeae2fc58511573
1004 3: lfs.catchall: 31f43b9c62b540126b0ad5884dc013d21a61c9329b77de1fceeae2fc58511573
1006 lfs.test: 8acd23467967bc7b8cc5a280056589b0ba0b17ff21dbd88a7b6474d6290378a6
1005 lfs.test: 8acd23467967bc7b8cc5a280056589b0ba0b17ff21dbd88a7b6474d6290378a6
1007
1006
1008 The LFS policy stops when the .hglfs is gone
1007 The LFS policy stops when the .hglfs is gone
1009
1008
1010 $ hg rm .hglfs
1009 $ hg rm .hglfs
1011 $ echo 'largefile3' > lfs.test
1010 $ echo 'largefile3' > lfs.test
1012 $ echo '012345678901234567890abc' > nolfs.exclude
1011 $ echo '012345678901234567890abc' > nolfs.exclude
1013 $ echo '01234567890123456abc' > lfs.catchall
1012 $ echo '01234567890123456abc' > lfs.catchall
1014 $ hg ci -qm 'file test' -X .hglfs
1013 $ hg ci -qm 'file test' -X .hglfs
1015 $ hg log -r . -T '{rev}: {lfs_files % "{file}: {lfsoid}\n"}\n'
1014 $ hg log -r . -T '{rev}: {lfs_files % "{file}: {lfsoid}\n"}\n'
1016 4:
1015 4:
1017
1016
1018 $ cd ..
1017 $ cd ..
1019
1018
1020 Unbundling adds a requirement to a non-lfs repo, if necessary.
1019 Unbundling adds a requirement to a non-lfs repo, if necessary.
1021
1020
1022 $ hg bundle -R $TESTTMP/repo-del -qr 0 --base null nolfs.hg
1021 $ hg bundle -R $TESTTMP/repo-del -qr 0 --base null nolfs.hg
1023 $ hg bundle -R convert_lfs2 -qr tip --base null lfs.hg
1022 $ hg bundle -R convert_lfs2 -qr tip --base null lfs.hg
1024 $ hg init unbundle
1023 $ hg init unbundle
1025 $ hg pull -R unbundle -q nolfs.hg
1024 $ hg pull -R unbundle -q nolfs.hg
1026 $ grep lfs unbundle/.hg/requires
1025 $ grep lfs unbundle/.hg/requires
1027 [1]
1026 [1]
1028 $ hg pull -R unbundle -q lfs.hg
1027 $ hg pull -R unbundle -q lfs.hg
1029 $ grep lfs unbundle/.hg/requires
1028 $ grep lfs unbundle/.hg/requires
1030 lfs
1029 lfs
1031
1030
1032 $ hg init no_lfs
1031 $ hg init no_lfs
1033 $ cat >> no_lfs/.hg/hgrc <<EOF
1032 $ cat >> no_lfs/.hg/hgrc <<EOF
1034 > [experimental]
1033 > [experimental]
1035 > changegroup3 = True
1034 > changegroup3 = True
1036 > [extensions]
1035 > [extensions]
1037 > lfs=!
1036 > lfs=!
1038 > EOF
1037 > EOF
1039 $ cp -R no_lfs no_lfs2
1038 $ cp -R no_lfs no_lfs2
1040
1039
1041 Pushing from a local lfs repo to a local repo without an lfs requirement and
1040 Pushing from a local lfs repo to a local repo without an lfs requirement and
1042 with lfs disabled fails.
1041 with lfs disabled fails.
1043
1042
1044 $ hg push -R convert_lfs2 no_lfs
1043 $ hg push -R convert_lfs2 no_lfs
1045 pushing to no_lfs
1044 pushing to no_lfs
1046 abort: required features are not supported in the destination: lfs
1045 abort: required features are not supported in the destination: lfs
1047 [255]
1046 [255]
1048 $ grep lfs no_lfs/.hg/requires
1047 $ grep lfs no_lfs/.hg/requires
1049 [1]
1048 [1]
1050
1049
1051 Pulling from a local lfs repo to a local repo without an lfs requirement and
1050 Pulling from a local lfs repo to a local repo without an lfs requirement and
1052 with lfs disabled fails.
1051 with lfs disabled fails.
1053
1052
1054 $ hg pull -R no_lfs2 convert_lfs2
1053 $ hg pull -R no_lfs2 convert_lfs2
1055 pulling from convert_lfs2
1054 pulling from convert_lfs2
1056 abort: required features are not supported in the destination: lfs
1055 abort: required features are not supported in the destination: lfs
1057 [255]
1056 [255]
1058 $ grep lfs no_lfs2/.hg/requires
1057 $ grep lfs no_lfs2/.hg/requires
1059 [1]
1058 [1]