lfs: infer the blob store URL from paths.default...
Matt Harbison
r37536:092eff68 default

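This changeset teaches the lfs extension to fall back to ``paths.default``
for blob storage when ``lfs.url`` is unset and the default path is an
http(s) URL, instead of immediately prompting for a value. Two files are
touched: the module docstring of ``hgext/lfs/__init__.py`` (the only changed
lines in the first hunk are marked with -/+ below), and
``hgext/lfs/blobstore.py``, whose hunk is truncated in this view before the
``remote()`` factory where the new inference apparently lands. In practice,
a setup like the following hypothetical hgrc (host names illustrative) no
longer needs an explicit ``lfs.url``:

    [paths]
    default = https://hg.example.com/repo

    [lfs]
    # No url needed here: for an http(s) default path, the blob store is
    # now inferred from paths.default. Set it only when blobs live
    # elsewhere, e.g.:
    #url = https://lfs.example.com/repo.git/info/lfs
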
@@ -1,394 +1,396 @@
# lfs - hash-preserving large file support using Git-LFS protocol
#
# Copyright 2017 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""lfs - large file support (EXPERIMENTAL)

This extension allows large files to be tracked outside of the normal
repository storage and stored on a centralized server, similar to the
``largefiles`` extension. The ``git-lfs`` protocol is used when
communicating with the server, so existing git infrastructure can be
harnessed. Even though the files are stored outside of the repository,
they are still integrity checked in the same manner as normal files.

The files stored outside of the repository are downloaded on demand,
which reduces the time to clone, and possibly the local disk usage.
This changes fundamental workflows in a DVCS, so careful thought
should be given before deploying it. :hg:`convert` can be used to
convert LFS repositories to normal repositories that no longer
require this extension, and do so without changing the commit hashes.
This allows the extension to be disabled if the centralized workflow
becomes burdensome. However, the pre and post convert clones will
not be able to communicate with each other unless the extension is
enabled on both.

To start a new repository, or to add LFS files to an existing one, just
create an ``.hglfs`` file as described below in the root directory of
the repository. Typically, this file should be put under version
control, so that the settings will propagate to other repositories with
push and pull. During any commit, Mercurial will consult this file to
determine if an added or modified file should be stored externally. The
type of storage depends on the characteristics of the file at each
commit. A file that is near a size threshold may switch back and forth
between LFS and normal storage, as needed.

Alternately, both normal repositories and largefile controlled
repositories can be converted to LFS by using :hg:`convert` and the
``lfs.track`` config option described below. The ``.hglfs`` file
should then be created and added, to control subsequent LFS selection.
The hashes are also unchanged in this case. The LFS and non-LFS
repositories can be distinguished because the LFS repository will
abort any command if this extension is disabled.

Committed LFS files are held locally, until the repository is pushed.
Prior to pushing the normal repository data, the LFS files that are
tracked by the outgoing commits are automatically uploaded to the
configured central server. No LFS files are transferred on
:hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
demand as they need to be read, if a cached copy cannot be found
locally. Both committing and downloading an LFS file will link the
file to a usercache, to speed up future access. See the `usercache`
config setting described below.

.hglfs::

    The extension reads its configuration from a versioned ``.hglfs``
    configuration file found in the root of the working directory. The
    ``.hglfs`` file uses the same syntax as all other Mercurial
    configuration files. It uses a single section, ``[track]``.

    The ``[track]`` section specifies which files are stored as LFS (or
    not). Each line is keyed by a file pattern, with a predicate value.
    The first file pattern match is used, so put more specific patterns
    first. The available predicates are ``all()``, ``none()``, and
    ``size()``. See "hg help filesets.size" for the latter.

    Example versioned ``.hglfs`` file::

        [track]
        # No Makefile or python file, anywhere, will be LFS
        **Makefile = none()
        **.py = none()

        **.zip = all()
        **.exe = size(">1MB")

        # Catchall for everything not matched above
        ** = size(">10MB")

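    With the rules above, for example, ``src/Makefile`` is never an LFS
    file (the ``**Makefile`` line matches first), while ``dist/app.zip``
    always is, regardless of its size.
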
Configs::

    [lfs]
    # Remote endpoint. Multiple protocols are supported:
    # - http(s)://user:pass@example.com/path
    #   git-lfs endpoint
    # - file:///tmp/path
    #   local filesystem, usually for testing
-    # if unset, lfs will prompt setting this when it must use this value.
+    # if unset, lfs will assume the repository at ``paths.default`` also handles
+    # blob storage for http(s) URLs. Otherwise, lfs will prompt to set this
+    # when it must use this value.
    # (default: unset)
    url = https://example.com/repo.git/info/lfs

    # Which files to track in LFS. Path tests are "**.extname" for file
    # extensions, and "path:under/some/directory" for path prefix. Both
    # are relative to the repository root.
    # File size can be tested with the "size()" fileset, and tests can be
    # joined with fileset operators. (See "hg help filesets.operators".)
    #
    # Some examples:
    # - all()                       # everything
    # - none()                      # nothing
    # - size(">20MB")               # larger than 20MB
    # - !**.txt                     # anything not a *.txt file
    # - **.zip | **.tar.gz | **.7z  # some types of compressed files
    # - path:bin                    # files under "bin" in the project root
    # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
    #     | (path:bin & !path:/bin/README) | size(">1GB")
    # (default: none())
    #
    # This is ignored if there is a tracked '.hglfs' file, and this setting
    # will eventually be deprecated and removed.
    track = size(">10M")

    # how many times to retry before giving up on transferring an object
    retry = 5

    # the local directory to store lfs files for sharing across local clones.
    # If not set, the cache is located in an OS specific cache location.
    usercache = /path/to/global/cache
"""

from __future__ import absolute_import

from mercurial.i18n import _

from mercurial import (
    bundle2,
    changegroup,
    cmdutil,
    config,
    context,
    error,
    exchange,
    extensions,
    filelog,
    fileset,
    hg,
    localrepo,
    minifileset,
    node,
    pycompat,
    registrar,
    revlog,
    scmutil,
    templateutil,
    upgrade,
    util,
    vfs as vfsmod,
    wireproto,
    wireprotoserver,
)

from . import (
    blobstore,
    wireprotolfsserver,
    wrapper,
)

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

configtable = {}
configitem = registrar.configitem(configtable)

configitem('experimental', 'lfs.serve',
           default=True,
)
configitem('experimental', 'lfs.user-agent',
           default=None,
)
configitem('experimental', 'lfs.worker-enable',
           default=False,
)

configitem('lfs', 'url',
           default=None,
)
configitem('lfs', 'usercache',
           default=None,
)
# Deprecated
configitem('lfs', 'threshold',
           default=None,
)
configitem('lfs', 'track',
           default='none()',
)
configitem('lfs', 'retry',
           default=5,
)

cmdtable = {}
command = registrar.command(cmdtable)

templatekeyword = registrar.templatekeyword()
filesetpredicate = registrar.filesetpredicate()

def featuresetup(ui, supported):
    # don't die on seeing a repo with the lfs requirement
    supported |= {'lfs'}

def uisetup(ui):
    localrepo.featuresetupfuncs.add(featuresetup)

def reposetup(ui, repo):
    # Nothing to do with a remote repo
    if not repo.local():
        return

    repo.svfs.lfslocalblobstore = blobstore.local(repo)
    repo.svfs.lfsremoteblobstore = blobstore.remote(repo)

    class lfsrepo(repo.__class__):
        @localrepo.unfilteredmethod
        def commitctx(self, ctx, error=False):
            repo.svfs.options['lfstrack'] = _trackedmatcher(self)
            return super(lfsrepo, self).commitctx(ctx, error)

    repo.__class__ = lfsrepo

    if 'lfs' not in repo.requirements:
        def checkrequireslfs(ui, repo, **kwargs):
            if 'lfs' not in repo.requirements:
                last = kwargs.get(r'node_last')
                _bin = node.bin
                if last:
                    s = repo.set('%n:%n', _bin(kwargs[r'node']), _bin(last))
                else:
                    s = repo.set('%n', _bin(kwargs[r'node']))
                match = repo.narrowmatch()
                for ctx in s:
                    # TODO: is there a way to just walk the files in the commit?
                    if any(ctx[f].islfs() for f in ctx.files()
                           if f in ctx and match(f)):
                        repo.requirements.add('lfs')
                        repo._writerequirements()
                        repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
                        break

        ui.setconfig('hooks', 'commit.lfs', checkrequireslfs, 'lfs')
        ui.setconfig('hooks', 'pretxnchangegroup.lfs', checkrequireslfs, 'lfs')
    else:
        repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)

def _trackedmatcher(repo):
    """Return a function (path, size) -> bool indicating whether or not to
    track a given file with lfs."""
    if not repo.wvfs.exists('.hglfs'):
        # No '.hglfs' in wdir. Fallback to config for now.
        trackspec = repo.ui.config('lfs', 'track')

        # deprecated config: lfs.threshold
        threshold = repo.ui.configbytes('lfs', 'threshold')
        if threshold:
            fileset.parse(trackspec)  # make sure syntax errors are confined
            trackspec = "(%s) | size('>%d')" % (trackspec, threshold)

        return minifileset.compile(trackspec)

    data = repo.wvfs.tryread('.hglfs')
    if not data:
        return lambda p, s: False

    # Parse errors here will abort with a message that points to the .hglfs file
    # and line number.
    cfg = config.config()
    cfg.parse('.hglfs', data)

    try:
        rules = [(minifileset.compile(pattern), minifileset.compile(rule))
                 for pattern, rule in cfg.items('track')]
    except error.ParseError as e:
        # The original exception gives no indicator that the error is in the
        # .hglfs file, so add that.

        # TODO: See if the line number of the file can be made available.
        raise error.Abort(_('parse error in .hglfs: %s') % e)

    def _match(path, size):
        for pat, rule in rules:
            if pat(path, size):
                return rule(path, size)

        return False

    return _match

def wrapfilelog(filelog):
    wrapfunction = extensions.wrapfunction

    wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
    wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
    wrapfunction(filelog, 'size', wrapper.filelogsize)

def extsetup(ui):
    wrapfilelog(filelog.filelog)

    wrapfunction = extensions.wrapfunction

    wrapfunction(cmdutil, '_updatecatformatter', wrapper._updatecatformatter)
    wrapfunction(scmutil, 'wrapconvertsink', wrapper.convertsink)

    wrapfunction(upgrade, '_finishdatamigration',
                 wrapper.upgradefinishdatamigration)

    wrapfunction(upgrade, 'preservedrequirements',
                 wrapper.upgraderequirements)

    wrapfunction(upgrade, 'supporteddestrequirements',
                 wrapper.upgraderequirements)

    wrapfunction(changegroup,
                 'allsupportedversions',
                 wrapper.allsupportedversions)

    wrapfunction(exchange, 'push', wrapper.push)
    wrapfunction(wireproto, '_capabilities', wrapper._capabilities)
    wrapfunction(wireprotoserver, 'handlewsgirequest',
                 wireprotolfsserver.handlewsgirequest)

    wrapfunction(context.basefilectx, 'cmp', wrapper.filectxcmp)
    wrapfunction(context.basefilectx, 'isbinary', wrapper.filectxisbinary)
    context.basefilectx.islfs = wrapper.filectxislfs

    revlog.addflagprocessor(
        revlog.REVIDX_EXTSTORED,
        (
            wrapper.readfromstore,
            wrapper.writetostore,
            wrapper.bypasscheckhash,
        ),
    )

    wrapfunction(hg, 'clone', wrapper.hgclone)
    wrapfunction(hg, 'postshare', wrapper.hgpostshare)

    scmutil.fileprefetchhooks.add('lfs', wrapper._prefetchfiles)

    # Make bundle choose changegroup3 instead of changegroup2. This affects
    # "hg bundle" command. Note: it does not cover all bundle formats like
    # "packed1". Using "packed1" with lfs will likely cause trouble.
    exchange._bundlespeccontentopts["v2"]["cg.version"] = "03"

    # bundlerepo uses "vfsmod.readonlyvfs(othervfs)", we need to make sure lfs
    # options and blob stores are passed from othervfs to the new readonlyvfs.
    wrapfunction(vfsmod.readonlyvfs, '__init__', wrapper.vfsinit)

    # when writing a bundle via "hg bundle" command, upload related LFS blobs
    wrapfunction(bundle2, 'writenewbundle', wrapper.writenewbundle)

@filesetpredicate('lfs()', callstatus=True)
def lfsfileset(mctx, x):
    """File that uses LFS storage."""
    # i18n: "lfs" is a keyword
    fileset.getargs(x, 0, 0, _("lfs takes no arguments"))
    return [f for f in mctx.subset
            if wrapper.pointerfromctx(mctx.ctx, f, removed=True) is not None]

@templatekeyword('lfs_files', requires={'ctx'})
def lfsfiles(context, mapping):
    """List of strings. All files modified, added, or removed by this
    changeset."""
    ctx = context.resource(mapping, 'ctx')

    pointers = wrapper.pointersfromctx(ctx, removed=True)  # {path: pointer}
    files = sorted(pointers.keys())

    def pointer(v):
        # In the file spec, version is first and the other keys are sorted.
        sortkeyfunc = lambda x: (x[0] != 'version', x)
        items = sorted(pointers[v].iteritems(), key=sortkeyfunc)
        return util.sortdict(items)

    makemap = lambda v: {
        'file': v,
        'lfsoid': pointers[v].oid() if pointers[v] else None,
        'lfspointer': templateutil.hybriddict(pointer(v)),
    }

    # TODO: make the separator ', '?
    f = templateutil._showcompatlist(context, mapping, 'lfs_file', files)
    return templateutil.hybrid(f, files, makemap, pycompat.identity)

@command('debuglfsupload',
         [('r', 'rev', [], _('upload large files introduced by REV'))])
def debuglfsupload(ui, repo, **opts):
    """upload lfs blobs added by the working copy parent or given revisions"""
    revs = opts.get(r'rev', [])
    pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
    wrapper.uploadblobs(repo, pointers)

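The second hunk, from ``hgext/lfs/blobstore.py``, contains only context in
the excerpt below; the +21 lines implied by its header land past the end of
the excerpt, apparently in the ``remote()`` factory. Going by the commit
message and the docstring change above, the fallback behaves roughly like
this sketch (illustrative only: the function name is invented here, and the
committed URL handling is not visible in this excerpt):

    def _inferlfsurl(repo):
        # Sketch: an explicit lfs.url always wins.
        url = repo.ui.config('lfs', 'url')
        if url:
            return url
        # Otherwise, assume an http(s) paths.default also speaks the
        # Git-LFS API, following git's server-discovery convention of
        # '<remote>.git/info/lfs' (cf. the docstring's example URL).
        default = repo.ui.config('paths', 'default') or ''
        if default.startswith(('http://', 'https://')):
            if not default.endswith('.git'):
                default += '.git'
            return default + '/info/lfs'
        # Non-http(s) paths still require lfs.url to be set explicitly.
        return None
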
@@ -1,543 +1,564 @@
# blobstore.py - local and remote (speaking Git-LFS protocol) blob storages
#
# Copyright 2017 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import errno
import hashlib
import json
import os
import re
import socket

from mercurial.i18n import _

from mercurial import (
    error,
    pathutil,
    pycompat,
    url as urlmod,
    util,
    vfs as vfsmod,
    worker,
)

from ..largefiles import lfutil

# 64 bytes for SHA256
_lfsre = re.compile(br'\A[a-f0-9]{64}\Z')

class lfsvfs(vfsmod.vfs):
    def join(self, path):
        """split the path at first two characters, like: XX/XXXXX..."""
        if not _lfsre.match(path):
            raise error.ProgrammingError('unexpected lfs path: %s' % path)
        return super(lfsvfs, self).join(path[0:2], path[2:])

    def walk(self, path=None, onerror=None):
        """Yield (dirpath, [], oids) tuple for blobs under path

        Oids only exist in the root of this vfs, so dirpath is always ''.
        """
        root = os.path.normpath(self.base)
        # when dirpath == root, dirpath[prefixlen:] becomes empty
        # because len(dirpath) < prefixlen.
        prefixlen = len(pathutil.normasprefix(root))
        oids = []

        for dirpath, dirs, files in os.walk(self.reljoin(self.base, path or ''),
                                            onerror=onerror):
            dirpath = dirpath[prefixlen:]

            # Silently skip unexpected files and directories
            if len(dirpath) == 2:
                oids.extend([dirpath + f for f in files
                             if _lfsre.match(dirpath + f)])

        yield ('', [], oids)

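# An editorial aside, not part of the original file: join() above shards a
# 64-character sha256 oid into a two-level layout, so the blob for oid
# "d2a8..." is stored at "<store>/d2/a8...". walk() performs the inverse,
# gluing each two-character directory name back onto its file names and
# yielding only names that still form a full oid.
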
class nullvfs(lfsvfs):
    def __init__(self):
        pass

    def exists(self, oid):
        return False

    def read(self, oid):
        # store.read() calls into here if the blob doesn't exist in its
        # self.vfs. Raise the same error as a normal vfs when asked to read a
        # file that doesn't exist. The only difference is the full file path
        # isn't available in the error.
        raise IOError(errno.ENOENT, '%s: No such file or directory' % oid)

    def walk(self, path=None, onerror=None):
        return ('', [], [])

    def write(self, oid, data):
        pass

class filewithprogress(object):
    """a file-like object that supports __len__ and read.

    Useful to provide progress information for how many bytes are read.
    """

    def __init__(self, fp, callback):
        self._fp = fp
        self._callback = callback  # func(readsize)
        fp.seek(0, os.SEEK_END)
        self._len = fp.tell()
        fp.seek(0)

    def __len__(self):
        return self._len

    def read(self, size):
        if self._fp is None:
            return b''
        data = self._fp.read(size)
        if data:
            if self._callback:
                self._callback(len(data))
        else:
            self._fp.close()
            self._fp = None
        return data

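# Editorial note: besides feeding the optional progress callback, this
# wrapper doubles as an upload request body. _gitlfsremote._basictransfer()
# below assigns an instance to request.data, and __len__ lets the HTTP
# machinery size the request up front instead of buffering the whole blob.
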
class local(object):
    """Local blobstore for large file contents.

    This blobstore is used both as a cache and as a staging area for large blobs
    to be uploaded to the remote blobstore.
    """

    def __init__(self, repo):
        fullpath = repo.svfs.join('lfs/objects')
        self.vfs = lfsvfs(fullpath)
        usercache = util.url(lfutil._usercachedir(repo.ui, 'lfs'))
        if usercache.scheme in (None, 'file'):
            self.cachevfs = lfsvfs(usercache.localpath())
        elif usercache.scheme == 'null':
            self.cachevfs = nullvfs()
        else:
            raise error.Abort(_('unknown lfs cache scheme: %s')
                              % usercache.scheme)
        self.ui = repo.ui

    def open(self, oid):
        """Open a read-only file descriptor to the named blob, in either the
        usercache or the local store."""
        # The usercache is the most likely place to hold the file. Commit will
        # write to both it and the local store, as will anything that downloads
        # the blobs. However, things like clone without an update won't
        # populate the local store. For an init + push of a local clone,
        # the usercache is the only place it _could_ be. If not present, the
        # missing file msg here will indicate the local repo, not the usercache.
        if self.cachevfs.exists(oid):
            return self.cachevfs(oid, 'rb')

        return self.vfs(oid, 'rb')

    def download(self, oid, src):
        """Read the blob from the remote source in chunks, verify the content,
        and write to this local blobstore."""
        sha256 = hashlib.sha256()

        with self.vfs(oid, 'wb', atomictemp=True) as fp:
            for chunk in util.filechunkiter(src, size=1048576):
                fp.write(chunk)
                sha256.update(chunk)

            realoid = sha256.hexdigest()
            if realoid != oid:
                raise error.Abort(_('corrupt remote lfs object: %s') % oid)

        self._linktousercache(oid)

    def write(self, oid, data):
        """Write blob to local blobstore.

        This should only be called from the filelog during a commit or similar.
        As such, there is no need to verify the data. Imports from a remote
        store must use ``download()`` instead."""
        with self.vfs(oid, 'wb', atomictemp=True) as fp:
            fp.write(data)

        self._linktousercache(oid)

    def _linktousercache(self, oid):
        # XXX: should we verify the content of the cache, and hardlink back to
        # the local store on success, but truncate, write and link on failure?
        if (not self.cachevfs.exists(oid)
            and not isinstance(self.cachevfs, nullvfs)):
            self.ui.note(_('lfs: adding %s to the usercache\n') % oid)
            lfutil.link(self.vfs.join(oid), self.cachevfs.join(oid))

    def read(self, oid, verify=True):
        """Read blob from local blobstore."""
        if not self.vfs.exists(oid):
            blob = self._read(self.cachevfs, oid, verify)

            # Even if revlog will verify the content, it needs to be verified
            # now before making the hardlink to avoid propagating corrupt blobs.
            # Don't abort if corruption is detected, because `hg verify` will
            # give more useful info about the corruption- simply don't add the
            # hardlink.
            if verify or hashlib.sha256(blob).hexdigest() == oid:
                self.ui.note(_('lfs: found %s in the usercache\n') % oid)
                lfutil.link(self.cachevfs.join(oid), self.vfs.join(oid))
        else:
            self.ui.note(_('lfs: found %s in the local lfs store\n') % oid)
            blob = self._read(self.vfs, oid, verify)
        return blob

    def _read(self, vfs, oid, verify):
        """Read blob (after verifying) from the given store"""
        blob = vfs.read(oid)
        if verify:
            _verify(oid, blob)
        return blob

    def verify(self, oid):
        """Indicate whether or not the hash of the underlying file matches its
        name."""
        sha256 = hashlib.sha256()

        with self.open(oid) as fp:
            for chunk in util.filechunkiter(fp, size=1048576):
                sha256.update(chunk)

        return oid == sha256.hexdigest()

    def has(self, oid):
        """Returns True if the local blobstore contains the requested blob,
        False otherwise."""
        return self.cachevfs.exists(oid) or self.vfs.exists(oid)

class _gitlfsremote(object):

    def __init__(self, repo, url):
        ui = repo.ui
        self.ui = ui
        baseurl, authinfo = url.authinfo()
        self.baseurl = baseurl.rstrip('/')
        useragent = repo.ui.config('experimental', 'lfs.user-agent')
        if not useragent:
            useragent = 'git-lfs/2.3.4 (Mercurial %s)' % util.version()
        self.urlopener = urlmod.opener(ui, authinfo, useragent)
        self.retry = ui.configint('lfs', 'retry')

    def writebatch(self, pointers, fromstore):
        """Batch upload from local to remote blobstore."""
        self._batch(_deduplicate(pointers), fromstore, 'upload')

    def readbatch(self, pointers, tostore):
        """Batch download from remote to local blobstore."""
        self._batch(_deduplicate(pointers), tostore, 'download')

    def _batchrequest(self, pointers, action):
        """Get metadata about objects pointed by pointers for given action

        Return decoded JSON object like {'objects': [{'oid': '', 'size': 1}]}
        See https://github.com/git-lfs/git-lfs/blob/master/docs/api/batch.md
        """
        objects = [{'oid': p.oid(), 'size': p.size()} for p in pointers]
        requestdata = json.dumps({
            'objects': objects,
            'operation': action,
        })
        batchreq = util.urlreq.request('%s/objects/batch' % self.baseurl,
                                       data=requestdata)
        batchreq.add_header('Accept', 'application/vnd.git-lfs+json')
        batchreq.add_header('Content-Type', 'application/vnd.git-lfs+json')
        try:
            rsp = self.urlopener.open(batchreq)
            rawjson = rsp.read()
        except util.urlerr.httperror as ex:
            raise LfsRemoteError(_('LFS HTTP error: %s (action=%s)')
                                 % (ex, action))
        try:
            response = json.loads(rawjson)
        except ValueError:
            raise LfsRemoteError(_('LFS server returns invalid JSON: %s')
                                 % rawjson)

        if self.ui.debugflag:
            self.ui.debug('Status: %d\n' % rsp.status)
            # lfs-test-server and hg serve return headers in different order
            self.ui.debug('%s\n'
                          % '\n'.join(sorted(str(rsp.info()).splitlines())))

            if 'objects' in response:
                response['objects'] = sorted(response['objects'],
                                             key=lambda p: p['oid'])
            self.ui.debug('%s\n'
                          % json.dumps(response, indent=2,
                                       separators=('', ': '), sort_keys=True))

        return response

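    # For reference (editorial; values illustrative), one round trip through
    # the batch API above looks like:
    #
    #   request:  {"operation": "download",
    #              "objects": [{"oid": "d2a8...", "size": 12}]}
    #   response: {"objects": [{"oid": "d2a8...", "size": 12,
    #                           "actions": {"download": {
    #                               "href": "https://example.com/objects/d2a8...",
    #                               "header": {"Authorization": "Basic ..."}}}}]}
    #
    # _extractobjects() below keeps only entries whose "actions" offer the
    # requested operation, and _basictransfer() follows each "href".
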
284 def _checkforservererror(self, pointers, responses, action):
284 def _checkforservererror(self, pointers, responses, action):
285 """Scans errors from objects
285 """Scans errors from objects
286
286
287 Raises LfsRemoteError if any objects have an error"""
287 Raises LfsRemoteError if any objects have an error"""
288 for response in responses:
288 for response in responses:
289 # The server should return 404 when objects cannot be found. Some
289 # The server should return 404 when objects cannot be found. Some
290 # server implementation (ex. lfs-test-server) does not set "error"
290 # server implementation (ex. lfs-test-server) does not set "error"
291 # but just removes "download" from "actions". Treat that case
291 # but just removes "download" from "actions". Treat that case
292 # as the same as 404 error.
292 # as the same as 404 error.
293 if 'error' not in response:
293 if 'error' not in response:
294 if (action == 'download'
294 if (action == 'download'
295 and action not in response.get('actions', [])):
295 and action not in response.get('actions', [])):
296 code = 404
296 code = 404
297 else:
297 else:
298 continue
298 continue
299 else:
299 else:
300 # An error dict without a code doesn't make much sense, so
300 # An error dict without a code doesn't make much sense, so
301 # treat as a server error.
301 # treat as a server error.
302 code = response.get('error').get('code', 500)
302 code = response.get('error').get('code', 500)
303
303
304 ptrmap = {p.oid(): p for p in pointers}
304 ptrmap = {p.oid(): p for p in pointers}
305 p = ptrmap.get(response['oid'], None)
305 p = ptrmap.get(response['oid'], None)
306 if p:
306 if p:
307 filename = getattr(p, 'filename', 'unknown')
307 filename = getattr(p, 'filename', 'unknown')
308 errors = {
308 errors = {
309 404: 'The object does not exist',
309 404: 'The object does not exist',
310 410: 'The object was removed by the owner',
310 410: 'The object was removed by the owner',
311 422: 'Validation error',
311 422: 'Validation error',
312 500: 'Internal server error',
312 500: 'Internal server error',
313 }
313 }
314 msg = errors.get(code, 'status code %d' % code)
314 msg = errors.get(code, 'status code %d' % code)
315 raise LfsRemoteError(_('LFS server error for "%s": %s')
315 raise LfsRemoteError(_('LFS server error for "%s": %s')
316 % (filename, msg))
316 % (filename, msg))
317 else:
317 else:
318 raise LfsRemoteError(
318 raise LfsRemoteError(
319 _('LFS server error. Unsolicited response for oid %s')
319 _('LFS server error. Unsolicited response for oid %s')
320 % response['oid'])
320 % response['oid'])
321
321
322 def _extractobjects(self, response, pointers, action):
322 def _extractobjects(self, response, pointers, action):
323 """extract objects from response of the batch API
323 """extract objects from response of the batch API
324
324
325 response: parsed JSON object returned by batch API
325 response: parsed JSON object returned by batch API
326 return response['objects'] filtered by action
326 return response['objects'] filtered by action
327 raise if any object has an error
327 raise if any object has an error
328 """
328 """
329 # Scan errors from objects - fail early
329 # Scan errors from objects - fail early
330 objects = response.get('objects', [])
330 objects = response.get('objects', [])
331 self._checkforservererror(pointers, objects, action)
331 self._checkforservererror(pointers, objects, action)
332
332
333 # Filter objects with given action. Practically, this skips uploading
333 # Filter objects with given action. Practically, this skips uploading
334 # objects which exist in the server.
334 # objects which exist in the server.
335 filteredobjects = [o for o in objects if action in o.get('actions', [])]
335 filteredobjects = [o for o in objects if action in o.get('actions', [])]
336
336
337 return filteredobjects
337 return filteredobjects
338
338
339 def _basictransfer(self, obj, action, localstore):
339 def _basictransfer(self, obj, action, localstore):
340 """Download or upload a single object using basic transfer protocol
340 """Download or upload a single object using basic transfer protocol
341
341
342 obj: dict, an object description returned by batch API
342 obj: dict, an object description returned by batch API
343 action: string, one of ['upload', 'download']
343 action: string, one of ['upload', 'download']
344 localstore: blobstore.local
344 localstore: blobstore.local
345
345
346 See https://github.com/git-lfs/git-lfs/blob/master/docs/api/\
346 See https://github.com/git-lfs/git-lfs/blob/master/docs/api/\
347 basic-transfers.md
347 basic-transfers.md
348 """
348 """
349 oid = pycompat.bytestr(obj['oid'])
349 oid = pycompat.bytestr(obj['oid'])
350
350
351 href = pycompat.bytestr(obj['actions'][action].get('href'))
351 href = pycompat.bytestr(obj['actions'][action].get('href'))
352 headers = obj['actions'][action].get('header', {}).items()
352 headers = obj['actions'][action].get('header', {}).items()
353
353
354 request = util.urlreq.request(href)
354 request = util.urlreq.request(href)
355 if action == 'upload':
355 if action == 'upload':
356 # If uploading blobs, read data from local blobstore.
356 # If uploading blobs, read data from local blobstore.
357 if not localstore.verify(oid):
357 if not localstore.verify(oid):
358 raise error.Abort(_('detected corrupt lfs object: %s') % oid,
358 raise error.Abort(_('detected corrupt lfs object: %s') % oid,
359 hint=_('run hg verify'))
359 hint=_('run hg verify'))
360 request.data = filewithprogress(localstore.open(oid), None)
360 request.data = filewithprogress(localstore.open(oid), None)
361 request.get_method = lambda: 'PUT'
361 request.get_method = lambda: 'PUT'
362 request.add_header('Content-Type', 'application/octet-stream')
362 request.add_header('Content-Type', 'application/octet-stream')
363
363
364 for k, v in headers:
364 for k, v in headers:
365 request.add_header(k, v)
365 request.add_header(k, v)
366
366
367 response = b''
367 response = b''
368 try:
368 try:
369 req = self.urlopener.open(request)
369 req = self.urlopener.open(request)
370
370
371 if self.ui.debugflag:
371 if self.ui.debugflag:
372 self.ui.debug('Status: %d\n' % req.status)
372 self.ui.debug('Status: %d\n' % req.status)
373 # lfs-test-server and hg serve return headers in different order
373 # lfs-test-server and hg serve return headers in different order
374 self.ui.debug('%s\n'
374 self.ui.debug('%s\n'
375 % '\n'.join(sorted(str(req.info()).splitlines())))
375 % '\n'.join(sorted(str(req.info()).splitlines())))
376
376
377 if action == 'download':
377 if action == 'download':
378 # If downloading blobs, store downloaded data to local blobstore
378 # If downloading blobs, store downloaded data to local blobstore
379 localstore.download(oid, req)
379 localstore.download(oid, req)
380 else:
380 else:
381 while True:
381 while True:
382 data = req.read(1048576)
382 data = req.read(1048576)
383 if not data:
383 if not data:
384 break
384 break
385 response += data
385 response += data
386 if response:
386 if response:
387 self.ui.debug('lfs %s response: %s' % (action, response))
387 self.ui.debug('lfs %s response: %s' % (action, response))
388 except util.urlerr.httperror as ex:
388 except util.urlerr.httperror as ex:
389 if self.ui.debugflag:
389 if self.ui.debugflag:
390 self.ui.debug('%s: %s\n' % (oid, ex.read()))
390 self.ui.debug('%s: %s\n' % (oid, ex.read()))
391 raise LfsRemoteError(_('HTTP error: %s (oid=%s, action=%s)')
391 raise LfsRemoteError(_('HTTP error: %s (oid=%s, action=%s)')
392 % (ex, oid, action))
392 % (ex, oid, action))
393
393
394 def _batch(self, pointers, localstore, action):
394 def _batch(self, pointers, localstore, action):
395 if action not in ['upload', 'download']:
395 if action not in ['upload', 'download']:
396 raise error.ProgrammingError('invalid Git-LFS action: %s' % action)
396 raise error.ProgrammingError('invalid Git-LFS action: %s' % action)
397
397
398 response = self._batchrequest(pointers, action)
398 response = self._batchrequest(pointers, action)
399 objects = self._extractobjects(response, pointers, action)
399 objects = self._extractobjects(response, pointers, action)
400 total = sum(x.get('size', 0) for x in objects)
400 total = sum(x.get('size', 0) for x in objects)
401 sizes = {}
401 sizes = {}
402 for obj in objects:
402 for obj in objects:
403 sizes[obj.get('oid')] = obj.get('size', 0)
403 sizes[obj.get('oid')] = obj.get('size', 0)
404 topic = {'upload': _('lfs uploading'),
404 topic = {'upload': _('lfs uploading'),
405 'download': _('lfs downloading')}[action]
405 'download': _('lfs downloading')}[action]
406 if len(objects) > 1:
406 if len(objects) > 1:
407 self.ui.note(_('lfs: need to transfer %d objects (%s)\n')
407 self.ui.note(_('lfs: need to transfer %d objects (%s)\n')
408 % (len(objects), util.bytecount(total)))
408 % (len(objects), util.bytecount(total)))
409 self.ui.progress(topic, 0, total=total)
409 self.ui.progress(topic, 0, total=total)
410 def transfer(chunk):
410 def transfer(chunk):
411 for obj in chunk:
411 for obj in chunk:
412 objsize = obj.get('size', 0)
412 objsize = obj.get('size', 0)
413 if self.ui.verbose:
413 if self.ui.verbose:
414 if action == 'download':
414 if action == 'download':
415 msg = _('lfs: downloading %s (%s)\n')
415 msg = _('lfs: downloading %s (%s)\n')
416 elif action == 'upload':
416 elif action == 'upload':
417 msg = _('lfs: uploading %s (%s)\n')
417 msg = _('lfs: uploading %s (%s)\n')
418 self.ui.note(msg % (obj.get('oid'),
418 self.ui.note(msg % (obj.get('oid'),
419 util.bytecount(objsize)))
419 util.bytecount(objsize)))
420 retry = self.retry
420 retry = self.retry
421 while True:
421 while True:
422 try:
422 try:
423 self._basictransfer(obj, action, localstore)
423 self._basictransfer(obj, action, localstore)
424 yield 1, obj.get('oid')
424 yield 1, obj.get('oid')
425 break
425 break
426 except socket.error as ex:
426 except socket.error as ex:
427 if retry > 0:
427 if retry > 0:
428 self.ui.note(
428 self.ui.note(
429 _('lfs: failed: %r (remaining retry %d)\n')
429 _('lfs: failed: %r (remaining retry %d)\n')
430 % (ex, retry))
430 % (ex, retry))
431 retry -= 1
431 retry -= 1
432 continue
432 continue
433 raise
433 raise
434
434
435 # Until https multiplexing gets sorted out
435 # Until https multiplexing gets sorted out
436 if self.ui.configbool('experimental', 'lfs.worker-enable'):
436 if self.ui.configbool('experimental', 'lfs.worker-enable'):
437 oids = worker.worker(self.ui, 0.1, transfer, (),
437 oids = worker.worker(self.ui, 0.1, transfer, (),
438 sorted(objects, key=lambda o: o.get('oid')))
438 sorted(objects, key=lambda o: o.get('oid')))
439 else:
439 else:
440 oids = transfer(sorted(objects, key=lambda o: o.get('oid')))
440 oids = transfer(sorted(objects, key=lambda o: o.get('oid')))
441
441
442 processed = 0
442 processed = 0
443 blobs = 0
443 blobs = 0
444 for _one, oid in oids:
444 for _one, oid in oids:
445 processed += sizes[oid]
445 processed += sizes[oid]
446 blobs += 1
446 blobs += 1
447 self.ui.progress(topic, processed, total=total)
447 self.ui.progress(topic, processed, total=total)
448 self.ui.note(_('lfs: processed: %s\n') % oid)
448 self.ui.note(_('lfs: processed: %s\n') % oid)
449 self.ui.progress(topic, pos=None, total=total)
449 self.ui.progress(topic, pos=None, total=total)
450
450
451 if blobs > 0:
451 if blobs > 0:
452 if action == 'upload':
452 if action == 'upload':
453 self.ui.status(_('lfs: uploaded %d files (%s)\n')
453 self.ui.status(_('lfs: uploaded %d files (%s)\n')
454 % (blobs, util.bytecount(processed)))
454 % (blobs, util.bytecount(processed)))
455 # TODO: coalesce the download requests, and comment this in
455 # TODO: coalesce the download requests, and comment this in
456 #elif action == 'download':
456 #elif action == 'download':
457 # self.ui.status(_('lfs: downloaded %d files (%s)\n')
457 # self.ui.status(_('lfs: downloaded %d files (%s)\n')
458 # % (blobs, util.bytecount(processed)))
458 # % (blobs, util.bytecount(processed)))
459
459
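The transfer helper inside _batch retries each blob on socket errors, consuming one retry per failure and re-raising once the budget is exhausted. A minimal standalone sketch of that pattern, assuming a caller-supplied attempt callable (names here are illustrative, not part of this module):

    import socket

    def withretries(attempt, retries=5):
        # Mirror the loop in _batch's transfer(): retry only on socket
        # errors, decrement the budget, and let the last error escape.
        while True:
            try:
                return attempt()
            except socket.error:
                if retries > 0:
                    retries -= 1
                    continue
                raise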
460 def __del__(self):
460 def __del__(self):
461 # copied from mercurial/httppeer.py
461 # copied from mercurial/httppeer.py
462 urlopener = getattr(self, 'urlopener', None)
462 urlopener = getattr(self, 'urlopener', None)
463 if urlopener:
463 if urlopener:
464 for h in urlopener.handlers:
464 for h in urlopener.handlers:
465 h.close()
465 h.close()
466 getattr(h, "close_all", lambda : None)()
466 getattr(h, "close_all", lambda : None)()
467
467
468 class _dummyremote(object):
468 class _dummyremote(object):
469 """Dummy store storing blobs to temp directory."""
469 """Dummy store storing blobs to temp directory."""
470
470
471 def __init__(self, repo, url):
471 def __init__(self, repo, url):
472 fullpath = repo.vfs.join('lfs', url.path)
472 fullpath = repo.vfs.join('lfs', url.path)
473 self.vfs = lfsvfs(fullpath)
473 self.vfs = lfsvfs(fullpath)
474
474
475 def writebatch(self, pointers, fromstore):
475 def writebatch(self, pointers, fromstore):
476 for p in _deduplicate(pointers):
476 for p in _deduplicate(pointers):
477 content = fromstore.read(p.oid(), verify=True)
477 content = fromstore.read(p.oid(), verify=True)
478 with self.vfs(p.oid(), 'wb', atomictemp=True) as fp:
478 with self.vfs(p.oid(), 'wb', atomictemp=True) as fp:
479 fp.write(content)
479 fp.write(content)
480
480
481 def readbatch(self, pointers, tostore):
481 def readbatch(self, pointers, tostore):
482 for p in _deduplicate(pointers):
482 for p in _deduplicate(pointers):
483 with self.vfs(p.oid(), 'rb') as fp:
483 with self.vfs(p.oid(), 'rb') as fp:
484 tostore.download(p.oid(), fp)
484 tostore.download(p.oid(), fp)
485
485
486 class _nullremote(object):
486 class _nullremote(object):
487 """Null store storing blobs to /dev/null."""
487 """Null store storing blobs to /dev/null."""
488
488
489 def __init__(self, repo, url):
489 def __init__(self, repo, url):
490 pass
490 pass
491
491
492 def writebatch(self, pointers, fromstore):
492 def writebatch(self, pointers, fromstore):
493 pass
493 pass
494
494
495 def readbatch(self, pointers, tostore):
495 def readbatch(self, pointers, tostore):
496 pass
496 pass
497
497
498 class _promptremote(object):
498 class _promptremote(object):
499 """Prompt user to set lfs.url when accessed."""
499 """Prompt user to set lfs.url when accessed."""
500
500
501 def __init__(self, repo, url):
501 def __init__(self, repo, url):
502 pass
502 pass
503
503
504 def writebatch(self, pointers, fromstore, ui=None):
504 def writebatch(self, pointers, fromstore, ui=None):
505 self._prompt()
505 self._prompt()
506
506
507 def readbatch(self, pointers, tostore, ui=None):
507 def readbatch(self, pointers, tostore, ui=None):
508 self._prompt()
508 self._prompt()
509
509
510 def _prompt(self):
510 def _prompt(self):
511 raise error.Abort(_('lfs.url needs to be configured'))
511 raise error.Abort(_('lfs.url needs to be configured'))
512
512
513 _storemap = {
513 _storemap = {
514 'https': _gitlfsremote,
514 'https': _gitlfsremote,
515 'http': _gitlfsremote,
515 'http': _gitlfsremote,
516 'file': _dummyremote,
516 'file': _dummyremote,
517 'null': _nullremote,
517 'null': _nullremote,
518 None: _promptremote,
518 None: _promptremote,
519 }
519 }
520
520
521 def _deduplicate(pointers):
521 def _deduplicate(pointers):
522 """Remove any duplicate oids that exist in the list"""
522 """Remove any duplicate oids that exist in the list"""
523 reduced = util.sortdict()
523 reduced = util.sortdict()
524 for p in pointers:
524 for p in pointers:
525 reduced[p.oid()] = p
525 reduced[p.oid()] = p
526 return reduced.values()
526 return reduced.values()
527
527
528 def _verify(oid, content):
528 def _verify(oid, content):
529 realoid = hashlib.sha256(content).hexdigest()
529 realoid = hashlib.sha256(content).hexdigest()
530 if realoid != oid:
530 if realoid != oid:
531 raise error.Abort(_('detected corrupt lfs object: %s') % oid,
531 raise error.Abort(_('detected corrupt lfs object: %s') % oid,
532 hint=_('run hg verify'))
532 hint=_('run hg verify'))
533
533
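_verify recomputes the blob's sha256 and compares it against the oid, which is by construction the hex sha256 digest of the content. The same check, sketched independently of Mercurial:

    import hashlib

    def checkoid(oid, content):
        # A git-lfs oid is the lowercase hex sha256 of the raw blob bytes.
        return hashlib.sha256(content).hexdigest() == oid

    assert checkoid(hashlib.sha256(b'blob').hexdigest(), b'blob')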
534 def remote(repo):
534 def remote(repo):
535 """remotestore factory. return a store in _storemap depending on config"""
535 """remotestore factory. return a store in _storemap depending on config
536
537 If ``lfs.url`` is specified, use that remote endpoint. Otherwise, try to
538 infer the endpoint from the remote repository, using the same path
539 adjustments as git. As an extension, 'http' is supported as well so that
540 ``hg serve`` works out of the box.
541
542 https://github.com/git-lfs/git-lfs/blob/master/docs/api/server-discovery.md
543 """
536 url = util.url(repo.ui.config('lfs', 'url') or '')
544 url = util.url(repo.ui.config('lfs', 'url') or '')
545 if url.scheme is None:
546 # TODO: investigate 'paths.remote:lfsurl' style path customization,
547 # and fall back to inferring from 'paths.remote' if unspecified.
548 defaulturl = util.url(repo.ui.config('paths', 'default') or b'')
549
550 # TODO: support local paths as well.
551 # TODO: consider the ssh -> https transformation that git applies
552 if defaulturl.scheme in (b'http', b'https'):
553 defaulturl.path = (defaulturl.path or b'') + b'.git/info/lfs'
554
555 url = util.url(bytes(defaulturl))
556 repo.ui.note(_('lfs: assuming remote store: %s\n') % url)
557
537 scheme = url.scheme
558 scheme = url.scheme
538 if scheme not in _storemap:
559 if scheme not in _storemap:
539 raise error.Abort(_('lfs: unknown url scheme: %s') % scheme)
560 raise error.Abort(_('lfs: unknown url scheme: %s') % scheme)
540 return _storemap[scheme](repo, url)
561 return _storemap[scheme](repo, url)
541
562
542 class LfsRemoteError(error.RevlogError):
563 class LfsRemoteError(error.RevlogError):
543 pass
564 pass
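The inference step added to remote() follows git's server discovery: with lfs.url unset and an http(s) paths.default, the blob store endpoint is the remote path with '.git/info/lfs' appended. A standalone sketch of that adjustment, using the standard library instead of util.url (illustrative only):

    from urllib.parse import urlsplit, urlunsplit

    def inferlfsurl(default):
        # Append '.git/info/lfs' to an http(s) remote, per git's
        # server-discovery convention; other schemes are not handled here.
        parts = urlsplit(default)
        if parts.scheme not in ('http', 'https'):
            return None
        path = (parts.path or '') + '.git/info/lfs'
        return urlunsplit((parts.scheme, parts.netloc, path, '', ''))

    assert (inferlfsurl('https://example.com/repo')
            == 'https://example.com/repo.git/info/lfs')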
@@ -1,386 +1,388
1 # wrapper.py - methods wrapping core mercurial logic
1 # wrapper.py - methods wrapping core mercurial logic
2 #
2 #
3 # Copyright 2017 Facebook, Inc.
3 # Copyright 2017 Facebook, Inc.
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import hashlib
10 import hashlib
11
11
12 from mercurial.i18n import _
12 from mercurial.i18n import _
13 from mercurial.node import bin, hex, nullid, short
13 from mercurial.node import bin, hex, nullid, short
14
14
15 from mercurial import (
15 from mercurial import (
16 error,
16 error,
17 revlog,
17 revlog,
18 util,
18 util,
19 )
19 )
20
20
21 from mercurial.utils import (
21 from mercurial.utils import (
22 stringutil,
22 stringutil,
23 )
23 )
24
24
25 from ..largefiles import lfutil
25 from ..largefiles import lfutil
26
26
27 from . import (
27 from . import (
28 blobstore,
28 blobstore,
29 pointer,
29 pointer,
30 )
30 )
31
31
32 def allsupportedversions(orig, ui):
32 def allsupportedversions(orig, ui):
33 versions = orig(ui)
33 versions = orig(ui)
34 versions.add('03')
34 versions.add('03')
35 return versions
35 return versions
36
36
37 def _capabilities(orig, repo, proto):
37 def _capabilities(orig, repo, proto):
38 '''Wrap server command to announce lfs server capability'''
38 '''Wrap server command to announce lfs server capability'''
39 caps = orig(repo, proto)
39 caps = orig(repo, proto)
40 # XXX: change to 'lfs=serve' when separate git server isn't required?
40 # XXX: change to 'lfs=serve' when separate git server isn't required?
41 caps.append('lfs')
41 caps.append('lfs')
42 return caps
42 return caps
43
43
44 def bypasscheckhash(self, text):
44 def bypasscheckhash(self, text):
45 return False
45 return False
46
46
47 def readfromstore(self, text):
47 def readfromstore(self, text):
48 """Read filelog content from local blobstore transform for flagprocessor.
48 """Read filelog content from local blobstore transform for flagprocessor.
49
49
50 Default transform for flagprocessor, returning contents from blobstore.
50 Default transform for flagprocessor, returning contents from blobstore.
51 Returns a 2-tuple (text, validatehash) where validatehash is True as the
51 Returns a 2-tuple (text, validatehash) where validatehash is True as the
52 contents of the blobstore should be checked using checkhash.
52 contents of the blobstore should be checked using checkhash.
53 """
53 """
54 p = pointer.deserialize(text)
54 p = pointer.deserialize(text)
55 oid = p.oid()
55 oid = p.oid()
56 store = self.opener.lfslocalblobstore
56 store = self.opener.lfslocalblobstore
57 if not store.has(oid):
57 if not store.has(oid):
58 p.filename = self.filename
58 p.filename = self.filename
59 self.opener.lfsremoteblobstore.readbatch([p], store)
59 self.opener.lfsremoteblobstore.readbatch([p], store)
60
60
61 # The caller will validate the content
61 # The caller will validate the content
62 text = store.read(oid, verify=False)
62 text = store.read(oid, verify=False)
63
63
64 # pack hg filelog metadata
64 # pack hg filelog metadata
65 hgmeta = {}
65 hgmeta = {}
66 for k in p.keys():
66 for k in p.keys():
67 if k.startswith('x-hg-'):
67 if k.startswith('x-hg-'):
68 name = k[len('x-hg-'):]
68 name = k[len('x-hg-'):]
69 hgmeta[name] = p[k]
69 hgmeta[name] = p[k]
70 if hgmeta or text.startswith('\1\n'):
70 if hgmeta or text.startswith('\1\n'):
71 text = revlog.packmeta(hgmeta, text)
71 text = revlog.packmeta(hgmeta, text)
72
72
73 return (text, True)
73 return (text, True)
74
74
75 def writetostore(self, text):
75 def writetostore(self, text):
76 # hg filelog metadata (includes rename, etc)
76 # hg filelog metadata (includes rename, etc)
77 hgmeta, offset = revlog.parsemeta(text)
77 hgmeta, offset = revlog.parsemeta(text)
78 if offset and offset > 0:
78 if offset and offset > 0:
79 # lfs blob does not contain hg filelog metadata
79 # lfs blob does not contain hg filelog metadata
80 text = text[offset:]
80 text = text[offset:]
81
81
82 # git-lfs only supports sha256
82 # git-lfs only supports sha256
83 oid = hex(hashlib.sha256(text).digest())
83 oid = hex(hashlib.sha256(text).digest())
84 self.opener.lfslocalblobstore.write(oid, text)
84 self.opener.lfslocalblobstore.write(oid, text)
85
85
86 # replace contents with metadata
86 # replace contents with metadata
87 longoid = 'sha256:%s' % oid
87 longoid = 'sha256:%s' % oid
88 metadata = pointer.gitlfspointer(oid=longoid, size='%d' % len(text))
88 metadata = pointer.gitlfspointer(oid=longoid, size='%d' % len(text))
89
89
90 # by default, we expect the content to be binary. however, LFS could also
90 # by default, we expect the content to be binary. however, LFS could also
91 # be used for non-binary content. add a special entry for non-binary data.
91 # be used for non-binary content. add a special entry for non-binary data.
92 # this will be used by filectx.isbinary().
92 # this will be used by filectx.isbinary().
93 if not stringutil.binary(text):
93 if not stringutil.binary(text):
94 # not hg filelog metadata (affecting commit hash), no "x-hg-" prefix
94 # not hg filelog metadata (affecting commit hash), no "x-hg-" prefix
95 metadata['x-is-binary'] = '0'
95 metadata['x-is-binary'] = '0'
96
96
97 # translate hg filelog metadata to lfs metadata with "x-hg-" prefix
97 # translate hg filelog metadata to lfs metadata with "x-hg-" prefix
98 if hgmeta is not None:
98 if hgmeta is not None:
99 for k, v in hgmeta.iteritems():
99 for k, v in hgmeta.iteritems():
100 metadata['x-hg-%s' % k] = v
100 metadata['x-hg-%s' % k] = v
101
101
102 rawtext = metadata.serialize()
102 rawtext = metadata.serialize()
103 return (rawtext, False)
103 return (rawtext, False)
104
104
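writetostore swaps the filelog payload for a serialized git-lfs pointer. For reference, a minimal pointer with only the required keys looks like the sketch below; the extension's pointer.gitlfspointer additionally carries the x-is-binary and x-hg-* extras described above:

    import hashlib

    def makepointer(text):
        # Minimal git-lfs pointer: the version line first, then the
        # remaining keys in sorted order. `text` is the raw blob bytes.
        oid = hashlib.sha256(text).hexdigest()
        return ('version https://git-lfs.github.com/spec/v1\n'
                'oid sha256:%s\n'
                'size %d\n' % (oid, len(text)))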
105 def _islfs(rlog, node=None, rev=None):
105 def _islfs(rlog, node=None, rev=None):
106 if rev is None:
106 if rev is None:
107 if node is None:
107 if node is None:
108 # both None - likely working copy content where node is not ready
108 # both None - likely working copy content where node is not ready
109 return False
109 return False
110 rev = rlog.rev(node)
110 rev = rlog.rev(node)
111 else:
111 else:
112 node = rlog.node(rev)
112 node = rlog.node(rev)
113 if node == nullid:
113 if node == nullid:
114 return False
114 return False
115 flags = rlog.flags(rev)
115 flags = rlog.flags(rev)
116 return bool(flags & revlog.REVIDX_EXTSTORED)
116 return bool(flags & revlog.REVIDX_EXTSTORED)
117
117
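_islfs reduces to one bit test: REVIDX_EXTSTORED marks revisions whose payload lives in the blob store. Its value, 0x2000, is the flag named in the "missing processor for flag '0x2000'" failure quoted in the tests below. Schematically:

    REVIDX_EXTSTORED = 1 << 13    # 0x2000, as defined by mercurial.revlog

    def isextstored(flags):
        # A revision is LFS iff the extstored bit is set in its flags word.
        return bool(flags & REVIDX_EXTSTORED)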
118 def filelogaddrevision(orig, self, text, transaction, link, p1, p2,
118 def filelogaddrevision(orig, self, text, transaction, link, p1, p2,
119 cachedelta=None, node=None,
119 cachedelta=None, node=None,
120 flags=revlog.REVIDX_DEFAULT_FLAGS, **kwds):
120 flags=revlog.REVIDX_DEFAULT_FLAGS, **kwds):
121 textlen = len(text)
121 textlen = len(text)
122 # exclude hg rename meta from file size
122 # exclude hg rename meta from file size
123 meta, offset = revlog.parsemeta(text)
123 meta, offset = revlog.parsemeta(text)
124 if offset:
124 if offset:
125 textlen -= offset
125 textlen -= offset
126
126
127 lfstrack = self.opener.options['lfstrack']
127 lfstrack = self.opener.options['lfstrack']
128
128
129 if lfstrack(self.filename, textlen):
129 if lfstrack(self.filename, textlen):
130 flags |= revlog.REVIDX_EXTSTORED
130 flags |= revlog.REVIDX_EXTSTORED
131
131
132 return orig(self, text, transaction, link, p1, p2, cachedelta=cachedelta,
132 return orig(self, text, transaction, link, p1, p2, cachedelta=cachedelta,
133 node=node, flags=flags, **kwds)
133 node=node, flags=flags, **kwds)
134
134
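Wrappers such as filelogaddrevision receive the wrapped function as orig; they are installed with Mercurial's extensions.wrapfunction. A schematic registration (the real hookup lives in the extension's __init__.py):

    from mercurial import extensions, filelog

    def extsetup(ui):
        # Each wrapper gets the original callable as its first argument.
        extensions.wrapfunction(filelog.filelog, 'addrevision',
                                filelogaddrevision)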
135 def filelogrenamed(orig, self, node):
135 def filelogrenamed(orig, self, node):
136 if _islfs(self, node):
136 if _islfs(self, node):
137 rawtext = self.revision(node, raw=True)
137 rawtext = self.revision(node, raw=True)
138 if not rawtext:
138 if not rawtext:
139 return False
139 return False
140 metadata = pointer.deserialize(rawtext)
140 metadata = pointer.deserialize(rawtext)
141 if 'x-hg-copy' in metadata and 'x-hg-copyrev' in metadata:
141 if 'x-hg-copy' in metadata and 'x-hg-copyrev' in metadata:
142 return metadata['x-hg-copy'], bin(metadata['x-hg-copyrev'])
142 return metadata['x-hg-copy'], bin(metadata['x-hg-copyrev'])
143 else:
143 else:
144 return False
144 return False
145 return orig(self, node)
145 return orig(self, node)
146
146
147 def filelogsize(orig, self, rev):
147 def filelogsize(orig, self, rev):
148 if _islfs(self, rev=rev):
148 if _islfs(self, rev=rev):
149 # fast path: use lfs metadata to answer size
149 # fast path: use lfs metadata to answer size
150 rawtext = self.revision(rev, raw=True)
150 rawtext = self.revision(rev, raw=True)
151 metadata = pointer.deserialize(rawtext)
151 metadata = pointer.deserialize(rawtext)
152 return int(metadata['size'])
152 return int(metadata['size'])
153 return orig(self, rev)
153 return orig(self, rev)
154
154
155 def filectxcmp(orig, self, fctx):
155 def filectxcmp(orig, self, fctx):
156 """returns True if text is different than fctx"""
156 """returns True if text is different than fctx"""
157 # some fctx (e.g. hg-git) are not based on basefilectx and do not have islfs
157 # some fctx (e.g. hg-git) are not based on basefilectx and do not have islfs
158 if self.islfs() and getattr(fctx, 'islfs', lambda: False)():
158 if self.islfs() and getattr(fctx, 'islfs', lambda: False)():
159 # fast path: check LFS oid
159 # fast path: check LFS oid
160 p1 = pointer.deserialize(self.rawdata())
160 p1 = pointer.deserialize(self.rawdata())
161 p2 = pointer.deserialize(fctx.rawdata())
161 p2 = pointer.deserialize(fctx.rawdata())
162 return p1.oid() != p2.oid()
162 return p1.oid() != p2.oid()
163 return orig(self, fctx)
163 return orig(self, fctx)
164
164
165 def filectxisbinary(orig, self):
165 def filectxisbinary(orig, self):
166 if self.islfs():
166 if self.islfs():
167 # fast path: use lfs metadata to answer isbinary
167 # fast path: use lfs metadata to answer isbinary
168 metadata = pointer.deserialize(self.rawdata())
168 metadata = pointer.deserialize(self.rawdata())
169 # if lfs metadata says nothing, assume it's binary by default
169 # if lfs metadata says nothing, assume it's binary by default
170 return bool(int(metadata.get('x-is-binary', 1)))
170 return bool(int(metadata.get('x-is-binary', 1)))
171 return orig(self)
171 return orig(self)
172
172
173 def filectxislfs(self):
173 def filectxislfs(self):
174 return _islfs(self.filelog(), self.filenode())
174 return _islfs(self.filelog(), self.filenode())
175
175
176 def _updatecatformatter(orig, fm, ctx, matcher, path, decode):
176 def _updatecatformatter(orig, fm, ctx, matcher, path, decode):
177 orig(fm, ctx, matcher, path, decode)
177 orig(fm, ctx, matcher, path, decode)
178 fm.data(rawdata=ctx[path].rawdata())
178 fm.data(rawdata=ctx[path].rawdata())
179
179
180 def convertsink(orig, sink):
180 def convertsink(orig, sink):
181 sink = orig(sink)
181 sink = orig(sink)
182 if sink.repotype == 'hg':
182 if sink.repotype == 'hg':
183 class lfssink(sink.__class__):
183 class lfssink(sink.__class__):
184 def putcommit(self, files, copies, parents, commit, source, revmap,
184 def putcommit(self, files, copies, parents, commit, source, revmap,
185 full, cleanp2):
185 full, cleanp2):
186 pc = super(lfssink, self).putcommit
186 pc = super(lfssink, self).putcommit
187 node = pc(files, copies, parents, commit, source, revmap, full,
187 node = pc(files, copies, parents, commit, source, revmap, full,
188 cleanp2)
188 cleanp2)
189
189
190 if 'lfs' not in self.repo.requirements:
190 if 'lfs' not in self.repo.requirements:
191 ctx = self.repo[node]
191 ctx = self.repo[node]
192
192
193 # The file list may contain removed files, so check for
193 # The file list may contain removed files, so check for
194 # membership before assuming it is in the context.
194 # membership before assuming it is in the context.
195 if any(f in ctx and ctx[f].islfs() for f, n in files):
195 if any(f in ctx and ctx[f].islfs() for f, n in files):
196 self.repo.requirements.add('lfs')
196 self.repo.requirements.add('lfs')
197 self.repo._writerequirements()
197 self.repo._writerequirements()
198
198
199 # Permanently enable lfs locally
199 # Permanently enable lfs locally
200 self.repo.vfs.append(
200 self.repo.vfs.append(
201 'hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))
201 'hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))
202
202
203 return node
203 return node
204
204
205 sink.__class__ = lfssink
205 sink.__class__ = lfssink
206
206
207 return sink
207 return sink
208
208
209 def vfsinit(orig, self, othervfs):
209 def vfsinit(orig, self, othervfs):
210 orig(self, othervfs)
210 orig(self, othervfs)
211 # copy lfs related options
211 # copy lfs related options
212 for k, v in othervfs.options.items():
212 for k, v in othervfs.options.items():
213 if k.startswith('lfs'):
213 if k.startswith('lfs'):
214 self.options[k] = v
214 self.options[k] = v
215 # also copy lfs blobstores. note: this can run before reposetup, so lfs
215 # also copy lfs blobstores. note: this can run before reposetup, so lfs
216 # blobstore attributes are not always ready at this time.
216 # blobstore attributes are not always ready at this time.
217 for name in ['lfslocalblobstore', 'lfsremoteblobstore']:
217 for name in ['lfslocalblobstore', 'lfsremoteblobstore']:
218 if util.safehasattr(othervfs, name):
218 if util.safehasattr(othervfs, name):
219 setattr(self, name, getattr(othervfs, name))
219 setattr(self, name, getattr(othervfs, name))
220
220
221 def hgclone(orig, ui, opts, *args, **kwargs):
221 def hgclone(orig, ui, opts, *args, **kwargs):
222 result = orig(ui, opts, *args, **kwargs)
222 result = orig(ui, opts, *args, **kwargs)
223
223
224 if result is not None:
224 if result is not None:
225 sourcerepo, destrepo = result
225 sourcerepo, destrepo = result
226 repo = destrepo.local()
226 repo = destrepo.local()
227
227
228 # When cloning to a remote repo (like through SSH), no repo is available
228 # When cloning to a remote repo (like through SSH), no repo is available
229 # from the peer. Therefore the hgrc can't be updated.
229 # from the peer. Therefore the hgrc can't be updated.
230 if not repo:
230 if not repo:
231 return result
231 return result
232
232
233 # If lfs is required for this repo, permanently enable it locally
233 # If lfs is required for this repo, permanently enable it locally
234 if 'lfs' in repo.requirements:
234 if 'lfs' in repo.requirements:
235 repo.vfs.append('hgrc',
235 repo.vfs.append('hgrc',
236 util.tonativeeol('\n[extensions]\nlfs=\n'))
236 util.tonativeeol('\n[extensions]\nlfs=\n'))
237
237
238 return result
238 return result
239
239
240 def hgpostshare(orig, sourcerepo, destrepo, bookmarks=True, defaultpath=None):
240 def hgpostshare(orig, sourcerepo, destrepo, bookmarks=True, defaultpath=None):
241 orig(sourcerepo, destrepo, bookmarks, defaultpath)
241 orig(sourcerepo, destrepo, bookmarks, defaultpath)
242
242
243 # If lfs is required for this repo, permanently enable it locally
243 # If lfs is required for this repo, permanently enable it locally
244 if 'lfs' in destrepo.requirements:
244 if 'lfs' in destrepo.requirements:
245 destrepo.vfs.append('hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))
245 destrepo.vfs.append('hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))
246
246
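hgclone and hgpostshare (and convertsink above) all append the same stanza to the new repository's .hg/hgrc, so a repo that requires lfs keeps the extension enabled locally:

    [extensions]
    lfs=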
247 def _prefetchfiles(repo, ctx, files):
247 def _prefetchfiles(repo, ctx, files):
248 """Ensure that required LFS blobs are present, fetching them as a group if
248 """Ensure that required LFS blobs are present, fetching them as a group if
249 needed."""
249 needed."""
250 pointers = []
250 pointers = []
251 localstore = repo.svfs.lfslocalblobstore
251 localstore = repo.svfs.lfslocalblobstore
252
252
253 for f in files:
253 for f in files:
254 p = pointerfromctx(ctx, f)
254 p = pointerfromctx(ctx, f)
255 if p and not localstore.has(p.oid()):
255 if p and not localstore.has(p.oid()):
256 p.filename = f
256 p.filename = f
257 pointers.append(p)
257 pointers.append(p)
258
258
259 if pointers:
259 if pointers:
260 repo.svfs.lfsremoteblobstore.readbatch(pointers, localstore)
260 # Recalculate the remote store here, so that a 'paths.default' set on
261 # the repo by a clone command is used for this update.
262 blobstore.remote(repo).readbatch(pointers, localstore)
261
263
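_prefetchfiles batches the work: it collects only the pointers whose blobs are missing locally, then issues a single readbatch so the remote transfers can be coalesced. The shape of that pattern, with hypothetical store objects:

    def prefetch(oids, localstore, remotestore):
        # Gather everything missing locally, then fetch in one batch.
        missing = [oid for oid in oids if not localstore.has(oid)]
        if missing:
            remotestore.readbatch(missing, localstore)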
262 def _canskipupload(repo):
264 def _canskipupload(repo):
263 # if remotestore is a null store, upload is a no-op and can be skipped
265 # if remotestore is a null store, upload is a no-op and can be skipped
264 return isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
266 return isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
265
267
266 def candownload(repo):
268 def candownload(repo):
267 # if remotestore is a null store, downloads will lead to nothing
269 # if remotestore is a null store, downloads will lead to nothing
268 return not isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
270 return not isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
269
271
270 def uploadblobsfromrevs(repo, revs):
272 def uploadblobsfromrevs(repo, revs):
271 '''upload lfs blobs introduced by revs
273 '''upload lfs blobs introduced by revs
272
274
273 Note: also used by other extensions, e.g. infinitepush. Avoid renaming.
275 Note: also used by other extensions, e.g. infinitepush. Avoid renaming.
274 '''
276 '''
275 if _canskipupload(repo):
277 if _canskipupload(repo):
276 return
278 return
277 pointers = extractpointers(repo, revs)
279 pointers = extractpointers(repo, revs)
278 uploadblobs(repo, pointers)
280 uploadblobs(repo, pointers)
279
281
280 def prepush(pushop):
282 def prepush(pushop):
281 """Prepush hook.
283 """Prepush hook.
282
284
283 Read through the revisions to push, looking for filelog entries that can be
285 Read through the revisions to push, looking for filelog entries that can be
284 deserialized into metadata so that we can block the push on their upload to
286 deserialized into metadata so that we can block the push on their upload to
285 the remote blobstore.
287 the remote blobstore.
286 """
288 """
287 return uploadblobsfromrevs(pushop.repo, pushop.outgoing.missing)
289 return uploadblobsfromrevs(pushop.repo, pushop.outgoing.missing)
288
290
289 def push(orig, repo, remote, *args, **kwargs):
291 def push(orig, repo, remote, *args, **kwargs):
290 """bail on push if the extension isn't enabled on remote when needed"""
292 """bail on push if the extension isn't enabled on remote when needed"""
291 if 'lfs' in repo.requirements:
293 if 'lfs' in repo.requirements:
292 # If the remote peer is for a local repo, the requirement tests in the
294 # If the remote peer is for a local repo, the requirement tests in the
293 # base class method enforce lfs support. Otherwise, some revisions in
295 # base class method enforce lfs support. Otherwise, some revisions in
294 # this repo use lfs, and the remote repo needs the extension loaded.
296 # this repo use lfs, and the remote repo needs the extension loaded.
295 if not remote.local() and not remote.capable('lfs'):
297 if not remote.local() and not remote.capable('lfs'):
296 # This is a copy of the message in exchange.push() when requirements
298 # This is a copy of the message in exchange.push() when requirements
297 # are missing between local repos.
299 # are missing between local repos.
298 m = _("required features are not supported in the destination: %s")
300 m = _("required features are not supported in the destination: %s")
299 raise error.Abort(m % 'lfs',
301 raise error.Abort(m % 'lfs',
300 hint=_('enable the lfs extension on the server'))
302 hint=_('enable the lfs extension on the server'))
301 return orig(repo, remote, *args, **kwargs)
303 return orig(repo, remote, *args, **kwargs)
302
304
303 def writenewbundle(orig, ui, repo, source, filename, bundletype, outgoing,
305 def writenewbundle(orig, ui, repo, source, filename, bundletype, outgoing,
304 *args, **kwargs):
306 *args, **kwargs):
305 """upload LFS blobs added by outgoing revisions on 'hg bundle'"""
307 """upload LFS blobs added by outgoing revisions on 'hg bundle'"""
306 uploadblobsfromrevs(repo, outgoing.missing)
308 uploadblobsfromrevs(repo, outgoing.missing)
307 return orig(ui, repo, source, filename, bundletype, outgoing, *args,
309 return orig(ui, repo, source, filename, bundletype, outgoing, *args,
308 **kwargs)
310 **kwargs)
309
311
310 def extractpointers(repo, revs):
312 def extractpointers(repo, revs):
311 """return a list of lfs pointers added by given revs"""
313 """return a list of lfs pointers added by given revs"""
312 repo.ui.debug('lfs: computing set of blobs to upload\n')
314 repo.ui.debug('lfs: computing set of blobs to upload\n')
313 pointers = {}
315 pointers = {}
314 for r in revs:
316 for r in revs:
315 ctx = repo[r]
317 ctx = repo[r]
316 for p in pointersfromctx(ctx).values():
318 for p in pointersfromctx(ctx).values():
317 pointers[p.oid()] = p
319 pointers[p.oid()] = p
318 return sorted(pointers.values())
320 return sorted(pointers.values())
319
321
320 def pointerfromctx(ctx, f, removed=False):
322 def pointerfromctx(ctx, f, removed=False):
321 """return a pointer for the named file from the given changectx, or None if
323 """return a pointer for the named file from the given changectx, or None if
322 the file isn't LFS.
324 the file isn't LFS.
323
325
324 Optionally, the pointer for a file deleted from the context can be returned.
326 Optionally, the pointer for a file deleted from the context can be returned.
325 Since no such pointer is actually stored, and to distinguish from a non-LFS
327 Since no such pointer is actually stored, and to distinguish from a non-LFS
326 file, this pointer is represented by an empty dict.
328 file, this pointer is represented by an empty dict.
327 """
329 """
328 _ctx = ctx
330 _ctx = ctx
329 if f not in ctx:
331 if f not in ctx:
330 if not removed:
332 if not removed:
331 return None
333 return None
332 if f in ctx.p1():
334 if f in ctx.p1():
333 _ctx = ctx.p1()
335 _ctx = ctx.p1()
334 elif f in ctx.p2():
336 elif f in ctx.p2():
335 _ctx = ctx.p2()
337 _ctx = ctx.p2()
336 else:
338 else:
337 return None
339 return None
338 fctx = _ctx[f]
340 fctx = _ctx[f]
339 if not _islfs(fctx.filelog(), fctx.filenode()):
341 if not _islfs(fctx.filelog(), fctx.filenode()):
340 return None
342 return None
341 try:
343 try:
342 p = pointer.deserialize(fctx.rawdata())
344 p = pointer.deserialize(fctx.rawdata())
343 if ctx == _ctx:
345 if ctx == _ctx:
344 return p
346 return p
345 return {}
347 return {}
346 except pointer.InvalidPointer as ex:
348 except pointer.InvalidPointer as ex:
347 raise error.Abort(_('lfs: corrupted pointer (%s@%s): %s\n')
349 raise error.Abort(_('lfs: corrupted pointer (%s@%s): %s\n')
348 % (f, short(_ctx.node()), ex))
350 % (f, short(_ctx.node()), ex))
349
351
350 def pointersfromctx(ctx, removed=False):
352 def pointersfromctx(ctx, removed=False):
351 """return a dict {path: pointer} for given single changectx.
353 """return a dict {path: pointer} for given single changectx.
352
354
353 If ``removed`` == True and the LFS file was removed from ``ctx``, the value
355 If ``removed`` == True and the LFS file was removed from ``ctx``, the value
354 stored for the path is an empty dict.
356 stored for the path is an empty dict.
355 """
357 """
356 result = {}
358 result = {}
357 for f in ctx.files():
359 for f in ctx.files():
358 p = pointerfromctx(ctx, f, removed=removed)
360 p = pointerfromctx(ctx, f, removed=removed)
359 if p is not None:
361 if p is not None:
360 result[f] = p
362 result[f] = p
361 return result
363 return result
362
364
363 def uploadblobs(repo, pointers):
365 def uploadblobs(repo, pointers):
364 """upload given pointers from local blobstore"""
366 """upload given pointers from local blobstore"""
365 if not pointers:
367 if not pointers:
366 return
368 return
367
369
368 remoteblob = repo.svfs.lfsremoteblobstore
370 remoteblob = repo.svfs.lfsremoteblobstore
369 remoteblob.writebatch(pointers, repo.svfs.lfslocalblobstore)
371 remoteblob.writebatch(pointers, repo.svfs.lfslocalblobstore)
370
372
371 def upgradefinishdatamigration(orig, ui, srcrepo, dstrepo, requirements):
373 def upgradefinishdatamigration(orig, ui, srcrepo, dstrepo, requirements):
372 orig(ui, srcrepo, dstrepo, requirements)
374 orig(ui, srcrepo, dstrepo, requirements)
373
375
374 srclfsvfs = srcrepo.svfs.lfslocalblobstore.vfs
376 srclfsvfs = srcrepo.svfs.lfslocalblobstore.vfs
375 dstlfsvfs = dstrepo.svfs.lfslocalblobstore.vfs
377 dstlfsvfs = dstrepo.svfs.lfslocalblobstore.vfs
376
378
377 for dirpath, dirs, files in srclfsvfs.walk():
379 for dirpath, dirs, files in srclfsvfs.walk():
378 for oid in files:
380 for oid in files:
379 ui.write(_('copying lfs blob %s\n') % oid)
381 ui.write(_('copying lfs blob %s\n') % oid)
380 lfutil.link(srclfsvfs.join(oid), dstlfsvfs.join(oid))
382 lfutil.link(srclfsvfs.join(oid), dstlfsvfs.join(oid))
381
383
382 def upgraderequirements(orig, repo):
384 def upgraderequirements(orig, repo):
383 reqs = orig(repo)
385 reqs = orig(repo)
384 if 'lfs' in repo.requirements:
386 if 'lfs' in repo.requirements:
385 reqs.add('lfs')
387 reqs.add('lfs')
386 return reqs
388 return reqs
@@ -1,284 +1,283
1 #testcases lfsremote-on lfsremote-off
1 #testcases lfsremote-on lfsremote-off
2 #require serve no-reposimplestore
2 #require serve no-reposimplestore
3
3
4 This test splits `hg serve` with and without using the extension into separate
4 This test splits `hg serve` with and without using the extension into separate
5 test cases. The tests are broken down as follows, where "LFS"/"No-LFS"
5 test cases. The tests are broken down as follows, where "LFS"/"No-LFS"
6 indicates whether or not there are commits that use an LFS file, and "D"/"E"
6 indicates whether or not there are commits that use an LFS file, and "D"/"E"
7 indicates whether or not the extension is loaded. The "X" cases are not tested
7 indicates whether or not the extension is loaded. The "X" cases are not tested
8 individually, because the lfs requirement causes the process to bail early if
8 individually, because the lfs requirement causes the process to bail early if
9 the extension is disabled.
9 the extension is disabled.
10
10
11 . Server
11 . Server
12 .
12 .
13 . No-LFS LFS
13 . No-LFS LFS
14 . +----------------------------+
14 . +----------------------------+
15 . | || D | E | D | E |
15 . | || D | E | D | E |
16 . |---++=======================|
16 . |---++=======================|
17 . C | D || N/A | #1 | X | #4 |
17 . C | D || N/A | #1 | X | #4 |
18 . l No +---++-----------------------|
18 . l No +---++-----------------------|
19 . i LFS | E || #2 | #2 | X | #5 |
19 . i LFS | E || #2 | #2 | X | #5 |
20 . e +---++-----------------------|
20 . e +---++-----------------------|
21 . n | D || X | X | X | X |
21 . n | D || X | X | X | X |
22 . t LFS |---++-----------------------|
22 . t LFS |---++-----------------------|
23 . | E || #3 | #3 | X | #6 |
23 . | E || #3 | #3 | X | #6 |
24 . |---++-----------------------+
24 . |---++-----------------------+
25
25
26 $ hg init server
26 $ hg init server
27 $ SERVER_REQUIRES="$TESTTMP/server/.hg/requires"
27 $ SERVER_REQUIRES="$TESTTMP/server/.hg/requires"
28
28
29 Skip the experimental.changegroup3=True config. Failure to agree on this comes
29 Skip the experimental.changegroup3=True config. Failure to agree on this comes
30 first, and causes a "ValueError: no common changegroup version" or "abort:
30 first, and causes a "ValueError: no common changegroup version" or "abort:
31 HTTP Error 500: Internal Server Error", if the extension is only loaded on one
31 HTTP Error 500: Internal Server Error", if the extension is only loaded on one
32 side. If that *is* enabled, the subsequent failure is "abort: missing processor
32 side. If that *is* enabled, the subsequent failure is "abort: missing processor
33 for flag '0x2000'!" if the extension is only loaded on one side (possibly also
33 for flag '0x2000'!" if the extension is only loaded on one side (possibly also
34 masked by the Internal Server Error message).
34 masked by the Internal Server Error message).
35 $ cat >> $HGRCPATH <<EOF
35 $ cat >> $HGRCPATH <<EOF
36 > [lfs]
36 > [lfs]
37 > url=file:$TESTTMP/dummy-remote/
38 > usercache = null://
37 > usercache = null://
39 > threshold=10
38 > threshold=10
40 > [web]
39 > [web]
41 > allow_push=*
40 > allow_push=*
42 > push_ssl=False
41 > push_ssl=False
43 > EOF
42 > EOF
44
43
45 #if lfsremote-on
44 #if lfsremote-on
46 $ hg --config extensions.lfs= -R server \
45 $ hg --config extensions.lfs= -R server \
47 > serve -p $HGPORT -d --pid-file=hg.pid --errorlog=$TESTTMP/errors.log
46 > serve -p $HGPORT -d --pid-file=hg.pid --errorlog=$TESTTMP/errors.log
48 #else
47 #else
49 $ hg --config extensions.lfs=! -R server \
48 $ hg --config extensions.lfs=! -R server \
50 > serve -p $HGPORT -d --pid-file=hg.pid --errorlog=$TESTTMP/errors.log
49 > serve -p $HGPORT -d --pid-file=hg.pid --errorlog=$TESTTMP/errors.log
51 #endif
50 #endif
52
51
53 $ cat hg.pid >> $DAEMON_PIDS
52 $ cat hg.pid >> $DAEMON_PIDS
54 $ hg clone -q http://localhost:$HGPORT client
53 $ hg clone -q http://localhost:$HGPORT client
55 $ grep 'lfs' client/.hg/requires $SERVER_REQUIRES
54 $ grep 'lfs' client/.hg/requires $SERVER_REQUIRES
56 [1]
55 [1]
57
56
58 --------------------------------------------------------------------------------
57 --------------------------------------------------------------------------------
59 Case #1: client with non-lfs content and the extension disabled; server with
58 Case #1: client with non-lfs content and the extension disabled; server with
60 non-lfs content, and the extension enabled.
59 non-lfs content, and the extension enabled.
61
60
62 $ cd client
61 $ cd client
63 $ echo 'non-lfs' > nonlfs.txt
62 $ echo 'non-lfs' > nonlfs.txt
64 $ hg ci -Aqm 'non-lfs'
63 $ hg ci -Aqm 'non-lfs'
65 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
64 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
66 [1]
65 [1]
67
66
68 #if lfsremote-on
67 #if lfsremote-on
69
68
70 $ hg push -q
69 $ hg push -q
71 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
70 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
72 [1]
71 [1]
73
72
74 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client1_clone
73 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client1_clone
75 $ grep 'lfs' $TESTTMP/client1_clone/.hg/requires $SERVER_REQUIRES
74 $ grep 'lfs' $TESTTMP/client1_clone/.hg/requires $SERVER_REQUIRES
76 [1]
75 [1]
77
76
78 $ hg init $TESTTMP/client1_pull
77 $ hg init $TESTTMP/client1_pull
79 $ hg -R $TESTTMP/client1_pull pull -q http://localhost:$HGPORT
78 $ hg -R $TESTTMP/client1_pull pull -q http://localhost:$HGPORT
80 $ grep 'lfs' $TESTTMP/client1_pull/.hg/requires $SERVER_REQUIRES
79 $ grep 'lfs' $TESTTMP/client1_pull/.hg/requires $SERVER_REQUIRES
81 [1]
80 [1]
82
81
83 $ hg identify http://localhost:$HGPORT
82 $ hg identify http://localhost:$HGPORT
84 d437e1d24fbd
83 d437e1d24fbd
85
84
86 #endif
85 #endif
87
86
88 --------------------------------------------------------------------------------
87 --------------------------------------------------------------------------------
89 Case #2: client with non-lfs content and the extension enabled; server with
88 Case #2: client with non-lfs content and the extension enabled; server with
90 non-lfs content, and the extension state controlled by #testcases.
89 non-lfs content, and the extension state controlled by #testcases.
91
90
92 $ cat >> $HGRCPATH <<EOF
91 $ cat >> $HGRCPATH <<EOF
93 > [extensions]
92 > [extensions]
94 > lfs =
93 > lfs =
95 > EOF
94 > EOF
96 $ echo 'non-lfs' > nonlfs2.txt
95 $ echo 'non-lfs' > nonlfs2.txt
97 $ hg ci -Aqm 'non-lfs file with lfs client'
96 $ hg ci -Aqm 'non-lfs file with lfs client'
98
97
99 Since no lfs content has been added yet, the push is allowed, even when the
98 Since no lfs content has been added yet, the push is allowed, even when the
100 extension is not enabled remotely.
99 extension is not enabled remotely.
101
100
102 $ hg push -q
101 $ hg push -q
103 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
102 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
104 [1]
103 [1]
105
104
106 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client2_clone
105 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client2_clone
107 $ grep 'lfs' $TESTTMP/client2_clone/.hg/requires $SERVER_REQUIRES
106 $ grep 'lfs' $TESTTMP/client2_clone/.hg/requires $SERVER_REQUIRES
108 [1]
107 [1]
109
108
110 $ hg init $TESTTMP/client2_pull
109 $ hg init $TESTTMP/client2_pull
111 $ hg -R $TESTTMP/client2_pull pull -q http://localhost:$HGPORT
110 $ hg -R $TESTTMP/client2_pull pull -q http://localhost:$HGPORT
112 $ grep 'lfs' $TESTTMP/client2_pull/.hg/requires $SERVER_REQUIRES
111 $ grep 'lfs' $TESTTMP/client2_pull/.hg/requires $SERVER_REQUIRES
113 [1]
112 [1]
114
113
115 $ hg identify http://localhost:$HGPORT
114 $ hg identify http://localhost:$HGPORT
116 1477875038c6
115 1477875038c6
117
116
118 --------------------------------------------------------------------------------
117 --------------------------------------------------------------------------------
119 Case #3: client with lfs content and the extension enabled; server with
118 Case #3: client with lfs content and the extension enabled; server with
120 non-lfs content, and the extension state controlled by #testcases. The server
119 non-lfs content, and the extension state controlled by #testcases. The server
121 should have an 'lfs' requirement after it picks up its first commit with a blob.
120 should have an 'lfs' requirement after it picks up its first commit with a blob.
122
121
123 $ echo 'this is a big lfs file' > lfs.bin
122 $ echo 'this is a big lfs file' > lfs.bin
124 $ hg ci -Aqm 'lfs'
123 $ hg ci -Aqm 'lfs'
125 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
124 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
126 .hg/requires:lfs
125 .hg/requires:lfs
127
126
128 #if lfsremote-off
127 #if lfsremote-off
129 $ hg push -q
128 $ hg push -q
130 abort: required features are not supported in the destination: lfs
129 abort: required features are not supported in the destination: lfs
131 (enable the lfs extension on the server)
130 (enable the lfs extension on the server)
132 [255]
131 [255]
133 #else
132 #else
134 $ hg push -q
133 $ hg push -q
135 #endif
134 #endif
136 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
135 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
137 .hg/requires:lfs
136 .hg/requires:lfs
138 $TESTTMP/server/.hg/requires:lfs (lfsremote-on !)
137 $TESTTMP/server/.hg/requires:lfs (lfsremote-on !)
139
138
140 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client3_clone
139 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client3_clone
141 $ grep 'lfs' $TESTTMP/client3_clone/.hg/requires $SERVER_REQUIRES || true
140 $ grep 'lfs' $TESTTMP/client3_clone/.hg/requires $SERVER_REQUIRES || true
142 $TESTTMP/client3_clone/.hg/requires:lfs (lfsremote-on !)
141 $TESTTMP/client3_clone/.hg/requires:lfs (lfsremote-on !)
143 $TESTTMP/server/.hg/requires:lfs (lfsremote-on !)
142 $TESTTMP/server/.hg/requires:lfs (lfsremote-on !)
144
143
145 $ hg init $TESTTMP/client3_pull
144 $ hg init $TESTTMP/client3_pull
146 $ hg -R $TESTTMP/client3_pull pull -q http://localhost:$HGPORT
145 $ hg -R $TESTTMP/client3_pull pull -q http://localhost:$HGPORT
147 $ grep 'lfs' $TESTTMP/client3_pull/.hg/requires $SERVER_REQUIRES || true
146 $ grep 'lfs' $TESTTMP/client3_pull/.hg/requires $SERVER_REQUIRES || true
148 $TESTTMP/client3_pull/.hg/requires:lfs (lfsremote-on !)
147 $TESTTMP/client3_pull/.hg/requires:lfs (lfsremote-on !)
149 $TESTTMP/server/.hg/requires:lfs (lfsremote-on !)
148 $TESTTMP/server/.hg/requires:lfs (lfsremote-on !)
150
149
151 The difference here is the push failed above when the extension isn't
150 The difference here is the push failed above when the extension isn't
152 enabled on the server.
151 enabled on the server.
153 $ hg identify http://localhost:$HGPORT
152 $ hg identify http://localhost:$HGPORT
154 8374dc4052cb (lfsremote-on !)
153 8374dc4052cb (lfsremote-on !)
155 1477875038c6 (lfsremote-off !)
154 1477875038c6 (lfsremote-off !)
156
155
157 Don't bother testing the lfsremote-off cases - the server won't be able
156 Don't bother testing the lfsremote-off cases - the server won't be able
158 to launch if there's lfs content and the extension is disabled.
157 to launch if there's lfs content and the extension is disabled.
159
158
160 #if lfsremote-on
159 #if lfsremote-on
161
160
162 --------------------------------------------------------------------------------
161 --------------------------------------------------------------------------------
163 Case #4: client with non-lfs content and the extension disabled; server with
162 Case #4: client with non-lfs content and the extension disabled; server with
164 lfs content, and the extension enabled.
163 lfs content, and the extension enabled.
165
164
166 $ cat >> $HGRCPATH <<EOF
165 $ cat >> $HGRCPATH <<EOF
167 > [extensions]
166 > [extensions]
168 > lfs = !
167 > lfs = !
169 > EOF
168 > EOF
170
169
171 $ hg init $TESTTMP/client4
170 $ hg init $TESTTMP/client4
172 $ cd $TESTTMP/client4
171 $ cd $TESTTMP/client4
173 $ cat >> .hg/hgrc <<EOF
172 $ cat >> .hg/hgrc <<EOF
174 > [paths]
173 > [paths]
175 > default = http://localhost:$HGPORT
174 > default = http://localhost:$HGPORT
176 > EOF
175 > EOF
177 $ echo 'non-lfs' > nonlfs2.txt
176 $ echo 'non-lfs' > nonlfs2.txt
178 $ hg ci -Aqm 'non-lfs'
177 $ hg ci -Aqm 'non-lfs'
179 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
178 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
180 $TESTTMP/server/.hg/requires:lfs
179 $TESTTMP/server/.hg/requires:lfs
181
180
182 $ hg push -q --force
181 $ hg push -q --force
183 warning: repository is unrelated
182 warning: repository is unrelated
184 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
183 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
185 $TESTTMP/server/.hg/requires:lfs
184 $TESTTMP/server/.hg/requires:lfs
186
185
187 TODO: fail more gracefully.
186 TODO: fail more gracefully.
188
187
189 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client4_clone
188 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client4_clone
190 abort: HTTP Error 500: Internal Server Error
189 abort: HTTP Error 500: Internal Server Error
191 [255]
190 [255]
192 $ grep 'lfs' $TESTTMP/client4_clone/.hg/requires $SERVER_REQUIRES
191 $ grep 'lfs' $TESTTMP/client4_clone/.hg/requires $SERVER_REQUIRES
193 grep: $TESTTMP/client4_clone/.hg/requires: $ENOENT$
192 grep: $TESTTMP/client4_clone/.hg/requires: $ENOENT$
194 $TESTTMP/server/.hg/requires:lfs
193 $TESTTMP/server/.hg/requires:lfs
195 [2]
194 [2]
196
195
197 TODO: fail more gracefully.
196 TODO: fail more gracefully.
198
197
199 $ hg init $TESTTMP/client4_pull
198 $ hg init $TESTTMP/client4_pull
200 $ hg -R $TESTTMP/client4_pull pull -q http://localhost:$HGPORT
199 $ hg -R $TESTTMP/client4_pull pull -q http://localhost:$HGPORT
201 abort: HTTP Error 500: Internal Server Error
200 abort: HTTP Error 500: Internal Server Error
202 [255]
201 [255]
203 $ grep 'lfs' $TESTTMP/client4_pull/.hg/requires $SERVER_REQUIRES
202 $ grep 'lfs' $TESTTMP/client4_pull/.hg/requires $SERVER_REQUIRES
204 $TESTTMP/server/.hg/requires:lfs
203 $TESTTMP/server/.hg/requires:lfs
205
204
206 $ hg identify http://localhost:$HGPORT
205 $ hg identify http://localhost:$HGPORT
207 03b080fa9d93
206 03b080fa9d93
208
207
209 --------------------------------------------------------------------------------
208 --------------------------------------------------------------------------------
210 Case #5: client with non-lfs content and the extension enabled; server with
209 Case #5: client with non-lfs content and the extension enabled; server with
211 lfs content, and the extension enabled.
210 lfs content, and the extension enabled.
212
211
213 $ cat >> $HGRCPATH <<EOF
212 $ cat >> $HGRCPATH <<EOF
214 > [extensions]
213 > [extensions]
215 > lfs =
214 > lfs =
216 > EOF
215 > EOF
217 $ echo 'non-lfs' > nonlfs3.txt
216 $ echo 'non-lfs' > nonlfs3.txt
218 $ hg ci -Aqm 'non-lfs file with lfs client'
217 $ hg ci -Aqm 'non-lfs file with lfs client'
219
218
220 $ hg push -q
219 $ hg push -q
221 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
220 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
222 $TESTTMP/server/.hg/requires:lfs
221 $TESTTMP/server/.hg/requires:lfs
223
222
224 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client5_clone
223 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client5_clone
225 $ grep 'lfs' $TESTTMP/client5_clone/.hg/requires $SERVER_REQUIRES
224 $ grep 'lfs' $TESTTMP/client5_clone/.hg/requires $SERVER_REQUIRES
226 $TESTTMP/client5_clone/.hg/requires:lfs
225 $TESTTMP/client5_clone/.hg/requires:lfs
227 $TESTTMP/server/.hg/requires:lfs
226 $TESTTMP/server/.hg/requires:lfs
228
227
229 $ hg init $TESTTMP/client5_pull
228 $ hg init $TESTTMP/client5_pull
230 $ hg -R $TESTTMP/client5_pull pull -q http://localhost:$HGPORT
229 $ hg -R $TESTTMP/client5_pull pull -q http://localhost:$HGPORT
231 $ grep 'lfs' $TESTTMP/client5_pull/.hg/requires $SERVER_REQUIRES
230 $ grep 'lfs' $TESTTMP/client5_pull/.hg/requires $SERVER_REQUIRES
232 $TESTTMP/client5_pull/.hg/requires:lfs
231 $TESTTMP/client5_pull/.hg/requires:lfs
233 $TESTTMP/server/.hg/requires:lfs
232 $TESTTMP/server/.hg/requires:lfs
234
233
235 $ hg identify http://localhost:$HGPORT
234 $ hg identify http://localhost:$HGPORT
236 c729025cc5e3
235 c729025cc5e3
237
236
238 --------------------------------------------------------------------------------
237 --------------------------------------------------------------------------------
239 Case #6: client with lfs content and the extension enabled; server with
238 Case #6: client with lfs content and the extension enabled; server with
240 lfs content, and the extension enabled.
239 lfs content, and the extension enabled.
241
240
242 $ echo 'this is another lfs file' > lfs2.txt
241 $ echo 'this is another lfs file' > lfs2.txt
243 $ hg ci -Aqm 'lfs file with lfs client'
242 $ hg ci -Aqm 'lfs file with lfs client'
244
243
245 $ hg push -q
244 $ hg push -q
246 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
245 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
247 .hg/requires:lfs
246 .hg/requires:lfs
248 $TESTTMP/server/.hg/requires:lfs
247 $TESTTMP/server/.hg/requires:lfs
249
248
250 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client6_clone
249 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client6_clone
251 $ grep 'lfs' $TESTTMP/client6_clone/.hg/requires $SERVER_REQUIRES
250 $ grep 'lfs' $TESTTMP/client6_clone/.hg/requires $SERVER_REQUIRES
252 $TESTTMP/client6_clone/.hg/requires:lfs
251 $TESTTMP/client6_clone/.hg/requires:lfs
253 $TESTTMP/server/.hg/requires:lfs
252 $TESTTMP/server/.hg/requires:lfs
254
253
255 $ hg init $TESTTMP/client6_pull
254 $ hg init $TESTTMP/client6_pull
256 $ hg -R $TESTTMP/client6_pull pull -q http://localhost:$HGPORT
255 $ hg -R $TESTTMP/client6_pull pull -q http://localhost:$HGPORT
257 $ grep 'lfs' $TESTTMP/client6_pull/.hg/requires $SERVER_REQUIRES
256 $ grep 'lfs' $TESTTMP/client6_pull/.hg/requires $SERVER_REQUIRES
258 $TESTTMP/client6_pull/.hg/requires:lfs
257 $TESTTMP/client6_pull/.hg/requires:lfs
259 $TESTTMP/server/.hg/requires:lfs
258 $TESTTMP/server/.hg/requires:lfs
260
259
261 $ hg identify http://localhost:$HGPORT
260 $ hg identify http://localhost:$HGPORT
262 d3b84d50eacb
261 d3b84d50eacb
263
262
264 --------------------------------------------------------------------------------
263 --------------------------------------------------------------------------------
265 Misc: process dies early if a requirement exists and the extension is disabled
264 Misc: process dies early if a requirement exists and the extension is disabled
266
265
267 $ hg --config extensions.lfs=! summary
266 $ hg --config extensions.lfs=! summary
268 abort: repository requires features unknown to this Mercurial: lfs!
267 abort: repository requires features unknown to this Mercurial: lfs!
269 (see https://mercurial-scm.org/wiki/MissingRequirement for more information)
268 (see https://mercurial-scm.org/wiki/MissingRequirement for more information)
270 [255]
269 [255]
271
270
272 #endif
271 #endif
273
272
274 $ $PYTHON $TESTDIR/killdaemons.py $DAEMON_PIDS
273 $ $PYTHON $TESTDIR/killdaemons.py $DAEMON_PIDS
275
274
276 #if lfsremote-on
275 #if lfsremote-on
277 $ cat $TESTTMP/errors.log | grep '^[A-Z]'
276 $ cat $TESTTMP/errors.log | grep '^[A-Z]'
278 Traceback (most recent call last):
277 Traceback (most recent call last):
279 ValueError: no common changegroup version
278 ValueError: no common changegroup version
280 Traceback (most recent call last):
279 Traceback (most recent call last):
281 ValueError: no common changegroup version
280 ValueError: no common changegroup version
282 #else
281 #else
283 $ cat $TESTTMP/errors.log
282 $ cat $TESTTMP/errors.log
284 #endif
283 #endif
@@ -1,925 +1,933
1 #require no-reposimplestore
1 #require no-reposimplestore
2 #testcases git-server hg-server
2 #testcases git-server hg-server
3
3
4 #if git-server
4 #if git-server
5 #require lfs-test-server
5 #require lfs-test-server
6 #else
6 #else
7 #require serve
7 #require serve
8 #endif
8 #endif
9
9
10 #if git-server
10 #if git-server
11 $ LFS_LISTEN="tcp://:$HGPORT"
11 $ LFS_LISTEN="tcp://:$HGPORT"
12 $ LFS_HOST="localhost:$HGPORT"
12 $ LFS_HOST="localhost:$HGPORT"
13 $ LFS_PUBLIC=1
13 $ LFS_PUBLIC=1
14 $ export LFS_LISTEN LFS_HOST LFS_PUBLIC
14 $ export LFS_LISTEN LFS_HOST LFS_PUBLIC
15 #else
15 #else
16 $ LFS_HOST="localhost:$HGPORT/.git/info/lfs"
16 $ LFS_HOST="localhost:$HGPORT/.git/info/lfs"
17 #endif
17 #endif
18
18
19 #if no-windows git-server
19 #if no-windows git-server
20 $ lfs-test-server &> lfs-server.log &
20 $ lfs-test-server &> lfs-server.log &
21 $ echo $! >> $DAEMON_PIDS
21 $ echo $! >> $DAEMON_PIDS
22 #endif
22 #endif
23
23
24 #if windows git-server
24 #if windows git-server
25 $ cat >> $TESTTMP/spawn.py <<EOF
25 $ cat >> $TESTTMP/spawn.py <<EOF
26 > import os
26 > import os
27 > import subprocess
27 > import subprocess
28 > import sys
28 > import sys
29 >
29 >
30 > for path in os.environ["PATH"].split(os.pathsep):
30 > for path in os.environ["PATH"].split(os.pathsep):
31 > exe = os.path.join(path, 'lfs-test-server.exe')
31 > exe = os.path.join(path, 'lfs-test-server.exe')
32 > if os.path.exists(exe):
32 > if os.path.exists(exe):
33 > with open('lfs-server.log', 'wb') as out:
33 > with open('lfs-server.log', 'wb') as out:
34 > p = subprocess.Popen(exe, stdout=out, stderr=out)
34 > p = subprocess.Popen(exe, stdout=out, stderr=out)
35 > sys.stdout.write('%s\n' % p.pid)
35 > sys.stdout.write('%s\n' % p.pid)
36 > sys.exit(0)
36 > sys.exit(0)
37 > sys.exit(1)
37 > sys.exit(1)
38 > EOF
38 > EOF
39 $ $PYTHON $TESTTMP/spawn.py >> $DAEMON_PIDS
39 $ $PYTHON $TESTTMP/spawn.py >> $DAEMON_PIDS
40 #endif
40 #endif
41
41
42 $ cat >> $HGRCPATH <<EOF
42 $ cat >> $HGRCPATH <<EOF
43 > [extensions]
43 > [extensions]
44 > lfs=
44 > lfs=
45 > [lfs]
45 > [lfs]
46 > url=http://foo:bar@$LFS_HOST
46 > url=http://foo:bar@$LFS_HOST
47 > track=all()
47 > track=all()
48 > [web]
48 > [web]
49 > push_ssl = False
49 > push_ssl = False
50 > allow-push = *
50 > allow-push = *
51 > EOF
51 > EOF
52
52
53 Use a separate usercache; otherwise the server sees what the client commits, and
53 Use a separate usercache; otherwise the server sees what the client commits, and
54 never requests a transfer.
54 never requests a transfer.
55
55
56 #if hg-server
56 #if hg-server
57 $ hg init server
57 $ hg init server
58 $ hg --config "lfs.usercache=$TESTTMP/servercache" -R server serve -d \
58 $ hg --config "lfs.usercache=$TESTTMP/servercache" -R server serve -d \
59 > -p $HGPORT --pid-file=hg.pid -A $TESTTMP/access.log -E $TESTTMP/errors.log
59 > -p $HGPORT --pid-file=hg.pid -A $TESTTMP/access.log -E $TESTTMP/errors.log
60 $ cat hg.pid >> $DAEMON_PIDS
60 $ cat hg.pid >> $DAEMON_PIDS
61 #endif
61 #endif
62
62
63 $ hg init repo1
63 $ hg init repo1
64 $ cd repo1
64 $ cd repo1
65 $ echo THIS-IS-LFS > a
65 $ echo THIS-IS-LFS > a
66 $ hg commit -m a -A a
66 $ hg commit -m a -A a
67
67
68 A push can be serviced directly from the usercache if it isn't in the local
68 A push can be serviced directly from the usercache if it isn't in the local
69 store.
69 store.
70
70
$ hg init ../repo2
$ mv .hg/store/lfs .hg/store/lfs_
$ hg push ../repo2 --debug
http auth: user foo, password ***
pushing to ../repo2
http auth: user foo, password ***
query 1; heads
searching for changes
1 total queries in *s (glob)
listing keys for "phases"
checking for updated bookmarks
listing keys for "bookmarks"
lfs: computing set of blobs to upload
Status: 200
Content-Length: 309 (git-server !)
Content-Length: 350 (hg-server !)
Content-Type: application/vnd.git-lfs+json
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
{
  "objects": [
    {
      "actions": {
        "upload": {
          "expires_at": "$ISO_8601_DATE_TIME$"
          "header": {
            "Accept": "application/vnd.git-lfs"
          }
          "href": "http://localhost:$HGPORT/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (git-server !)
          "href": "http://localhost:$HGPORT/.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (hg-server !)
        }
      }
      "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
      "size": 12
    }
  ]
  "transfer": "basic" (hg-server !)
}
lfs: uploading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
Status: 200 (git-server !)
Status: 201 (hg-server !)
Content-Length: 0
Content-Type: text/plain; charset=utf-8
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
lfs: uploaded 1 files (12 bytes)
1 changesets found
list of changesets:
99a7098854a3984a5c9eab0fc7a2906697b7cb5c
bundle2-output-bundle: "HG20", 4 parts total
bundle2-output-part: "replycaps" * bytes payload (glob)
bundle2-output-part: "check:heads" streamed payload
bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
bundle2-output-part: "phase-heads" 24 bytes payload
bundle2-input-bundle: with-transaction
bundle2-input-part: "replycaps" supported
bundle2-input-part: total payload size * (glob)
bundle2-input-part: "check:heads" supported
bundle2-input-part: total payload size 20
bundle2-input-part: "changegroup" (params: 1 mandatory) supported
adding changesets
add changeset 99a7098854a3
adding manifests
adding file changes
adding a revisions
added 1 changesets with 1 changes to 1 files
calling hook pretxnchangegroup.lfs: hgext.lfs.checkrequireslfs
bundle2-input-part: total payload size 617
bundle2-input-part: "phase-heads" supported
bundle2-input-part: total payload size 24
bundle2-input-bundle: 3 parts total
updating the branch cache
bundle2-output-bundle: "HG20", 1 parts total
bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
bundle2-input-bundle: no-transaction
bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported
bundle2-input-bundle: 0 parts total
listing keys for "phases"
$ mv .hg/store/lfs_ .hg/store/lfs

Clear the cache to force a download
$ rm -rf `hg config lfs.usercache`
$ cd ../repo2
$ hg update tip --debug
http auth: user foo, password ***
resolving manifests
branchmerge: False, force: False, partial: False
ancestor: 000000000000, local: 000000000000+, remote: 99a7098854a3
http auth: user foo, password ***
Status: 200
Content-Length: 311 (git-server !)
Content-Length: 352 (hg-server !)
Content-Type: application/vnd.git-lfs+json
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
{
  "objects": [
    {
      "actions": {
        "download": {
          "expires_at": "$ISO_8601_DATE_TIME$"
          "header": {
            "Accept": "application/vnd.git-lfs"
          }
          "href": "http://localhost:$HGPORT/*/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (glob)
        }
      }
      "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
      "size": 12
    }
  ]
  "transfer": "basic" (hg-server !)
}
lfs: downloading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
Status: 200
Content-Length: 12
Content-Type: text/plain; charset=utf-8 (git-server !)
Content-Type: application/octet-stream (hg-server !)
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
lfs: adding 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b to the usercache
lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
a: remote created -> g
getting a
lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
1 files updated, 0 files merged, 0 files removed, 0 files unresolved

When the server already has some blobs, `hg serve` doesn't offer to upload
them again. Note that lfs-test-server simply toggles the action to
'download'; the Batch API spec says it should omit the actions property
completely.

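For illustration, a minimal sketch of the spec-compliant behavior
(hypothetical helper, not lfs-test-server or `hg serve` code): when an upload
is requested for an object the server already stores, the response object
carries no "actions" property at all.

  def batch_upload_response(oid, size, server_has_blob, upload_href):
      # Per the Git LFS Batch API, omitting "actions" tells the client
      # there is nothing to transfer for this object.
      obj = {"oid": oid, "size": size}
      if not server_has_blob:
          obj["actions"] = {"upload": {"href": upload_href}}
      return obj
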
$ hg mv a b
$ echo ANOTHER-LARGE-FILE > c
$ echo ANOTHER-LARGE-FILE2 > d
$ hg commit -m b-and-c -A b c d
$ hg push ../repo1 --debug
http auth: user foo, password ***
pushing to ../repo1
http auth: user foo, password ***
query 1; heads
searching for changes
all remote heads known locally
listing keys for "phases"
checking for updated bookmarks
listing keys for "bookmarks"
listing keys for "bookmarks"
lfs: computing set of blobs to upload
Status: 200
Content-Length: 901 (git-server !)
Content-Length: 755 (hg-server !)
Content-Type: application/vnd.git-lfs+json
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
{
  "objects": [
    {
      "actions": { (git-server !)
        "download": { (git-server !)
          "expires_at": "$ISO_8601_DATE_TIME$" (git-server !)
          "header": { (git-server !)
            "Accept": "application/vnd.git-lfs" (git-server !)
          } (git-server !)
          "href": "http://localhost:$HGPORT/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (git-server !)
        } (git-server !)
      } (git-server !)
      "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
      "size": 12
    }
    {
      "actions": {
        "upload": {
          "expires_at": "$ISO_8601_DATE_TIME$"
          "header": {
            "Accept": "application/vnd.git-lfs"
          }
          "href": "http://localhost:$HGPORT/*/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19" (glob)
        }
      }
      "oid": "37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19"
      "size": 20
    }
    {
      "actions": {
        "upload": {
          "expires_at": "$ISO_8601_DATE_TIME$"
          "header": {
            "Accept": "application/vnd.git-lfs"
          }
          "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
        }
      }
      "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
      "size": 19
    }
  ]
  "transfer": "basic" (hg-server !)
}
lfs: need to transfer 2 objects (39 bytes)
lfs: uploading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
Status: 200 (git-server !)
Status: 201 (hg-server !)
Content-Length: 0
Content-Type: text/plain; charset=utf-8
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
lfs: uploading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
Status: 200 (git-server !)
Status: 201 (hg-server !)
Content-Length: 0
Content-Type: text/plain; charset=utf-8
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
lfs: uploaded 2 files (39 bytes)
1 changesets found
list of changesets:
dfca2c9e2ef24996aa61ba2abd99277d884b3d63
bundle2-output-bundle: "HG20", 5 parts total
bundle2-output-part: "replycaps" * bytes payload (glob)
bundle2-output-part: "check:phases" 24 bytes payload
bundle2-output-part: "check:heads" streamed payload
bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
bundle2-output-part: "phase-heads" 24 bytes payload
bundle2-input-bundle: with-transaction
bundle2-input-part: "replycaps" supported
bundle2-input-part: total payload size * (glob)
bundle2-input-part: "check:phases" supported
bundle2-input-part: total payload size 24
bundle2-input-part: "check:heads" supported
bundle2-input-part: total payload size 20
bundle2-input-part: "changegroup" (params: 1 mandatory) supported
adding changesets
add changeset dfca2c9e2ef2
adding manifests
adding file changes
adding b revisions
adding c revisions
adding d revisions
added 1 changesets with 3 changes to 3 files
bundle2-input-part: total payload size 1315
bundle2-input-part: "phase-heads" supported
bundle2-input-part: total payload size 24
bundle2-input-bundle: 4 parts total
updating the branch cache
bundle2-output-bundle: "HG20", 1 parts total
bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
bundle2-input-bundle: no-transaction
bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported
bundle2-input-bundle: 0 parts total
listing keys for "phases"

Clear the cache to force a download
$ rm -rf `hg config lfs.usercache`
$ hg --repo ../repo1 update tip --debug
http auth: user foo, password ***
resolving manifests
branchmerge: False, force: False, partial: False
ancestor: 99a7098854a3, local: 99a7098854a3+, remote: dfca2c9e2ef2
http auth: user foo, password ***
Status: 200
Content-Length: 608 (git-server !)
Content-Length: 670 (hg-server !)
Content-Type: application/vnd.git-lfs+json
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
{
  "objects": [
    {
      "actions": {
        "download": {
          "expires_at": "$ISO_8601_DATE_TIME$"
          "header": {
            "Accept": "application/vnd.git-lfs"
          }
          "href": "http://localhost:$HGPORT/*/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19" (glob)
        }
      }
      "oid": "37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19"
      "size": 20
    }
    {
      "actions": {
        "download": {
          "expires_at": "$ISO_8601_DATE_TIME$"
          "header": {
            "Accept": "application/vnd.git-lfs"
          }
          "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
        }
      }
      "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
      "size": 19
    }
  ]
  "transfer": "basic" (hg-server !)
}
lfs: need to transfer 2 objects (39 bytes)
lfs: downloading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
Status: 200
Content-Length: 20
Content-Type: text/plain; charset=utf-8 (git-server !)
Content-Type: application/octet-stream (hg-server !)
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
lfs: adding 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 to the usercache
lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
Status: 200
Content-Length: 19
Content-Type: text/plain; charset=utf-8 (git-server !)
Content-Type: application/octet-stream (hg-server !)
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
lfs: adding d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 to the usercache
lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
b: remote created -> g
getting b
lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
c: remote created -> g
getting c
lfs: found d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 in the local lfs store
d: remote created -> g
getting d
lfs: found 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 in the local lfs store
3 files updated, 0 files merged, 0 files removed, 0 files unresolved

Test a corrupt file download, but clear the cache first to force a download.
`hg serve` indicates a corrupt file without transferring it, unlike
lfs-test-server.

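The corruption check is plain content hashing; a minimal sketch of the
invariant both client and server enforce (hypothetical helper, not the
extension's code):

  import hashlib

  def blob_is_valid(oid, data):
      # An LFS blob is intact iff the sha256 of its bytes equals its oid.
      return hashlib.sha256(data).hexdigest() == oid
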
$ rm -rf `hg config lfs.usercache`
#if git-server
$ cp $TESTTMP/lfs-content/d1/1e/1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 blob
$ echo 'damage' > $TESTTMP/lfs-content/d1/1e/1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
#else
$ cp $TESTTMP/server/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 blob
$ echo 'damage' > $TESTTMP/server/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
#endif
$ rm ../repo1/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
$ rm ../repo1/*

TODO: give the proper error indication from `hg serve`

$ hg --repo ../repo1 update -C tip --debug
http auth: user foo, password ***
resolving manifests
branchmerge: False, force: True, partial: False
ancestor: dfca2c9e2ef2+, local: dfca2c9e2ef2+, remote: dfca2c9e2ef2
http auth: user foo, password ***
Status: 200
Content-Length: 311 (git-server !)
Content-Length: 183 (hg-server !)
Content-Type: application/vnd.git-lfs+json
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
{
  "objects": [
    {
      "actions": { (git-server !)
        "download": { (git-server !)
          "expires_at": "$ISO_8601_DATE_TIME$" (git-server !)
          "header": { (git-server !)
            "Accept": "application/vnd.git-lfs" (git-server !)
          } (git-server !)
          "href": "http://localhost:$HGPORT/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (git-server !)
        } (git-server !)
      "error": { (hg-server !)
        "code": 422 (hg-server !)
        "message": "The object is corrupt" (hg-server !)
      }
      "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
      "size": 19
    }
  ]
  "transfer": "basic" (hg-server !)
}
lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes) (git-server !)
Status: 200 (git-server !)
Content-Length: 7 (git-server !)
Content-Type: text/plain; charset=utf-8 (git-server !)
Date: $HTTP_DATE$ (git-server !)
abort: corrupt remote lfs object: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (git-server !)
abort: LFS server error for "c": Validation error! (hg-server !)
[255]

The corrupted blob is not added to the usercache or local store

$ test -f ../repo1/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
[1]
$ test -f `hg config lfs.usercache`/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
[1]
#if git-server
$ cp blob $TESTTMP/lfs-content/d1/1e/1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
#else
$ cp blob $TESTTMP/server/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
#endif

Test a corrupted file upload

$ echo 'another lfs blob' > b
$ hg ci -m 'another blob'
$ echo 'damage' > .hg/store/lfs/objects/e6/59058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0
$ hg push --debug ../repo1
http auth: user foo, password ***
pushing to ../repo1
http auth: user foo, password ***
query 1; heads
searching for changes
all remote heads known locally
listing keys for "phases"
checking for updated bookmarks
listing keys for "bookmarks"
listing keys for "bookmarks"
lfs: computing set of blobs to upload
Status: 200
Content-Length: 309 (git-server !)
Content-Length: 350 (hg-server !)
Content-Type: application/vnd.git-lfs+json
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
{
  "objects": [
    {
      "actions": {
        "upload": {
          "expires_at": "$ISO_8601_DATE_TIME$"
          "header": {
            "Accept": "application/vnd.git-lfs"
          }
          "href": "http://localhost:$HGPORT/*/e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0" (glob)
        }
      }
      "oid": "e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0"
      "size": 17
    }
  ]
  "transfer": "basic" (hg-server !)
}
lfs: uploading e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0 (17 bytes)
abort: detected corrupt lfs object: e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0
(run hg verify)
[255]

Archive will prefetch blobs in a group

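"In a group" means a single Batch API request for everything the operation
needs, rather than one request per file. A rough sketch of the idea
(hypothetical helper names; the real request also negotiates transfer
adapters):

  def prefetch(missing, batch_post):
      # missing: (oid, size) pairs absent from the local store; one POST
      # to the batch endpoint covers all of them.
      if missing:
          batch_post({"operation": "download",
                      "objects": [{"oid": o, "size": s} for (o, s) in missing]})
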
$ rm -rf .hg/store/lfs `hg config lfs.usercache`
$ hg archive --debug -r 1 ../archive
http auth: user foo, password ***
http auth: user foo, password ***
Status: 200
Content-Length: 905 (git-server !)
Content-Length: 988 (hg-server !)
Content-Type: application/vnd.git-lfs+json
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
{
  "objects": [
    {
      "actions": {
        "download": {
          "expires_at": "$ISO_8601_DATE_TIME$"
          "header": {
            "Accept": "application/vnd.git-lfs"
          }
          "href": "http://localhost:$HGPORT/*/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (glob)
        }
      }
      "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
      "size": 12
    }
    {
      "actions": {
        "download": {
          "expires_at": "$ISO_8601_DATE_TIME$"
          "header": {
            "Accept": "application/vnd.git-lfs"
          }
          "href": "http://localhost:$HGPORT/*/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19" (glob)
        }
      }
      "oid": "37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19"
      "size": 20
    }
    {
      "actions": {
        "download": {
          "expires_at": "$ISO_8601_DATE_TIME$"
          "header": {
            "Accept": "application/vnd.git-lfs"
          }
          "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
        }
      }
      "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
      "size": 19
    }
  ]
  "transfer": "basic" (hg-server !)
}
lfs: need to transfer 3 objects (51 bytes)
lfs: downloading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
Status: 200
Content-Length: 12
Content-Type: text/plain; charset=utf-8 (git-server !)
Content-Type: application/octet-stream (hg-server !)
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
lfs: adding 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b to the usercache
lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
lfs: downloading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
Status: 200
Content-Length: 20
Content-Type: text/plain; charset=utf-8 (git-server !)
Content-Type: application/octet-stream (hg-server !)
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
lfs: adding 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 to the usercache
lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
Status: 200
Content-Length: 19
Content-Type: text/plain; charset=utf-8 (git-server !)
Content-Type: application/octet-stream (hg-server !)
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
lfs: adding d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 to the usercache
lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
lfs: found d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 in the local lfs store
lfs: found 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 in the local lfs store
$ find ../archive | sort
../archive
../archive/.hg_archival.txt
../archive/a
../archive/b
../archive/c
../archive/d

Cat will prefetch blobs in a group

$ rm -rf .hg/store/lfs `hg config lfs.usercache`
$ hg cat --debug -r 1 a b c
http auth: user foo, password ***
http auth: user foo, password ***
Status: 200
Content-Length: 608 (git-server !)
Content-Length: 670 (hg-server !)
Content-Type: application/vnd.git-lfs+json
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
{
  "objects": [
    {
      "actions": {
        "download": {
          "expires_at": "$ISO_8601_DATE_TIME$"
          "header": {
            "Accept": "application/vnd.git-lfs"
          }
          "href": "http://localhost:$HGPORT/*/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (glob)
        }
      }
      "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
      "size": 12
    }
    {
      "actions": {
        "download": {
          "expires_at": "$ISO_8601_DATE_TIME$"
          "header": {
            "Accept": "application/vnd.git-lfs"
          }
          "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
        }
      }
      "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
      "size": 19
    }
  ]
  "transfer": "basic" (hg-server !)
}
lfs: need to transfer 2 objects (31 bytes)
lfs: downloading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
Status: 200
Content-Length: 12
Content-Type: text/plain; charset=utf-8 (git-server !)
Content-Type: application/octet-stream (hg-server !)
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
lfs: adding 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b to the usercache
lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
Status: 200
Content-Length: 19
Content-Type: text/plain; charset=utf-8 (git-server !)
Content-Type: application/octet-stream (hg-server !)
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
lfs: adding d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 to the usercache
lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
THIS-IS-LFS
lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
THIS-IS-LFS
lfs: found d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 in the local lfs store
ANOTHER-LARGE-FILE

Revert will prefetch blobs in a group

$ rm -rf .hg/store/lfs
$ rm -rf `hg config lfs.usercache`
$ rm *
$ hg revert --all -r 1 --debug
http auth: user foo, password ***
adding a
reverting b
reverting c
reverting d
http auth: user foo, password ***
Status: 200
Content-Length: 905 (git-server !)
Content-Length: 988 (hg-server !)
Content-Type: application/vnd.git-lfs+json
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
{
  "objects": [
    {
      "actions": {
        "download": {
          "expires_at": "$ISO_8601_DATE_TIME$"
          "header": {
            "Accept": "application/vnd.git-lfs"
          }
          "href": "http://localhost:$HGPORT/*/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (glob)
        }
      }
      "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
      "size": 12
    }
    {
      "actions": {
        "download": {
          "expires_at": "$ISO_8601_DATE_TIME$"
          "header": {
            "Accept": "application/vnd.git-lfs"
          }
          "href": "http://localhost:$HGPORT/*/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19" (glob)
        }
      }
      "oid": "37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19"
      "size": 20
    }
    {
      "actions": {
        "download": {
          "expires_at": "$ISO_8601_DATE_TIME$"
          "header": {
            "Accept": "application/vnd.git-lfs"
          }
          "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
        }
      }
      "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
      "size": 19
    }
  ]
  "transfer": "basic" (hg-server !)
}
lfs: need to transfer 3 objects (51 bytes)
lfs: downloading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
Status: 200
Content-Length: 12
Content-Type: text/plain; charset=utf-8 (git-server !)
Content-Type: application/octet-stream (hg-server !)
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
lfs: adding 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b to the usercache
lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
lfs: downloading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
Status: 200
Content-Length: 20
Content-Type: text/plain; charset=utf-8 (git-server !)
Content-Type: application/octet-stream (hg-server !)
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
lfs: adding 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 to the usercache
lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
Status: 200
Content-Length: 19
Content-Type: text/plain; charset=utf-8 (git-server !)
Content-Type: application/octet-stream (hg-server !)
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
lfs: adding d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 to the usercache
lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
lfs: found d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 in the local lfs store
lfs: found 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 in the local lfs store
lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store

Check the error message when the remote is missing a blob:

$ echo FFFFF > b
$ hg commit -m b -A b
$ echo FFFFF >> b
$ hg commit -m b b
$ rm -rf .hg/store/lfs
$ rm -rf `hg config lfs.usercache`
$ hg update -C '.^' --debug
http auth: user foo, password ***
resolving manifests
branchmerge: False, force: True, partial: False
ancestor: 62fdbaf221c6+, local: 62fdbaf221c6+, remote: ef0564edf47e
http auth: user foo, password ***
Status: 200
Content-Length: 308 (git-server !)
Content-Length: 186 (hg-server !)
Content-Type: application/vnd.git-lfs+json
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
{
  "objects": [
    {
      "actions": { (git-server !)
        "upload": { (git-server !)
          "expires_at": "$ISO_8601_DATE_TIME$" (git-server !)
          "header": { (git-server !)
            "Accept": "application/vnd.git-lfs" (git-server !)
          } (git-server !)
          "href": "http://localhost:$HGPORT/objects/8e6ea5f6c066b44a0efa43bcce86aea73f17e6e23f0663df0251e7524e140a13" (git-server !)
        } (git-server !)
      "error": { (hg-server !)
        "code": 404 (hg-server !)
        "message": "The object does not exist" (hg-server !)
      }
      "oid": "8e6ea5f6c066b44a0efa43bcce86aea73f17e6e23f0663df0251e7524e140a13"
      "size": 6
    }
  ]
  "transfer": "basic" (hg-server !)
}
abort: LFS server error for "b": The object does not exist!
[255]

Check error message when object does not exist:

$ cd $TESTTMP
$ hg init test && cd test
$ echo "[extensions]" >> .hg/hgrc
$ echo "lfs=" >> .hg/hgrc
$ echo "[lfs]" >> .hg/hgrc
$ echo "threshold=1" >> .hg/hgrc
$ echo a > a
$ hg add a
$ hg commit -m 'test'
$ echo aaaaa > a
$ hg commit -m 'largefile'
$ hg debugdata a 1 # verify this is not the file content but includes "oid", the LFS "pointer".
828 version https://git-lfs.github.com/spec/v1
835 version https://git-lfs.github.com/spec/v1
829 oid sha256:bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a
836 oid sha256:bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a
830 size 6
837 size 6
831 x-is-binary 0
838 x-is-binary 0
832 $ cd ..
839 $ cd ..
833 $ rm -rf `hg config lfs.usercache`
840 $ rm -rf `hg config lfs.usercache`
834
841
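The debugdata output above is the LFS pointer that is stored in the
filelog in place of the large content.  As a rough sketch (not part of
the test), a hypothetical parser for this key-value format:

  def parse_lfs_pointer(data):
      # Each pointer line is "key value"; the keys shown above are
      # version, oid (prefixed with the hash algorithm), size and
      # x-is-binary.
      fields = dict(line.split(' ', 1) for line in data.splitlines())
      assert fields['version'] == 'https://git-lfs.github.com/spec/v1'
      algo, oid = fields['oid'].split(':', 1)
      return algo, oid, int(fields['size'])
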
(Restart the server in a different location so it no longer has the content)

  $ $PYTHON $RUNTESTDIR/killdaemons.py $DAEMON_PIDS

#if hg-server
  $ cat $TESTTMP/access.log $TESTTMP/errors.log
  $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "PUT /.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b HTTP/1.1" 201 - (glob)
  $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "PUT /.hg/lfs/objects/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 HTTP/1.1" 201 - (glob)
  $LOCALIP - - [$LOGDATE$] "PUT /.hg/lfs/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 HTTP/1.1" 201 - (glob)
  $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 HTTP/1.1" 200 - (glob)
  $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
#endif
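
Each "POST /.git/info/lfs/objects/batch" line above is one round trip of
the git-lfs batch API, followed by "PUT"/"GET" requests for the object
content itself.  A rough Python 2 sketch (not part of the test, with
authentication omitted) of such a batch request; the base_url value is a
placeholder:

  import json
  import urllib2

  def batch(base_url, operation, oid, size):
      # base_url would be something like http://localhost:$HGPORT/.git/info/lfs;
      # the server answers with JSON documents like the ones shown earlier.
      body = json.dumps({'operation': operation,  # 'upload' or 'download'
                         'transfers': ['basic'],
                         'objects': [{'oid': oid, 'size': size}]})
      req = urllib2.Request(base_url + '/objects/batch', body,
                            {'Accept': 'application/vnd.git-lfs+json',
                             'Content-Type': 'application/vnd.git-lfs+json'})
      return json.load(urllib2.urlopen(req))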

  $ rm $DAEMON_PIDS
  $ mkdir $TESTTMP/lfs-server2
  $ cd $TESTTMP/lfs-server2
#if no-windows git-server
  $ lfs-test-server &> lfs-server.log &
  $ echo $! >> $DAEMON_PIDS
#endif

#if windows git-server
  $ $PYTHON $TESTTMP/spawn.py >> $DAEMON_PIDS
#endif

#if hg-server
  $ hg init server2
  $ hg --config "lfs.usercache=$TESTTMP/servercache2" -R server2 serve -d \
  > -p $HGPORT --pid-file=hg.pid -A $TESTTMP/access.log -E $TESTTMP/errors.log
  $ cat hg.pid >> $DAEMON_PIDS
#endif

  $ cd $TESTTMP
  $ hg --debug clone test test2
  http auth: user foo, password ***
  linked 6 files
  http auth: user foo, password ***
  updating to branch default
  resolving manifests
   branchmerge: False, force: False, partial: False
   ancestor: 000000000000, local: 000000000000+, remote: d2a338f184a8
  http auth: user foo, password ***
  Status: 200
  Content-Length: 308 (git-server !)
  Content-Length: 186 (hg-server !)
  Content-Type: application/vnd.git-lfs+json
  Date: $HTTP_DATE$
  Server: testing stub value (hg-server !)
  {
    "objects": [
      {
        "actions": { (git-server !)
          "upload": { (git-server !)
            "expires_at": "$ISO_8601_DATE_TIME$" (git-server !)
            "header": { (git-server !)
              "Accept": "application/vnd.git-lfs" (git-server !)
            } (git-server !)
            "href": "http://localhost:$HGPORT/objects/bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a" (git-server !)
          } (git-server !)
        "error": { (hg-server !)
          "code": 404 (hg-server !)
          "message": "The object does not exist" (hg-server !)
        }
        "oid": "bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a"
        "size": 6
      }
    ]
    "transfer": "basic" (hg-server !)
  }
  abort: LFS server error for "a": The object does not exist!
  [255]

  $ $PYTHON $RUNTESTDIR/killdaemons.py $DAEMON_PIDS