lfs: enable workers by default...
Matt Harbison
r44747:87167caa default
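
This changeset flips the default of the experimental worker knob so that LFS blob transfers go through Mercurial's worker pool unless explicitly disabled; the tests below opt back out explicitly. A minimal sketch of restoring the previous behaviour, using only the option name visible in the diff:

    [experimental]
    lfs.worker-enable = False
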
@@ -1,195 +1,192 b''
1 1 Prior to removing (EXPERIMENTAL)
2 2 --------------------------------
3 3
4 4 These things affect UI and/or behavior, and should probably be implemented (or
5 5 ruled out) prior to taking off the experimental shrinkwrap.
6 6
7 7 #. Finish the `hg convert` story
8 8
9 9 * Add an argument to accept a rules file to apply during conversion?
10 10 Currently `lfs.track` is the only way to affect the conversion.
11 11 * drop `lfs.track` config settings
12 12 * splice in `.hglfs` file for normal repo -> lfs conversions?
13 13
14 14 #. Stop uploading blobs when pushing between local repos
15 15
16 16 * Could probably hardlink directly to the other local repo's store
17 17 * Support inferring `lfs.url` for local push/pull (currently only supports
18 18 http)
19 19
20 20 #. Stop uploading blobs on strip/amend/histedit/etc.
21 21
22 22 * This seems to be a side effect of doing it for `hg bundle`, which probably
23 23 makes sense.
24 24
25 25 #. Handle a server with the extension loaded and a client without the extension
26 26 more gracefully.
27 27
28 28 * `changegroup3` is still experimental, and not enabled by default.
29 29 * Figure out how to `introduce LFS to the server repo
30 30 <https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-September/122281.html>`_.
31 31 See the TODO in test-lfs-serve.t.
32 32
33 33 #. Remove `lfs.retry` hack in client? This came from FB, but it's not clear why
34 34 it is/was needed.
35 35
36 36 #. `hg export` currently writes out the LFS blob. Should it write the pointer
37 37 instead?
38 38
39 39 * `hg diff` is similar, and probably shouldn't see the pointer file
40 40
41 #. `Fix https multiplexing, and re-enable workers
42 <https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-January/109916.html>`_.
43
44 41 #. Show to-be-applied rules with `hg files -r 'wdir()' 'set:lfs()'`
45 42
46 43 * `debugignore` can show file + line number, so a dedicated command could be
47 44 useful too.
48 45
49 46 #. Filesets, revsets and templates
50 47
51 48 * A dedicated revset should be faster than `'file(set:lfs())'`
52 49 * Attach `{lfsoid}` and `{lfspointer}` to `general keywords
53 50 <https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-January/110251.html>`_,
54 51 IFF the file is a blob
55 52 * Drop existing items that would be redundant with general support
56 53
57 54 #. Can `grep` avoid downloading most things?
58 55
59 56 * Add a command option to skip LFS blobs?
60 57
61 58 #. Add a flag that's visible in `hg files -v` to indicate external storage?
62 59
63 60 #. Server side issues
64 61
65 62 * Check for local disk space before allowing upload. (I've got a patch for
66 63 this.)
67 64 * Make sure the http codes used are appropriate.
68 65 * `Why is copying the Authorization header into the JSON payload necessary
69 66 <https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-April/116230.html>`_?
70 67 * `LFS-Authenticate` header support in client and server(?)
71 68
72 69 #. Add locks on cache and blob store
73 70
74 71 * This is complicated with a global store, and multiple potentially unrelated
75 72 local repositories that reference the same blob.
76 73 * Alternately, maybe just handle collisions when trying to create the same
77 74 blob in the store somehow.
78 75
79 76 #. Are proper file sizes reported in `debugupgraderepo`?
80 77
81 78 #. Finish prefetching files
82 79
83 80 * `-T {data}` (other than cat?)
84 81 * `verify`
85 82 * `grep`
86 83
87 84 #. Output cleanup
88 85
89 86 * Can we print the url when connecting to the blobstore? (A sudden
90 87 connection refused after pulling commits looks confusing.) Problem is,
91 88 'pushing to main url' is printed, and then lfs wants to upload before going
92 89 back to the main repo transfer, so then *that* could be confusing with
93 90 extra output. (This is kinda improved with 380f5131ee7b and 9f78d10742af.)
94 91
95 92 * Add more progress indicators? Uploading a large repo looks idle for a long
96 93 time while it scans for blobs in each outgoing revision.
97 94
98 95 * Print filenames instead of hashes in error messages
99 96
100 97 * subrepo aware paths, where necessary
101 98
102 99 * Is existing output at the right status/note/debug level?
103 100
104 101 #. Can `verify` be done without downloading everything?
105 102
106 103 * If we know that we are talking to an hg server, we can leverage the fact
107 104 that it validates in the Batch API portion, and skip d/l altogether. OTOH,
108 105 maybe we should download the files unconditionally for forensics. The
109 106 alternative is to define a custom transfer handler that definitively
110 107 verifies without transferring, and then cache those results. When verify
111 108 comes looking, look in the cache instead of actually opening the file and
112 109 processing it.
113 110
114 111 * Yuya has concerns about when blob fetch takes place vs when revlog is
115 112 verified. Since the visible hash matches the blob content, I don't think
116 113 there's a way to verify the pointer file that's actually stored in the
117 114 filelog (other than basic JSON checks). Full verification requires the
118 115 blob. See
119 116 https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-April/116133.html
120 117
121 118 * Opening a corrupt pointer file aborts. It probably shouldn't for verify.
122 119
123 120
124 121 Future ideas/features/polishing
125 122 -------------------------------
126 123
127 124 These aren't in any particular order, and are things that don't have obvious BC
128 125 concerns.
129 126
130 127 #. Garbage collection `(issue5790) <https://bz.mercurial-scm.org/show_bug.cgi?id=5790>`_
131 128
132 129 * This gets complicated because of the global cache, which may or may not
133 130 consist of hardlinks to the repo, and may be in use by other repos. (So
134 131 the gc may be pointless.)
135 132
136 133 #. `Compress blobs <https://github.com/git-lfs/git-lfs/issues/260>`_
137 134
138 135 * 700MB repo becomes 2.5GB with all lfs blobs
139 136 * What implications are there for filesystem paths that don't indicate
140 137 compression? (i.e. how to share with global cache and other local repos?)
141 138 * Probably needs to be stored under `.hg/store/lfs/zstd`, with a repo
142 139 requirement.
143 140 * Allow tuneable compression type and settings?
144 141 * Support compression over the wire if both sides understand the compression?
145 142 * `debugupgraderepo` to convert?
146 143 * Probably not worth supporting compressed and uncompressed concurrently
147 144
148 145 #. Determine things to upload with `readfast()
149 146 <https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-August/121315.html>`_
150 147
151 148 * Significantly faster when pushing an entire large repo to http.
152 149 * Causes test changes to fileset and templates; may need both this and
153 150 current methods of lookup.
154 151
155 152 #. Is a command to download everything needed? This would allow copying the
156 153 whole repository to a portable drive. Currently this can be accomplished by running
157 154 `hg verify`.
158 155
159 156 #. Stop reading the entire file into one buffer when passing through the filelog
160 157 interface
161 158
162 159 * `Requires major replumbing to core
163 160 <https://www.mercurial-scm.org/wiki/HandlingLargeFiles>`_
164 161
165 162 #. Keep corrupt files around in 'store/lfs/incoming' for forensics?
166 163
167 164 * Files should be downloaded to 'incoming', and moved to normal location when
168 165 done.
169 166
170 167 #. Client side path enhancements
171 168
172 169 * Support paths.default:lfs = ... style paths
173 170 * SSH -> https server inference
174 171
175 172 * https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-April/115416.html
176 173 * https://github.com/git-lfs/git-lfs/blob/master/docs/api/server-discovery.md#guessing-the-server
177 174
178 175 #. Server enhancements
179 176
180 177 * Add support for transfer quotas?
181 178 * Download should be able to send the file in chunks, without reading the
182 179 whole thing into memory
183 180 (https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-March/114584.html)
184 181 * Support for resuming transfers
185 182
186 183 #. Handle 3rd party server storage.
187 184
188 185 * Teach client to handle lfs `verify` action. This is needed after the
189 186 server instructs the client to upload the file to another server, in order
190 187 to tell the server that the upload completed.
191 188 * Teach the server to send redirects if configured, and process `verify`
192 189 requests.
193 190
194 191 #. `Is any hg-git work needed
195 192 <https://groups.google.com/d/msg/hg-git/XYNQuudteeM/ivt8gXoZAAAJ>`_?
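
As a hedged illustration of the fileset/template items above (the ``lfs()`` fileset and the ``lfs_files`` template keyword are defined in the extension source below; the exact template mapping syntax shown here is an assumption):

    $ hg files -r 'wdir()' 'set:lfs()'
    $ hg log -r . -T '{lfs_files % "{file} {lfsoid}\n"}'
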
@@ -1,426 +1,426 b''
1 1 # lfs - hash-preserving large file support using Git-LFS protocol
2 2 #
3 3 # Copyright 2017 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 """lfs - large file support (EXPERIMENTAL)
9 9
10 10 This extension allows large files to be tracked outside of the normal
11 11 repository storage and stored on a centralized server, similar to the
12 12 ``largefiles`` extension. The ``git-lfs`` protocol is used when
13 13 communicating with the server, so existing git infrastructure can be
14 14 harnessed. Even though the files are stored outside of the repository,
15 15 they are still integrity checked in the same manner as normal files.
16 16
17 17 The files stored outside of the repository are downloaded on demand,
18 18 which reduces the time to clone, and possibly the local disk usage.
19 19 This changes fundamental workflows in a DVCS, so careful thought
20 20 should be given before deploying it. :hg:`convert` can be used to
21 21 convert LFS repositories to normal repositories that no longer
22 22 require this extension, and do so without changing the commit hashes.
23 23 This allows the extension to be disabled if the centralized workflow
24 24 becomes burdensome. However, the pre and post convert clones will
25 25 not be able to communicate with each other unless the extension is
26 26 enabled on both.
27 27
28 28 To start a new repository, or to add LFS files to an existing one, just
29 29 create an ``.hglfs`` file as described below in the root directory of
30 30 the repository. Typically, this file should be put under version
31 31 control, so that the settings will propagate to other repositories with
32 32 push and pull. During any commit, Mercurial will consult this file to
33 33 determine if an added or modified file should be stored externally. The
34 34 type of storage depends on the characteristics of the file at each
35 35 commit. A file that is near a size threshold may switch back and forth
36 36 between LFS and normal storage, as needed.
37 37
38 38 Alternately, both normal repositories and largefile controlled
39 39 repositories can be converted to LFS by using :hg:`convert` and the
40 40 ``lfs.track`` config option described below. The ``.hglfs`` file
41 41 should then be created and added, to control subsequent LFS selection.
42 42 The hashes are also unchanged in this case. The LFS and non-LFS
43 43 repositories can be distinguished because the LFS repository will
44 44 abort any command if this extension is disabled.
45 45
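
A minimal sketch of that conversion path, assuming the bundled ``convert``
extension is also enabled and ``src`` / ``src-lfs`` are placeholder paths:

    $ hg --config extensions.convert= --config extensions.lfs= \
        --config 'lfs.track=size(">10MB")' convert src src-lfs
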
46 46 Committed LFS files are held locally, until the repository is pushed.
47 47 Prior to pushing the normal repository data, the LFS files that are
48 48 tracked by the outgoing commits are automatically uploaded to the
49 49 configured central server. No LFS files are transferred on
50 50 :hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
51 51 demand as they need to be read, if a cached copy cannot be found
52 52 locally. Both committing and downloading an LFS file will link the
53 53 file to a usercache, to speed up future access. See the `usercache`
54 54 config setting described below.
55 55
56 56 The extension reads its configuration from a versioned ``.hglfs``
57 57 configuration file found in the root of the working directory. The
58 58 ``.hglfs`` file uses the same syntax as all other Mercurial
59 59 configuration files. It uses a single section, ``[track]``.
60 60
61 61 The ``[track]`` section specifies which files are stored as LFS (or
62 62 not). Each line is keyed by a file pattern, with a predicate value.
63 63 The first file pattern match is used, so put more specific patterns
64 64 first. The available predicates are ``all()``, ``none()``, and
65 65 ``size()``. See "hg help filesets.size" for the latter.
66 66
67 67 Example versioned ``.hglfs`` file::
68 68
69 69 [track]
70 70 # No Makefile or python file, anywhere, will be LFS
71 71 **Makefile = none()
72 72 **.py = none()
73 73
74 74 **.zip = all()
75 75 **.exe = size(">1MB")
76 76
77 77 # Catchall for everything not matched above
78 78 ** = size(">10MB")
79 79
80 80 Configs::
81 81
82 82 [lfs]
83 83 # Remote endpoint. Multiple protocols are supported:
84 84 # - http(s)://user:pass@example.com/path
85 85 # git-lfs endpoint
86 86 # - file:///tmp/path
87 87 # local filesystem, usually for testing
88 88 # if unset, lfs will assume the remote repository also handles blob storage
89 89 # for http(s) URLs. Otherwise, lfs will prompt to set this when it must
90 90 # use this value.
91 91 # (default: unset)
92 92 url = https://example.com/repo.git/info/lfs
93 93
94 94 # Which files to track in LFS. Path tests are "**.extname" for file
95 95 # extensions, and "path:under/some/directory" for path prefix. Both
96 96 # are relative to the repository root.
97 97 # File size can be tested with the "size()" fileset, and tests can be
98 98 # joined with fileset operators. (See "hg help filesets.operators".)
99 99 #
100 100 # Some examples:
101 101 # - all() # everything
102 102 # - none() # nothing
103 103 # - size(">20MB") # larger than 20MB
104 104 # - !**.txt # anything not a *.txt file
105 105 # - **.zip | **.tar.gz | **.7z # some types of compressed files
106 106 # - path:bin # files under "bin" in the project root
107 107 # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
108 108 # | (path:bin & !path:/bin/README) | size(">1GB")
109 109 # (default: none())
110 110 #
111 111 # This is ignored if there is a tracked '.hglfs' file, and this setting
112 112 # will eventually be deprecated and removed.
113 113 track = size(">10M")
114 114
115 115 # how many times to retry before giving up on transferring an object
116 116 retry = 5
117 117
118 118 # the local directory to store lfs files for sharing across local clones.
119 119 # If not set, the cache is located in an OS specific cache location.
120 120 usercache = /path/to/global/cache
121 121 """
122 122
123 123 from __future__ import absolute_import
124 124
125 125 import sys
126 126
127 127 from mercurial.i18n import _
128 128
129 129 from mercurial import (
130 130 config,
131 131 context,
132 132 error,
133 133 exchange,
134 134 extensions,
135 135 exthelper,
136 136 filelog,
137 137 filesetlang,
138 138 localrepo,
139 139 minifileset,
140 140 node,
141 141 pycompat,
142 142 revlog,
143 143 scmutil,
144 144 templateutil,
145 145 util,
146 146 )
147 147
148 148 from mercurial.interfaces import repository
149 149
150 150 from . import (
151 151 blobstore,
152 152 wireprotolfsserver,
153 153 wrapper,
154 154 )
155 155
156 156 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
157 157 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
158 158 # be specifying the version(s) of Mercurial they are tested with, or
159 159 # leave the attribute unspecified.
160 160 testedwith = b'ships-with-hg-core'
161 161
162 162 eh = exthelper.exthelper()
163 163 eh.merge(wrapper.eh)
164 164 eh.merge(wireprotolfsserver.eh)
165 165
166 166 cmdtable = eh.cmdtable
167 167 configtable = eh.configtable
168 168 extsetup = eh.finalextsetup
169 169 uisetup = eh.finaluisetup
170 170 filesetpredicate = eh.filesetpredicate
171 171 reposetup = eh.finalreposetup
172 172 templatekeyword = eh.templatekeyword
173 173
174 174 eh.configitem(
175 175 b'experimental', b'lfs.serve', default=True,
176 176 )
177 177 eh.configitem(
178 178 b'experimental', b'lfs.user-agent', default=None,
179 179 )
180 180 eh.configitem(
181 181 b'experimental', b'lfs.disableusercache', default=False,
182 182 )
183 183 eh.configitem(
184 b'experimental', b'lfs.worker-enable', default=False,
184 b'experimental', b'lfs.worker-enable', default=True,
185 185 )
186 186
187 187 eh.configitem(
188 188 b'lfs', b'url', default=None,
189 189 )
190 190 eh.configitem(
191 191 b'lfs', b'usercache', default=None,
192 192 )
193 193 # Deprecated
194 194 eh.configitem(
195 195 b'lfs', b'threshold', default=None,
196 196 )
197 197 eh.configitem(
198 198 b'lfs', b'track', default=b'none()',
199 199 )
200 200 eh.configitem(
201 201 b'lfs', b'retry', default=5,
202 202 )
203 203
204 204 lfsprocessor = (
205 205 wrapper.readfromstore,
206 206 wrapper.writetostore,
207 207 wrapper.bypasscheckhash,
208 208 )
209 209
210 210
211 211 def featuresetup(ui, supported):
212 212 # don't die on seeing a repo with the lfs requirement
213 213 supported |= {b'lfs'}
214 214
215 215
216 216 @eh.uisetup
217 217 def _uisetup(ui):
218 218 localrepo.featuresetupfuncs.add(featuresetup)
219 219
220 220
221 221 @eh.reposetup
222 222 def _reposetup(ui, repo):
223 223 # Nothing to do with a remote repo
224 224 if not repo.local():
225 225 return
226 226
227 227 repo.svfs.lfslocalblobstore = blobstore.local(repo)
228 228 repo.svfs.lfsremoteblobstore = blobstore.remote(repo)
229 229
230 230 class lfsrepo(repo.__class__):
231 231 @localrepo.unfilteredmethod
232 232 def commitctx(self, ctx, error=False, origctx=None):
233 233 repo.svfs.options[b'lfstrack'] = _trackedmatcher(self)
234 234 return super(lfsrepo, self).commitctx(ctx, error, origctx=origctx)
235 235
236 236 repo.__class__ = lfsrepo
237 237
238 238 if b'lfs' not in repo.requirements:
239 239
240 240 def checkrequireslfs(ui, repo, **kwargs):
241 241 if b'lfs' in repo.requirements:
242 242 return 0
243 243
244 244 last = kwargs.get('node_last')
245 245 _bin = node.bin
246 246 if last:
247 247 s = repo.set(b'%n:%n', _bin(kwargs['node']), _bin(last))
248 248 else:
249 249 s = repo.set(b'%n', _bin(kwargs['node']))
250 250 match = repo._storenarrowmatch
251 251 for ctx in s:
252 252 # TODO: is there a way to just walk the files in the commit?
253 253 if any(
254 254 ctx[f].islfs() for f in ctx.files() if f in ctx and match(f)
255 255 ):
256 256 repo.requirements.add(b'lfs')
257 257 repo.features.add(repository.REPO_FEATURE_LFS)
258 258 repo._writerequirements()
259 259 repo.prepushoutgoinghooks.add(b'lfs', wrapper.prepush)
260 260 break
261 261
262 262 ui.setconfig(b'hooks', b'commit.lfs', checkrequireslfs, b'lfs')
263 263 ui.setconfig(
264 264 b'hooks', b'pretxnchangegroup.lfs', checkrequireslfs, b'lfs'
265 265 )
266 266 else:
267 267 repo.prepushoutgoinghooks.add(b'lfs', wrapper.prepush)
268 268
269 269
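For illustration only (not part of this change): once the hook above finds an LFS blob in a committed or incoming revision, the ``lfs`` requirement is written out, which can be spot-checked from the repository root with something like:

    $ grep lfs .hg/requires
    lfs
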
270 270 def _trackedmatcher(repo):
271 271 """Return a function (path, size) -> bool indicating whether or not to
272 272 track a given file with lfs."""
273 273 if not repo.wvfs.exists(b'.hglfs'):
274 274 # No '.hglfs' in wdir. Fallback to config for now.
275 275 trackspec = repo.ui.config(b'lfs', b'track')
276 276
277 277 # deprecated config: lfs.threshold
278 278 threshold = repo.ui.configbytes(b'lfs', b'threshold')
279 279 if threshold:
280 280 filesetlang.parse(trackspec) # make sure syntax errors are confined
281 281 trackspec = b"(%s) | size('>%d')" % (trackspec, threshold)
282 282
283 283 return minifileset.compile(trackspec)
284 284
285 285 data = repo.wvfs.tryread(b'.hglfs')
286 286 if not data:
287 287 return lambda p, s: False
288 288
289 289 # Parse errors here will abort with a message that points to the .hglfs file
290 290 # and line number.
291 291 cfg = config.config()
292 292 cfg.parse(b'.hglfs', data)
293 293
294 294 try:
295 295 rules = [
296 296 (minifileset.compile(pattern), minifileset.compile(rule))
297 297 for pattern, rule in cfg.items(b'track')
298 298 ]
299 299 except error.ParseError as e:
300 300 # The original exception gives no indicator that the error is in the
301 301 # .hglfs file, so add that.
302 302
303 303 # TODO: See if the line number of the file can be made available.
304 304 raise error.Abort(_(b'parse error in .hglfs: %s') % e)
305 305
306 306 def _match(path, size):
307 307 for pat, rule in rules:
308 308 if pat(path, size):
309 309 return rule(path, size)
310 310
311 311 return False
312 312
313 313 return _match
314 314
315 315
316 316 # Called by remotefilelog
317 317 def wrapfilelog(filelog):
318 318 wrapfunction = extensions.wrapfunction
319 319
320 320 wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
321 321 wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
322 322 wrapfunction(filelog, 'size', wrapper.filelogsize)
323 323
324 324
325 325 @eh.wrapfunction(localrepo, b'resolverevlogstorevfsoptions')
326 326 def _resolverevlogstorevfsoptions(orig, ui, requirements, features):
327 327 opts = orig(ui, requirements, features)
328 328 for name, module in extensions.extensions(ui):
329 329 if module is sys.modules[__name__]:
330 330 if revlog.REVIDX_EXTSTORED in opts[b'flagprocessors']:
331 331 msg = (
332 332 _(b"cannot register multiple processors on flag '%#x'.")
333 333 % revlog.REVIDX_EXTSTORED
334 334 )
335 335 raise error.Abort(msg)
336 336
337 337 opts[b'flagprocessors'][revlog.REVIDX_EXTSTORED] = lfsprocessor
338 338 break
339 339
340 340 return opts
341 341
342 342
343 343 @eh.extsetup
344 344 def _extsetup(ui):
345 345 wrapfilelog(filelog.filelog)
346 346
347 347 context.basefilectx.islfs = wrapper.filectxislfs
348 348
349 349 scmutil.fileprefetchhooks.add(b'lfs', wrapper._prefetchfiles)
350 350
351 351 # Make bundle choose changegroup3 instead of changegroup2. This affects
352 352 # "hg bundle" command. Note: it does not cover all bundle formats like
353 353 # "packed1". Using "packed1" with lfs will likely cause trouble.
354 354 exchange._bundlespeccontentopts[b"v2"][b"cg.version"] = b"03"
355 355
356 356
357 357 @eh.filesetpredicate(b'lfs()')
358 358 def lfsfileset(mctx, x):
359 359 """File that uses LFS storage."""
360 360 # i18n: "lfs" is a keyword
361 361 filesetlang.getargs(x, 0, 0, _(b"lfs takes no arguments"))
362 362 ctx = mctx.ctx
363 363
364 364 def lfsfilep(f):
365 365 return wrapper.pointerfromctx(ctx, f, removed=True) is not None
366 366
367 367 return mctx.predicate(lfsfilep, predrepr=b'<lfs>')
368 368
369 369
370 370 @eh.templatekeyword(b'lfs_files', requires={b'ctx'})
371 371 def lfsfiles(context, mapping):
372 372 """List of strings. All files modified, added, or removed by this
373 373 changeset."""
374 374 ctx = context.resource(mapping, b'ctx')
375 375
376 376 pointers = wrapper.pointersfromctx(ctx, removed=True) # {path: pointer}
377 377 files = sorted(pointers.keys())
378 378
379 379 def pointer(v):
380 380 # In the file spec, version is first and the other keys are sorted.
381 381 sortkeyfunc = lambda x: (x[0] != b'version', x)
382 382 items = sorted(pycompat.iteritems(pointers[v]), key=sortkeyfunc)
383 383 return util.sortdict(items)
384 384
385 385 makemap = lambda v: {
386 386 b'file': v,
387 387 b'lfsoid': pointers[v].oid() if pointers[v] else None,
388 388 b'lfspointer': templateutil.hybriddict(pointer(v)),
389 389 }
390 390
391 391 # TODO: make the separator ', '?
392 392 f = templateutil._showcompatlist(context, mapping, b'lfs_file', files)
393 393 return templateutil.hybrid(f, files, makemap, pycompat.identity)
394 394
395 395
396 396 @eh.command(
397 397 b'debuglfsupload',
398 398 [(b'r', b'rev', [], _(b'upload large files introduced by REV'))],
399 399 )
400 400 def debuglfsupload(ui, repo, **opts):
401 401 """upload lfs blobs added by the working copy parent or given revisions"""
402 402 revs = opts.get('rev', [])
403 403 pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
404 404 wrapper.uploadblobs(repo, pointers)
405 405
406 406
407 407 @eh.wrapcommand(
408 408 b'verify',
409 409 opts=[(b'', b'no-lfs', None, _(b'skip missing lfs blob content'))],
410 410 )
411 411 def verify(orig, ui, repo, **opts):
412 412 skipflags = repo.ui.configint(b'verify', b'skipflags')
413 413 no_lfs = opts.pop('no_lfs')
414 414
415 415 if skipflags:
416 416 # --lfs overrides the config bit, if set.
417 417 if no_lfs is False:
418 418 skipflags &= ~repository.REVISION_FLAG_EXTSTORED
419 419 else:
420 420 skipflags = 0
421 421
422 422 if no_lfs is True:
423 423 skipflags |= repository.REVISION_FLAG_EXTSTORED
424 424
425 425 with ui.configoverride({(b'verify', b'skipflags'): skipflags}):
426 426 return orig(ui, repo, **opts)
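
The module above also defines two entry points exercised by the tests that follow; a short, illustrative usage sketch:

    $ hg debuglfsupload -r .   # upload blobs introduced by the given revision
    $ hg verify --no-lfs       # verify without requiring missing LFS blob content
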
@@ -1,512 +1,513 b''
1 1 #require serve no-reposimplestore no-chg
2 2
3 3 $ cat >> $HGRCPATH <<EOF
4 4 > [extensions]
5 5 > lfs=
6 6 > [lfs]
7 7 > track=all()
8 8 > [web]
9 9 > push_ssl = False
10 10 > allow-push = *
11 11 > EOF
12 12
13 13 Serving LFS files can experimentally be turned off. The long term solution is
14 14 to support the 'verify' action in both client and server, so that the server can
15 15 tell the client to store files elsewhere.
16 16
17 17 $ hg init server
18 18 $ hg --config "lfs.usercache=$TESTTMP/servercache" \
19 19 > --config experimental.lfs.serve=False -R server serve -d \
20 > --config experimental.lfs.worker-enable=False \
20 21 > -p $HGPORT --pid-file=hg.pid -A $TESTTMP/access.log -E $TESTTMP/errors.log
21 22 $ cat hg.pid >> $DAEMON_PIDS
22 23
23 24 Uploads fail...
24 25
25 26 $ hg init client
26 27 $ echo 'this-is-an-lfs-file' > client/lfs.bin
27 28 $ hg -R client ci -Am 'initial commit'
28 29 adding lfs.bin
29 30 $ hg -R client push http://localhost:$HGPORT
30 31 pushing to http://localhost:$HGPORT/
31 32 searching for changes
32 33 abort: LFS HTTP error: HTTP Error 400: no such method: .git!
33 34 (check that lfs serving is enabled on http://localhost:$HGPORT/.git/info/lfs and "upload" is supported)
34 35 [255]
35 36
36 37 ... so do a local push to make the data available. Remove the blob from the
37 38 default cache, so it attempts to download.
38 39 $ hg --config "lfs.usercache=$TESTTMP/servercache" \
39 40 > --config "lfs.url=null://" \
40 41 > -R client push -q server
41 42 $ mv `hg config lfs.usercache` $TESTTMP/servercache
42 43
43 44 Downloads fail...
44 45
45 46 $ hg clone http://localhost:$HGPORT httpclone
46 47 (remote is using large file support (lfs); lfs will be enabled for this repository)
47 48 requesting all changes
48 49 adding changesets
49 50 adding manifests
50 51 adding file changes
51 52 added 1 changesets with 1 changes to 1 files
52 53 new changesets 525251863cad
53 54 updating to branch default
54 55 abort: LFS HTTP error: HTTP Error 400: no such method: .git!
55 56 (check that lfs serving is enabled on http://localhost:$HGPORT/.git/info/lfs and "download" is supported)
56 57 [255]
57 58
58 59 $ "$PYTHON" $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
59 60
60 61 $ cat $TESTTMP/access.log $TESTTMP/errors.log
61 62 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
62 63 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D525251863cad618e55d483555f3d00a2ca99597e x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
63 64 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
64 65 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
65 66 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 400 - (glob)
66 67 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
67 68 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
68 69 $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
69 70 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 400 - (glob)
70 71
71 72 $ rm -f $TESTTMP/access.log $TESTTMP/errors.log
72 73 $ hg --config "lfs.usercache=$TESTTMP/servercache" -R server serve -d \
73 74 > -p $HGPORT --pid-file=hg.pid --prefix=subdir/mount/point \
74 75 > -A $TESTTMP/access.log -E $TESTTMP/errors.log
75 76 $ cat hg.pid >> $DAEMON_PIDS
76 77
77 78 Reasonable hint for a misconfigured blob server
78 79
79 80 $ hg -R httpclone update default --config lfs.url=http://localhost:$HGPORT/missing
80 81 abort: LFS HTTP error: HTTP Error 404: Not Found!
81 82 (the "lfs.url" config may be used to override http://localhost:$HGPORT/missing)
82 83 [255]
83 84
84 85 $ hg -R httpclone update default --config lfs.url=http://localhost:$HGPORT2/missing
85 86 abort: LFS error: *onnection *refused*! (glob) (?)
86 87 abort: LFS error: $EADDRNOTAVAIL$! (glob) (?)
87 88 abort: LFS error: No route to host! (?)
88 89 (the "lfs.url" config may be used to override http://localhost:$HGPORT2/missing)
89 90 [255]
90 91
91 92 Blob URIs are correct when --prefix is used
92 93
93 94 $ hg clone --debug http://localhost:$HGPORT/subdir/mount/point cloned2
94 95 using http://localhost:$HGPORT/subdir/mount/point
95 96 sending capabilities command
96 97 (remote is using large file support (lfs); lfs will be enabled for this repository)
97 98 query 1; heads
98 99 sending batch command
99 100 requesting all changes
100 101 sending getbundle command
101 102 bundle2-input-bundle: with-transaction
102 103 bundle2-input-part: "changegroup" (params: 1 mandatory 1 advisory) supported
103 104 adding changesets
104 105 add changeset 525251863cad
105 106 adding manifests
106 107 adding file changes
107 108 adding lfs.bin revisions
108 109 bundle2-input-part: total payload size 648
109 110 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
110 111 bundle2-input-part: "phase-heads" supported
111 112 bundle2-input-part: total payload size 24
112 113 bundle2-input-part: "cache:rev-branch-cache" (advisory) supported
113 114 bundle2-input-part: total payload size 39
114 115 bundle2-input-bundle: 4 parts total
115 116 checking for updated bookmarks
116 117 updating the branch cache
117 118 added 1 changesets with 1 changes to 1 files
118 119 new changesets 525251863cad
119 120 updating to branch default
120 121 resolving manifests
121 122 branchmerge: False, force: False, partial: False
122 123 ancestor: 000000000000, local: 000000000000+, remote: 525251863cad
123 124 lfs: assuming remote store: http://localhost:$HGPORT/subdir/mount/point/.git/info/lfs
124 125 Status: 200
125 126 Content-Length: 371
126 127 Content-Type: application/vnd.git-lfs+json
127 128 Date: $HTTP_DATE$
128 129 Server: testing stub value
129 130 {
130 131 "objects": [
131 132 {
132 133 "actions": {
133 134 "download": {
134 135 "expires_at": "$ISO_8601_DATE_TIME$"
135 136 "header": {
136 137 "Accept": "application/vnd.git-lfs"
137 138 }
138 139 "href": "http://localhost:$HGPORT/subdir/mount/point/.hg/lfs/objects/f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e"
139 140 }
140 141 }
141 142 "oid": "f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e"
142 143 "size": 20
143 144 }
144 145 ]
145 146 "transfer": "basic"
146 147 }
147 148 lfs: downloading f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e (20 bytes)
148 149 Status: 200
149 150 Content-Length: 20
150 151 Content-Type: application/octet-stream
151 152 Date: $HTTP_DATE$
152 153 Server: testing stub value
153 154 lfs: adding f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e to the usercache
154 155 lfs: processed: f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e
155 156 lfs: downloaded 1 files (20 bytes)
156 157 lfs.bin: remote created -> g
157 158 getting lfs.bin
158 159 lfs: found f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e in the local lfs store
159 160 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
160 161 (sent 3 HTTP requests and * bytes; received * bytes in responses) (glob)
161 162
162 163 $ "$PYTHON" $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
163 164
164 165 $ cat $TESTTMP/access.log $TESTTMP/errors.log
165 166 $LOCALIP - - [$LOGDATE$] "POST /missing/objects/batch HTTP/1.1" 404 - (glob)
166 167 $LOCALIP - - [$LOGDATE$] "GET /subdir/mount/point?cmd=capabilities HTTP/1.1" 200 - (glob)
167 168 $LOCALIP - - [$LOGDATE$] "GET /subdir/mount/point?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
168 169 $LOCALIP - - [$LOGDATE$] "GET /subdir/mount/point?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
169 170 $LOCALIP - - [$LOGDATE$] "POST /subdir/mount/point/.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
170 171 $LOCALIP - - [$LOGDATE$] "GET /subdir/mount/point/.hg/lfs/objects/f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e HTTP/1.1" 200 - (glob)
171 172
172 173 Blobs that already exist in the usercache are linked into the repo store, even
173 174 though the client doesn't send the blob.
174 175
175 176 $ hg init server2
176 177 $ hg --config "lfs.usercache=$TESTTMP/servercache" -R server2 serve -d \
177 178 > -p $HGPORT --pid-file=hg.pid \
178 179 > -A $TESTTMP/access.log -E $TESTTMP/errors.log
179 180 $ cat hg.pid >> $DAEMON_PIDS
180 181
181 182 $ hg --config "lfs.usercache=$TESTTMP/servercache" -R cloned2 --debug \
182 183 > push http://localhost:$HGPORT | grep '^[{} ]'
183 184 {
184 185 "objects": [
185 186 {
186 187 "oid": "f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e"
187 188 "size": 20
188 189 }
189 190 ]
190 191 "transfer": "basic"
191 192 }
192 193 $ find server2/.hg/store/lfs/objects | sort
193 194 server2/.hg/store/lfs/objects
194 195 server2/.hg/store/lfs/objects/f0
195 196 server2/.hg/store/lfs/objects/f0/3217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e
196 197 $ "$PYTHON" $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
197 198 $ cat $TESTTMP/errors.log
198 199
199 200 $ cat >> $TESTTMP/lfsstoreerror.py <<EOF
200 201 > import errno
201 202 > from hgext.lfs import blobstore
202 203 >
203 204 > _numverifies = 0
204 205 > _readerr = True
205 206 >
206 207 > def reposetup(ui, repo):
207 208 > # Nothing to do with a remote repo
208 209 > if not repo.local():
209 210 > return
210 211 >
211 212 > store = repo.svfs.lfslocalblobstore
212 213 > class badstore(store.__class__):
213 214 > def download(self, oid, src, contentlength):
214 215 > '''Called in the server to handle reading from the client in a
215 216 > PUT request.'''
216 217 > origread = src.read
217 218 > def _badread(nbytes):
218 219 > # Simulate bad data/checksum failure from the client
219 220 > return b'0' * len(origread(nbytes))
220 221 > src.read = _badread
221 222 > super(badstore, self).download(oid, src, contentlength)
222 223 >
223 224 > def _read(self, vfs, oid, verify):
224 225 > '''Called in the server to read data for a GET request, and then
225 226 > calls self._verify() on it before returning.'''
226 227 > global _readerr
227 228 > # One time simulation of a read error
228 229 > if _readerr:
229 230 > _readerr = False
230 231 > raise IOError(errno.EIO, r'%s: I/O error' % oid.decode("utf-8"))
231 232 > # Simulate corrupt content on client download
232 233 > blobstore._verify(oid, b'dummy content')
233 234 >
234 235 > def verify(self, oid):
235 236 > '''Called in the server to populate the Batch API response,
236 237 > letting the client re-upload if the file is corrupt.'''
237 238 > # Fail verify in Batch API for one clone command and one push
238 239 > # command with an IOError. Then let it through to access other
239 240 > # functions. Checksum failure is tested elsewhere.
240 241 > global _numverifies
241 242 > _numverifies += 1
242 243 > if _numverifies <= 2:
243 244 > raise IOError(errno.EIO, r'%s: I/O error' % oid.decode("utf-8"))
244 245 > return super(badstore, self).verify(oid)
245 246 >
246 247 > store.__class__ = badstore
247 248 > EOF
248 249
249 250 $ rm -rf `hg config lfs.usercache`
250 251 $ rm -f $TESTTMP/access.log $TESTTMP/errors.log
251 252 $ hg --config "lfs.usercache=$TESTTMP/servercache" \
252 253 > --config extensions.lfsstoreerror=$TESTTMP/lfsstoreerror.py \
253 254 > -R server serve -d \
254 255 > -p $HGPORT1 --pid-file=hg.pid -A $TESTTMP/access.log -E $TESTTMP/errors.log
255 256 $ cat hg.pid >> $DAEMON_PIDS
256 257
257 258 Test an I/O error in localstore.verify() (Batch API) with GET
258 259
259 260 $ hg clone http://localhost:$HGPORT1 httpclone2
260 261 (remote is using large file support (lfs); lfs will be enabled for this repository)
261 262 requesting all changes
262 263 adding changesets
263 264 adding manifests
264 265 adding file changes
265 266 added 1 changesets with 1 changes to 1 files
266 267 new changesets 525251863cad
267 268 updating to branch default
268 269 abort: LFS server error for "lfs.bin": Internal server error!
269 270 [255]
270 271
271 272 Test an I/O error in localstore.verify() (Batch API) with PUT
272 273
273 274 $ echo foo > client/lfs.bin
274 275 $ hg -R client ci -m 'mod lfs'
275 276 $ hg -R client push http://localhost:$HGPORT1
276 277 pushing to http://localhost:$HGPORT1/
277 278 searching for changes
278 279 abort: LFS server error for "unknown": Internal server error!
279 280 [255]
280 281 TODO: figure out how to associate the file name in the error above
281 282
282 283 Test a bad checksum sent by the client in the transfer API
283 284
284 285 $ hg -R client push http://localhost:$HGPORT1
285 286 pushing to http://localhost:$HGPORT1/
286 287 searching for changes
287 288 abort: LFS HTTP error: HTTP Error 422: corrupt blob (oid=b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c, action=upload)!
288 289 [255]
289 290
290 291 $ echo 'test lfs file' > server/lfs3.bin
291 292 $ hg --config experimental.lfs.disableusercache=True \
292 293 > -R server ci -Aqm 'another lfs file'
293 294 $ hg -R client pull -q http://localhost:$HGPORT1
294 295
295 296 Test an I/O error during the processing of the GET request
296 297
297 298 $ hg --config lfs.url=http://localhost:$HGPORT1/.git/info/lfs \
298 299 > -R client update -r tip
299 300 abort: LFS HTTP error: HTTP Error 500: Internal Server Error (oid=276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d, action=download)!
300 301 [255]
301 302
302 303 Test a checksum failure during the processing of the GET request
303 304
304 305 $ hg --config lfs.url=http://localhost:$HGPORT1/.git/info/lfs \
305 306 > -R client update -r tip
306 307 abort: LFS HTTP error: HTTP Error 422: corrupt blob (oid=276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d, action=download)!
307 308 [255]
308 309
309 310 $ "$PYTHON" $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
310 311
311 312 $ cat $TESTTMP/access.log
312 313 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
313 314 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
314 315 $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
315 316 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
316 317 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
317 318 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D392c05922088bacf8e68a6939b480017afbf245d x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
318 319 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
319 320 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
320 321 $LOCALIP - - [$LOGDATE$] "GET /?cmd=branchmap HTTP/1.1" 200 - x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
321 322 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
322 323 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
323 324 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
324 325 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D392c05922088bacf8e68a6939b480017afbf245d x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
325 326 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
326 327 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
327 328 $LOCALIP - - [$LOGDATE$] "GET /?cmd=branchmap HTTP/1.1" 200 - x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
328 329 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
329 330 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
330 331 $LOCALIP - - [$LOGDATE$] "PUT /.hg/lfs/objects/b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c HTTP/1.1" 422 - (glob)
331 332 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
332 333 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D392c05922088bacf8e68a6939b480017afbf245d x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
333 334 $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=525251863cad618e55d483555f3d00a2ca99597e&heads=506bf3d83f78c54b89e81c6411adee19fdf02156+525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
334 335 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
335 336 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d HTTP/1.1" 500 - (glob)
336 337 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
337 338 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d HTTP/1.1" 422 - (glob)
338 339
339 340 $ grep -v ' File "' $TESTTMP/errors.log
340 341 $LOCALIP - - [$ERRDATE$] HG error: Exception happened while processing request '/.git/info/lfs/objects/batch': (glob)
341 342 $LOCALIP - - [$ERRDATE$] HG error: Traceback (most recent call last): (glob)
342 343 $LOCALIP - - [$ERRDATE$] HG error: verifies = store.verify(oid) (glob)
343 344 $LOCALIP - - [$ERRDATE$] HG error: raise IOError(errno.EIO, r'%s: I/O error' % oid.decode("utf-8")) (glob)
344 345 $LOCALIP - - [$ERRDATE$] HG error: *Error: [Errno *] f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e: I/O error (glob)
345 346 $LOCALIP - - [$ERRDATE$] HG error: (glob)
346 347 $LOCALIP - - [$ERRDATE$] HG error: Exception happened while processing request '/.git/info/lfs/objects/batch': (glob)
347 348 $LOCALIP - - [$ERRDATE$] HG error: Traceback (most recent call last): (glob)
348 349 $LOCALIP - - [$ERRDATE$] HG error: verifies = store.verify(oid) (glob)
349 350 $LOCALIP - - [$ERRDATE$] HG error: raise IOError(errno.EIO, r'%s: I/O error' % oid.decode("utf-8")) (glob)
350 351 $LOCALIP - - [$ERRDATE$] HG error: *Error: [Errno *] b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c: I/O error (glob)
351 352 $LOCALIP - - [$ERRDATE$] HG error: (glob)
352 353 $LOCALIP - - [$ERRDATE$] HG error: Exception happened while processing request '/.hg/lfs/objects/b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c': (glob)
353 354 $LOCALIP - - [$ERRDATE$] HG error: Traceback (most recent call last): (glob)
354 355 $LOCALIP - - [$ERRDATE$] HG error: localstore.download(oid, req.bodyfh, req.headers[b'Content-Length'])
355 356 $LOCALIP - - [$ERRDATE$] HG error: super(badstore, self).download(oid, src, contentlength)
356 357 $LOCALIP - - [$ERRDATE$] HG error: raise LfsCorruptionError( (glob) (py38 !)
357 358 $LOCALIP - - [$ERRDATE$] HG error: _(b'corrupt remote lfs object: %s') % oid (glob) (no-py38 !)
358 359 $LOCALIP - - [$ERRDATE$] HG error: LfsCorruptionError: corrupt remote lfs object: b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c (no-py3 !)
359 360 $LOCALIP - - [$ERRDATE$] HG error: hgext.lfs.blobstore.LfsCorruptionError: corrupt remote lfs object: b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c (py3 !)
360 361 $LOCALIP - - [$ERRDATE$] HG error: (glob)
361 362 $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d': (glob)
362 363 Traceback (most recent call last):
363 364 self.do_write()
364 365 self.do_hgweb()
365 366 for chunk in self.server.application(env, self._start_response):
366 367 for r in self._runwsgi(req, res, repo):
367 368 handled = wireprotoserver.handlewsgirequest( (py38 !)
368 369 return _processbasictransfer( (py38 !)
369 370 rctx, req, res, self.check_perm (no-py38 !)
370 371 return func(*(args + a), **kw) (no-py3 !)
371 372 rctx.repo, req, res, lambda perm: checkperm(rctx, req, perm) (no-py38 !)
372 373 res.setbodybytes(localstore.read(oid))
373 374 blob = self._read(self.vfs, oid, verify)
374 375 raise IOError(errno.EIO, r'%s: I/O error' % oid.decode("utf-8"))
375 376 *Error: [Errno *] 276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d: I/O error (glob)
376 377
377 378 $LOCALIP - - [$ERRDATE$] HG error: Exception happened while processing request '/.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d': (glob)
378 379 $LOCALIP - - [$ERRDATE$] HG error: Traceback (most recent call last): (glob)
379 380 $LOCALIP - - [$ERRDATE$] HG error: res.setbodybytes(localstore.read(oid)) (glob)
380 381 $LOCALIP - - [$ERRDATE$] HG error: blob = self._read(self.vfs, oid, verify) (glob)
381 382 $LOCALIP - - [$ERRDATE$] HG error: blobstore._verify(oid, b'dummy content') (glob)
382 383 $LOCALIP - - [$ERRDATE$] HG error: raise LfsCorruptionError( (glob) (py38 !)
383 384 $LOCALIP - - [$ERRDATE$] HG error: hint=_(b'run hg verify'), (glob) (no-py38 !)
384 385 $LOCALIP - - [$ERRDATE$] HG error: LfsCorruptionError: detected corrupt lfs object: 276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d (no-py3 !)
385 386 $LOCALIP - - [$ERRDATE$] HG error: hgext.lfs.blobstore.LfsCorruptionError: detected corrupt lfs object: 276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d (py3 !)
386 387 $LOCALIP - - [$ERRDATE$] HG error: (glob)
387 388
388 389 Basic Authorization headers are returned by the Batch API, and sent back with
389 390 the GET/PUT request.
390 391
391 392 $ rm -f $TESTTMP/access.log $TESTTMP/errors.log
392 393
393 394 $ cat >> $HGRCPATH << EOF
394 395 > [experimental]
395 396 > lfs.disableusercache = True
396 397 > [auth]
397 398 > l.schemes=http
398 399 > l.prefix=lo
399 400 > l.username=user
400 401 > l.password=pass
401 402 > EOF
402 403
403 404 $ hg --config extensions.x=$TESTDIR/httpserverauth.py \
404 405 > -R server serve -d -p $HGPORT1 --pid-file=hg.pid \
405 406 > -A $TESTTMP/access.log -E $TESTTMP/errors.log
406 407 $ mv hg.pid $DAEMON_PIDS
407 408
408 409 $ hg clone --debug http://localhost:$HGPORT1 auth_clone | egrep '^[{}]| '
409 410 {
410 411 "objects": [
411 412 {
412 413 "actions": {
413 414 "download": {
414 415 "expires_at": "$ISO_8601_DATE_TIME$"
415 416 "header": {
416 417 "Accept": "application/vnd.git-lfs"
417 418 "Authorization": "Basic dXNlcjpwYXNz"
418 419 }
419 420 "href": "http://localhost:$HGPORT1/.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d"
420 421 }
421 422 }
422 423 "oid": "276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d"
423 424 "size": 14
424 425 }
425 426 ]
426 427 "transfer": "basic"
427 428 }
428 429
429 430 $ echo 'another blob' > auth_clone/lfs.blob
430 431 $ hg -R auth_clone ci -Aqm 'add blob'
431 432
432 433 $ cat > use_digests.py << EOF
433 434 > from mercurial import (
434 435 > exthelper,
435 436 > url,
436 437 > )
437 438 >
438 439 > eh = exthelper.exthelper()
439 440 > uisetup = eh.finaluisetup
440 441 >
441 442 > @eh.wrapfunction(url, 'opener')
442 443 > def urlopener(orig, *args, **kwargs):
443 444 > opener = orig(*args, **kwargs)
444 445 > opener.addheaders.append((r'X-HgTest-AuthType', r'Digest'))
445 446 > return opener
446 447 > EOF
447 448
448 449 Test that Digest Auth fails gracefully before testing the successful Basic Auth
449 450
450 451 $ hg -R auth_clone push --config extensions.x=use_digests.py
451 452 pushing to http://localhost:$HGPORT1/
452 453 searching for changes
453 454 abort: LFS HTTP error: HTTP Error 401: the server must support Basic Authentication!
454 455 (api=http://localhost:$HGPORT1/.git/info/lfs/objects/batch, action=upload)
455 456 [255]
456 457
457 458 $ hg -R auth_clone --debug push | egrep '^[{}]| '
458 459 {
459 460 "objects": [
460 461 {
461 462 "actions": {
462 463 "upload": {
463 464 "expires_at": "$ISO_8601_DATE_TIME$"
464 465 "header": {
465 466 "Accept": "application/vnd.git-lfs"
466 467 "Authorization": "Basic dXNlcjpwYXNz"
467 468 }
468 469 "href": "http://localhost:$HGPORT1/.hg/lfs/objects/df14287d8d75f076a6459e7a3703ca583ca9fb3f4918caed10c77ac8622d49b3"
469 470 }
470 471 }
471 472 "oid": "df14287d8d75f076a6459e7a3703ca583ca9fb3f4918caed10c77ac8622d49b3"
472 473 "size": 13
473 474 }
474 475 ]
475 476 "transfer": "basic"
476 477 }
477 478
478 479 $ "$PYTHON" $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
479 480
480 481 $ cat $TESTTMP/access.log $TESTTMP/errors.log
481 482 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 401 - (glob)
482 483 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
483 484 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
484 485 $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=506bf3d83f78c54b89e81c6411adee19fdf02156+525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
485 486 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 401 - (glob)
486 487 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
487 488 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d HTTP/1.1" 200 - (glob)
488 489 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 401 - x-hgtest-authtype:Digest (glob)
489 490 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - x-hgtest-authtype:Digest (glob)
490 491 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 401 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D525251863cad618e55d483555f3d00a2ca99597e+4d9397055dc0c205f3132f331f36353ab1a525a3 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
491 492 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D525251863cad618e55d483555f3d00a2ca99597e+4d9397055dc0c205f3132f331f36353ab1a525a3 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
492 493 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 401 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
493 494 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
494 495 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 401 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
495 496 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
496 497 $LOCALIP - - [$LOGDATE$] "GET /?cmd=branchmap HTTP/1.1" 401 - x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
497 498 $LOCALIP - - [$LOGDATE$] "GET /?cmd=branchmap HTTP/1.1" 200 - x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
498 499 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 401 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
499 500 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
500 501 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 401 - x-hgtest-authtype:Digest (glob)
501 502 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 401 - (glob)
502 503 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
503 504 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D525251863cad618e55d483555f3d00a2ca99597e+4d9397055dc0c205f3132f331f36353ab1a525a3 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
504 505 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
505 506 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
506 507 $LOCALIP - - [$LOGDATE$] "GET /?cmd=branchmap HTTP/1.1" 200 - x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
507 508 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
508 509 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 401 - (glob)
509 510 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
510 511 $LOCALIP - - [$LOGDATE$] "PUT /.hg/lfs/objects/df14287d8d75f076a6459e7a3703ca583ca9fb3f4918caed10c77ac8622d49b3 HTTP/1.1" 201 - (glob)
511 512 $LOCALIP - - [$LOGDATE$] "POST /?cmd=unbundle HTTP/1.1" 200 - x-hgarg-1:heads=666f726365 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
512 513 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
@@ -1,720 +1,721 b''
1 1 #testcases lfsremote-on lfsremote-off
2 2 #require serve no-reposimplestore no-chg
3 3
4 4 This test splits `hg serve` with and without using the extension into separate
5 5 test cases. The tests are broken down as follows, where "LFS"/"No-LFS"
6 6 indicates whether or not there are commits that use an LFS file, and "D"/"E"
7 7 indicates whether or not the extension is loaded. The "X" cases are not tested
8 8 individually, because the lfs requirement causes the process to bail early if
9 9 the extension is disabled.
10 10
11 11 . Server
12 12 .
13 13 . No-LFS LFS
14 14 . +----------------------------+
15 15 . | || D | E | D | E |
16 16 . |---++=======================|
17 17 . C | D || N/A | #1 | X | #4 |
18 18 . l No +---++-----------------------|
19 19 . i LFS | E || #2 | #2 | X | #5 |
20 20 . e +---++-----------------------|
21 21 . n | D || X | X | X | X |
22 22 . t LFS |---++-----------------------|
23 23 . | E || #3 | #3 | X | #6 |
24 24 . |---++-----------------------+
25 25
26 26 make command server magic visible
27 27
28 28 #if windows
29 29 $ PYTHONPATH="$TESTDIR/../contrib;$PYTHONPATH"
30 30 #else
31 31 $ PYTHONPATH="$TESTDIR/../contrib:$PYTHONPATH"
32 32 #endif
33 33 $ export PYTHONPATH
34 34
35 35 $ hg init server
36 36 $ SERVER_REQUIRES="$TESTTMP/server/.hg/requires"
37 37
38 38 $ cat > $TESTTMP/debugprocessors.py <<EOF
39 39 > from mercurial import (
40 40 > cmdutil,
41 41 > commands,
42 42 > pycompat,
43 43 > registrar,
44 44 > )
45 45 > cmdtable = {}
46 46 > command = registrar.command(cmdtable)
47 47 > @command(b'debugprocessors', [], b'FILE')
48 48 > def debugprocessors(ui, repo, file_=None, **opts):
49 49 > opts = pycompat.byteskwargs(opts)
50 50 > opts[b'changelog'] = False
51 51 > opts[b'manifest'] = False
52 52 > opts[b'dir'] = False
53 53 > rl = cmdutil.openrevlog(repo, b'debugprocessors', file_, opts)
54 54 > for flag, proc in rl._flagprocessors.items():
55 55 > ui.status(b"registered processor '%#x'\n" % (flag))
56 56 > EOF
57 57
58 58 Skip the experimental.changegroup3=True config. Failure to agree on this comes
59 59 first, and causes an "abort: no common changegroup version" if the extension is
60 60 only loaded on one side. If that *is* enabled, the subsequent failure is "abort:
61 61 missing processor for flag '0x2000'!" if the extension is only loaded on one side
62 62 (possibly also masked by the Internal Server Error message).
63 63 $ cat >> $HGRCPATH <<EOF
64 64 > [extensions]
65 65 > debugprocessors = $TESTTMP/debugprocessors.py
66 66 > [experimental]
67 67 > lfs.disableusercache = True
68 > lfs.worker-enable = False
68 69 > [lfs]
69 70 > threshold=10
70 71 > [web]
71 72 > allow_push=*
72 73 > push_ssl=False
73 74 > EOF
74 75
75 76 $ cp $HGRCPATH $HGRCPATH.orig
76 77
77 78 #if lfsremote-on
78 79 $ hg --config extensions.lfs= -R server \
79 80 > serve -p $HGPORT -d --pid-file=hg.pid --errorlog=$TESTTMP/errors.log
80 81 #else
81 82 $ hg --config extensions.lfs=! -R server \
82 83 > serve -p $HGPORT -d --pid-file=hg.pid --errorlog=$TESTTMP/errors.log
83 84 #endif
84 85
85 86 $ cat hg.pid >> $DAEMON_PIDS
86 87 $ hg clone -q http://localhost:$HGPORT client
87 88 $ grep 'lfs' client/.hg/requires $SERVER_REQUIRES
88 89 [1]
89 90
90 91 This trivial repo will force commandserver to load the extension, but not call
91 92 reposetup() on another repo actually being operated on. This gives coverage
92 93 that wrapper functions are not assuming reposetup() was called.
93 94
94 95 $ hg init $TESTTMP/cmdservelfs
95 96 $ cat >> $TESTTMP/cmdservelfs/.hg/hgrc << EOF
96 97 > [extensions]
97 98 > lfs =
98 99 > EOF
99 100
100 101 --------------------------------------------------------------------------------
101 102 Case #1: client with non-lfs content and the extension disabled; server with
102 103 non-lfs content, and the extension enabled.
103 104
104 105 $ cd client
105 106 $ echo 'non-lfs' > nonlfs.txt
106 107 >>> from __future__ import absolute_import
107 108 >>> from hgclient import check, readchannel, runcommand
108 109 >>> @check
109 110 ... def diff(server):
110 111 ... readchannel(server)
111 112 ... # run an arbitrary command in the repo with the extension loaded
112 113 ... runcommand(server, [b'id', b'-R', b'../cmdservelfs'])
113 114 ... # now run a command in a repo without the extension to ensure that
114 115 ... # files are added safely..
115 116 ... runcommand(server, [b'ci', b'-Aqm', b'non-lfs'])
116 117 ... # .. and that scmutil.prefetchfiles() safely no-ops..
117 118 ... runcommand(server, [b'diff', b'-r', b'.~1'])
118 119 ... # .. and that debugupgraderepo safely no-ops.
119 120 ... runcommand(server, [b'debugupgraderepo', b'-q', b'--run'])
120 121 *** runcommand id -R ../cmdservelfs
121 122 000000000000 tip
122 123 *** runcommand ci -Aqm non-lfs
123 124 *** runcommand diff -r .~1
124 125 diff -r 000000000000 nonlfs.txt
125 126 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
126 127 +++ b/nonlfs.txt Thu Jan 01 00:00:00 1970 +0000
127 128 @@ -0,0 +1,1 @@
128 129 +non-lfs
129 130 *** runcommand debugupgraderepo -q --run
130 131 upgrade will perform the following actions:
131 132
132 133 requirements
133 134 preserved: dotencode, fncache, generaldelta, revlogv1, sparserevlog, store
134 135
135 136 sidedata
136 137 Allows storage of extra data alongside a revision.
137 138
138 139 copies-sdc
139 140 Allows to use more efficient algorithm to deal with copy tracing.
140 141
141 142 beginning upgrade...
142 143 repository locked and read-only
143 144 creating temporary repository to stage migrated data: * (glob)
144 145 (it is safe to interrupt this process any time before data migration completes)
145 146 migrating 3 total revisions (1 in filelogs, 1 in manifests, 1 in changelog)
146 147 migrating 324 bytes in store; 129 bytes tracked data
147 148 migrating 1 filelogs containing 1 revisions (73 bytes in store; 8 bytes tracked data)
148 149 finished migrating 1 filelog revisions across 1 filelogs; change in size: 0 bytes
149 150 migrating 1 manifests containing 1 revisions (117 bytes in store; 52 bytes tracked data)
150 151 finished migrating 1 manifest revisions across 1 manifests; change in size: 0 bytes
151 152 migrating changelog containing 1 revisions (134 bytes in store; 69 bytes tracked data)
152 153 finished migrating 1 changelog revisions; change in size: 0 bytes
153 154 finished migrating 3 total revisions; total change in store size: 0 bytes
154 155 copying phaseroots
155 156 data fully migrated to temporary repository
156 157 marking source repository as being upgraded; clients will be unable to read from repository
157 158 starting in-place swap of repository data
158 159 replaced files will be backed up at * (glob)
159 160 replacing store...
160 161 store replacement complete; repository was inconsistent for *s (glob)
161 162 finalizing requirements file and making repository readable again
162 163 removing temporary repository * (glob)
163 164 copy of old repository backed up at * (glob)
164 165 the old repository will not be deleted; remove it to free up disk space once the upgraded repository is verified
165 166
166 167 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
167 168 [1]
168 169
169 170 #if lfsremote-on
170 171
171 172 $ hg push -q
172 173 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
173 174 [1]
174 175
175 176 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client1_clone
176 177 $ grep 'lfs' $TESTTMP/client1_clone/.hg/requires $SERVER_REQUIRES
177 178 [1]
178 179
179 180 $ hg init $TESTTMP/client1_pull
180 181 $ hg -R $TESTTMP/client1_pull pull -q http://localhost:$HGPORT
181 182 $ grep 'lfs' $TESTTMP/client1_pull/.hg/requires $SERVER_REQUIRES
182 183 [1]
183 184
184 185 $ hg identify http://localhost:$HGPORT
185 186 d437e1d24fbd
186 187
187 188 #endif
188 189
189 190 --------------------------------------------------------------------------------
190 191 Case #2: client with non-lfs content and the extension enabled; server with
191 192 non-lfs content, and the extension state controlled by #testcases.
192 193
193 194 $ cat >> $HGRCPATH <<EOF
194 195 > [extensions]
195 196 > lfs =
196 197 > EOF
197 198 $ echo 'non-lfs' > nonlfs2.txt
198 199 $ hg ci -Aqm 'non-lfs file with lfs client'
199 200
200 201 Since no lfs content has been added yet, the push is allowed, even when the
201 202 extension is not enabled remotely.
202 203
203 204 $ hg push -q
204 205 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
205 206 [1]
206 207
207 208 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client2_clone
208 209 $ grep 'lfs' $TESTTMP/client2_clone/.hg/requires $SERVER_REQUIRES
209 210 [1]
210 211
211 212 $ hg init $TESTTMP/client2_pull
212 213 $ hg -R $TESTTMP/client2_pull pull -q http://localhost:$HGPORT
213 214 $ grep 'lfs' $TESTTMP/client2_pull/.hg/requires $SERVER_REQUIRES
214 215 [1]
215 216
216 217 $ hg identify http://localhost:$HGPORT
217 218 1477875038c6
218 219
219 220 --------------------------------------------------------------------------------
220 221 Case #3: client with lfs content and the extension enabled; server with
221 222 non-lfs content, and the extension state controlled by #testcases. The server
222 223 should have an 'lfs' requirement after it picks up its first commit with a blob.
223 224
224 225 $ echo 'this is a big lfs file' > lfs.bin
225 226 $ hg ci -Aqm 'lfs'
226 227 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
227 228 .hg/requires:lfs
228 229
229 230 #if lfsremote-off
230 231 $ hg push -q
231 232 abort: required features are not supported in the destination: lfs
232 233 (enable the lfs extension on the server)
233 234 [255]
234 235 #else
235 236 $ hg push -q
236 237 #endif
237 238 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
238 239 .hg/requires:lfs
239 240 $TESTTMP/server/.hg/requires:lfs (lfsremote-on !)
240 241
241 242 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client3_clone
242 243 $ grep 'lfs' $TESTTMP/client3_clone/.hg/requires $SERVER_REQUIRES || true
243 244 $TESTTMP/client3_clone/.hg/requires:lfs (lfsremote-on !)
244 245 $TESTTMP/server/.hg/requires:lfs (lfsremote-on !)
245 246
246 247 $ hg init $TESTTMP/client3_pull
247 248 $ hg -R $TESTTMP/client3_pull pull -q http://localhost:$HGPORT
248 249 $ grep 'lfs' $TESTTMP/client3_pull/.hg/requires $SERVER_REQUIRES || true
249 250 $TESTTMP/client3_pull/.hg/requires:lfs (lfsremote-on !)
250 251 $TESTTMP/server/.hg/requires:lfs (lfsremote-on !)
251 252
252 253 Test that the commit/changegroup requirement check hook can be run multiple
253 254 times.
254 255
255 256 $ hg clone -qr 0 http://localhost:$HGPORT $TESTTMP/cmdserve_client3
256 257
257 258 $ cd ../cmdserve_client3
258 259
259 260 >>> from __future__ import absolute_import
260 261 >>> from hgclient import check, readchannel, runcommand
261 262 >>> @check
262 263 ... def addrequirement(server):
263 264 ... readchannel(server)
264 265 ... # change the repo in a way that adds the lfs requirement
265 266 ... runcommand(server, [b'pull', b'-qu'])
266 267 ... # Now cause the requirement adding hook to fire again, without going
267 268 ... # through reposetup() again.
268 269 ... with open('file.txt', 'wb') as fp:
269 270 ... fp.write(b'data')
270 271 ... runcommand(server, [b'ci', b'-Aqm', b'non-lfs'])
271 272 *** runcommand pull -qu
272 273 *** runcommand ci -Aqm non-lfs
273 274
274 275 $ cd ../client
275 276
276 277 The difference here is that the push above failed when the extension isn't
277 278 enabled on the server.
278 279 $ hg identify http://localhost:$HGPORT
279 280 8374dc4052cb (lfsremote-on !)
280 281 1477875038c6 (lfsremote-off !)
281 282
282 283 Don't bother testing the lfsremote-off cases- the server won't be able
283 284 to launch if there's lfs content and the extension is disabled.
284 285
285 286 #if lfsremote-on
286 287
287 288 --------------------------------------------------------------------------------
288 289 Case #4: client with non-lfs content and the extension disabled; server with
289 290 lfs content, and the extension enabled.
290 291
291 292 $ cat >> $HGRCPATH <<EOF
292 293 > [extensions]
293 294 > lfs = !
294 295 > EOF
295 296
296 297 $ hg init $TESTTMP/client4
297 298 $ cd $TESTTMP/client4
298 299 $ cat >> .hg/hgrc <<EOF
299 300 > [paths]
300 301 > default = http://localhost:$HGPORT
301 302 > EOF
302 303 $ echo 'non-lfs' > nonlfs2.txt
303 304 $ hg ci -Aqm 'non-lfs'
304 305 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
305 306 $TESTTMP/server/.hg/requires:lfs
306 307
307 308 $ hg push -q --force
308 309 warning: repository is unrelated
309 310 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
310 311 $TESTTMP/server/.hg/requires:lfs
311 312
312 313 $ hg clone http://localhost:$HGPORT $TESTTMP/client4_clone
313 314 (remote is using large file support (lfs), but it is explicitly disabled in the local configuration)
314 315 abort: repository requires features unknown to this Mercurial: lfs!
315 316 (see https://mercurial-scm.org/wiki/MissingRequirement for more information)
316 317 [255]
317 318 $ grep 'lfs' $TESTTMP/client4_clone/.hg/requires $SERVER_REQUIRES
318 319 grep: $TESTTMP/client4_clone/.hg/requires: $ENOENT$
319 320 $TESTTMP/server/.hg/requires:lfs
320 321 [2]
321 322
322 323 TODO: fail more gracefully.
323 324
324 325 $ hg init $TESTTMP/client4_pull
325 326 $ hg -R $TESTTMP/client4_pull pull http://localhost:$HGPORT
326 327 pulling from http://localhost:$HGPORT/
327 328 requesting all changes
328 329 remote: abort: no common changegroup version
329 330 abort: pull failed on remote
330 331 [255]
331 332 $ grep 'lfs' $TESTTMP/client4_pull/.hg/requires $SERVER_REQUIRES
332 333 $TESTTMP/server/.hg/requires:lfs
333 334
334 335 $ hg identify http://localhost:$HGPORT
335 336 03b080fa9d93
336 337
337 338 --------------------------------------------------------------------------------
338 339 Case #5: client with non-lfs content and the extension enabled; server with
339 340 lfs content, and the extension enabled.
340 341
341 342 $ cat >> $HGRCPATH <<EOF
342 343 > [extensions]
343 344 > lfs =
344 345 > EOF
345 346 $ echo 'non-lfs' > nonlfs3.txt
346 347 $ hg ci -Aqm 'non-lfs file with lfs client'
347 348
348 349 $ hg push -q
349 350 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
350 351 $TESTTMP/server/.hg/requires:lfs
351 352
352 353 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client5_clone
353 354 $ grep 'lfs' $TESTTMP/client5_clone/.hg/requires $SERVER_REQUIRES
354 355 $TESTTMP/client5_clone/.hg/requires:lfs
355 356 $TESTTMP/server/.hg/requires:lfs
356 357
357 358 $ hg init $TESTTMP/client5_pull
358 359 $ hg -R $TESTTMP/client5_pull pull -q http://localhost:$HGPORT
359 360 $ grep 'lfs' $TESTTMP/client5_pull/.hg/requires $SERVER_REQUIRES
360 361 $TESTTMP/client5_pull/.hg/requires:lfs
361 362 $TESTTMP/server/.hg/requires:lfs
362 363
363 364 $ hg identify http://localhost:$HGPORT
364 365 c729025cc5e3
365 366
366 367 $ mv $HGRCPATH $HGRCPATH.tmp
367 368 $ cp $HGRCPATH.orig $HGRCPATH
368 369
369 370 >>> from __future__ import absolute_import
370 371 >>> from hgclient import bprint, check, readchannel, runcommand, stdout
371 372 >>> @check
372 373 ... def checkflags(server):
373 374 ... readchannel(server)
374 375 ... bprint(b'')
375 376 ... bprint(b'# LFS required- both lfs and non-lfs revlogs have 0x2000 flag')
376 377 ... stdout.flush()
377 378 ... runcommand(server, [b'debugprocessors', b'lfs.bin', b'-R',
378 379 ... b'../server'])
379 380 ... runcommand(server, [b'debugprocessors', b'nonlfs2.txt', b'-R',
380 381 ... b'../server'])
381 382 ... runcommand(server, [b'config', b'extensions', b'--cwd',
382 383 ... b'../server'])
383 384 ...
384 385 ... bprint(b"\n# LFS not enabled- revlogs don't have 0x2000 flag")
385 386 ... stdout.flush()
386 387 ... runcommand(server, [b'debugprocessors', b'nonlfs3.txt'])
387 388 ... runcommand(server, [b'config', b'extensions'])
388 389
389 390 # LFS required- both lfs and non-lfs revlogs have 0x2000 flag
390 391 *** runcommand debugprocessors lfs.bin -R ../server
391 392 registered processor '0x8000'
392 393 registered processor '0x2000'
393 394 *** runcommand debugprocessors nonlfs2.txt -R ../server
394 395 registered processor '0x8000'
395 396 registered processor '0x2000'
396 397 *** runcommand config extensions --cwd ../server
397 398 extensions.debugprocessors=$TESTTMP/debugprocessors.py
398 399 extensions.lfs=
399 400
400 401 # LFS not enabled- revlogs don't have 0x2000 flag
401 402 *** runcommand debugprocessors nonlfs3.txt
402 403 registered processor '0x8000'
403 404 *** runcommand config extensions
404 405 extensions.debugprocessors=$TESTTMP/debugprocessors.py
405 406
406 407 $ rm $HGRCPATH
407 408 $ mv $HGRCPATH.tmp $HGRCPATH
408 409
409 410 $ hg clone $TESTTMP/client $TESTTMP/nonlfs -qr 0 --config extensions.lfs=
410 411 $ cat >> $TESTTMP/nonlfs/.hg/hgrc <<EOF
411 412 > [extensions]
412 413 > lfs = !
413 414 > EOF
414 415
415 416 >>> from __future__ import absolute_import, print_function
416 417 >>> from hgclient import bprint, check, readchannel, runcommand, stdout
417 418 >>> @check
418 419 ... def checkflags2(server):
419 420 ... readchannel(server)
420 421 ... bprint(b'')
421 422 ... bprint(b'# LFS enabled- both lfs and non-lfs revlogs have 0x2000 flag')
422 423 ... stdout.flush()
423 424 ... runcommand(server, [b'debugprocessors', b'lfs.bin', b'-R',
424 425 ... b'../server'])
425 426 ... runcommand(server, [b'debugprocessors', b'nonlfs2.txt', b'-R',
426 427 ... b'../server'])
427 428 ... runcommand(server, [b'config', b'extensions', b'--cwd',
428 429 ... b'../server'])
429 430 ...
430 431 ... bprint(b'\n# LFS enabled without requirement- revlogs have 0x2000 flag')
431 432 ... stdout.flush()
432 433 ... runcommand(server, [b'debugprocessors', b'nonlfs3.txt'])
433 434 ... runcommand(server, [b'config', b'extensions'])
434 435 ...
435 436 ... bprint(b"\n# LFS disabled locally- revlogs don't have 0x2000 flag")
436 437 ... stdout.flush()
437 438 ... runcommand(server, [b'debugprocessors', b'nonlfs.txt', b'-R',
438 439 ... b'../nonlfs'])
439 440 ... runcommand(server, [b'config', b'extensions', b'--cwd',
440 441 ... b'../nonlfs'])
441 442
442 443 # LFS enabled- both lfs and non-lfs revlogs have 0x2000 flag
443 444 *** runcommand debugprocessors lfs.bin -R ../server
444 445 registered processor '0x8000'
445 446 registered processor '0x2000'
446 447 *** runcommand debugprocessors nonlfs2.txt -R ../server
447 448 registered processor '0x8000'
448 449 registered processor '0x2000'
449 450 *** runcommand config extensions --cwd ../server
450 451 extensions.debugprocessors=$TESTTMP/debugprocessors.py
451 452 extensions.lfs=
452 453
453 454 # LFS enabled without requirement- revlogs have 0x2000 flag
454 455 *** runcommand debugprocessors nonlfs3.txt
455 456 registered processor '0x8000'
456 457 registered processor '0x2000'
457 458 *** runcommand config extensions
458 459 extensions.debugprocessors=$TESTTMP/debugprocessors.py
459 460 extensions.lfs=
460 461
461 462 # LFS disabled locally- revlogs don't have 0x2000 flag
462 463 *** runcommand debugprocessors nonlfs.txt -R ../nonlfs
463 464 registered processor '0x8000'
464 465 *** runcommand config extensions --cwd ../nonlfs
465 466 extensions.debugprocessors=$TESTTMP/debugprocessors.py
466 467 extensions.lfs=!
467 468
468 469 --------------------------------------------------------------------------------
469 470 Case #6: client with lfs content and the extension enabled; server with
470 471 lfs content, and the extension enabled.
471 472
472 473 $ echo 'this is another lfs file' > lfs2.txt
473 474 $ hg ci -Aqm 'lfs file with lfs client'
474 475
475 476 $ hg --config paths.default= push -v http://localhost:$HGPORT
476 477 pushing to http://localhost:$HGPORT/
477 478 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
478 479 searching for changes
479 480 remote has heads on branch 'default' that are not known locally: 8374dc4052cb
480 481 lfs: uploading a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de (25 bytes)
481 482 lfs: processed: a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de
482 483 lfs: uploaded 1 files (25 bytes)
483 484 1 changesets found
484 485 uncompressed size of bundle content:
485 486 206 (changelog)
486 487 172 (manifests)
487 488 275 lfs2.txt
488 489 remote: adding changesets
489 490 remote: adding manifests
490 491 remote: adding file changes
491 492 remote: added 1 changesets with 1 changes to 1 files
492 493 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
493 494 .hg/requires:lfs
494 495 $TESTTMP/server/.hg/requires:lfs
495 496
496 497 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client6_clone
497 498 $ grep 'lfs' $TESTTMP/client6_clone/.hg/requires $SERVER_REQUIRES
498 499 $TESTTMP/client6_clone/.hg/requires:lfs
499 500 $TESTTMP/server/.hg/requires:lfs
500 501
501 502 $ hg init $TESTTMP/client6_pull
502 503 $ hg -R $TESTTMP/client6_pull pull -u -v http://localhost:$HGPORT
503 504 pulling from http://localhost:$HGPORT/
504 505 requesting all changes
505 506 adding changesets
506 507 adding manifests
507 508 adding file changes
508 509 calling hook pretxnchangegroup.lfs: hgext.lfs.checkrequireslfs
509 510 added 6 changesets with 5 changes to 5 files (+1 heads)
510 511 new changesets d437e1d24fbd:d3b84d50eacb
511 512 resolving manifests
512 513 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
513 514 lfs: downloading a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de (25 bytes)
514 515 lfs: processed: a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de
515 516 lfs: downloaded 1 files (25 bytes)
516 517 getting lfs2.txt
517 518 lfs: found a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de in the local lfs store
518 519 getting nonlfs2.txt
519 520 getting nonlfs3.txt
520 521 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
521 522 updated to "d3b84d50eacb: lfs file with lfs client"
522 523 1 other heads for branch "default"
523 524 (sent 3 HTTP requests and * bytes; received * bytes in responses) (glob)
524 525 $ grep 'lfs' $TESTTMP/client6_pull/.hg/requires $SERVER_REQUIRES
525 526 $TESTTMP/client6_pull/.hg/requires:lfs
526 527 $TESTTMP/server/.hg/requires:lfs
527 528
528 529 $ hg identify http://localhost:$HGPORT
529 530 d3b84d50eacb
530 531
531 532 --------------------------------------------------------------------------------
532 533 Misc: process dies early if a requirement exists and the extension is disabled
533 534
534 535 $ hg --config extensions.lfs=! summary
535 536 abort: repository requires features unknown to this Mercurial: lfs!
536 537 (see https://mercurial-scm.org/wiki/MissingRequirement for more information)
537 538 [255]
538 539
539 540 $ echo 'this is an lfs file' > $TESTTMP/client6_clone/lfspair1.bin
540 541 $ echo 'this is an lfs file too' > $TESTTMP/client6_clone/lfspair2.bin
541 542 $ hg -R $TESTTMP/client6_clone ci -Aqm 'add lfs pair'
542 543 $ hg -R $TESTTMP/client6_clone push -q
543 544
544 545 $ hg clone -qU http://localhost:$HGPORT $TESTTMP/bulkfetch
545 546
546 547 Cat doesn't prefetch unless data is needed (e.g. '-T {rawdata}' doesn't need it)
547 548
548 549 $ hg --cwd $TESTTMP/bulkfetch cat -vr tip lfspair1.bin -T '{rawdata}\n{path}\n'
549 550 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
550 551 version https://git-lfs.github.com/spec/v1
551 552 oid sha256:cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782
552 553 size 20
553 554 x-is-binary 0
554 555
555 556 lfspair1.bin
556 557
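The `{rawdata}` keyword above emits the pointer text verbatim, which is why no
blob download is needed. For reference, that key/value pointer format is trivial
to parse; a minimal sketch, assuming a hypothetical `parse_pointer` helper rather
than the extension's own reader:

    def parse_pointer(text):
        # Each pointer line is "key value"; the oid value keeps its "sha256:" prefix.
        fields = {}
        for line in text.strip().splitlines():
            key, _, value = line.partition(' ')
            fields[key] = value
        return fields

    ptr = parse_pointer(
        "version https://git-lfs.github.com/spec/v1\n"
        "oid sha256:cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782\n"
        "size 20\n"
        "x-is-binary 0\n")
    assert ptr['size'] == '20'
    assert ptr['oid'].startswith('sha256:cf1b2787')
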
557 558 $ hg --cwd $TESTTMP/bulkfetch cat -vr tip lfspair1.bin -T json
558 559 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
559 560 [lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
560 561 lfs: downloading cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 (20 bytes)
561 562 lfs: processed: cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782
562 563 lfs: downloaded 1 files (20 bytes)
563 564 lfs: found cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 in the local lfs store
564 565
565 566 {
566 567 "data": "this is an lfs file\n",
567 568 "path": "lfspair1.bin",
568 569 "rawdata": "version https://git-lfs.github.com/spec/v1\noid sha256:cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782\nsize 20\nx-is-binary 0\n"
569 570 }
570 571 ]
571 572
572 573 $ rm -r $TESTTMP/bulkfetch/.hg/store/lfs
573 574
574 575 $ hg --cwd $TESTTMP/bulkfetch cat -vr tip lfspair1.bin -T '{data}\n'
575 576 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
576 577 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
577 578 lfs: downloading cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 (20 bytes)
578 579 lfs: processed: cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782
579 580 lfs: downloaded 1 files (20 bytes)
580 581 lfs: found cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 in the local lfs store
581 582 this is an lfs file
582 583
583 584 $ hg --cwd $TESTTMP/bulkfetch cat -vr tip lfspair2.bin
584 585 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
585 586 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
586 587 lfs: downloading d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e (24 bytes)
587 588 lfs: processed: d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e
588 589 lfs: downloaded 1 files (24 bytes)
589 590 lfs: found d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e in the local lfs store
590 591 this is an lfs file too
591 592
592 593 Export will prefetch all needed files across all needed revisions
593 594
594 595 $ rm -r $TESTTMP/bulkfetch/.hg/store/lfs
595 596 $ hg -R $TESTTMP/bulkfetch -v export -r 0:tip -o all.export
596 597 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
597 598 exporting patches:
598 599 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
599 600 lfs: need to transfer 4 objects (92 bytes)
600 601 lfs: downloading a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de (25 bytes)
601 602 lfs: processed: a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de
602 603 lfs: downloading bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc (23 bytes)
603 604 lfs: processed: bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc
604 605 lfs: downloading cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 (20 bytes)
605 606 lfs: processed: cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782
606 607 lfs: downloading d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e (24 bytes)
607 608 lfs: processed: d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e
608 609 lfs: downloaded 4 files (92 bytes)
609 610 all.export
610 611 lfs: found bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc in the local lfs store
611 612 lfs: found a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de in the local lfs store
612 613 lfs: found cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 in the local lfs store
613 614 lfs: found d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e in the local lfs store
614 615
615 616 Export with selected files is used with `extdiff --patch`
616 617
617 618 $ rm -r $TESTTMP/bulkfetch/.hg/store/lfs
618 619 $ hg --config extensions.extdiff= \
619 620 > -R $TESTTMP/bulkfetch -v extdiff -r 2:tip --patch $TESTTMP/bulkfetch/lfs.bin
620 621 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
621 622 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
622 623 lfs: downloading bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc (23 bytes)
623 624 lfs: processed: bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc
624 625 lfs: downloaded 1 files (23 bytes)
625 626 */hg-8374dc4052cb.patch (glob)
626 627 lfs: found bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc in the local lfs store
627 628 */hg-9640b57e77b1.patch (glob)
628 629 --- */hg-8374dc4052cb.patch * (glob)
629 630 +++ */hg-9640b57e77b1.patch * (glob)
630 631 @@ -2,12 +2,7 @@
631 632 # User test
632 633 # Date 0 0
633 634 # Thu Jan 01 00:00:00 1970 +0000
634 635 -# Node ID 8374dc4052cbd388e79d9dc4ddb29784097aa354
635 636 -# Parent 1477875038c60152e391238920a16381c627b487
636 637 -lfs
637 638 +# Node ID 9640b57e77b14c3a0144fb4478b6cc13e13ea0d1
638 639 +# Parent d3b84d50eacbd56638e11abce6b8616aaba54420
639 640 +add lfs pair
640 641
641 642 -diff -r 1477875038c6 -r 8374dc4052cb lfs.bin
642 643 ---- /dev/null Thu Jan 01 00:00:00 1970 +0000
643 644 -+++ b/lfs.bin Thu Jan 01 00:00:00 1970 +0000
644 645 -@@ -0,0 +1,1 @@
645 646 -+this is a big lfs file
646 647 cleaning up temp directory
647 648 [1]
648 649
649 650 Diff will prefetch files
650 651
651 652 $ rm -r $TESTTMP/bulkfetch/.hg/store/lfs
652 653 $ hg -R $TESTTMP/bulkfetch -v diff -r 2:tip
653 654 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
654 655 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
655 656 lfs: need to transfer 4 objects (92 bytes)
656 657 lfs: downloading a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de (25 bytes)
657 658 lfs: processed: a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de
658 659 lfs: downloading bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc (23 bytes)
659 660 lfs: processed: bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc
660 661 lfs: downloading cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 (20 bytes)
661 662 lfs: processed: cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782
662 663 lfs: downloading d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e (24 bytes)
663 664 lfs: processed: d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e
664 665 lfs: downloaded 4 files (92 bytes)
665 666 lfs: found bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc in the local lfs store
666 667 lfs: found a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de in the local lfs store
667 668 lfs: found cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 in the local lfs store
668 669 lfs: found d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e in the local lfs store
669 670 diff -r 8374dc4052cb -r 9640b57e77b1 lfs.bin
670 671 --- a/lfs.bin Thu Jan 01 00:00:00 1970 +0000
671 672 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000
672 673 @@ -1,1 +0,0 @@
673 674 -this is a big lfs file
674 675 diff -r 8374dc4052cb -r 9640b57e77b1 lfs2.txt
675 676 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
676 677 +++ b/lfs2.txt Thu Jan 01 00:00:00 1970 +0000
677 678 @@ -0,0 +1,1 @@
678 679 +this is another lfs file
679 680 diff -r 8374dc4052cb -r 9640b57e77b1 lfspair1.bin
680 681 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
681 682 +++ b/lfspair1.bin Thu Jan 01 00:00:00 1970 +0000
682 683 @@ -0,0 +1,1 @@
683 684 +this is an lfs file
684 685 diff -r 8374dc4052cb -r 9640b57e77b1 lfspair2.bin
685 686 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
686 687 +++ b/lfspair2.bin Thu Jan 01 00:00:00 1970 +0000
687 688 @@ -0,0 +1,1 @@
688 689 +this is an lfs file too
689 690 diff -r 8374dc4052cb -r 9640b57e77b1 nonlfs.txt
690 691 --- a/nonlfs.txt Thu Jan 01 00:00:00 1970 +0000
691 692 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000
692 693 @@ -1,1 +0,0 @@
693 694 -non-lfs
694 695 diff -r 8374dc4052cb -r 9640b57e77b1 nonlfs3.txt
695 696 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
696 697 +++ b/nonlfs3.txt Thu Jan 01 00:00:00 1970 +0000
697 698 @@ -0,0 +1,1 @@
698 699 +non-lfs
699 700
700 701 Only the files required by diff are prefetched
701 702
702 703 $ rm -r $TESTTMP/bulkfetch/.hg/store/lfs
703 704 $ hg -R $TESTTMP/bulkfetch -v diff -r 2:tip $TESTTMP/bulkfetch/lfspair2.bin
704 705 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
705 706 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
706 707 lfs: downloading d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e (24 bytes)
707 708 lfs: processed: d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e
708 709 lfs: downloaded 1 files (24 bytes)
709 710 lfs: found d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e in the local lfs store
710 711 diff -r 8374dc4052cb -r 9640b57e77b1 lfspair2.bin
711 712 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
712 713 +++ b/lfspair2.bin Thu Jan 01 00:00:00 1970 +0000
713 714 @@ -0,0 +1,1 @@
714 715 +this is an lfs file too
715 716
716 717 #endif
717 718
718 719 $ "$PYTHON" $TESTDIR/killdaemons.py $DAEMON_PIDS
719 720
720 721 $ cat $TESTTMP/errors.log
@@ -1,941 +1,943 b''
1 1 #require no-reposimplestore no-chg
2 2 #testcases git-server hg-server
3 3
4 4 #if git-server
5 5 #require lfs-test-server
6 6 #else
7 7 #require serve
8 8 #endif
9 9
10 10 #if git-server
11 11 $ LFS_LISTEN="tcp://:$HGPORT"
12 12 $ LFS_HOST="localhost:$HGPORT"
13 13 $ LFS_PUBLIC=1
14 14 $ export LFS_LISTEN LFS_HOST LFS_PUBLIC
15 15 #else
16 16 $ LFS_HOST="localhost:$HGPORT/.git/info/lfs"
17 17 #endif
18 18
19 19 #if no-windows git-server
20 20 $ lfs-test-server &> lfs-server.log &
21 21 $ echo $! >> $DAEMON_PIDS
22 22 #endif
23 23
24 24 #if windows git-server
25 25 $ cat >> $TESTTMP/spawn.py <<EOF
26 26 > import os
27 27 > import subprocess
28 28 > import sys
29 29 >
30 30 > for path in os.environ["PATH"].split(os.pathsep):
31 31 > exe = os.path.join(path, 'lfs-test-server.exe')
32 32 > if os.path.exists(exe):
33 33 > with open('lfs-server.log', 'wb') as out:
34 34 > p = subprocess.Popen(exe, stdout=out, stderr=out)
35 35 > sys.stdout.write('%s\n' % p.pid)
36 36 > sys.exit(0)
37 37 > sys.exit(1)
38 38 > EOF
39 39 $ "$PYTHON" $TESTTMP/spawn.py >> $DAEMON_PIDS
40 40 #endif
41 41
42 42 $ cat >> $HGRCPATH <<EOF
43 > [experimental]
44 > lfs.worker-enable = False
43 45 > [extensions]
44 46 > lfs=
45 47 > [lfs]
46 48 > url=http://foo:bar@$LFS_HOST
47 49 > track=all()
48 50 > [web]
49 51 > push_ssl = False
50 52 > allow-push = *
51 53 > EOF
52 54
53 55 Use a separate usercache; otherwise the server sees what the client commits and
54 56 never requests a transfer.
55 57
56 58 #if hg-server
57 59 $ hg init server
58 60 $ hg --config "lfs.usercache=$TESTTMP/servercache" -R server serve -d \
59 61 > -p $HGPORT --pid-file=hg.pid -A $TESTTMP/access.log -E $TESTTMP/errors.log
60 62 $ cat hg.pid >> $DAEMON_PIDS
61 63 #endif
62 64
63 65 $ hg init repo1
64 66 $ cd repo1
65 67 $ echo THIS-IS-LFS > a
66 68 $ hg commit -m a -A a
67 69
68 70 A push can be serviced directly from the usercache if it isn't in the local
69 71 store.
70 72
71 73 $ hg init ../repo2
72 74 $ mv .hg/store/lfs .hg/store/lfs_
73 75 $ hg push ../repo2 --debug
74 76 http auth: user foo, password ***
75 77 pushing to ../repo2
76 78 http auth: user foo, password ***
77 79 http auth: user foo, password ***
78 80 query 1; heads
79 81 searching for changes
80 82 1 total queries in *s (glob)
81 83 listing keys for "phases"
82 84 checking for updated bookmarks
83 85 listing keys for "bookmarks"
84 86 lfs: computing set of blobs to upload
85 87 Status: 200
86 88 Content-Length: 309 (git-server !)
87 89 Content-Length: 350 (hg-server !)
88 90 Content-Type: application/vnd.git-lfs+json
89 91 Date: $HTTP_DATE$
90 92 Server: testing stub value (hg-server !)
91 93 {
92 94 "objects": [
93 95 {
94 96 "actions": {
95 97 "upload": {
96 98 "expires_at": "$ISO_8601_DATE_TIME$"
97 99 "header": {
98 100 "Accept": "application/vnd.git-lfs"
99 101 }
100 102 "href": "http://localhost:$HGPORT/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (git-server !)
101 103 "href": "http://localhost:$HGPORT/.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (hg-server !)
102 104 }
103 105 }
104 106 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
105 107 "size": 12
106 108 }
107 109 ]
108 110 "transfer": "basic" (hg-server !)
109 111 }
110 112 lfs: uploading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
111 113 Status: 200 (git-server !)
112 114 Status: 201 (hg-server !)
113 115 Content-Length: 0
114 116 Content-Type: text/plain; charset=utf-8
115 117 Date: $HTTP_DATE$
116 118 Server: testing stub value (hg-server !)
117 119 lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
118 120 lfs: uploaded 1 files (12 bytes)
119 121 1 changesets found
120 122 list of changesets:
121 123 99a7098854a3984a5c9eab0fc7a2906697b7cb5c
122 124 bundle2-output-bundle: "HG20", 4 parts total
123 125 bundle2-output-part: "replycaps" * bytes payload (glob)
124 126 bundle2-output-part: "check:heads" streamed payload
125 127 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
126 128 bundle2-output-part: "phase-heads" 24 bytes payload
127 129 bundle2-input-bundle: with-transaction
128 130 bundle2-input-part: "replycaps" supported
129 131 bundle2-input-part: total payload size * (glob)
130 132 bundle2-input-part: "check:heads" supported
131 133 bundle2-input-part: total payload size 20
132 134 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
133 135 adding changesets
134 136 add changeset 99a7098854a3
135 137 adding manifests
136 138 adding file changes
137 139 adding a revisions
138 140 calling hook pretxnchangegroup.lfs: hgext.lfs.checkrequireslfs
139 141 bundle2-input-part: total payload size 617
140 142 bundle2-input-part: "phase-heads" supported
141 143 bundle2-input-part: total payload size 24
142 144 bundle2-input-bundle: 4 parts total
143 145 updating the branch cache
144 146 added 1 changesets with 1 changes to 1 files
145 147 bundle2-output-bundle: "HG20", 1 parts total
146 148 bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
147 149 bundle2-input-bundle: no-transaction
148 150 bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported
149 151 bundle2-input-bundle: 1 parts total
150 152 listing keys for "phases"
151 153 $ mv .hg/store/lfs_ .hg/store/lfs
152 154
153 155 Clear the cache to force a download
154 156 $ rm -rf `hg config lfs.usercache`
155 157 $ cd ../repo2
156 158 $ hg update tip --debug
157 159 http auth: user foo, password ***
158 160 resolving manifests
159 161 branchmerge: False, force: False, partial: False
160 162 ancestor: 000000000000, local: 000000000000+, remote: 99a7098854a3
161 163 http auth: user foo, password ***
162 164 Status: 200
163 165 Content-Length: 311 (git-server !)
164 166 Content-Length: 352 (hg-server !)
165 167 Content-Type: application/vnd.git-lfs+json
166 168 Date: $HTTP_DATE$
167 169 Server: testing stub value (hg-server !)
168 170 {
169 171 "objects": [
170 172 {
171 173 "actions": {
172 174 "download": {
173 175 "expires_at": "$ISO_8601_DATE_TIME$"
174 176 "header": {
175 177 "Accept": "application/vnd.git-lfs"
176 178 }
177 179 "href": "http://localhost:$HGPORT/*/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (glob)
178 180 }
179 181 }
180 182 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
181 183 "size": 12
182 184 }
183 185 ]
184 186 "transfer": "basic" (hg-server !)
185 187 }
186 188 lfs: downloading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
187 189 Status: 200
188 190 Content-Length: 12
189 191 Content-Type: text/plain; charset=utf-8 (git-server !)
190 192 Content-Type: application/octet-stream (hg-server !)
191 193 Date: $HTTP_DATE$
192 194 Server: testing stub value (hg-server !)
193 195 lfs: adding 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b to the usercache
194 196 lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
195 197 lfs: downloaded 1 files (12 bytes)
196 198 a: remote created -> g
197 199 getting a
198 200 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
199 201 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
200 202
201 203 When the server already has some blobs, `hg serve` doesn't offer to upload
202 204 blobs that it already knows about. Note that lfs-test-server is simply
203 205 toggling the action to 'download'. The Batch API spec says it should omit the
204 206 actions property completely.
205 207
206 208 $ hg mv a b
207 209 $ echo ANOTHER-LARGE-FILE > c
208 210 $ echo ANOTHER-LARGE-FILE2 > d
209 211 $ hg commit -m b-and-c -A b c d
210 212 $ hg push ../repo1 --debug
211 213 http auth: user foo, password ***
212 214 pushing to ../repo1
213 215 http auth: user foo, password ***
214 216 http auth: user foo, password ***
215 217 query 1; heads
216 218 searching for changes
217 219 all remote heads known locally
218 220 listing keys for "phases"
219 221 checking for updated bookmarks
220 222 listing keys for "bookmarks"
221 223 listing keys for "bookmarks"
222 224 lfs: computing set of blobs to upload
223 225 Status: 200
224 226 Content-Length: 901 (git-server !)
225 227 Content-Length: 755 (hg-server !)
226 228 Content-Type: application/vnd.git-lfs+json
227 229 Date: $HTTP_DATE$
228 230 Server: testing stub value (hg-server !)
229 231 {
230 232 "objects": [
231 233 {
232 234 "actions": { (git-server !)
233 235 "download": { (git-server !)
234 236 "expires_at": "$ISO_8601_DATE_TIME$" (git-server !)
235 237 "header": { (git-server !)
236 238 "Accept": "application/vnd.git-lfs" (git-server !)
237 239 } (git-server !)
238 240 "href": "http://localhost:$HGPORT/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (git-server !)
239 241 } (git-server !)
240 242 } (git-server !)
241 243 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
242 244 "size": 12
243 245 }
244 246 {
245 247 "actions": {
246 248 "upload": {
247 249 "expires_at": "$ISO_8601_DATE_TIME$"
248 250 "header": {
249 251 "Accept": "application/vnd.git-lfs"
250 252 }
251 253 "href": "http://localhost:$HGPORT/*/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19" (glob)
252 254 }
253 255 }
254 256 "oid": "37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19"
255 257 "size": 20
256 258 }
257 259 {
258 260 "actions": {
259 261 "upload": {
260 262 "expires_at": "$ISO_8601_DATE_TIME$"
261 263 "header": {
262 264 "Accept": "application/vnd.git-lfs"
263 265 }
264 266 "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
265 267 }
266 268 }
267 269 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
268 270 "size": 19
269 271 }
270 272 ]
271 273 "transfer": "basic" (hg-server !)
272 274 }
273 275 lfs: need to transfer 2 objects (39 bytes)
274 276 lfs: uploading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
275 277 Status: 200 (git-server !)
276 278 Status: 201 (hg-server !)
277 279 Content-Length: 0
278 280 Content-Type: text/plain; charset=utf-8
279 281 Date: $HTTP_DATE$
280 282 Server: testing stub value (hg-server !)
281 283 lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
282 284 lfs: uploading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
283 285 Status: 200 (git-server !)
284 286 Status: 201 (hg-server !)
285 287 Content-Length: 0
286 288 Content-Type: text/plain; charset=utf-8
287 289 Date: $HTTP_DATE$
288 290 Server: testing stub value (hg-server !)
289 291 lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
290 292 lfs: uploaded 2 files (39 bytes)
291 293 1 changesets found
292 294 list of changesets:
293 295 dfca2c9e2ef24996aa61ba2abd99277d884b3d63
294 296 bundle2-output-bundle: "HG20", 5 parts total
295 297 bundle2-output-part: "replycaps" * bytes payload (glob)
296 298 bundle2-output-part: "check:phases" 24 bytes payload
297 299 bundle2-output-part: "check:heads" streamed payload
298 300 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
299 301 bundle2-output-part: "phase-heads" 24 bytes payload
300 302 bundle2-input-bundle: with-transaction
301 303 bundle2-input-part: "replycaps" supported
302 304 bundle2-input-part: total payload size * (glob)
303 305 bundle2-input-part: "check:phases" supported
304 306 bundle2-input-part: total payload size 24
305 307 bundle2-input-part: "check:heads" supported
306 308 bundle2-input-part: total payload size 20
307 309 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
308 310 adding changesets
309 311 add changeset dfca2c9e2ef2
310 312 adding manifests
311 313 adding file changes
312 314 adding b revisions
313 315 adding c revisions
314 316 adding d revisions
315 317 bundle2-input-part: total payload size 1315
316 318 bundle2-input-part: "phase-heads" supported
317 319 bundle2-input-part: total payload size 24
318 320 bundle2-input-bundle: 5 parts total
319 321 updating the branch cache
320 322 added 1 changesets with 3 changes to 3 files
321 323 bundle2-output-bundle: "HG20", 1 parts total
322 324 bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
323 325 bundle2-input-bundle: no-transaction
324 326 bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported
325 327 bundle2-input-bundle: 1 parts total
326 328 listing keys for "phases"
327 329
328 330 Clear the cache to force a download
329 331 $ rm -rf `hg config lfs.usercache`
330 332 $ hg --repo ../repo1 update tip --debug
331 333 http auth: user foo, password ***
332 334 resolving manifests
333 335 branchmerge: False, force: False, partial: False
334 336 ancestor: 99a7098854a3, local: 99a7098854a3+, remote: dfca2c9e2ef2
335 337 http auth: user foo, password ***
336 338 Status: 200
337 339 Content-Length: 608 (git-server !)
338 340 Content-Length: 670 (hg-server !)
339 341 Content-Type: application/vnd.git-lfs+json
340 342 Date: $HTTP_DATE$
341 343 Server: testing stub value (hg-server !)
342 344 {
343 345 "objects": [
344 346 {
345 347 "actions": {
346 348 "download": {
347 349 "expires_at": "$ISO_8601_DATE_TIME$"
348 350 "header": {
349 351 "Accept": "application/vnd.git-lfs"
350 352 }
351 353 "href": "http://localhost:$HGPORT/*/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19" (glob)
352 354 }
353 355 }
354 356 "oid": "37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19"
355 357 "size": 20
356 358 }
357 359 {
358 360 "actions": {
359 361 "download": {
360 362 "expires_at": "$ISO_8601_DATE_TIME$"
361 363 "header": {
362 364 "Accept": "application/vnd.git-lfs"
363 365 }
364 366 "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
365 367 }
366 368 }
367 369 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
368 370 "size": 19
369 371 }
370 372 ]
371 373 "transfer": "basic" (hg-server !)
372 374 }
373 375 lfs: need to transfer 2 objects (39 bytes)
374 376 lfs: downloading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
375 377 Status: 200
376 378 Content-Length: 20
377 379 Content-Type: text/plain; charset=utf-8 (git-server !)
378 380 Content-Type: application/octet-stream (hg-server !)
379 381 Date: $HTTP_DATE$
380 382 Server: testing stub value (hg-server !)
381 383 lfs: adding 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 to the usercache
382 384 lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
383 385 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
384 386 Status: 200
385 387 Content-Length: 19
386 388 Content-Type: text/plain; charset=utf-8 (git-server !)
387 389 Content-Type: application/octet-stream (hg-server !)
388 390 Date: $HTTP_DATE$
389 391 Server: testing stub value (hg-server !)
390 392 lfs: adding d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 to the usercache
391 393 lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
392 394 lfs: downloaded 2 files (39 bytes)
393 395 b: remote created -> g
394 396 getting b
395 397 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
396 398 c: remote created -> g
397 399 getting c
398 400 lfs: found d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 in the local lfs store
399 401 d: remote created -> g
400 402 getting d
401 403 lfs: found 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 in the local lfs store
402 404 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
403 405
404 406 Test a corrupt file download, but clear the cache first to force a download.
405 407 `hg serve` indicates a corrupt file without transferring it, unlike
406 408 lfs-test-server.
407 409
408 410 $ rm -rf `hg config lfs.usercache`
409 411 #if git-server
410 412 $ cp $TESTTMP/lfs-content/d1/1e/1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 blob
411 413 $ echo 'damage' > $TESTTMP/lfs-content/d1/1e/1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
412 414 #else
413 415 $ cp $TESTTMP/server/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 blob
414 416 $ echo 'damage' > $TESTTMP/server/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
415 417 #endif
416 418 $ rm ../repo1/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
417 419 $ rm ../repo1/*
418 420
419 421 TODO: give the proper error indication from `hg serve`
420 422
421 423 $ hg --repo ../repo1 update -C tip --debug
422 424 http auth: user foo, password ***
423 425 resolving manifests
424 426 branchmerge: False, force: True, partial: False
425 427 ancestor: dfca2c9e2ef2+, local: dfca2c9e2ef2+, remote: dfca2c9e2ef2
426 428 http auth: user foo, password ***
427 429 Status: 200
428 430 Content-Length: 311 (git-server !)
429 431 Content-Length: 183 (hg-server !)
430 432 Content-Type: application/vnd.git-lfs+json
431 433 Date: $HTTP_DATE$
432 434 Server: testing stub value (hg-server !)
433 435 {
434 436 "objects": [
435 437 {
436 438 "actions": { (git-server !)
437 439 "download": { (git-server !)
438 440 "expires_at": "$ISO_8601_DATE_TIME$" (git-server !)
439 441 "header": { (git-server !)
440 442 "Accept": "application/vnd.git-lfs" (git-server !)
441 443 } (git-server !)
442 444 "href": "http://localhost:$HGPORT/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (git-server !)
443 445 } (git-server !)
444 446 "error": { (hg-server !)
445 447 "code": 422 (hg-server !)
446 448 "message": "The object is corrupt" (hg-server !)
447 449 }
448 450 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
449 451 "size": 19
450 452 }
451 453 ]
452 454 "transfer": "basic" (hg-server !)
453 455 }
454 456 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes) (git-server !)
455 457 Status: 200 (git-server !)
456 458 Content-Length: 7 (git-server !)
457 459 Content-Type: text/plain; charset=utf-8 (git-server !)
458 460 Date: $HTTP_DATE$ (git-server !)
459 461 abort: corrupt remote lfs object: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (git-server !)
460 462 abort: LFS server error for "c": Validation error! (hg-server !)
461 463 [255]
462 464
463 465 The corrupted blob is not added to the usercache or local store
464 466
465 467 $ test -f ../repo1/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
466 468 [1]
467 469 $ test -f `hg config lfs.usercache`/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
468 470 [1]
469 471 #if git-server
470 472 $ cp blob $TESTTMP/lfs-content/d1/1e/1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
471 473 #else
472 474 $ cp blob $TESTTMP/server/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
473 475 #endif
474 476
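The corrupt-download test above and the corrupt-upload test below both rely on
the same property: an LFS oid is the sha256 of the blob content, so re-hashing
the stored bytes exposes the damage. A minimal sketch of that check, using an
illustrative `is_corrupt` helper rather than the extension's actual code:

    import hashlib

    def is_corrupt(oid, data):
        # An LFS oid is the sha256 hex digest of the blob content; any mismatch
        # means the stored bytes no longer match the pointer.
        return hashlib.sha256(data).hexdigest() != oid

    oid = 'd11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998'
    assert is_corrupt(oid, b'damage\n')   # the overwritten blob no longer matches
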
475 477 Test a corrupted file upload
476 478
477 479 $ echo 'another lfs blob' > b
478 480 $ hg ci -m 'another blob'
479 481 $ echo 'damage' > .hg/store/lfs/objects/e6/59058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0
480 482 $ hg push --debug ../repo1
481 483 http auth: user foo, password ***
482 484 pushing to ../repo1
483 485 http auth: user foo, password ***
484 486 http auth: user foo, password ***
485 487 query 1; heads
486 488 searching for changes
487 489 all remote heads known locally
488 490 listing keys for "phases"
489 491 checking for updated bookmarks
490 492 listing keys for "bookmarks"
491 493 listing keys for "bookmarks"
492 494 lfs: computing set of blobs to upload
493 495 Status: 200
494 496 Content-Length: 309 (git-server !)
495 497 Content-Length: 350 (hg-server !)
496 498 Content-Type: application/vnd.git-lfs+json
497 499 Date: $HTTP_DATE$
498 500 Server: testing stub value (hg-server !)
499 501 {
500 502 "objects": [
501 503 {
502 504 "actions": {
503 505 "upload": {
504 506 "expires_at": "$ISO_8601_DATE_TIME$"
505 507 "header": {
506 508 "Accept": "application/vnd.git-lfs"
507 509 }
508 510 "href": "http://localhost:$HGPORT/*/e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0" (glob)
509 511 }
510 512 }
511 513 "oid": "e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0"
512 514 "size": 17
513 515 }
514 516 ]
515 517 "transfer": "basic" (hg-server !)
516 518 }
517 519 lfs: uploading e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0 (17 bytes)
518 520 abort: detected corrupt lfs object: e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0
519 521 (run hg verify)
520 522 [255]
521 523
522 524 Archive will prefetch blobs in a group
523 525
524 526 $ rm -rf .hg/store/lfs `hg config lfs.usercache`
525 527 $ hg archive --debug -r 1 ../archive
526 528 http auth: user foo, password ***
527 529 http auth: user foo, password ***
528 530 Status: 200
529 531 Content-Length: 905 (git-server !)
530 532 Content-Length: 988 (hg-server !)
531 533 Content-Type: application/vnd.git-lfs+json
532 534 Date: $HTTP_DATE$
533 535 Server: testing stub value (hg-server !)
534 536 {
535 537 "objects": [
536 538 {
537 539 "actions": {
538 540 "download": {
539 541 "expires_at": "$ISO_8601_DATE_TIME$"
540 542 "header": {
541 543 "Accept": "application/vnd.git-lfs"
542 544 }
543 545 "href": "http://localhost:$HGPORT/*/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (glob)
544 546 }
545 547 }
546 548 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
547 549 "size": 12
548 550 }
549 551 {
550 552 "actions": {
551 553 "download": {
552 554 "expires_at": "$ISO_8601_DATE_TIME$"
553 555 "header": {
554 556 "Accept": "application/vnd.git-lfs"
555 557 }
556 558 "href": "http://localhost:$HGPORT/*/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19" (glob)
557 559 }
558 560 }
559 561 "oid": "37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19"
560 562 "size": 20
561 563 }
562 564 {
563 565 "actions": {
564 566 "download": {
565 567 "expires_at": "$ISO_8601_DATE_TIME$"
566 568 "header": {
567 569 "Accept": "application/vnd.git-lfs"
568 570 }
569 571 "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
570 572 }
571 573 }
572 574 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
573 575 "size": 19
574 576 }
575 577 ]
576 578 "transfer": "basic" (hg-server !)
577 579 }
578 580 lfs: need to transfer 3 objects (51 bytes)
579 581 lfs: downloading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
580 582 Status: 200
581 583 Content-Length: 12
582 584 Content-Type: text/plain; charset=utf-8 (git-server !)
583 585 Content-Type: application/octet-stream (hg-server !)
584 586 Date: $HTTP_DATE$
585 587 Server: testing stub value (hg-server !)
586 588 lfs: adding 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b to the usercache
587 589 lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
588 590 lfs: downloading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
589 591 Status: 200
590 592 Content-Length: 20
591 593 Content-Type: text/plain; charset=utf-8 (git-server !)
592 594 Content-Type: application/octet-stream (hg-server !)
593 595 Date: $HTTP_DATE$
594 596 Server: testing stub value (hg-server !)
595 597 lfs: adding 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 to the usercache
596 598 lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
597 599 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
598 600 Status: 200
599 601 Content-Length: 19
600 602 Content-Type: text/plain; charset=utf-8 (git-server !)
601 603 Content-Type: application/octet-stream (hg-server !)
602 604 Date: $HTTP_DATE$
603 605 Server: testing stub value (hg-server !)
604 606 lfs: adding d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 to the usercache
605 607 lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
606 608 lfs: downloaded 3 files (51 bytes)
607 609 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
608 610 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
609 611 lfs: found d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 in the local lfs store
610 612 lfs: found 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 in the local lfs store
611 613 $ find ../archive | sort
612 614 ../archive
613 615 ../archive/.hg_archival.txt
614 616 ../archive/a
615 617 ../archive/b
616 618 ../archive/c
617 619 ../archive/d
618 620
619 621 Cat will prefetch blobs in a group
620 622
621 623 $ rm -rf .hg/store/lfs `hg config lfs.usercache`
622 624 $ hg cat --debug -r 1 a b c nonexistent
623 625 http auth: user foo, password ***
624 626 http auth: user foo, password ***
625 627 Status: 200
626 628 Content-Length: 608 (git-server !)
627 629 Content-Length: 670 (hg-server !)
628 630 Content-Type: application/vnd.git-lfs+json
629 631 Date: $HTTP_DATE$
630 632 Server: testing stub value (hg-server !)
631 633 {
632 634 "objects": [
633 635 {
634 636 "actions": {
635 637 "download": {
636 638 "expires_at": "$ISO_8601_DATE_TIME$"
637 639 "header": {
638 640 "Accept": "application/vnd.git-lfs"
639 641 }
640 642 "href": "http://localhost:$HGPORT/*/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (glob)
641 643 }
642 644 }
643 645 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
644 646 "size": 12
645 647 }
646 648 {
647 649 "actions": {
648 650 "download": {
649 651 "expires_at": "$ISO_8601_DATE_TIME$"
650 652 "header": {
651 653 "Accept": "application/vnd.git-lfs"
652 654 }
653 655 "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
654 656 }
655 657 }
656 658 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
657 659 "size": 19
658 660 }
659 661 ]
660 662 "transfer": "basic" (hg-server !)
661 663 }
662 664 lfs: need to transfer 2 objects (31 bytes)
663 665 lfs: downloading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
664 666 Status: 200
665 667 Content-Length: 12
666 668 Content-Type: text/plain; charset=utf-8 (git-server !)
667 669 Content-Type: application/octet-stream (hg-server !)
668 670 Date: $HTTP_DATE$
669 671 Server: testing stub value (hg-server !)
670 672 lfs: adding 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b to the usercache
671 673 lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
672 674 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
673 675 Status: 200
674 676 Content-Length: 19
675 677 Content-Type: text/plain; charset=utf-8 (git-server !)
676 678 Content-Type: application/octet-stream (hg-server !)
677 679 Date: $HTTP_DATE$
678 680 Server: testing stub value (hg-server !)
679 681 lfs: adding d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 to the usercache
680 682 lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
681 683 lfs: downloaded 2 files (31 bytes)
682 684 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
683 685 THIS-IS-LFS
684 686 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
685 687 THIS-IS-LFS
686 688 lfs: found d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 in the local lfs store
687 689 ANOTHER-LARGE-FILE
688 690 nonexistent: no such file in rev dfca2c9e2ef2
689 691
690 692 Revert will prefetch blobs in a group
691 693
692 694 $ rm -rf .hg/store/lfs
693 695 $ rm -rf `hg config lfs.usercache`
694 696 $ rm *
695 697 $ hg revert --all -r 1 --debug
696 698 http auth: user foo, password ***
697 699 http auth: user foo, password ***
698 700 Status: 200
699 701 Content-Length: 905 (git-server !)
700 702 Content-Length: 988 (hg-server !)
701 703 Content-Type: application/vnd.git-lfs+json
702 704 Date: $HTTP_DATE$
703 705 Server: testing stub value (hg-server !)
704 706 {
705 707 "objects": [
706 708 {
707 709 "actions": {
708 710 "download": {
709 711 "expires_at": "$ISO_8601_DATE_TIME$"
710 712 "header": {
711 713 "Accept": "application/vnd.git-lfs"
712 714 }
713 715 "href": "http://localhost:$HGPORT/*/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (glob)
714 716 }
715 717 }
716 718 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
717 719 "size": 12
718 720 }
719 721 {
720 722 "actions": {
721 723 "download": {
722 724 "expires_at": "$ISO_8601_DATE_TIME$"
723 725 "header": {
724 726 "Accept": "application/vnd.git-lfs"
725 727 }
726 728 "href": "http://localhost:$HGPORT/*/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19" (glob)
727 729 }
728 730 }
729 731 "oid": "37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19"
730 732 "size": 20
731 733 }
732 734 {
733 735 "actions": {
734 736 "download": {
735 737 "expires_at": "$ISO_8601_DATE_TIME$"
736 738 "header": {
737 739 "Accept": "application/vnd.git-lfs"
738 740 }
739 741 "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
740 742 }
741 743 }
742 744 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
743 745 "size": 19
744 746 }
745 747 ]
746 748 "transfer": "basic" (hg-server !)
747 749 }
748 750 lfs: need to transfer 3 objects (51 bytes)
749 751 lfs: downloading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
750 752 Status: 200
751 753 Content-Length: 12
752 754 Content-Type: text/plain; charset=utf-8 (git-server !)
753 755 Content-Type: application/octet-stream (hg-server !)
754 756 Date: $HTTP_DATE$
755 757 Server: testing stub value (hg-server !)
756 758 lfs: adding 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b to the usercache
757 759 lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
758 760 lfs: downloading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
759 761 Status: 200
760 762 Content-Length: 20
761 763 Content-Type: text/plain; charset=utf-8 (git-server !)
762 764 Content-Type: application/octet-stream (hg-server !)
763 765 Date: $HTTP_DATE$
764 766 Server: testing stub value (hg-server !)
765 767 lfs: adding 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 to the usercache
766 768 lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
767 769 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
768 770 Status: 200
769 771 Content-Length: 19
770 772 Content-Type: text/plain; charset=utf-8 (git-server !)
771 773 Content-Type: application/octet-stream (hg-server !)
772 774 Date: $HTTP_DATE$
773 775 Server: testing stub value (hg-server !)
774 776 lfs: adding d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 to the usercache
775 777 lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
776 778 lfs: downloaded 3 files (51 bytes)
777 779 reverting b
778 780 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
779 781 reverting c
780 782 lfs: found d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 in the local lfs store
781 783 reverting d
782 784 lfs: found 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 in the local lfs store
783 785 adding a
784 786 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
785 787
786 788 Check error message when the remote missed a blob:
787 789
788 790 $ echo FFFFF > b
789 791 $ hg commit -m b -A b
790 792 $ echo FFFFF >> b
791 793 $ hg commit -m b b
792 794 $ rm -rf .hg/store/lfs
793 795 $ rm -rf `hg config lfs.usercache`
794 796 $ hg update -C '.^' --debug
795 797 http auth: user foo, password ***
796 798 resolving manifests
797 799 branchmerge: False, force: True, partial: False
798 800 ancestor: 62fdbaf221c6+, local: 62fdbaf221c6+, remote: ef0564edf47e
799 801 http auth: user foo, password ***
800 802 Status: 200
801 803 Content-Length: 308 (git-server !)
802 804 Content-Length: 186 (hg-server !)
803 805 Content-Type: application/vnd.git-lfs+json
804 806 Date: $HTTP_DATE$
805 807 Server: testing stub value (hg-server !)
806 808 {
807 809 "objects": [
808 810 {
809 811 "actions": { (git-server !)
810 812 "upload": { (git-server !)
811 813 "expires_at": "$ISO_8601_DATE_TIME$" (git-server !)
812 814 "header": { (git-server !)
813 815 "Accept": "application/vnd.git-lfs" (git-server !)
814 816 } (git-server !)
815 817 "href": "http://localhost:$HGPORT/objects/8e6ea5f6c066b44a0efa43bcce86aea73f17e6e23f0663df0251e7524e140a13" (git-server !)
816 818 } (git-server !)
817 819 "error": { (hg-server !)
818 820 "code": 404 (hg-server !)
819 821 "message": "The object does not exist" (hg-server !)
820 822 }
821 823 "oid": "8e6ea5f6c066b44a0efa43bcce86aea73f17e6e23f0663df0251e7524e140a13"
822 824 "size": 6
823 825 }
824 826 ]
825 827 "transfer": "basic" (hg-server !)
826 828 }
827 829 abort: LFS server error for "b": The object does not exist!
828 830 [255]
829 831
830 832 Check error message when object does not exist:
831 833
832 834 $ cd $TESTTMP
833 835 $ hg init test && cd test
834 836 $ echo "[extensions]" >> .hg/hgrc
835 837 $ echo "lfs=" >> .hg/hgrc
836 838 $ echo "[lfs]" >> .hg/hgrc
837 839 $ echo "threshold=1" >> .hg/hgrc
838 840 $ echo a > a
839 841 $ hg add a
840 842 $ hg commit -m 'test'
841 843 $ echo aaaaa > a
842 844 $ hg commit -m 'largefile'
843 845 $ hg debugdata a 1 # verify this is not the file content but includes "oid", the LFS "pointer".
844 846 version https://git-lfs.github.com/spec/v1
845 847 oid sha256:bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a
846 848 size 6
847 849 x-is-binary 0
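(Hypothetical illustration, not part of the original test: per the Git LFS pointer spec, the "oid" is the SHA-256 digest of the file's raw bytes, so it can be recomputed directly. The 6-byte content written by `echo aaaaa > a` should hash to the same value the pointer above reports.)

$ "$PYTHON" -c 'import hashlib; print(hashlib.sha256(b"aaaaa\n").hexdigest())'
bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a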
848 850 $ cd ..
849 851 $ rm -rf `hg config lfs.usercache`
850 852
851 853 (Restart the server in a different location so it no longer has the content)
852 854
853 855 $ "$PYTHON" $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
854 856
855 857 #if hg-server
856 858 $ cat $TESTTMP/access.log $TESTTMP/errors.log
857 859 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
858 860 $LOCALIP - - [$LOGDATE$] "PUT /.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b HTTP/1.1" 201 - (glob)
859 861 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
860 862 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b HTTP/1.1" 200 - (glob)
861 863 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
862 864 $LOCALIP - - [$LOGDATE$] "PUT /.hg/lfs/objects/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 HTTP/1.1" 201 - (glob)
863 865 $LOCALIP - - [$LOGDATE$] "PUT /.hg/lfs/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 HTTP/1.1" 201 - (glob)
864 866 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
865 867 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 HTTP/1.1" 200 - (glob)
866 868 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 HTTP/1.1" 200 - (glob)
867 869 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
868 870 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
869 871 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
870 872 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b HTTP/1.1" 200 - (glob)
871 873 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 HTTP/1.1" 200 - (glob)
872 874 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 HTTP/1.1" 200 - (glob)
873 875 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
874 876 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b HTTP/1.1" 200 - (glob)
875 877 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 HTTP/1.1" 200 - (glob)
876 878 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
877 879 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b HTTP/1.1" 200 - (glob)
878 880 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 HTTP/1.1" 200 - (glob)
879 881 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 HTTP/1.1" 200 - (glob)
880 882 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
881 883 #endif
882 884
883 885 $ mkdir $TESTTMP/lfs-server2
884 886 $ cd $TESTTMP/lfs-server2
885 887 #if no-windows git-server
886 888 $ lfs-test-server &> lfs-server.log &
887 889 $ echo $! >> $DAEMON_PIDS
888 890 #endif
889 891
890 892 #if windows git-server
891 893 $ "$PYTHON" $TESTTMP/spawn.py >> $DAEMON_PIDS
892 894 #endif
893 895
894 896 #if hg-server
895 897 $ hg init server2
896 898 $ hg --config "lfs.usercache=$TESTTMP/servercache2" -R server2 serve -d \
897 899 > -p $HGPORT --pid-file=hg.pid -A $TESTTMP/access.log -E $TESTTMP/errors.log
898 900 $ cat hg.pid >> $DAEMON_PIDS
899 901 #endif
900 902
901 903 $ cd $TESTTMP
902 904 $ hg --debug clone test test2
903 905 http auth: user foo, password ***
904 906 linked 6 files
905 907 http auth: user foo, password ***
906 908 updating to branch default
907 909 resolving manifests
908 910 branchmerge: False, force: False, partial: False
909 911 ancestor: 000000000000, local: 000000000000+, remote: d2a338f184a8
910 912 http auth: user foo, password ***
911 913 Status: 200
912 914 Content-Length: 308 (git-server !)
913 915 Content-Length: 186 (hg-server !)
914 916 Content-Type: application/vnd.git-lfs+json
915 917 Date: $HTTP_DATE$
916 918 Server: testing stub value (hg-server !)
917 919 {
918 920 "objects": [
919 921 {
920 922 "actions": { (git-server !)
921 923 "upload": { (git-server !)
922 924 "expires_at": "$ISO_8601_DATE_TIME$" (git-server !)
923 925 "header": { (git-server !)
924 926 "Accept": "application/vnd.git-lfs" (git-server !)
925 927 } (git-server !)
926 928 "href": "http://localhost:$HGPORT/objects/bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a" (git-server !)
927 929 } (git-server !)
928 930 "error": { (hg-server !)
929 931 "code": 404 (hg-server !)
930 932 "message": "The object does not exist" (hg-server !)
931 933 }
932 934 "oid": "bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a"
933 935 "size": 6
934 936 }
935 937 ]
936 938 "transfer": "basic" (hg-server !)
937 939 }
938 940 abort: LFS server error for "a": The object does not exist!
939 941 [255]
940 942
941 943 $ "$PYTHON" $RUNTESTDIR/killdaemons.py $DAEMON_PIDS