lfs: enable workers by default...
Matt Harbison
r44747:87167caa default
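This changeset flips the default of ``experimental.lfs.worker-enable`` from ``False`` to ``True`` (see the ``eh.configitem`` change below), so LFS blob transfers go through Mercurial's worker pool unless a user opts out; the test further down passes ``--config experimental.lfs.worker-enable=False`` to opt back out for its server. As a rough sketch of the pattern being switched on (not the extension's actual transfer code; ``transfer_one`` and the use of ``concurrent.futures`` in place of ``mercurial.worker`` are assumptions)::

    # Illustrative sketch only. Gate parallel blob transfers behind the
    # experimental.lfs.worker-enable knob, falling back to a serial loop.
    from concurrent.futures import ThreadPoolExecutor


    def transfer_one(oid):
        """Placeholder for moving a single blob, identified by its oid."""
        # ... perform the HTTP upload/download for this oid ...
        return oid


    def transfer_blobs(ui, oids):
        # ui.configbool() is the real Mercurial API for reading this knob.
        if not ui.configbool(b'experimental', b'lfs.worker-enable') or len(oids) < 2:
            return [transfer_one(oid) for oid in oids]
        with ThreadPoolExecutor() as pool:
            return list(pool.map(transfer_one, oids))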
@@ -1,195 +1,192 @@
1 Prior to removing (EXPERIMENTAL)
1 Prior to removing (EXPERIMENTAL)
2 --------------------------------
2 --------------------------------
3
3
4 These things affect UI and/or behavior, and should probably be implemented (or
4 These things affect UI and/or behavior, and should probably be implemented (or
5 ruled out) prior to taking off the experimental shrinkwrap.
5 ruled out) prior to taking off the experimental shrinkwrap.
6
6
7 #. Finish the `hg convert` story
7 #. Finish the `hg convert` story
8
8
9 * Add an argument to accept a rules file to apply during conversion?
9 * Add an argument to accept a rules file to apply during conversion?
10 Currently `lfs.track` is the only way to affect the conversion.
10 Currently `lfs.track` is the only way to affect the conversion.
11 * drop `lfs.track` config settings
11 * drop `lfs.track` config settings
12 * splice in `.hglfs` file for normal repo -> lfs conversions?
12 * splice in `.hglfs` file for normal repo -> lfs conversions?
13
13
14 #. Stop uploading blobs when pushing between local repos
14 #. Stop uploading blobs when pushing between local repos
15
15
16 * Could probably hardlink directly to the other local repo's store
16 * Could probably hardlink directly to the other local repo's store
17 * Support inferring `lfs.url` for local push/pull (currently only supports
17 * Support inferring `lfs.url` for local push/pull (currently only supports
18 http)
18 http)
19
19
20 #. Stop uploading blobs on strip/amend/histedit/etc.
20 #. Stop uploading blobs on strip/amend/histedit/etc.
21
21
22 * This seems to be a side effect of doing it for `hg bundle`, which probably
22 * This seems to be a side effect of doing it for `hg bundle`, which probably
23 makes sense.
23 makes sense.
24
24
25 #. Handle a server with the extension loaded and a client without the extension
25 #. Handle a server with the extension loaded and a client without the extension
26 more gracefully.
26 more gracefully.
27
27
28 * `changegroup3` is still experimental, and not enabled by default.
28 * `changegroup3` is still experimental, and not enabled by default.
29 * Figure out how to `introduce LFS to the server repo
29 * Figure out how to `introduce LFS to the server repo
30 <https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-September/122281.html>`_.
30 <https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-September/122281.html>`_.
31 See the TODO in test-lfs-serve.t.
31 See the TODO in test-lfs-serve.t.
32
32
33 #. Remove `lfs.retry` hack in client? This came from FB, but it's not clear why
33 #. Remove `lfs.retry` hack in client? This came from FB, but it's not clear why
34 it is/was needed.
34 it is/was needed.
35
35
36 #. `hg export` currently writes out the LFS blob. Should it write the pointer
36 #. `hg export` currently writes out the LFS blob. Should it write the pointer
37 instead?
37 instead?
38
38
39 * `hg diff` is similar, and probably shouldn't see the pointer file
39 * `hg diff` is similar, and probably shouldn't see the pointer file
40
40
41 #. `Fix https multiplexing, and re-enable workers
42 <https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-January/109916.html>`_.
43
44 #. Show to-be-applied rules with `hg files -r 'wdir()' 'set:lfs()'`
41 #. Show to-be-applied rules with `hg files -r 'wdir()' 'set:lfs()'`
45
42
46 * `debugignore` can show file + line number, so a dedicated command could be
43 * `debugignore` can show file + line number, so a dedicated command could be
47 useful too.
44 useful too.
48
45
49 #. Filesets, revsets and templates
46 #. Filesets, revsets and templates
50
47
51 * A dedicated revset should be faster than `'file(set:lfs())'`
48 * A dedicated revset should be faster than `'file(set:lfs())'`
52 * Attach `{lfsoid}` and `{lfspointer}` to `general keywords
49 * Attach `{lfsoid}` and `{lfspointer}` to `general keywords
53 <https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-January/110251.html>`_,
50 <https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-January/110251.html>`_,
54 IFF the file is a blob
51 IFF the file is a blob
55 * Drop existing items that would be redundant with general support
52 * Drop existing items that would be redundant with general support
56
53
57 #. Can `grep` avoid downloading most things?
54 #. Can `grep` avoid downloading most things?
58
55
59 * Add a command option to skip LFS blobs?
56 * Add a command option to skip LFS blobs?
60
57
61 #. Add a flag that's visible in `hg files -v` to indicate external storage?
58 #. Add a flag that's visible in `hg files -v` to indicate external storage?
62
59
63 #. Server side issues
60 #. Server side issues
64
61
65 * Check for local disk space before allowing upload. (I've got a patch for
62 * Check for local disk space before allowing upload. (I've got a patch for
66 this.)
63 this.)
67 * Make sure the http codes used are appropriate.
64 * Make sure the http codes used are appropriate.
68 * `Why is copying the Authorization header into the JSON payload necessary
65 * `Why is copying the Authorization header into the JSON payload necessary
69 <https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-April/116230.html>`_?
66 <https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-April/116230.html>`_?
70 * `LFS-Authenticate` header support in client and server(?)
67 * `LFS-Authenticate` header support in client and server(?)
71
68
72 #. Add locks on cache and blob store
69 #. Add locks on cache and blob store
73
70
74 * This is complicated with a global store, and multiple potentially unrelated
71 * This is complicated with a global store, and multiple potentially unrelated
75 local repositories that reference the same blob.
72 local repositories that reference the same blob.
76 * Alternately, maybe just handle collisions when trying to create the same
73 * Alternately, maybe just handle collisions when trying to create the same
77 blob in the store somehow.
74 blob in the store somehow.
78
75
79 #. Are proper file sizes reported in `debugupgraderepo`?
76 #. Are proper file sizes reported in `debugupgraderepo`?
80
77
81 #. Finish prefetching files
78 #. Finish prefetching files
82
79
83 * `-T {data}` (other than cat?)
80 * `-T {data}` (other than cat?)
84 * `verify`
81 * `verify`
85 * `grep`
82 * `grep`
86
83
87 #. Output cleanup
84 #. Output cleanup
88
85
89 * Can we print the url when connecting to the blobstore? (A sudden
86 * Can we print the url when connecting to the blobstore? (A sudden
90 connection refused after pulling commits looks confusing.) Problem is,
87 connection refused after pulling commits looks confusing.) Problem is,
91 'pushing to main url' is printed, and then lfs wants to upload before going
88 'pushing to main url' is printed, and then lfs wants to upload before going
92 back to the main repo transfer, so then *that* could be confusing with
89 back to the main repo transfer, so then *that* could be confusing with
93 extra output. (This is kinda improved with 380f5131ee7b and 9f78d10742af.)
90 extra output. (This is kinda improved with 380f5131ee7b and 9f78d10742af.)
94
91
95 * Add more progress indicators? Uploading a large repo looks idle for a long
92 * Add more progress indicators? Uploading a large repo looks idle for a long
96 time while it scans for blobs in each outgoing revision.
93 time while it scans for blobs in each outgoing revision.
97
94
98 * Print filenames instead of hashes in error messages
95 * Print filenames instead of hashes in error messages
99
96
100 * subrepo aware paths, where necessary
97 * subrepo aware paths, where necessary
101
98
102 * Is existing output at the right status/note/debug level?
99 * Is existing output at the right status/note/debug level?
103
100
104 #. Can `verify` be done without downloading everything?
101 #. Can `verify` be done without downloading everything?
105
102
106 * If we know that we are talking to an hg server, we can leverage the fact
103 * If we know that we are talking to an hg server, we can leverage the fact
107 that it validates in the Batch API portion, and skip d/l altogether. OTOH,
104 that it validates in the Batch API portion, and skip d/l altogether. OTOH,
108 maybe we should download the files unconditionally for forensics. The
105 maybe we should download the files unconditionally for forensics. The
109 alternative is to define a custom transfer handler that definitively
106 alternative is to define a custom transfer handler that definitively
110 verifies without transferring, and then cache those results. When verify
107 verifies without transferring, and then cache those results. When verify
111 comes looking, look in the cache instead of actually opening the file and
108 comes looking, look in the cache instead of actually opening the file and
112 processing it.
109 processing it.
113
110
114 * Yuya has concerns about when blob fetch takes place vs when revlog is
111 * Yuya has concerns about when blob fetch takes place vs when revlog is
115 verified. Since the visible hash matches the blob content, I don't think
112 verified. Since the visible hash matches the blob content, I don't think
116 there's a way to verify the pointer file that's actually stored in the
113 there's a way to verify the pointer file that's actually stored in the
117 filelog (other than basic JSON checks). Full verification requires the
114 filelog (other than basic JSON checks). Full verification requires the
118 blob. See
115 blob. See
119 https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-April/116133.html
116 https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-April/116133.html
120
117
121 * Opening a corrupt pointer file aborts. It probably shouldn't for verify.
118 * Opening a corrupt pointer file aborts. It probably shouldn't for verify.
122
119
123
120
124 Future ideas/features/polishing
121 Future ideas/features/polishing
125 -------------------------------
122 -------------------------------
126
123
127 These aren't in any particular order, and are things that don't have obvious BC
124 These aren't in any particular order, and are things that don't have obvious BC
128 concerns.
125 concerns.
129
126
130 #. Garbage collection `(issue5790) <https://bz.mercurial-scm.org/show_bug.cgi?id=5790>`_
127 #. Garbage collection `(issue5790) <https://bz.mercurial-scm.org/show_bug.cgi?id=5790>`_
131
128
132 * This gets complicated because of the global cache, which may or may not
129 * This gets complicated because of the global cache, which may or may not
133 consist of hardlinks to the repo, and may be in use by other repos. (So
130 consist of hardlinks to the repo, and may be in use by other repos. (So
134 the gc may be pointless.)
131 the gc may be pointless.)
135
132
136 #. `Compress blobs <https://github.com/git-lfs/git-lfs/issues/260>`_
133 #. `Compress blobs <https://github.com/git-lfs/git-lfs/issues/260>`_
137
134
138 * 700MB repo becomes 2.5GB with all lfs blobs
135 * 700MB repo becomes 2.5GB with all lfs blobs
139 * What implications are there for filesystem paths that don't indicate
136 * What implications are there for filesystem paths that don't indicate
140 compression? (i.e. how to share with global cache and other local repos?)
137 compression? (i.e. how to share with global cache and other local repos?)
141 * Probably needs to be stored under `.hg/store/lfs/zstd`, with a repo
138 * Probably needs to be stored under `.hg/store/lfs/zstd`, with a repo
142 requirement.
139 requirement.
143 * Allow tuneable compression type and settings?
140 * Allow tuneable compression type and settings?
144 * Support compression over the wire if both sides understand the compression?
141 * Support compression over the wire if both sides understand the compression?
145 * `debugupgraderepo` to convert?
142 * `debugupgraderepo` to convert?
146 * Probably not worth supporting compressed and uncompressed concurrently
143 * Probably not worth supporting compressed and uncompressed concurrently
147
144
148 #. Determine things to upload with `readfast()
145 #. Determine things to upload with `readfast()
149 <https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-August/121315.html>`_
146 <https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-August/121315.html>`_
150
147
151 * Significantly faster when pushing an entire large repo to http.
148 * Significantly faster when pushing an entire large repo to http.
152 * Causes test changes to fileset and templates; may need both this and
149 * Causes test changes to fileset and templates; may need both this and
153 current methods of lookup.
150 current methods of lookup.
154
151
155 #. Is a command to download everything needed? This would allow copying the
152 #. Is a command to download everything needed? This would allow copying the
156 whole to a portable drive. Currently this can be effected by running
153 whole to a portable drive. Currently this can be effected by running
157 `hg verify`.
154 `hg verify`.
158
155
159 #. Stop reading in entire file into one buffer when passing through filelog
156 #. Stop reading in entire file into one buffer when passing through filelog
160 interface
157 interface
161
158
162 * `Requires major replumbing to core
159 * `Requires major replumbing to core
163 <https://www.mercurial-scm.org/wiki/HandlingLargeFiles>`_
160 <https://www.mercurial-scm.org/wiki/HandlingLargeFiles>`_
164
161
165 #. Keep corrupt files around in 'store/lfs/incoming' for forensics?
162 #. Keep corrupt files around in 'store/lfs/incoming' for forensics?
166
163
167 * Files should be downloaded to 'incoming', and moved to normal location when
164 * Files should be downloaded to 'incoming', and moved to normal location when
168 done.
165 done.
169
166
170 #. Client side path enhancements
167 #. Client side path enhancements
171
168
172 * Support paths.default:lfs = ... style paths
169 * Support paths.default:lfs = ... style paths
173 * SSH -> https server inference
170 * SSH -> https server inference
174
171
175 * https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-April/115416.html
172 * https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-April/115416.html
176 * https://github.com/git-lfs/git-lfs/blob/master/docs/api/server-discovery.md#guessing-the-server
173 * https://github.com/git-lfs/git-lfs/blob/master/docs/api/server-discovery.md#guessing-the-server
177
174
178 #. Server enhancements
175 #. Server enhancements
179
176
180 * Add support for transfer quotas?
177 * Add support for transfer quotas?
181 * Download should be able to send the file in chunks, without reading the
178 * Download should be able to send the file in chunks, without reading the
182 whole thing into memory (see the sketch after this list)
179 whole thing into memory (see the sketch after this list)
183 (https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-March/114584.html)
180 (https://www.mercurial-scm.org/pipermail/mercurial-devel/2018-March/114584.html)
184 * Support for resuming transfers
181 * Support for resuming transfers
185
182
186 #. Handle 3rd party server storage.
183 #. Handle 3rd party server storage.
187
184
188 * Teach client to handle lfs `verify` action. This is needed after the
185 * Teach client to handle lfs `verify` action. This is needed after the
189 server instructs the client to upload the file to another server, in order
186 server instructs the client to upload the file to another server, in order
190 to tell the server that the upload completed.
187 to tell the server that the upload completed.
191 * Teach the server to send redirects if configured, and process `verify`
188 * Teach the server to send redirects if configured, and process `verify`
192 requests.
189 requests.
193
190
194 #. `Is any hg-git work needed
191 #. `Is any hg-git work needed
195 <https://groups.google.com/d/msg/hg-git/XYNQuudteeM/ivt8gXoZAAAJ>`_?
192 <https://groups.google.com/d/msg/hg-git/XYNQuudteeM/ivt8gXoZAAAJ>`_?
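The "Server enhancements" item above asks for chunked downloads. As a minimal sketch of that idea (not the ``wireprotolfsserver`` implementation; the chunk size is an arbitrary assumption), a generator can stream the blob instead of reading it into one buffer, and a WSGI application can return such an iterator directly so the response is sent piecewise::

    def iter_blob_chunks(path, chunksize=1024 * 1024):
        """Yield the blob at ``path`` in fixed-size pieces."""
        with open(path, 'rb') as fp:
            while True:
                data = fp.read(chunksize)
                if not data:
                    break
                yield data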
@@ -1,426 +1,426 @@
1 # lfs - hash-preserving large file support using Git-LFS protocol
1 # lfs - hash-preserving large file support using Git-LFS protocol
2 #
2 #
3 # Copyright 2017 Facebook, Inc.
3 # Copyright 2017 Facebook, Inc.
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 """lfs - large file support (EXPERIMENTAL)
8 """lfs - large file support (EXPERIMENTAL)
9
9
10 This extension allows large files to be tracked outside of the normal
10 This extension allows large files to be tracked outside of the normal
11 repository storage and stored on a centralized server, similar to the
11 repository storage and stored on a centralized server, similar to the
12 ``largefiles`` extension. The ``git-lfs`` protocol is used when
12 ``largefiles`` extension. The ``git-lfs`` protocol is used when
13 communicating with the server, so existing git infrastructure can be
13 communicating with the server, so existing git infrastructure can be
14 harnessed. Even though the files are stored outside of the repository,
14 harnessed. Even though the files are stored outside of the repository,
15 they are still integrity checked in the same manner as normal files.
15 they are still integrity checked in the same manner as normal files.
16
16
17 The files stored outside of the repository are downloaded on demand,
17 The files stored outside of the repository are downloaded on demand,
18 which reduces the time to clone, and possibly the local disk usage.
18 which reduces the time to clone, and possibly the local disk usage.
19 This changes fundamental workflows in a DVCS, so careful thought
19 This changes fundamental workflows in a DVCS, so careful thought
20 should be given before deploying it. :hg:`convert` can be used to
20 should be given before deploying it. :hg:`convert` can be used to
21 convert LFS repositories to normal repositories that no longer
21 convert LFS repositories to normal repositories that no longer
22 require this extension, and do so without changing the commit hashes.
22 require this extension, and do so without changing the commit hashes.
23 This allows the extension to be disabled if the centralized workflow
23 This allows the extension to be disabled if the centralized workflow
24 becomes burdensome. However, the pre and post convert clones will
24 becomes burdensome. However, the pre and post convert clones will
25 not be able to communicate with each other unless the extension is
25 not be able to communicate with each other unless the extension is
26 enabled on both.
26 enabled on both.
27
27
28 To start a new repository, or to add LFS files to an existing one, just
28 To start a new repository, or to add LFS files to an existing one, just
29 create an ``.hglfs`` file as described below in the root directory of
29 create an ``.hglfs`` file as described below in the root directory of
30 the repository. Typically, this file should be put under version
30 the repository. Typically, this file should be put under version
31 control, so that the settings will propagate to other repositories with
31 control, so that the settings will propagate to other repositories with
32 push and pull. During any commit, Mercurial will consult this file to
32 push and pull. During any commit, Mercurial will consult this file to
33 determine if an added or modified file should be stored externally. The
33 determine if an added or modified file should be stored externally. The
34 type of storage depends on the characteristics of the file at each
34 type of storage depends on the characteristics of the file at each
35 commit. A file that is near a size threshold may switch back and forth
35 commit. A file that is near a size threshold may switch back and forth
36 between LFS and normal storage, as needed.
36 between LFS and normal storage, as needed.
37
37
38 Alternately, both normal repositories and largefile controlled
38 Alternately, both normal repositories and largefile controlled
39 repositories can be converted to LFS by using :hg:`convert` and the
39 repositories can be converted to LFS by using :hg:`convert` and the
40 ``lfs.track`` config option described below. The ``.hglfs`` file
40 ``lfs.track`` config option described below. The ``.hglfs`` file
41 should then be created and added, to control subsequent LFS selection.
41 should then be created and added, to control subsequent LFS selection.
42 The hashes are also unchanged in this case. The LFS and non-LFS
42 The hashes are also unchanged in this case. The LFS and non-LFS
43 repositories can be distinguished because the LFS repository will
43 repositories can be distinguished because the LFS repository will
44 abort any command if this extension is disabled.
44 abort any command if this extension is disabled.
45
45
46 Committed LFS files are held locally, until the repository is pushed.
46 Committed LFS files are held locally, until the repository is pushed.
47 Prior to pushing the normal repository data, the LFS files that are
47 Prior to pushing the normal repository data, the LFS files that are
48 tracked by the outgoing commits are automatically uploaded to the
48 tracked by the outgoing commits are automatically uploaded to the
49 configured central server. No LFS files are transferred on
49 configured central server. No LFS files are transferred on
50 :hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
50 :hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
51 demand as they need to be read, if a cached copy cannot be found
51 demand as they need to be read, if a cached copy cannot be found
52 locally. Both committing and downloading an LFS file will link the
52 locally. Both committing and downloading an LFS file will link the
53 file to a usercache, to speed up future access. See the `usercache`
53 file to a usercache, to speed up future access. See the `usercache`
54 config setting described below.
54 config setting described below.
55
55
56 The extension reads its configuration from a versioned ``.hglfs``
56 The extension reads its configuration from a versioned ``.hglfs``
57 configuration file found in the root of the working directory. The
57 configuration file found in the root of the working directory. The
58 ``.hglfs`` file uses the same syntax as all other Mercurial
58 ``.hglfs`` file uses the same syntax as all other Mercurial
59 configuration files. It uses a single section, ``[track]``.
59 configuration files. It uses a single section, ``[track]``.
60
60
61 The ``[track]`` section specifies which files are stored as LFS (or
61 The ``[track]`` section specifies which files are stored as LFS (or
62 not). Each line is keyed by a file pattern, with a predicate value.
62 not). Each line is keyed by a file pattern, with a predicate value.
63 The first file pattern match is used, so put more specific patterns
63 The first file pattern match is used, so put more specific patterns
64 first. The available predicates are ``all()``, ``none()``, and
64 first. The available predicates are ``all()``, ``none()``, and
65 ``size()``. See "hg help filesets.size" for the latter.
65 ``size()``. See "hg help filesets.size" for the latter.
66
66
67 Example versioned ``.hglfs`` file::
67 Example versioned ``.hglfs`` file::
68
68
69 [track]
69 [track]
70 # No Makefile or python file, anywhere, will be LFS
70 # No Makefile or python file, anywhere, will be LFS
71 **Makefile = none()
71 **Makefile = none()
72 **.py = none()
72 **.py = none()
73
73
74 **.zip = all()
74 **.zip = all()
75 **.exe = size(">1MB")
75 **.exe = size(">1MB")
76
76
77 # Catchall for everything not matched above
77 # Catchall for everything not matched above
78 ** = size(">10MB")
78 ** = size(">10MB")
79
79
80 Configs::
80 Configs::
81
81
82 [lfs]
82 [lfs]
83 # Remote endpoint. Multiple protocols are supported:
83 # Remote endpoint. Multiple protocols are supported:
84 # - http(s)://user:pass@example.com/path
84 # - http(s)://user:pass@example.com/path
85 # git-lfs endpoint
85 # git-lfs endpoint
86 # - file:///tmp/path
86 # - file:///tmp/path
87 # local filesystem, usually for testing
87 # local filesystem, usually for testing
88 # if unset, lfs will assume the remote repository also handles blob storage
88 # if unset, lfs will assume the remote repository also handles blob storage
89 # for http(s) URLs. Otherwise, lfs will prompt to set this when it must
89 # for http(s) URLs. Otherwise, lfs will prompt to set this when it must
90 # use this value.
90 # use this value.
91 # (default: unset)
91 # (default: unset)
92 url = https://example.com/repo.git/info/lfs
92 url = https://example.com/repo.git/info/lfs
93
93
94 # Which files to track in LFS. Path tests are "**.extname" for file
94 # Which files to track in LFS. Path tests are "**.extname" for file
95 # extensions, and "path:under/some/directory" for path prefix. Both
95 # extensions, and "path:under/some/directory" for path prefix. Both
96 # are relative to the repository root.
96 # are relative to the repository root.
97 # File size can be tested with the "size()" fileset, and tests can be
97 # File size can be tested with the "size()" fileset, and tests can be
98 # joined with fileset operators. (See "hg help filesets.operators".)
98 # joined with fileset operators. (See "hg help filesets.operators".)
99 #
99 #
100 # Some examples:
100 # Some examples:
101 # - all() # everything
101 # - all() # everything
102 # - none() # nothing
102 # - none() # nothing
103 # - size(">20MB") # larger than 20MB
103 # - size(">20MB") # larger than 20MB
104 # - !**.txt # anything not a *.txt file
104 # - !**.txt # anything not a *.txt file
105 # - **.zip | **.tar.gz | **.7z # some types of compressed files
105 # - **.zip | **.tar.gz | **.7z # some types of compressed files
106 # - path:bin # files under "bin" in the project root
106 # - path:bin # files under "bin" in the project root
107 # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
107 # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
108 # | (path:bin & !path:/bin/README) | size(">1GB")
108 # | (path:bin & !path:/bin/README) | size(">1GB")
109 # (default: none())
109 # (default: none())
110 #
110 #
111 # This is ignored if there is a tracked '.hglfs' file, and this setting
111 # This is ignored if there is a tracked '.hglfs' file, and this setting
112 # will eventually be deprecated and removed.
112 # will eventually be deprecated and removed.
113 track = size(">10M")
113 track = size(">10M")
114
114
115 # how many times to retry before giving up on transferring an object
115 # how many times to retry before giving up on transferring an object
116 retry = 5
116 retry = 5
117
117
118 # the local directory to store lfs files for sharing across local clones.
118 # the local directory to store lfs files for sharing across local clones.
119 # If not set, the cache is located in an OS specific cache location.
119 # If not set, the cache is located in an OS specific cache location.
120 usercache = /path/to/global/cache
120 usercache = /path/to/global/cache
121 """
121 """
122
122
123 from __future__ import absolute_import
123 from __future__ import absolute_import
124
124
125 import sys
125 import sys
126
126
127 from mercurial.i18n import _
127 from mercurial.i18n import _
128
128
129 from mercurial import (
129 from mercurial import (
130 config,
130 config,
131 context,
131 context,
132 error,
132 error,
133 exchange,
133 exchange,
134 extensions,
134 extensions,
135 exthelper,
135 exthelper,
136 filelog,
136 filelog,
137 filesetlang,
137 filesetlang,
138 localrepo,
138 localrepo,
139 minifileset,
139 minifileset,
140 node,
140 node,
141 pycompat,
141 pycompat,
142 revlog,
142 revlog,
143 scmutil,
143 scmutil,
144 templateutil,
144 templateutil,
145 util,
145 util,
146 )
146 )
147
147
148 from mercurial.interfaces import repository
148 from mercurial.interfaces import repository
149
149
150 from . import (
150 from . import (
151 blobstore,
151 blobstore,
152 wireprotolfsserver,
152 wireprotolfsserver,
153 wrapper,
153 wrapper,
154 )
154 )
155
155
156 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
156 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
157 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
157 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
158 # be specifying the version(s) of Mercurial they are tested with, or
158 # be specifying the version(s) of Mercurial they are tested with, or
159 # leave the attribute unspecified.
159 # leave the attribute unspecified.
160 testedwith = b'ships-with-hg-core'
160 testedwith = b'ships-with-hg-core'
161
161
162 eh = exthelper.exthelper()
162 eh = exthelper.exthelper()
163 eh.merge(wrapper.eh)
163 eh.merge(wrapper.eh)
164 eh.merge(wireprotolfsserver.eh)
164 eh.merge(wireprotolfsserver.eh)
165
165
166 cmdtable = eh.cmdtable
166 cmdtable = eh.cmdtable
167 configtable = eh.configtable
167 configtable = eh.configtable
168 extsetup = eh.finalextsetup
168 extsetup = eh.finalextsetup
169 uisetup = eh.finaluisetup
169 uisetup = eh.finaluisetup
170 filesetpredicate = eh.filesetpredicate
170 filesetpredicate = eh.filesetpredicate
171 reposetup = eh.finalreposetup
171 reposetup = eh.finalreposetup
172 templatekeyword = eh.templatekeyword
172 templatekeyword = eh.templatekeyword
173
173
174 eh.configitem(
174 eh.configitem(
175 b'experimental', b'lfs.serve', default=True,
175 b'experimental', b'lfs.serve', default=True,
176 )
176 )
177 eh.configitem(
177 eh.configitem(
178 b'experimental', b'lfs.user-agent', default=None,
178 b'experimental', b'lfs.user-agent', default=None,
179 )
179 )
180 eh.configitem(
180 eh.configitem(
181 b'experimental', b'lfs.disableusercache', default=False,
181 b'experimental', b'lfs.disableusercache', default=False,
182 )
182 )
183 eh.configitem(
183 eh.configitem(
184 b'experimental', b'lfs.worker-enable', default=False,
184 b'experimental', b'lfs.worker-enable', default=True,
185 )
185 )
186
186
187 eh.configitem(
187 eh.configitem(
188 b'lfs', b'url', default=None,
188 b'lfs', b'url', default=None,
189 )
189 )
190 eh.configitem(
190 eh.configitem(
191 b'lfs', b'usercache', default=None,
191 b'lfs', b'usercache', default=None,
192 )
192 )
193 # Deprecated
193 # Deprecated
194 eh.configitem(
194 eh.configitem(
195 b'lfs', b'threshold', default=None,
195 b'lfs', b'threshold', default=None,
196 )
196 )
197 eh.configitem(
197 eh.configitem(
198 b'lfs', b'track', default=b'none()',
198 b'lfs', b'track', default=b'none()',
199 )
199 )
200 eh.configitem(
200 eh.configitem(
201 b'lfs', b'retry', default=5,
201 b'lfs', b'retry', default=5,
202 )
202 )
203
203
204 lfsprocessor = (
204 lfsprocessor = (
205 wrapper.readfromstore,
205 wrapper.readfromstore,
206 wrapper.writetostore,
206 wrapper.writetostore,
207 wrapper.bypasscheckhash,
207 wrapper.bypasscheckhash,
208 )
208 )
209
209
210
210
211 def featuresetup(ui, supported):
211 def featuresetup(ui, supported):
212 # don't die on seeing a repo with the lfs requirement
212 # don't die on seeing a repo with the lfs requirement
213 supported |= {b'lfs'}
213 supported |= {b'lfs'}
214
214
215
215
216 @eh.uisetup
216 @eh.uisetup
217 def _uisetup(ui):
217 def _uisetup(ui):
218 localrepo.featuresetupfuncs.add(featuresetup)
218 localrepo.featuresetupfuncs.add(featuresetup)
219
219
220
220
221 @eh.reposetup
221 @eh.reposetup
222 def _reposetup(ui, repo):
222 def _reposetup(ui, repo):
223 # Nothing to do with a remote repo
223 # Nothing to do with a remote repo
224 if not repo.local():
224 if not repo.local():
225 return
225 return
226
226
227 repo.svfs.lfslocalblobstore = blobstore.local(repo)
227 repo.svfs.lfslocalblobstore = blobstore.local(repo)
228 repo.svfs.lfsremoteblobstore = blobstore.remote(repo)
228 repo.svfs.lfsremoteblobstore = blobstore.remote(repo)
229
229
230 class lfsrepo(repo.__class__):
230 class lfsrepo(repo.__class__):
231 @localrepo.unfilteredmethod
231 @localrepo.unfilteredmethod
232 def commitctx(self, ctx, error=False, origctx=None):
232 def commitctx(self, ctx, error=False, origctx=None):
233 repo.svfs.options[b'lfstrack'] = _trackedmatcher(self)
233 repo.svfs.options[b'lfstrack'] = _trackedmatcher(self)
234 return super(lfsrepo, self).commitctx(ctx, error, origctx=origctx)
234 return super(lfsrepo, self).commitctx(ctx, error, origctx=origctx)
235
235
236 repo.__class__ = lfsrepo
236 repo.__class__ = lfsrepo
237
237
238 if b'lfs' not in repo.requirements:
238 if b'lfs' not in repo.requirements:
239
239
240 def checkrequireslfs(ui, repo, **kwargs):
240 def checkrequireslfs(ui, repo, **kwargs):
241 if b'lfs' in repo.requirements:
241 if b'lfs' in repo.requirements:
242 return 0
242 return 0
243
243
244 last = kwargs.get('node_last')
244 last = kwargs.get('node_last')
245 _bin = node.bin
245 _bin = node.bin
246 if last:
246 if last:
247 s = repo.set(b'%n:%n', _bin(kwargs['node']), _bin(last))
247 s = repo.set(b'%n:%n', _bin(kwargs['node']), _bin(last))
248 else:
248 else:
249 s = repo.set(b'%n', _bin(kwargs['node']))
249 s = repo.set(b'%n', _bin(kwargs['node']))
250 match = repo._storenarrowmatch
250 match = repo._storenarrowmatch
251 for ctx in s:
251 for ctx in s:
252 # TODO: is there a way to just walk the files in the commit?
252 # TODO: is there a way to just walk the files in the commit?
253 if any(
253 if any(
254 ctx[f].islfs() for f in ctx.files() if f in ctx and match(f)
254 ctx[f].islfs() for f in ctx.files() if f in ctx and match(f)
255 ):
255 ):
256 repo.requirements.add(b'lfs')
256 repo.requirements.add(b'lfs')
257 repo.features.add(repository.REPO_FEATURE_LFS)
257 repo.features.add(repository.REPO_FEATURE_LFS)
258 repo._writerequirements()
258 repo._writerequirements()
259 repo.prepushoutgoinghooks.add(b'lfs', wrapper.prepush)
259 repo.prepushoutgoinghooks.add(b'lfs', wrapper.prepush)
260 break
260 break
261
261
262 ui.setconfig(b'hooks', b'commit.lfs', checkrequireslfs, b'lfs')
262 ui.setconfig(b'hooks', b'commit.lfs', checkrequireslfs, b'lfs')
263 ui.setconfig(
263 ui.setconfig(
264 b'hooks', b'pretxnchangegroup.lfs', checkrequireslfs, b'lfs'
264 b'hooks', b'pretxnchangegroup.lfs', checkrequireslfs, b'lfs'
265 )
265 )
266 else:
266 else:
267 repo.prepushoutgoinghooks.add(b'lfs', wrapper.prepush)
267 repo.prepushoutgoinghooks.add(b'lfs', wrapper.prepush)
268
268
269
269
270 def _trackedmatcher(repo):
270 def _trackedmatcher(repo):
271 """Return a function (path, size) -> bool indicating whether or not to
271 """Return a function (path, size) -> bool indicating whether or not to
272 track a given file with lfs."""
272 track a given file with lfs."""
273 if not repo.wvfs.exists(b'.hglfs'):
273 if not repo.wvfs.exists(b'.hglfs'):
274 # No '.hglfs' in wdir. Fallback to config for now.
274 # No '.hglfs' in wdir. Fallback to config for now.
275 trackspec = repo.ui.config(b'lfs', b'track')
275 trackspec = repo.ui.config(b'lfs', b'track')
276
276
277 # deprecated config: lfs.threshold
277 # deprecated config: lfs.threshold
278 threshold = repo.ui.configbytes(b'lfs', b'threshold')
278 threshold = repo.ui.configbytes(b'lfs', b'threshold')
279 if threshold:
279 if threshold:
280 filesetlang.parse(trackspec) # make sure syntax errors are confined
280 filesetlang.parse(trackspec) # make sure syntax errors are confined
281 trackspec = b"(%s) | size('>%d')" % (trackspec, threshold)
281 trackspec = b"(%s) | size('>%d')" % (trackspec, threshold)
282
282
283 return minifileset.compile(trackspec)
283 return minifileset.compile(trackspec)
284
284
285 data = repo.wvfs.tryread(b'.hglfs')
285 data = repo.wvfs.tryread(b'.hglfs')
286 if not data:
286 if not data:
287 return lambda p, s: False
287 return lambda p, s: False
288
288
289 # Parse errors here will abort with a message that points to the .hglfs file
289 # Parse errors here will abort with a message that points to the .hglfs file
290 # and line number.
290 # and line number.
291 cfg = config.config()
291 cfg = config.config()
292 cfg.parse(b'.hglfs', data)
292 cfg.parse(b'.hglfs', data)
293
293
294 try:
294 try:
295 rules = [
295 rules = [
296 (minifileset.compile(pattern), minifileset.compile(rule))
296 (minifileset.compile(pattern), minifileset.compile(rule))
297 for pattern, rule in cfg.items(b'track')
297 for pattern, rule in cfg.items(b'track')
298 ]
298 ]
299 except error.ParseError as e:
299 except error.ParseError as e:
300 # The original exception gives no indicator that the error is in the
300 # The original exception gives no indicator that the error is in the
301 # .hglfs file, so add that.
301 # .hglfs file, so add that.
302
302
303 # TODO: See if the line number of the file can be made available.
303 # TODO: See if the line number of the file can be made available.
304 raise error.Abort(_(b'parse error in .hglfs: %s') % e)
304 raise error.Abort(_(b'parse error in .hglfs: %s') % e)
305
305
306 def _match(path, size):
306 def _match(path, size):
307 for pat, rule in rules:
307 for pat, rule in rules:
308 if pat(path, size):
308 if pat(path, size):
309 return rule(path, size)
309 return rule(path, size)
310
310
311 return False
311 return False
312
312
313 return _match
313 return _match
314
314
315
315
316 # Called by remotefilelog
316 # Called by remotefilelog
317 def wrapfilelog(filelog):
317 def wrapfilelog(filelog):
318 wrapfunction = extensions.wrapfunction
318 wrapfunction = extensions.wrapfunction
319
319
320 wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
320 wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
321 wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
321 wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
322 wrapfunction(filelog, 'size', wrapper.filelogsize)
322 wrapfunction(filelog, 'size', wrapper.filelogsize)
323
323
324
324
325 @eh.wrapfunction(localrepo, b'resolverevlogstorevfsoptions')
325 @eh.wrapfunction(localrepo, b'resolverevlogstorevfsoptions')
326 def _resolverevlogstorevfsoptions(orig, ui, requirements, features):
326 def _resolverevlogstorevfsoptions(orig, ui, requirements, features):
327 opts = orig(ui, requirements, features)
327 opts = orig(ui, requirements, features)
328 for name, module in extensions.extensions(ui):
328 for name, module in extensions.extensions(ui):
329 if module is sys.modules[__name__]:
329 if module is sys.modules[__name__]:
330 if revlog.REVIDX_EXTSTORED in opts[b'flagprocessors']:
330 if revlog.REVIDX_EXTSTORED in opts[b'flagprocessors']:
331 msg = (
331 msg = (
332 _(b"cannot register multiple processors on flag '%#x'.")
332 _(b"cannot register multiple processors on flag '%#x'.")
333 % revlog.REVIDX_EXTSTORED
333 % revlog.REVIDX_EXTSTORED
334 )
334 )
335 raise error.Abort(msg)
335 raise error.Abort(msg)
336
336
337 opts[b'flagprocessors'][revlog.REVIDX_EXTSTORED] = lfsprocessor
337 opts[b'flagprocessors'][revlog.REVIDX_EXTSTORED] = lfsprocessor
338 break
338 break
339
339
340 return opts
340 return opts
341
341
342
342
343 @eh.extsetup
343 @eh.extsetup
344 def _extsetup(ui):
344 def _extsetup(ui):
345 wrapfilelog(filelog.filelog)
345 wrapfilelog(filelog.filelog)
346
346
347 context.basefilectx.islfs = wrapper.filectxislfs
347 context.basefilectx.islfs = wrapper.filectxislfs
348
348
349 scmutil.fileprefetchhooks.add(b'lfs', wrapper._prefetchfiles)
349 scmutil.fileprefetchhooks.add(b'lfs', wrapper._prefetchfiles)
350
350
351 # Make bundle choose changegroup3 instead of changegroup2. This affects
351 # Make bundle choose changegroup3 instead of changegroup2. This affects
352 # "hg bundle" command. Note: it does not cover all bundle formats like
352 # "hg bundle" command. Note: it does not cover all bundle formats like
353 # "packed1". Using "packed1" with lfs will likely cause trouble.
353 # "packed1". Using "packed1" with lfs will likely cause trouble.
354 exchange._bundlespeccontentopts[b"v2"][b"cg.version"] = b"03"
354 exchange._bundlespeccontentopts[b"v2"][b"cg.version"] = b"03"
355
355
356
356
357 @eh.filesetpredicate(b'lfs()')
357 @eh.filesetpredicate(b'lfs()')
358 def lfsfileset(mctx, x):
358 def lfsfileset(mctx, x):
359 """File that uses LFS storage."""
359 """File that uses LFS storage."""
360 # i18n: "lfs" is a keyword
360 # i18n: "lfs" is a keyword
361 filesetlang.getargs(x, 0, 0, _(b"lfs takes no arguments"))
361 filesetlang.getargs(x, 0, 0, _(b"lfs takes no arguments"))
362 ctx = mctx.ctx
362 ctx = mctx.ctx
363
363
364 def lfsfilep(f):
364 def lfsfilep(f):
365 return wrapper.pointerfromctx(ctx, f, removed=True) is not None
365 return wrapper.pointerfromctx(ctx, f, removed=True) is not None
366
366
367 return mctx.predicate(lfsfilep, predrepr=b'<lfs>')
367 return mctx.predicate(lfsfilep, predrepr=b'<lfs>')
368
368
369
369
370 @eh.templatekeyword(b'lfs_files', requires={b'ctx'})
370 @eh.templatekeyword(b'lfs_files', requires={b'ctx'})
371 def lfsfiles(context, mapping):
371 def lfsfiles(context, mapping):
372 """List of strings. All files modified, added, or removed by this
372 """List of strings. All files modified, added, or removed by this
373 changeset."""
373 changeset."""
374 ctx = context.resource(mapping, b'ctx')
374 ctx = context.resource(mapping, b'ctx')
375
375
376 pointers = wrapper.pointersfromctx(ctx, removed=True) # {path: pointer}
376 pointers = wrapper.pointersfromctx(ctx, removed=True) # {path: pointer}
377 files = sorted(pointers.keys())
377 files = sorted(pointers.keys())
378
378
379 def pointer(v):
379 def pointer(v):
380 # In the file spec, version is first and the other keys are sorted.
380 # In the file spec, version is first and the other keys are sorted.
381 sortkeyfunc = lambda x: (x[0] != b'version', x)
381 sortkeyfunc = lambda x: (x[0] != b'version', x)
382 items = sorted(pycompat.iteritems(pointers[v]), key=sortkeyfunc)
382 items = sorted(pycompat.iteritems(pointers[v]), key=sortkeyfunc)
383 return util.sortdict(items)
383 return util.sortdict(items)
384
384
385 makemap = lambda v: {
385 makemap = lambda v: {
386 b'file': v,
386 b'file': v,
387 b'lfsoid': pointers[v].oid() if pointers[v] else None,
387 b'lfsoid': pointers[v].oid() if pointers[v] else None,
388 b'lfspointer': templateutil.hybriddict(pointer(v)),
388 b'lfspointer': templateutil.hybriddict(pointer(v)),
389 }
389 }
390
390
391 # TODO: make the separator ', '?
391 # TODO: make the separator ', '?
392 f = templateutil._showcompatlist(context, mapping, b'lfs_file', files)
392 f = templateutil._showcompatlist(context, mapping, b'lfs_file', files)
393 return templateutil.hybrid(f, files, makemap, pycompat.identity)
393 return templateutil.hybrid(f, files, makemap, pycompat.identity)
394
394
395
395
396 @eh.command(
396 @eh.command(
397 b'debuglfsupload',
397 b'debuglfsupload',
398 [(b'r', b'rev', [], _(b'upload large files introduced by REV'))],
398 [(b'r', b'rev', [], _(b'upload large files introduced by REV'))],
399 )
399 )
400 def debuglfsupload(ui, repo, **opts):
400 def debuglfsupload(ui, repo, **opts):
401 """upload lfs blobs added by the working copy parent or given revisions"""
401 """upload lfs blobs added by the working copy parent or given revisions"""
402 revs = opts.get('rev', [])
402 revs = opts.get('rev', [])
403 pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
403 pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
404 wrapper.uploadblobs(repo, pointers)
404 wrapper.uploadblobs(repo, pointers)
405
405
406
406
407 @eh.wrapcommand(
407 @eh.wrapcommand(
408 b'verify',
408 b'verify',
409 opts=[(b'', b'no-lfs', None, _(b'skip missing lfs blob content'))],
409 opts=[(b'', b'no-lfs', None, _(b'skip missing lfs blob content'))],
410 )
410 )
411 def verify(orig, ui, repo, **opts):
411 def verify(orig, ui, repo, **opts):
412 skipflags = repo.ui.configint(b'verify', b'skipflags')
412 skipflags = repo.ui.configint(b'verify', b'skipflags')
413 no_lfs = opts.pop('no_lfs')
413 no_lfs = opts.pop('no_lfs')
414
414
415 if skipflags:
415 if skipflags:
416 # --lfs overrides the config bit, if set.
416 # --lfs overrides the config bit, if set.
417 if no_lfs is False:
417 if no_lfs is False:
418 skipflags &= ~repository.REVISION_FLAG_EXTSTORED
418 skipflags &= ~repository.REVISION_FLAG_EXTSTORED
419 else:
419 else:
420 skipflags = 0
420 skipflags = 0
421
421
422 if no_lfs is True:
422 if no_lfs is True:
423 skipflags |= repository.REVISION_FLAG_EXTSTORED
423 skipflags |= repository.REVISION_FLAG_EXTSTORED
424
424
425 with ui.configoverride({(b'verify', b'skipflags'): skipflags}):
425 with ui.configoverride({(b'verify', b'skipflags'): skipflags}):
426 return orig(ui, repo, **opts)
426 return orig(ui, repo, **opts)
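For reference, a self-contained illustration of the bitmask interaction between ``verify.skipflags`` and ``--no-lfs`` handled by the wrapper above; the flag value is a stand-in, not the actual ``repository.REVISION_FLAG_EXTSTORED`` constant::

    EXTSTORED = 1 << 13  # assumed placeholder for REVISION_FLAG_EXTSTORED


    def effective_skipflags(skipflags, no_lfs):
        """Mirror the wrapper's logic: --no-lfs overrides verify.skipflags."""
        if skipflags:
            if no_lfs is False:          # --no-lfs explicitly negated
                skipflags &= ~EXTSTORED  # re-check lfs revisions
        else:
            skipflags = 0                # config unset or zero
        if no_lfs is True:               # --no-lfs given
            skipflags |= EXTSTORED       # skip missing lfs blob content
        return skipflags


    assert effective_skipflags(0, True) == EXTSTORED
    assert effective_skipflags(EXTSTORED, False) == 0
    assert effective_skipflags(0, None) == 0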
@@ -1,512 +1,513 @@
1 #require serve no-reposimplestore no-chg
1 #require serve no-reposimplestore no-chg
2
2
3 $ cat >> $HGRCPATH <<EOF
3 $ cat >> $HGRCPATH <<EOF
4 > [extensions]
4 > [extensions]
5 > lfs=
5 > lfs=
6 > [lfs]
6 > [lfs]
7 > track=all()
7 > track=all()
8 > [web]
8 > [web]
9 > push_ssl = False
9 > push_ssl = False
10 > allow-push = *
10 > allow-push = *
11 > EOF
11 > EOF
12
12
13 Serving LFS files can experimentally be turned off. The long term solution is
13 Serving LFS files can experimentally be turned off. The long term solution is
14 to support the 'verify' action in both client and server, so that the server can
14 to support the 'verify' action in both client and server, so that the server can
15 tell the client to store files elsewhere.
15 tell the client to store files elsewhere.
16
16
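The long-term 'verify' support mentioned above corresponds to the git-lfs Batch API's optional ``verify`` action. Roughly, a response that delegates storage and asks the client to report back would look like the following, shown here as a Python dict; the hosts and hrefs are made up for illustration, while the oid and size are the ones used in this test::

    batch_response = {
        "transfer": "basic",
        "objects": [
            {
                "oid": "f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e",
                "size": 20,
                "actions": {
                    "upload": {
                        "href": "https://blobs.example.com/objects/f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e",
                        "header": {"Accept": "application/vnd.git-lfs"},
                        "expires_at": "2020-01-01T00:00:00Z",
                    },
                    "verify": {
                        # After uploading, the client POSTs {"oid": ..., "size": ...}
                        # here so the original server learns the transfer finished.
                        "href": "https://hg.example.com/.git/info/lfs/verify",
                        "header": {"Accept": "application/vnd.git-lfs"},
                    },
                },
            }
        ],
    }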
17 $ hg init server
17 $ hg init server
18 $ hg --config "lfs.usercache=$TESTTMP/servercache" \
18 $ hg --config "lfs.usercache=$TESTTMP/servercache" \
19 > --config experimental.lfs.serve=False -R server serve -d \
19 > --config experimental.lfs.serve=False -R server serve -d \
20 > --config experimental.lfs.worker-enable=False \
20 > -p $HGPORT --pid-file=hg.pid -A $TESTTMP/access.log -E $TESTTMP/errors.log
21 > -p $HGPORT --pid-file=hg.pid -A $TESTTMP/access.log -E $TESTTMP/errors.log
21 $ cat hg.pid >> $DAEMON_PIDS
22 $ cat hg.pid >> $DAEMON_PIDS
22
23
23 Uploads fail...
24 Uploads fail...
24
25
25 $ hg init client
26 $ hg init client
26 $ echo 'this-is-an-lfs-file' > client/lfs.bin
27 $ echo 'this-is-an-lfs-file' > client/lfs.bin
27 $ hg -R client ci -Am 'initial commit'
28 $ hg -R client ci -Am 'initial commit'
28 adding lfs.bin
29 adding lfs.bin
29 $ hg -R client push http://localhost:$HGPORT
30 $ hg -R client push http://localhost:$HGPORT
30 pushing to http://localhost:$HGPORT/
31 pushing to http://localhost:$HGPORT/
31 searching for changes
32 searching for changes
32 abort: LFS HTTP error: HTTP Error 400: no such method: .git!
33 abort: LFS HTTP error: HTTP Error 400: no such method: .git!
33 (check that lfs serving is enabled on http://localhost:$HGPORT/.git/info/lfs and "upload" is supported)
34 (check that lfs serving is enabled on http://localhost:$HGPORT/.git/info/lfs and "upload" is supported)
34 [255]
35 [255]
35
36
36 ... so do a local push to make the data available. Remove the blob from the
37 ... so do a local push to make the data available. Remove the blob from the
37 default cache, so it attempts to download.
38 default cache, so it attempts to download.
38 $ hg --config "lfs.usercache=$TESTTMP/servercache" \
39 $ hg --config "lfs.usercache=$TESTTMP/servercache" \
39 > --config "lfs.url=null://" \
40 > --config "lfs.url=null://" \
40 > -R client push -q server
41 > -R client push -q server
41 $ mv `hg config lfs.usercache` $TESTTMP/servercache
42 $ mv `hg config lfs.usercache` $TESTTMP/servercache
42
43
43 Downloads fail...
44 Downloads fail...
44
45
45 $ hg clone http://localhost:$HGPORT httpclone
46 $ hg clone http://localhost:$HGPORT httpclone
46 (remote is using large file support (lfs); lfs will be enabled for this repository)
47 (remote is using large file support (lfs); lfs will be enabled for this repository)
47 requesting all changes
48 requesting all changes
48 adding changesets
49 adding changesets
49 adding manifests
50 adding manifests
50 adding file changes
51 adding file changes
51 added 1 changesets with 1 changes to 1 files
52 added 1 changesets with 1 changes to 1 files
52 new changesets 525251863cad
53 new changesets 525251863cad
53 updating to branch default
54 updating to branch default
54 abort: LFS HTTP error: HTTP Error 400: no such method: .git!
55 abort: LFS HTTP error: HTTP Error 400: no such method: .git!
55 (check that lfs serving is enabled on http://localhost:$HGPORT/.git/info/lfs and "download" is supported)
56 (check that lfs serving is enabled on http://localhost:$HGPORT/.git/info/lfs and "download" is supported)
56 [255]
57 [255]
57
58
58 $ "$PYTHON" $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
59 $ "$PYTHON" $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
59
60
60 $ cat $TESTTMP/access.log $TESTTMP/errors.log
61 $ cat $TESTTMP/access.log $TESTTMP/errors.log
61 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
62 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
62 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D525251863cad618e55d483555f3d00a2ca99597e x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
63 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D525251863cad618e55d483555f3d00a2ca99597e x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
63 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
64 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
64 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
65 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
65 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 400 - (glob)
66 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 400 - (glob)
66 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
67 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
67 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
68 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
68 $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
69 $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
69 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 400 - (glob)
70 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 400 - (glob)
70
71
71 $ rm -f $TESTTMP/access.log $TESTTMP/errors.log
72 $ rm -f $TESTTMP/access.log $TESTTMP/errors.log
72 $ hg --config "lfs.usercache=$TESTTMP/servercache" -R server serve -d \
73 $ hg --config "lfs.usercache=$TESTTMP/servercache" -R server serve -d \
73 > -p $HGPORT --pid-file=hg.pid --prefix=subdir/mount/point \
74 > -p $HGPORT --pid-file=hg.pid --prefix=subdir/mount/point \
74 > -A $TESTTMP/access.log -E $TESTTMP/errors.log
75 > -A $TESTTMP/access.log -E $TESTTMP/errors.log
75 $ cat hg.pid >> $DAEMON_PIDS
76 $ cat hg.pid >> $DAEMON_PIDS
76
77
77 Reasonable hint for a misconfigured blob server
78 Reasonable hint for a misconfigured blob server
78
79
79 $ hg -R httpclone update default --config lfs.url=http://localhost:$HGPORT/missing
80 $ hg -R httpclone update default --config lfs.url=http://localhost:$HGPORT/missing
80 abort: LFS HTTP error: HTTP Error 404: Not Found!
81 abort: LFS HTTP error: HTTP Error 404: Not Found!
81 (the "lfs.url" config may be used to override http://localhost:$HGPORT/missing)
82 (the "lfs.url" config may be used to override http://localhost:$HGPORT/missing)
82 [255]
83 [255]
83
84
84 $ hg -R httpclone update default --config lfs.url=http://localhost:$HGPORT2/missing
85 $ hg -R httpclone update default --config lfs.url=http://localhost:$HGPORT2/missing
85 abort: LFS error: *onnection *refused*! (glob) (?)
86 abort: LFS error: *onnection *refused*! (glob) (?)
86 abort: LFS error: $EADDRNOTAVAIL$! (glob) (?)
87 abort: LFS error: $EADDRNOTAVAIL$! (glob) (?)
87 abort: LFS error: No route to host! (?)
88 abort: LFS error: No route to host! (?)
88 (the "lfs.url" config may be used to override http://localhost:$HGPORT2/missing)
89 (the "lfs.url" config may be used to override http://localhost:$HGPORT2/missing)
89 [255]
90 [255]
90
91
91 Blob URIs are correct when --prefix is used
92 Blob URIs are correct when --prefix is used
92
93
93 $ hg clone --debug http://localhost:$HGPORT/subdir/mount/point cloned2
94 $ hg clone --debug http://localhost:$HGPORT/subdir/mount/point cloned2
94 using http://localhost:$HGPORT/subdir/mount/point
95 using http://localhost:$HGPORT/subdir/mount/point
95 sending capabilities command
96 sending capabilities command
96 (remote is using large file support (lfs); lfs will be enabled for this repository)
97 (remote is using large file support (lfs); lfs will be enabled for this repository)
97 query 1; heads
98 query 1; heads
98 sending batch command
99 sending batch command
99 requesting all changes
100 requesting all changes
100 sending getbundle command
101 sending getbundle command
101 bundle2-input-bundle: with-transaction
102 bundle2-input-bundle: with-transaction
102 bundle2-input-part: "changegroup" (params: 1 mandatory 1 advisory) supported
103 bundle2-input-part: "changegroup" (params: 1 mandatory 1 advisory) supported
103 adding changesets
104 adding changesets
104 add changeset 525251863cad
105 add changeset 525251863cad
105 adding manifests
106 adding manifests
106 adding file changes
107 adding file changes
107 adding lfs.bin revisions
108 adding lfs.bin revisions
108 bundle2-input-part: total payload size 648
109 bundle2-input-part: total payload size 648
109 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
110 bundle2-input-part: "listkeys" (params: 1 mandatory) supported
110 bundle2-input-part: "phase-heads" supported
111 bundle2-input-part: "phase-heads" supported
111 bundle2-input-part: total payload size 24
112 bundle2-input-part: total payload size 24
112 bundle2-input-part: "cache:rev-branch-cache" (advisory) supported
113 bundle2-input-part: "cache:rev-branch-cache" (advisory) supported
113 bundle2-input-part: total payload size 39
114 bundle2-input-part: total payload size 39
114 bundle2-input-bundle: 4 parts total
115 bundle2-input-bundle: 4 parts total
115 checking for updated bookmarks
116 checking for updated bookmarks
116 updating the branch cache
117 updating the branch cache
117 added 1 changesets with 1 changes to 1 files
118 added 1 changesets with 1 changes to 1 files
118 new changesets 525251863cad
119 new changesets 525251863cad
119 updating to branch default
120 updating to branch default
120 resolving manifests
121 resolving manifests
121 branchmerge: False, force: False, partial: False
122 branchmerge: False, force: False, partial: False
122 ancestor: 000000000000, local: 000000000000+, remote: 525251863cad
123 ancestor: 000000000000, local: 000000000000+, remote: 525251863cad
123 lfs: assuming remote store: http://localhost:$HGPORT/subdir/mount/point/.git/info/lfs
124 lfs: assuming remote store: http://localhost:$HGPORT/subdir/mount/point/.git/info/lfs
124 Status: 200
125 Status: 200
125 Content-Length: 371
126 Content-Length: 371
126 Content-Type: application/vnd.git-lfs+json
127 Content-Type: application/vnd.git-lfs+json
127 Date: $HTTP_DATE$
128 Date: $HTTP_DATE$
128 Server: testing stub value
129 Server: testing stub value
129 {
130 {
130 "objects": [
131 "objects": [
131 {
132 {
132 "actions": {
133 "actions": {
133 "download": {
134 "download": {
134 "expires_at": "$ISO_8601_DATE_TIME$"
135 "expires_at": "$ISO_8601_DATE_TIME$"
135 "header": {
136 "header": {
136 "Accept": "application/vnd.git-lfs"
137 "Accept": "application/vnd.git-lfs"
137 }
138 }
138 "href": "http://localhost:$HGPORT/subdir/mount/point/.hg/lfs/objects/f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e"
139 "href": "http://localhost:$HGPORT/subdir/mount/point/.hg/lfs/objects/f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e"
139 }
140 }
140 }
141 }
141 "oid": "f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e"
142 "oid": "f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e"
142 "size": 20
143 "size": 20
143 }
144 }
144 ]
145 ]
145 "transfer": "basic"
146 "transfer": "basic"
146 }
147 }
147 lfs: downloading f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e (20 bytes)
148 lfs: downloading f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e (20 bytes)
148 Status: 200
149 Status: 200
149 Content-Length: 20
150 Content-Length: 20
150 Content-Type: application/octet-stream
151 Content-Type: application/octet-stream
151 Date: $HTTP_DATE$
152 Date: $HTTP_DATE$
152 Server: testing stub value
153 Server: testing stub value
153 lfs: adding f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e to the usercache
154 lfs: adding f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e to the usercache
154 lfs: processed: f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e
155 lfs: processed: f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e
155 lfs: downloaded 1 files (20 bytes)
156 lfs: downloaded 1 files (20 bytes)
156 lfs.bin: remote created -> g
157 lfs.bin: remote created -> g
157 getting lfs.bin
158 getting lfs.bin
158 lfs: found f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e in the local lfs store
159 lfs: found f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e in the local lfs store
159 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
160 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
160 (sent 3 HTTP requests and * bytes; received * bytes in responses) (glob)
161 (sent 3 HTTP requests and * bytes; received * bytes in responses) (glob)
161
162
162 $ "$PYTHON" $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
163 $ "$PYTHON" $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
163
164
164 $ cat $TESTTMP/access.log $TESTTMP/errors.log
165 $ cat $TESTTMP/access.log $TESTTMP/errors.log
165 $LOCALIP - - [$LOGDATE$] "POST /missing/objects/batch HTTP/1.1" 404 - (glob)
166 $LOCALIP - - [$LOGDATE$] "POST /missing/objects/batch HTTP/1.1" 404 - (glob)
166 $LOCALIP - - [$LOGDATE$] "GET /subdir/mount/point?cmd=capabilities HTTP/1.1" 200 - (glob)
167 $LOCALIP - - [$LOGDATE$] "GET /subdir/mount/point?cmd=capabilities HTTP/1.1" 200 - (glob)
167 $LOCALIP - - [$LOGDATE$] "GET /subdir/mount/point?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
168 $LOCALIP - - [$LOGDATE$] "GET /subdir/mount/point?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
168 $LOCALIP - - [$LOGDATE$] "GET /subdir/mount/point?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
169 $LOCALIP - - [$LOGDATE$] "GET /subdir/mount/point?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
169 $LOCALIP - - [$LOGDATE$] "POST /subdir/mount/point/.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
170 $LOCALIP - - [$LOGDATE$] "POST /subdir/mount/point/.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
170 $LOCALIP - - [$LOGDATE$] "GET /subdir/mount/point/.hg/lfs/objects/f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e HTTP/1.1" 200 - (glob)
171 $LOCALIP - - [$LOGDATE$] "GET /subdir/mount/point/.hg/lfs/objects/f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e HTTP/1.1" 200 - (glob)
171
172
172 Blobs that already exist in the usercache are linked into the repo store, even
173 Blobs that already exist in the usercache are linked into the repo store, even
173 though the client doesn't send the blob.
174 though the client doesn't send the blob.
174
175
175 $ hg init server2
176 $ hg init server2
176 $ hg --config "lfs.usercache=$TESTTMP/servercache" -R server2 serve -d \
177 $ hg --config "lfs.usercache=$TESTTMP/servercache" -R server2 serve -d \
177 > -p $HGPORT --pid-file=hg.pid \
178 > -p $HGPORT --pid-file=hg.pid \
178 > -A $TESTTMP/access.log -E $TESTTMP/errors.log
179 > -A $TESTTMP/access.log -E $TESTTMP/errors.log
179 $ cat hg.pid >> $DAEMON_PIDS
180 $ cat hg.pid >> $DAEMON_PIDS
180
181
181 $ hg --config "lfs.usercache=$TESTTMP/servercache" -R cloned2 --debug \
182 $ hg --config "lfs.usercache=$TESTTMP/servercache" -R cloned2 --debug \
182 > push http://localhost:$HGPORT | grep '^[{} ]'
183 > push http://localhost:$HGPORT | grep '^[{} ]'
183 {
184 {
184 "objects": [
185 "objects": [
185 {
186 {
186 "oid": "f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e"
187 "oid": "f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e"
187 "size": 20
188 "size": 20
188 }
189 }
189 ]
190 ]
190 "transfer": "basic"
191 "transfer": "basic"
191 }
192 }
192 $ find server2/.hg/store/lfs/objects | sort
193 $ find server2/.hg/store/lfs/objects | sort
193 server2/.hg/store/lfs/objects
194 server2/.hg/store/lfs/objects
194 server2/.hg/store/lfs/objects/f0
195 server2/.hg/store/lfs/objects/f0
195 server2/.hg/store/lfs/objects/f0/3217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e
196 server2/.hg/store/lfs/objects/f0/3217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e
196 $ "$PYTHON" $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
197 $ "$PYTHON" $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
197 $ cat $TESTTMP/errors.log
198 $ cat $TESTTMP/errors.log
198
199
199 $ cat >> $TESTTMP/lfsstoreerror.py <<EOF
200 $ cat >> $TESTTMP/lfsstoreerror.py <<EOF
200 > import errno
201 > import errno
201 > from hgext.lfs import blobstore
202 > from hgext.lfs import blobstore
202 >
203 >
203 > _numverifies = 0
204 > _numverifies = 0
204 > _readerr = True
205 > _readerr = True
205 >
206 >
206 > def reposetup(ui, repo):
207 > def reposetup(ui, repo):
207 > # Nothing to do with a remote repo
208 > # Nothing to do with a remote repo
208 > if not repo.local():
209 > if not repo.local():
209 > return
210 > return
210 >
211 >
211 > store = repo.svfs.lfslocalblobstore
212 > store = repo.svfs.lfslocalblobstore
212 > class badstore(store.__class__):
213 > class badstore(store.__class__):
213 > def download(self, oid, src, contentlength):
214 > def download(self, oid, src, contentlength):
214 > '''Called in the server to handle reading from the client in a
215 > '''Called in the server to handle reading from the client in a
215 > PUT request.'''
216 > PUT request.'''
216 > origread = src.read
217 > origread = src.read
217 > def _badread(nbytes):
218 > def _badread(nbytes):
218 > # Simulate bad data/checksum failure from the client
219 > # Simulate bad data/checksum failure from the client
219 > return b'0' * len(origread(nbytes))
220 > return b'0' * len(origread(nbytes))
220 > src.read = _badread
221 > src.read = _badread
221 > super(badstore, self).download(oid, src, contentlength)
222 > super(badstore, self).download(oid, src, contentlength)
222 >
223 >
223 > def _read(self, vfs, oid, verify):
224 > def _read(self, vfs, oid, verify):
224 > '''Called in the server to read data for a GET request, and then
225 > '''Called in the server to read data for a GET request, and then
225 > calls self._verify() on it before returning.'''
226 > calls self._verify() on it before returning.'''
226 > global _readerr
227 > global _readerr
227 > # One time simulation of a read error
228 > # One time simulation of a read error
228 > if _readerr:
229 > if _readerr:
229 > _readerr = False
230 > _readerr = False
230 > raise IOError(errno.EIO, r'%s: I/O error' % oid.decode("utf-8"))
231 > raise IOError(errno.EIO, r'%s: I/O error' % oid.decode("utf-8"))
231 > # Simulate corrupt content on client download
232 > # Simulate corrupt content on client download
232 > blobstore._verify(oid, b'dummy content')
233 > blobstore._verify(oid, b'dummy content')
233 >
234 >
234 > def verify(self, oid):
235 > def verify(self, oid):
235 > '''Called in the server to populate the Batch API response,
236 > '''Called in the server to populate the Batch API response,
236 > letting the client re-upload if the file is corrupt.'''
237 > letting the client re-upload if the file is corrupt.'''
237 > # Fail verify in Batch API for one clone command and one push
238 > # Fail verify in Batch API for one clone command and one push
238 > # command with an IOError. Then let it through to access other
239 > # command with an IOError. Then let it through to access other
239 > # functions. Checksum failure is tested elsewhere.
240 > # functions. Checksum failure is tested elsewhere.
240 > global _numverifies
241 > global _numverifies
241 > _numverifies += 1
242 > _numverifies += 1
242 > if _numverifies <= 2:
243 > if _numverifies <= 2:
243 > raise IOError(errno.EIO, r'%s: I/O error' % oid.decode("utf-8"))
244 > raise IOError(errno.EIO, r'%s: I/O error' % oid.decode("utf-8"))
244 > return super(badstore, self).verify(oid)
245 > return super(badstore, self).verify(oid)
245 >
246 >
246 > store.__class__ = badstore
247 > store.__class__ = badstore
247 > EOF
248 > EOF
248
249
249 $ rm -rf `hg config lfs.usercache`
250 $ rm -rf `hg config lfs.usercache`
250 $ rm -f $TESTTMP/access.log $TESTTMP/errors.log
251 $ rm -f $TESTTMP/access.log $TESTTMP/errors.log
251 $ hg --config "lfs.usercache=$TESTTMP/servercache" \
252 $ hg --config "lfs.usercache=$TESTTMP/servercache" \
252 > --config extensions.lfsstoreerror=$TESTTMP/lfsstoreerror.py \
253 > --config extensions.lfsstoreerror=$TESTTMP/lfsstoreerror.py \
253 > -R server serve -d \
254 > -R server serve -d \
254 > -p $HGPORT1 --pid-file=hg.pid -A $TESTTMP/access.log -E $TESTTMP/errors.log
255 > -p $HGPORT1 --pid-file=hg.pid -A $TESTTMP/access.log -E $TESTTMP/errors.log
255 $ cat hg.pid >> $DAEMON_PIDS
256 $ cat hg.pid >> $DAEMON_PIDS
256
257
257 Test an I/O error in localstore.verify() (Batch API) with GET
258 Test an I/O error in localstore.verify() (Batch API) with GET
258
259
259 $ hg clone http://localhost:$HGPORT1 httpclone2
260 $ hg clone http://localhost:$HGPORT1 httpclone2
260 (remote is using large file support (lfs); lfs will be enabled for this repository)
261 (remote is using large file support (lfs); lfs will be enabled for this repository)
261 requesting all changes
262 requesting all changes
262 adding changesets
263 adding changesets
263 adding manifests
264 adding manifests
264 adding file changes
265 adding file changes
265 added 1 changesets with 1 changes to 1 files
266 added 1 changesets with 1 changes to 1 files
266 new changesets 525251863cad
267 new changesets 525251863cad
267 updating to branch default
268 updating to branch default
268 abort: LFS server error for "lfs.bin": Internal server error!
269 abort: LFS server error for "lfs.bin": Internal server error!
269 [255]
270 [255]
270
271
271 Test an I/O error in localstore.verify() (Batch API) with PUT
272 Test an I/O error in localstore.verify() (Batch API) with PUT
272
273
273 $ echo foo > client/lfs.bin
274 $ echo foo > client/lfs.bin
274 $ hg -R client ci -m 'mod lfs'
275 $ hg -R client ci -m 'mod lfs'
275 $ hg -R client push http://localhost:$HGPORT1
276 $ hg -R client push http://localhost:$HGPORT1
276 pushing to http://localhost:$HGPORT1/
277 pushing to http://localhost:$HGPORT1/
277 searching for changes
278 searching for changes
278 abort: LFS server error for "unknown": Internal server error!
279 abort: LFS server error for "unknown": Internal server error!
279 [255]
280 [255]
280 TODO: figure out how to associate the file name in the error above
281 TODO: figure out how to associate the file name in the error above
281
282
282 Test a bad checksum sent by the client in the transfer API
283 Test a bad checksum sent by the client in the transfer API
283
284
284 $ hg -R client push http://localhost:$HGPORT1
285 $ hg -R client push http://localhost:$HGPORT1
285 pushing to http://localhost:$HGPORT1/
286 pushing to http://localhost:$HGPORT1/
286 searching for changes
287 searching for changes
287 abort: LFS HTTP error: HTTP Error 422: corrupt blob (oid=b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c, action=upload)!
288 abort: LFS HTTP error: HTTP Error 422: corrupt blob (oid=b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c, action=upload)!
288 [255]
289 [255]
289
290
290 $ echo 'test lfs file' > server/lfs3.bin
291 $ echo 'test lfs file' > server/lfs3.bin
291 $ hg --config experimental.lfs.disableusercache=True \
292 $ hg --config experimental.lfs.disableusercache=True \
292 > -R server ci -Aqm 'another lfs file'
293 > -R server ci -Aqm 'another lfs file'
293 $ hg -R client pull -q http://localhost:$HGPORT1
294 $ hg -R client pull -q http://localhost:$HGPORT1
294
295
295 Test an I/O error during the processing of the GET request
296 Test an I/O error during the processing of the GET request
296
297
297 $ hg --config lfs.url=http://localhost:$HGPORT1/.git/info/lfs \
298 $ hg --config lfs.url=http://localhost:$HGPORT1/.git/info/lfs \
298 > -R client update -r tip
299 > -R client update -r tip
299 abort: LFS HTTP error: HTTP Error 500: Internal Server Error (oid=276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d, action=download)!
300 abort: LFS HTTP error: HTTP Error 500: Internal Server Error (oid=276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d, action=download)!
300 [255]
301 [255]
301
302
302 Test a checksum failure during the processing of the GET request
303 Test a checksum failure during the processing of the GET request
303
304
304 $ hg --config lfs.url=http://localhost:$HGPORT1/.git/info/lfs \
305 $ hg --config lfs.url=http://localhost:$HGPORT1/.git/info/lfs \
305 > -R client update -r tip
306 > -R client update -r tip
306 abort: LFS HTTP error: HTTP Error 422: corrupt blob (oid=276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d, action=download)!
307 abort: LFS HTTP error: HTTP Error 422: corrupt blob (oid=276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d, action=download)!
307 [255]
308 [255]
308
309
309 $ "$PYTHON" $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
310 $ "$PYTHON" $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
310
311
311 $ cat $TESTTMP/access.log
312 $ cat $TESTTMP/access.log
312 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
313 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
313 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
314 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
314 $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
315 $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
315 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
316 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
316 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
317 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
317 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D392c05922088bacf8e68a6939b480017afbf245d x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
318 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D392c05922088bacf8e68a6939b480017afbf245d x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
318 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
319 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
319 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
320 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
320 $LOCALIP - - [$LOGDATE$] "GET /?cmd=branchmap HTTP/1.1" 200 - x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
321 $LOCALIP - - [$LOGDATE$] "GET /?cmd=branchmap HTTP/1.1" 200 - x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
321 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
322 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
322 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
323 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
323 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
324 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
324 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D392c05922088bacf8e68a6939b480017afbf245d x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
325 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D392c05922088bacf8e68a6939b480017afbf245d x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
325 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
326 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
326 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
327 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
327 $LOCALIP - - [$LOGDATE$] "GET /?cmd=branchmap HTTP/1.1" 200 - x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
328 $LOCALIP - - [$LOGDATE$] "GET /?cmd=branchmap HTTP/1.1" 200 - x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
328 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
329 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
329 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
330 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
330 $LOCALIP - - [$LOGDATE$] "PUT /.hg/lfs/objects/b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c HTTP/1.1" 422 - (glob)
331 $LOCALIP - - [$LOGDATE$] "PUT /.hg/lfs/objects/b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c HTTP/1.1" 422 - (glob)
331 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
332 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
332 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D392c05922088bacf8e68a6939b480017afbf245d x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
333 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D392c05922088bacf8e68a6939b480017afbf245d x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
333 $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=525251863cad618e55d483555f3d00a2ca99597e&heads=506bf3d83f78c54b89e81c6411adee19fdf02156+525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
334 $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=525251863cad618e55d483555f3d00a2ca99597e&heads=506bf3d83f78c54b89e81c6411adee19fdf02156+525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
334 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
335 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
335 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d HTTP/1.1" 500 - (glob)
336 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d HTTP/1.1" 500 - (glob)
336 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
337 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
337 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d HTTP/1.1" 422 - (glob)
338 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d HTTP/1.1" 422 - (glob)
338
339
339 $ grep -v ' File "' $TESTTMP/errors.log
340 $ grep -v ' File "' $TESTTMP/errors.log
340 $LOCALIP - - [$ERRDATE$] HG error: Exception happened while processing request '/.git/info/lfs/objects/batch': (glob)
341 $LOCALIP - - [$ERRDATE$] HG error: Exception happened while processing request '/.git/info/lfs/objects/batch': (glob)
341 $LOCALIP - - [$ERRDATE$] HG error: Traceback (most recent call last): (glob)
342 $LOCALIP - - [$ERRDATE$] HG error: Traceback (most recent call last): (glob)
342 $LOCALIP - - [$ERRDATE$] HG error: verifies = store.verify(oid) (glob)
343 $LOCALIP - - [$ERRDATE$] HG error: verifies = store.verify(oid) (glob)
343 $LOCALIP - - [$ERRDATE$] HG error: raise IOError(errno.EIO, r'%s: I/O error' % oid.decode("utf-8")) (glob)
344 $LOCALIP - - [$ERRDATE$] HG error: raise IOError(errno.EIO, r'%s: I/O error' % oid.decode("utf-8")) (glob)
344 $LOCALIP - - [$ERRDATE$] HG error: *Error: [Errno *] f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e: I/O error (glob)
345 $LOCALIP - - [$ERRDATE$] HG error: *Error: [Errno *] f03217a32529a28a42d03b1244fe09b6e0f9fd06d7b966d4d50567be2abe6c0e: I/O error (glob)
345 $LOCALIP - - [$ERRDATE$] HG error: (glob)
346 $LOCALIP - - [$ERRDATE$] HG error: (glob)
346 $LOCALIP - - [$ERRDATE$] HG error: Exception happened while processing request '/.git/info/lfs/objects/batch': (glob)
347 $LOCALIP - - [$ERRDATE$] HG error: Exception happened while processing request '/.git/info/lfs/objects/batch': (glob)
347 $LOCALIP - - [$ERRDATE$] HG error: Traceback (most recent call last): (glob)
348 $LOCALIP - - [$ERRDATE$] HG error: Traceback (most recent call last): (glob)
348 $LOCALIP - - [$ERRDATE$] HG error: verifies = store.verify(oid) (glob)
349 $LOCALIP - - [$ERRDATE$] HG error: verifies = store.verify(oid) (glob)
349 $LOCALIP - - [$ERRDATE$] HG error: raise IOError(errno.EIO, r'%s: I/O error' % oid.decode("utf-8")) (glob)
350 $LOCALIP - - [$ERRDATE$] HG error: raise IOError(errno.EIO, r'%s: I/O error' % oid.decode("utf-8")) (glob)
350 $LOCALIP - - [$ERRDATE$] HG error: *Error: [Errno *] b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c: I/O error (glob)
351 $LOCALIP - - [$ERRDATE$] HG error: *Error: [Errno *] b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c: I/O error (glob)
351 $LOCALIP - - [$ERRDATE$] HG error: (glob)
352 $LOCALIP - - [$ERRDATE$] HG error: (glob)
352 $LOCALIP - - [$ERRDATE$] HG error: Exception happened while processing request '/.hg/lfs/objects/b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c': (glob)
353 $LOCALIP - - [$ERRDATE$] HG error: Exception happened while processing request '/.hg/lfs/objects/b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c': (glob)
353 $LOCALIP - - [$ERRDATE$] HG error: Traceback (most recent call last): (glob)
354 $LOCALIP - - [$ERRDATE$] HG error: Traceback (most recent call last): (glob)
354 $LOCALIP - - [$ERRDATE$] HG error: localstore.download(oid, req.bodyfh, req.headers[b'Content-Length'])
355 $LOCALIP - - [$ERRDATE$] HG error: localstore.download(oid, req.bodyfh, req.headers[b'Content-Length'])
355 $LOCALIP - - [$ERRDATE$] HG error: super(badstore, self).download(oid, src, contentlength)
356 $LOCALIP - - [$ERRDATE$] HG error: super(badstore, self).download(oid, src, contentlength)
356 $LOCALIP - - [$ERRDATE$] HG error: raise LfsCorruptionError( (glob) (py38 !)
357 $LOCALIP - - [$ERRDATE$] HG error: raise LfsCorruptionError( (glob) (py38 !)
357 $LOCALIP - - [$ERRDATE$] HG error: _(b'corrupt remote lfs object: %s') % oid (glob) (no-py38 !)
358 $LOCALIP - - [$ERRDATE$] HG error: _(b'corrupt remote lfs object: %s') % oid (glob) (no-py38 !)
358 $LOCALIP - - [$ERRDATE$] HG error: LfsCorruptionError: corrupt remote lfs object: b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c (no-py3 !)
359 $LOCALIP - - [$ERRDATE$] HG error: LfsCorruptionError: corrupt remote lfs object: b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c (no-py3 !)
359 $LOCALIP - - [$ERRDATE$] HG error: hgext.lfs.blobstore.LfsCorruptionError: corrupt remote lfs object: b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c (py3 !)
360 $LOCALIP - - [$ERRDATE$] HG error: hgext.lfs.blobstore.LfsCorruptionError: corrupt remote lfs object: b5bb9d8014a0f9b1d61e21e796d78dccdf1352f23cd32812f4850b878ae4944c (py3 !)
360 $LOCALIP - - [$ERRDATE$] HG error: (glob)
361 $LOCALIP - - [$ERRDATE$] HG error: (glob)
361 $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d': (glob)
362 $LOCALIP - - [$ERRDATE$] Exception happened during processing request '/.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d': (glob)
362 Traceback (most recent call last):
363 Traceback (most recent call last):
363 self.do_write()
364 self.do_write()
364 self.do_hgweb()
365 self.do_hgweb()
365 for chunk in self.server.application(env, self._start_response):
366 for chunk in self.server.application(env, self._start_response):
366 for r in self._runwsgi(req, res, repo):
367 for r in self._runwsgi(req, res, repo):
367 handled = wireprotoserver.handlewsgirequest( (py38 !)
368 handled = wireprotoserver.handlewsgirequest( (py38 !)
368 return _processbasictransfer( (py38 !)
369 return _processbasictransfer( (py38 !)
369 rctx, req, res, self.check_perm (no-py38 !)
370 rctx, req, res, self.check_perm (no-py38 !)
370 return func(*(args + a), **kw) (no-py3 !)
371 return func(*(args + a), **kw) (no-py3 !)
371 rctx.repo, req, res, lambda perm: checkperm(rctx, req, perm) (no-py38 !)
372 rctx.repo, req, res, lambda perm: checkperm(rctx, req, perm) (no-py38 !)
372 res.setbodybytes(localstore.read(oid))
373 res.setbodybytes(localstore.read(oid))
373 blob = self._read(self.vfs, oid, verify)
374 blob = self._read(self.vfs, oid, verify)
374 raise IOError(errno.EIO, r'%s: I/O error' % oid.decode("utf-8"))
375 raise IOError(errno.EIO, r'%s: I/O error' % oid.decode("utf-8"))
375 *Error: [Errno *] 276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d: I/O error (glob)
376 *Error: [Errno *] 276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d: I/O error (glob)
376
377
377 $LOCALIP - - [$ERRDATE$] HG error: Exception happened while processing request '/.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d': (glob)
378 $LOCALIP - - [$ERRDATE$] HG error: Exception happened while processing request '/.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d': (glob)
378 $LOCALIP - - [$ERRDATE$] HG error: Traceback (most recent call last): (glob)
379 $LOCALIP - - [$ERRDATE$] HG error: Traceback (most recent call last): (glob)
379 $LOCALIP - - [$ERRDATE$] HG error: res.setbodybytes(localstore.read(oid)) (glob)
380 $LOCALIP - - [$ERRDATE$] HG error: res.setbodybytes(localstore.read(oid)) (glob)
380 $LOCALIP - - [$ERRDATE$] HG error: blob = self._read(self.vfs, oid, verify) (glob)
381 $LOCALIP - - [$ERRDATE$] HG error: blob = self._read(self.vfs, oid, verify) (glob)
381 $LOCALIP - - [$ERRDATE$] HG error: blobstore._verify(oid, b'dummy content') (glob)
382 $LOCALIP - - [$ERRDATE$] HG error: blobstore._verify(oid, b'dummy content') (glob)
382 $LOCALIP - - [$ERRDATE$] HG error: raise LfsCorruptionError( (glob) (py38 !)
383 $LOCALIP - - [$ERRDATE$] HG error: raise LfsCorruptionError( (glob) (py38 !)
383 $LOCALIP - - [$ERRDATE$] HG error: hint=_(b'run hg verify'), (glob) (no-py38 !)
384 $LOCALIP - - [$ERRDATE$] HG error: hint=_(b'run hg verify'), (glob) (no-py38 !)
384 $LOCALIP - - [$ERRDATE$] HG error: LfsCorruptionError: detected corrupt lfs object: 276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d (no-py3 !)
385 $LOCALIP - - [$ERRDATE$] HG error: LfsCorruptionError: detected corrupt lfs object: 276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d (no-py3 !)
385 $LOCALIP - - [$ERRDATE$] HG error: hgext.lfs.blobstore.LfsCorruptionError: detected corrupt lfs object: 276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d (py3 !)
386 $LOCALIP - - [$ERRDATE$] HG error: hgext.lfs.blobstore.LfsCorruptionError: detected corrupt lfs object: 276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d (py3 !)
386 $LOCALIP - - [$ERRDATE$] HG error: (glob)
387 $LOCALIP - - [$ERRDATE$] HG error: (glob)
387
388
388 Basic Authorization headers are returned by the Batch API, and sent back with
389 Basic Authorization headers are returned by the Batch API, and sent back with
389 the GET/PUT request.
390 the GET/PUT request.
390
391
391 $ rm -f $TESTTMP/access.log $TESTTMP/errors.log
392 $ rm -f $TESTTMP/access.log $TESTTMP/errors.log
392
393
393 $ cat >> $HGRCPATH << EOF
394 $ cat >> $HGRCPATH << EOF
394 > [experimental]
395 > [experimental]
395 > lfs.disableusercache = True
396 > lfs.disableusercache = True
396 > [auth]
397 > [auth]
397 > l.schemes=http
398 > l.schemes=http
398 > l.prefix=lo
399 > l.prefix=lo
399 > l.username=user
400 > l.username=user
400 > l.password=pass
401 > l.password=pass
401 > EOF
402 > EOF
402
403
403 $ hg --config extensions.x=$TESTDIR/httpserverauth.py \
404 $ hg --config extensions.x=$TESTDIR/httpserverauth.py \
404 > -R server serve -d -p $HGPORT1 --pid-file=hg.pid \
405 > -R server serve -d -p $HGPORT1 --pid-file=hg.pid \
405 > -A $TESTTMP/access.log -E $TESTTMP/errors.log
406 > -A $TESTTMP/access.log -E $TESTTMP/errors.log
406 $ mv hg.pid $DAEMON_PIDS
407 $ mv hg.pid $DAEMON_PIDS
407
408
408 $ hg clone --debug http://localhost:$HGPORT1 auth_clone | egrep '^[{}]| '
409 $ hg clone --debug http://localhost:$HGPORT1 auth_clone | egrep '^[{}]| '
409 {
410 {
410 "objects": [
411 "objects": [
411 {
412 {
412 "actions": {
413 "actions": {
413 "download": {
414 "download": {
414 "expires_at": "$ISO_8601_DATE_TIME$"
415 "expires_at": "$ISO_8601_DATE_TIME$"
415 "header": {
416 "header": {
416 "Accept": "application/vnd.git-lfs"
417 "Accept": "application/vnd.git-lfs"
417 "Authorization": "Basic dXNlcjpwYXNz"
418 "Authorization": "Basic dXNlcjpwYXNz"
418 }
419 }
419 "href": "http://localhost:$HGPORT1/.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d"
420 "href": "http://localhost:$HGPORT1/.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d"
420 }
421 }
421 }
422 }
422 "oid": "276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d"
423 "oid": "276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d"
423 "size": 14
424 "size": 14
424 }
425 }
425 ]
426 ]
426 "transfer": "basic"
427 "transfer": "basic"
427 }
428 }
428
429
429 $ echo 'another blob' > auth_clone/lfs.blob
430 $ echo 'another blob' > auth_clone/lfs.blob
430 $ hg -R auth_clone ci -Aqm 'add blob'
431 $ hg -R auth_clone ci -Aqm 'add blob'
431
432
432 $ cat > use_digests.py << EOF
433 $ cat > use_digests.py << EOF
433 > from mercurial import (
434 > from mercurial import (
434 > exthelper,
435 > exthelper,
435 > url,
436 > url,
436 > )
437 > )
437 >
438 >
438 > eh = exthelper.exthelper()
439 > eh = exthelper.exthelper()
439 > uisetup = eh.finaluisetup
440 > uisetup = eh.finaluisetup
440 >
441 >
441 > @eh.wrapfunction(url, 'opener')
442 > @eh.wrapfunction(url, 'opener')
442 > def urlopener(orig, *args, **kwargs):
443 > def urlopener(orig, *args, **kwargs):
443 > opener = orig(*args, **kwargs)
444 > opener = orig(*args, **kwargs)
444 > opener.addheaders.append((r'X-HgTest-AuthType', r'Digest'))
445 > opener.addheaders.append((r'X-HgTest-AuthType', r'Digest'))
445 > return opener
446 > return opener
446 > EOF
447 > EOF
447
448
448 Test that Digest Auth fails gracefully before testing the successful Basic Auth
449 Test that Digest Auth fails gracefully before testing the successful Basic Auth
449
450
450 $ hg -R auth_clone push --config extensions.x=use_digests.py
451 $ hg -R auth_clone push --config extensions.x=use_digests.py
451 pushing to http://localhost:$HGPORT1/
452 pushing to http://localhost:$HGPORT1/
452 searching for changes
453 searching for changes
453 abort: LFS HTTP error: HTTP Error 401: the server must support Basic Authentication!
454 abort: LFS HTTP error: HTTP Error 401: the server must support Basic Authentication!
454 (api=http://localhost:$HGPORT1/.git/info/lfs/objects/batch, action=upload)
455 (api=http://localhost:$HGPORT1/.git/info/lfs/objects/batch, action=upload)
455 [255]
456 [255]
456
457
457 $ hg -R auth_clone --debug push | egrep '^[{}]| '
458 $ hg -R auth_clone --debug push | egrep '^[{}]| '
458 {
459 {
459 "objects": [
460 "objects": [
460 {
461 {
461 "actions": {
462 "actions": {
462 "upload": {
463 "upload": {
463 "expires_at": "$ISO_8601_DATE_TIME$"
464 "expires_at": "$ISO_8601_DATE_TIME$"
464 "header": {
465 "header": {
465 "Accept": "application/vnd.git-lfs"
466 "Accept": "application/vnd.git-lfs"
466 "Authorization": "Basic dXNlcjpwYXNz"
467 "Authorization": "Basic dXNlcjpwYXNz"
467 }
468 }
468 "href": "http://localhost:$HGPORT1/.hg/lfs/objects/df14287d8d75f076a6459e7a3703ca583ca9fb3f4918caed10c77ac8622d49b3"
469 "href": "http://localhost:$HGPORT1/.hg/lfs/objects/df14287d8d75f076a6459e7a3703ca583ca9fb3f4918caed10c77ac8622d49b3"
469 }
470 }
470 }
471 }
471 "oid": "df14287d8d75f076a6459e7a3703ca583ca9fb3f4918caed10c77ac8622d49b3"
472 "oid": "df14287d8d75f076a6459e7a3703ca583ca9fb3f4918caed10c77ac8622d49b3"
472 "size": 13
473 "size": 13
473 }
474 }
474 ]
475 ]
475 "transfer": "basic"
476 "transfer": "basic"
476 }
477 }
477
478
478 $ "$PYTHON" $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
479 $ "$PYTHON" $RUNTESTDIR/killdaemons.py $DAEMON_PIDS
479
480
480 $ cat $TESTTMP/access.log $TESTTMP/errors.log
481 $ cat $TESTTMP/access.log $TESTTMP/errors.log
481 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 401 - (glob)
482 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 401 - (glob)
482 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
483 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
483 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
484 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
484 $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=506bf3d83f78c54b89e81c6411adee19fdf02156+525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
485 $LOCALIP - - [$LOGDATE$] "GET /?cmd=getbundle HTTP/1.1" 200 - x-hgarg-1:bookmarks=1&bundlecaps=HG20%2Cbundle2%3DHG20%250Abookmarks%250Achangegroup%253D01%252C02%252C03%250Adigests%253Dmd5%252Csha1%252Csha512%250Aerror%253Dabort%252Cunsupportedcontent%252Cpushraced%252Cpushkey%250Ahgtagsfnodes%250Alistkeys%250Aphases%253Dheads%250Apushkey%250Aremote-changegroup%253Dhttp%252Chttps%250Arev-branch-cache%250Astream%253Dv2&cg=1&common=0000000000000000000000000000000000000000&heads=506bf3d83f78c54b89e81c6411adee19fdf02156+525251863cad618e55d483555f3d00a2ca99597e&listkeys=bookmarks&phases=1 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
485 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 401 - (glob)
486 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 401 - (glob)
486 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
487 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
487 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d HTTP/1.1" 200 - (glob)
488 $LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/276f73cfd75f9fb519810df5f5d96d6594ca2521abd86cbcd92122f7d51a1f3d HTTP/1.1" 200 - (glob)
488 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 401 - x-hgtest-authtype:Digest (glob)
489 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 401 - x-hgtest-authtype:Digest (glob)
489 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - x-hgtest-authtype:Digest (glob)
490 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - x-hgtest-authtype:Digest (glob)
490 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 401 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D525251863cad618e55d483555f3d00a2ca99597e+4d9397055dc0c205f3132f331f36353ab1a525a3 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
491 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 401 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D525251863cad618e55d483555f3d00a2ca99597e+4d9397055dc0c205f3132f331f36353ab1a525a3 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
491 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D525251863cad618e55d483555f3d00a2ca99597e+4d9397055dc0c205f3132f331f36353ab1a525a3 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
492 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D525251863cad618e55d483555f3d00a2ca99597e+4d9397055dc0c205f3132f331f36353ab1a525a3 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
492 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 401 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
493 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 401 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
493 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
494 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
494 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 401 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
495 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 401 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
495 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
496 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
496 $LOCALIP - - [$LOGDATE$] "GET /?cmd=branchmap HTTP/1.1" 401 - x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
497 $LOCALIP - - [$LOGDATE$] "GET /?cmd=branchmap HTTP/1.1" 401 - x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
497 $LOCALIP - - [$LOGDATE$] "GET /?cmd=branchmap HTTP/1.1" 200 - x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
498 $LOCALIP - - [$LOGDATE$] "GET /?cmd=branchmap HTTP/1.1" 200 - x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
498 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 401 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
499 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 401 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
499 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
500 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull x-hgtest-authtype:Digest (glob)
500 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 401 - x-hgtest-authtype:Digest (glob)
501 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 401 - x-hgtest-authtype:Digest (glob)
501 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 401 - (glob)
502 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 401 - (glob)
502 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
503 $LOCALIP - - [$LOGDATE$] "GET /?cmd=capabilities HTTP/1.1" 200 - (glob)
503 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D525251863cad618e55d483555f3d00a2ca99597e+4d9397055dc0c205f3132f331f36353ab1a525a3 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
504 $LOCALIP - - [$LOGDATE$] "GET /?cmd=batch HTTP/1.1" 200 - x-hgarg-1:cmds=heads+%3Bknown+nodes%3D525251863cad618e55d483555f3d00a2ca99597e+4d9397055dc0c205f3132f331f36353ab1a525a3 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
504 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
505 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
505 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
506 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
506 $LOCALIP - - [$LOGDATE$] "GET /?cmd=branchmap HTTP/1.1" 200 - x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
507 $LOCALIP - - [$LOGDATE$] "GET /?cmd=branchmap HTTP/1.1" 200 - x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
507 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
508 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=bookmarks x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
508 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 401 - (glob)
509 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 401 - (glob)
509 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
510 $LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
510 $LOCALIP - - [$LOGDATE$] "PUT /.hg/lfs/objects/df14287d8d75f076a6459e7a3703ca583ca9fb3f4918caed10c77ac8622d49b3 HTTP/1.1" 201 - (glob)
511 $LOCALIP - - [$LOGDATE$] "PUT /.hg/lfs/objects/df14287d8d75f076a6459e7a3703ca583ca9fb3f4918caed10c77ac8622d49b3 HTTP/1.1" 201 - (glob)
511 $LOCALIP - - [$LOGDATE$] "POST /?cmd=unbundle HTTP/1.1" 200 - x-hgarg-1:heads=666f726365 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
512 $LOCALIP - - [$LOGDATE$] "POST /?cmd=unbundle HTTP/1.1" 200 - x-hgarg-1:heads=666f726365 x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
512 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
513 $LOCALIP - - [$LOGDATE$] "GET /?cmd=listkeys HTTP/1.1" 200 - x-hgarg-1:namespace=phases x-hgproto-1:0.1 0.2 comp=$USUAL_COMPRESSIONS$ partial-pull (glob)
@@ -1,720 +1,721 b''
1 #testcases lfsremote-on lfsremote-off
1 #testcases lfsremote-on lfsremote-off
2 #require serve no-reposimplestore no-chg
2 #require serve no-reposimplestore no-chg
3
3
4 This test splits `hg serve` with and without using the extension into separate
4 This test splits `hg serve` with and without using the extension into separate
5 test cases. The tests are broken down as follows, where "LFS"/"No-LFS"
5 test cases. The tests are broken down as follows, where "LFS"/"No-LFS"
6 indicates whether or not there are commits that use an LFS file, and "D"/"E"
6 indicates whether or not there are commits that use an LFS file, and "D"/"E"
7 indicates whether or not the extension is loaded. The "X" cases are not tested
7 indicates whether or not the extension is loaded. The "X" cases are not tested
8 individually, because the lfs requirement causes the process to bail early if
8 individually, because the lfs requirement causes the process to bail early if
9 the extension is disabled.
9 the extension is disabled.
10
10
11 . Server
11 . Server
12 .
12 .
13 . No-LFS LFS
13 . No-LFS LFS
14 . +----------------------------+
14 . +----------------------------+
15 . | || D | E | D | E |
15 . | || D | E | D | E |
16 . |---++=======================|
16 . |---++=======================|
17 . C | D || N/A | #1 | X | #4 |
17 . C | D || N/A | #1 | X | #4 |
18 . l No +---++-----------------------|
18 . l No +---++-----------------------|
19 . i LFS | E || #2 | #2 | X | #5 |
19 . i LFS | E || #2 | #2 | X | #5 |
20 . e +---++-----------------------|
20 . e +---++-----------------------|
21 . n | D || X | X | X | X |
21 . n | D || X | X | X | X |
22 . t LFS |---++-----------------------|
22 . t LFS |---++-----------------------|
23 . | E || #3 | #3 | X | #6 |
23 . | E || #3 | #3 | X | #6 |
24 . |---++-----------------------+
24 . |---++-----------------------+
25
25
26 make command server magic visible
26 make command server magic visible
27
27
28 #if windows
28 #if windows
29 $ PYTHONPATH="$TESTDIR/../contrib;$PYTHONPATH"
29 $ PYTHONPATH="$TESTDIR/../contrib;$PYTHONPATH"
30 #else
30 #else
31 $ PYTHONPATH="$TESTDIR/../contrib:$PYTHONPATH"
31 $ PYTHONPATH="$TESTDIR/../contrib:$PYTHONPATH"
32 #endif
32 #endif
33 $ export PYTHONPATH
33 $ export PYTHONPATH
34
34
35 $ hg init server
35 $ hg init server
36 $ SERVER_REQUIRES="$TESTTMP/server/.hg/requires"
36 $ SERVER_REQUIRES="$TESTTMP/server/.hg/requires"
37
37
38 $ cat > $TESTTMP/debugprocessors.py <<EOF
38 $ cat > $TESTTMP/debugprocessors.py <<EOF
39 > from mercurial import (
39 > from mercurial import (
40 > cmdutil,
40 > cmdutil,
41 > commands,
41 > commands,
42 > pycompat,
42 > pycompat,
43 > registrar,
43 > registrar,
44 > )
44 > )
45 > cmdtable = {}
45 > cmdtable = {}
46 > command = registrar.command(cmdtable)
46 > command = registrar.command(cmdtable)
47 > @command(b'debugprocessors', [], b'FILE')
47 > @command(b'debugprocessors', [], b'FILE')
48 > def debugprocessors(ui, repo, file_=None, **opts):
48 > def debugprocessors(ui, repo, file_=None, **opts):
49 > opts = pycompat.byteskwargs(opts)
49 > opts = pycompat.byteskwargs(opts)
50 > opts[b'changelog'] = False
50 > opts[b'changelog'] = False
51 > opts[b'manifest'] = False
51 > opts[b'manifest'] = False
52 > opts[b'dir'] = False
52 > opts[b'dir'] = False
53 > rl = cmdutil.openrevlog(repo, b'debugprocessors', file_, opts)
53 > rl = cmdutil.openrevlog(repo, b'debugprocessors', file_, opts)
54 > for flag, proc in rl._flagprocessors.items():
54 > for flag, proc in rl._flagprocessors.items():
55 > ui.status(b"registered processor '%#x'\n" % (flag))
55 > ui.status(b"registered processor '%#x'\n" % (flag))
56 > EOF
56 > EOF
57
57
58 Skip the experimental.changegroup3=True config. Failure to agree on this comes
58 Skip the experimental.changegroup3=True config. Failure to agree on this comes
59 first, and causes an "abort: no common changegroup version" if the extension is
59 first, and causes an "abort: no common changegroup version" if the extension is
60 only loaded on one side. If that *is* enabled, the subsequent failure is "abort:
60 only loaded on one side. If that *is* enabled, the subsequent failure is "abort:
61 missing processor for flag '0x2000'!" if the extension is only loaded on one side
61 missing processor for flag '0x2000'!" if the extension is only loaded on one side
62 (possibly also masked by the Internal Server Error message).
62 (possibly also masked by the Internal Server Error message).
63 $ cat >> $HGRCPATH <<EOF
63 $ cat >> $HGRCPATH <<EOF
64 > [extensions]
64 > [extensions]
65 > debugprocessors = $TESTTMP/debugprocessors.py
65 > debugprocessors = $TESTTMP/debugprocessors.py
66 > [experimental]
66 > [experimental]
67 > lfs.disableusercache = True
67 > lfs.disableusercache = True
68 > lfs.worker-enable = False
68 > [lfs]
69 > [lfs]
69 > threshold=10
70 > threshold=10
70 > [web]
71 > [web]
71 > allow_push=*
72 > allow_push=*
72 > push_ssl=False
73 > push_ssl=False
73 > EOF
74 > EOF
74
75
75 $ cp $HGRCPATH $HGRCPATH.orig
76 $ cp $HGRCPATH $HGRCPATH.orig
76
77
77 #if lfsremote-on
78 #if lfsremote-on
78 $ hg --config extensions.lfs= -R server \
79 $ hg --config extensions.lfs= -R server \
79 > serve -p $HGPORT -d --pid-file=hg.pid --errorlog=$TESTTMP/errors.log
80 > serve -p $HGPORT -d --pid-file=hg.pid --errorlog=$TESTTMP/errors.log
80 #else
81 #else
81 $ hg --config extensions.lfs=! -R server \
82 $ hg --config extensions.lfs=! -R server \
82 > serve -p $HGPORT -d --pid-file=hg.pid --errorlog=$TESTTMP/errors.log
83 > serve -p $HGPORT -d --pid-file=hg.pid --errorlog=$TESTTMP/errors.log
83 #endif
84 #endif
84
85
85 $ cat hg.pid >> $DAEMON_PIDS
86 $ cat hg.pid >> $DAEMON_PIDS
86 $ hg clone -q http://localhost:$HGPORT client
87 $ hg clone -q http://localhost:$HGPORT client
87 $ grep 'lfs' client/.hg/requires $SERVER_REQUIRES
88 $ grep 'lfs' client/.hg/requires $SERVER_REQUIRES
88 [1]
89 [1]
89
90
90 This trivial repo will force commandserver to load the extension, but not call
91 This trivial repo will force commandserver to load the extension, but not call
91 reposetup() on another repo actually being operated on. This gives coverage
92 reposetup() on another repo actually being operated on. This gives coverage
92 that wrapper functions are not assuming reposetup() was called.
93 that wrapper functions are not assuming reposetup() was called.
93
94
94 $ hg init $TESTTMP/cmdservelfs
95 $ hg init $TESTTMP/cmdservelfs
95 $ cat >> $TESTTMP/cmdservelfs/.hg/hgrc << EOF
96 $ cat >> $TESTTMP/cmdservelfs/.hg/hgrc << EOF
96 > [extensions]
97 > [extensions]
97 > lfs =
98 > lfs =
98 > EOF
99 > EOF
99
100
100 --------------------------------------------------------------------------------
101 --------------------------------------------------------------------------------
101 Case #1: client with non-lfs content and the extension disabled; server with
102 Case #1: client with non-lfs content and the extension disabled; server with
102 non-lfs content, and the extension enabled.
103 non-lfs content, and the extension enabled.
103
104
104 $ cd client
105 $ cd client
105 $ echo 'non-lfs' > nonlfs.txt
106 $ echo 'non-lfs' > nonlfs.txt
106 >>> from __future__ import absolute_import
107 >>> from __future__ import absolute_import
107 >>> from hgclient import check, readchannel, runcommand
108 >>> from hgclient import check, readchannel, runcommand
108 >>> @check
109 >>> @check
109 ... def diff(server):
110 ... def diff(server):
110 ... readchannel(server)
111 ... readchannel(server)
111 ... # run an arbitrary command in the repo with the extension loaded
112 ... # run an arbitrary command in the repo with the extension loaded
112 ... runcommand(server, [b'id', b'-R', b'../cmdservelfs'])
113 ... runcommand(server, [b'id', b'-R', b'../cmdservelfs'])
113 ... # now run a command in a repo without the extension to ensure that
114 ... # now run a command in a repo without the extension to ensure that
114 ... # files are added safely..
115 ... # files are added safely..
115 ... runcommand(server, [b'ci', b'-Aqm', b'non-lfs'])
116 ... runcommand(server, [b'ci', b'-Aqm', b'non-lfs'])
116 ... # .. and that scmutil.prefetchfiles() safely no-ops..
117 ... # .. and that scmutil.prefetchfiles() safely no-ops..
117 ... runcommand(server, [b'diff', b'-r', b'.~1'])
118 ... runcommand(server, [b'diff', b'-r', b'.~1'])
118 ... # .. and that debugupgraderepo safely no-ops.
119 ... # .. and that debugupgraderepo safely no-ops.
119 ... runcommand(server, [b'debugupgraderepo', b'-q', b'--run'])
120 ... runcommand(server, [b'debugupgraderepo', b'-q', b'--run'])
120 *** runcommand id -R ../cmdservelfs
121 *** runcommand id -R ../cmdservelfs
121 000000000000 tip
122 000000000000 tip
122 *** runcommand ci -Aqm non-lfs
123 *** runcommand ci -Aqm non-lfs
123 *** runcommand diff -r .~1
124 *** runcommand diff -r .~1
124 diff -r 000000000000 nonlfs.txt
125 diff -r 000000000000 nonlfs.txt
125 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
126 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
126 +++ b/nonlfs.txt Thu Jan 01 00:00:00 1970 +0000
127 +++ b/nonlfs.txt Thu Jan 01 00:00:00 1970 +0000
127 @@ -0,0 +1,1 @@
128 @@ -0,0 +1,1 @@
128 +non-lfs
129 +non-lfs
129 *** runcommand debugupgraderepo -q --run
130 *** runcommand debugupgraderepo -q --run
130 upgrade will perform the following actions:
131 upgrade will perform the following actions:
131
132
132 requirements
133 requirements
133 preserved: dotencode, fncache, generaldelta, revlogv1, sparserevlog, store
134 preserved: dotencode, fncache, generaldelta, revlogv1, sparserevlog, store
134
135
135 sidedata
136 sidedata
136 Allows storage of extra data alongside a revision.
137 Allows storage of extra data alongside a revision.
137
138
138 copies-sdc
139 copies-sdc
139 Allows to use more efficient algorithm to deal with copy tracing.
140 Allows to use more efficient algorithm to deal with copy tracing.
140
141
141 beginning upgrade...
142 beginning upgrade...
142 repository locked and read-only
143 repository locked and read-only
143 creating temporary repository to stage migrated data: * (glob)
144 creating temporary repository to stage migrated data: * (glob)
144 (it is safe to interrupt this process any time before data migration completes)
145 (it is safe to interrupt this process any time before data migration completes)
145 migrating 3 total revisions (1 in filelogs, 1 in manifests, 1 in changelog)
146 migrating 3 total revisions (1 in filelogs, 1 in manifests, 1 in changelog)
146 migrating 324 bytes in store; 129 bytes tracked data
147 migrating 324 bytes in store; 129 bytes tracked data
147 migrating 1 filelogs containing 1 revisions (73 bytes in store; 8 bytes tracked data)
148 migrating 1 filelogs containing 1 revisions (73 bytes in store; 8 bytes tracked data)
148 finished migrating 1 filelog revisions across 1 filelogs; change in size: 0 bytes
149 finished migrating 1 filelog revisions across 1 filelogs; change in size: 0 bytes
149 migrating 1 manifests containing 1 revisions (117 bytes in store; 52 bytes tracked data)
150 migrating 1 manifests containing 1 revisions (117 bytes in store; 52 bytes tracked data)
150 finished migrating 1 manifest revisions across 1 manifests; change in size: 0 bytes
151 finished migrating 1 manifest revisions across 1 manifests; change in size: 0 bytes
151 migrating changelog containing 1 revisions (134 bytes in store; 69 bytes tracked data)
152 migrating changelog containing 1 revisions (134 bytes in store; 69 bytes tracked data)
152 finished migrating 1 changelog revisions; change in size: 0 bytes
153 finished migrating 1 changelog revisions; change in size: 0 bytes
153 finished migrating 3 total revisions; total change in store size: 0 bytes
154 finished migrating 3 total revisions; total change in store size: 0 bytes
154 copying phaseroots
155 copying phaseroots
155 data fully migrated to temporary repository
156 data fully migrated to temporary repository
156 marking source repository as being upgraded; clients will be unable to read from repository
157 marking source repository as being upgraded; clients will be unable to read from repository
157 starting in-place swap of repository data
158 starting in-place swap of repository data
158 replaced files will be backed up at * (glob)
159 replaced files will be backed up at * (glob)
159 replacing store...
160 replacing store...
160 store replacement complete; repository was inconsistent for *s (glob)
161 store replacement complete; repository was inconsistent for *s (glob)
161 finalizing requirements file and making repository readable again
162 finalizing requirements file and making repository readable again
162 removing temporary repository * (glob)
163 removing temporary repository * (glob)
163 copy of old repository backed up at * (glob)
164 copy of old repository backed up at * (glob)
164 the old repository will not be deleted; remove it to free up disk space once the upgraded repository is verified
165 the old repository will not be deleted; remove it to free up disk space once the upgraded repository is verified
165
166
166 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
167 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
167 [1]
168 [1]
168
169
169 #if lfsremote-on
170 #if lfsremote-on
170
171
171 $ hg push -q
172 $ hg push -q
172 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
173 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
173 [1]
174 [1]
174
175
175 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client1_clone
176 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client1_clone
176 $ grep 'lfs' $TESTTMP/client1_clone/.hg/requires $SERVER_REQUIRES
177 $ grep 'lfs' $TESTTMP/client1_clone/.hg/requires $SERVER_REQUIRES
177 [1]
178 [1]
178
179
179 $ hg init $TESTTMP/client1_pull
180 $ hg init $TESTTMP/client1_pull
180 $ hg -R $TESTTMP/client1_pull pull -q http://localhost:$HGPORT
181 $ hg -R $TESTTMP/client1_pull pull -q http://localhost:$HGPORT
181 $ grep 'lfs' $TESTTMP/client1_pull/.hg/requires $SERVER_REQUIRES
182 $ grep 'lfs' $TESTTMP/client1_pull/.hg/requires $SERVER_REQUIRES
182 [1]
183 [1]
183
184
184 $ hg identify http://localhost:$HGPORT
185 $ hg identify http://localhost:$HGPORT
185 d437e1d24fbd
186 d437e1d24fbd
186
187
187 #endif
188 #endif
188
189
189 --------------------------------------------------------------------------------
190 --------------------------------------------------------------------------------
190 Case #2: client with non-lfs content and the extension enabled; server with
191 Case #2: client with non-lfs content and the extension enabled; server with
191 non-lfs content, and the extension state controlled by #testcases.
192 non-lfs content, and the extension state controlled by #testcases.
192
193
193 $ cat >> $HGRCPATH <<EOF
194 $ cat >> $HGRCPATH <<EOF
194 > [extensions]
195 > [extensions]
195 > lfs =
196 > lfs =
196 > EOF
197 > EOF
197 $ echo 'non-lfs' > nonlfs2.txt
198 $ echo 'non-lfs' > nonlfs2.txt
198 $ hg ci -Aqm 'non-lfs file with lfs client'
199 $ hg ci -Aqm 'non-lfs file with lfs client'
199
200
200 Since no lfs content has been added yet, the push is allowed, even when the
201 Since no lfs content has been added yet, the push is allowed, even when the
201 extension is not enabled remotely.
202 extension is not enabled remotely.
202
203
203 $ hg push -q
204 $ hg push -q
204 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
205 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
205 [1]
206 [1]
206
207
207 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client2_clone
208 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client2_clone
208 $ grep 'lfs' $TESTTMP/client2_clone/.hg/requires $SERVER_REQUIRES
209 $ grep 'lfs' $TESTTMP/client2_clone/.hg/requires $SERVER_REQUIRES
209 [1]
210 [1]
210
211
211 $ hg init $TESTTMP/client2_pull
212 $ hg init $TESTTMP/client2_pull
212 $ hg -R $TESTTMP/client2_pull pull -q http://localhost:$HGPORT
213 $ hg -R $TESTTMP/client2_pull pull -q http://localhost:$HGPORT
213 $ grep 'lfs' $TESTTMP/client2_pull/.hg/requires $SERVER_REQUIRES
214 $ grep 'lfs' $TESTTMP/client2_pull/.hg/requires $SERVER_REQUIRES
214 [1]
215 [1]
215
216
216 $ hg identify http://localhost:$HGPORT
217 $ hg identify http://localhost:$HGPORT
217 1477875038c6
218 1477875038c6
218
219
219 --------------------------------------------------------------------------------
220 --------------------------------------------------------------------------------
220 Case #3: client with lfs content and the extension enabled; server with
221 Case #3: client with lfs content and the extension enabled; server with
221 non-lfs content, and the extension state controlled by #testcases. The server
222 non-lfs content, and the extension state controlled by #testcases. The server
222 should have an 'lfs' requirement after it picks up its first commit with a blob.
223 should have an 'lfs' requirement after it picks up its first commit with a blob.
223
224
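The requirement is added by the pretxnchangegroup hook named in the verbose
output further down (hgext.lfs.checkrequireslfs). A rough, hypothetical sketch
of what such a hook does (the helper and the requirements-writing call are
assumptions, not the extension's exact code):

    def checkrequireslfs(ui, repo, **kwargs):
        # Nothing to do if the repo is already marked.
        if b'lfs' in repo.requirements:
            return 0
        # If any incoming filelog revision carries the lfs revlog flag
        # (0x2000 in this test), mark the repo permanently.
        if incoming_uses_lfs_flag(repo, kwargs):  # placeholder helper
            repo.requirements.add(b'lfs')
            repo._writerequirements()  # assumed internal API
        return 0
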
224 $ echo 'this is a big lfs file' > lfs.bin
225 $ echo 'this is a big lfs file' > lfs.bin
225 $ hg ci -Aqm 'lfs'
226 $ hg ci -Aqm 'lfs'
226 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
227 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
227 .hg/requires:lfs
228 .hg/requires:lfs
228
229
229 #if lfsremote-off
230 #if lfsremote-off
230 $ hg push -q
231 $ hg push -q
231 abort: required features are not supported in the destination: lfs
232 abort: required features are not supported in the destination: lfs
232 (enable the lfs extension on the server)
233 (enable the lfs extension on the server)
233 [255]
234 [255]
234 #else
235 #else
235 $ hg push -q
236 $ hg push -q
236 #endif
237 #endif
237 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
238 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
238 .hg/requires:lfs
239 .hg/requires:lfs
239 $TESTTMP/server/.hg/requires:lfs (lfsremote-on !)
240 $TESTTMP/server/.hg/requires:lfs (lfsremote-on !)
240
241
241 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client3_clone
242 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client3_clone
242 $ grep 'lfs' $TESTTMP/client3_clone/.hg/requires $SERVER_REQUIRES || true
243 $ grep 'lfs' $TESTTMP/client3_clone/.hg/requires $SERVER_REQUIRES || true
243 $TESTTMP/client3_clone/.hg/requires:lfs (lfsremote-on !)
244 $TESTTMP/client3_clone/.hg/requires:lfs (lfsremote-on !)
244 $TESTTMP/server/.hg/requires:lfs (lfsremote-on !)
245 $TESTTMP/server/.hg/requires:lfs (lfsremote-on !)
245
246
246 $ hg init $TESTTMP/client3_pull
247 $ hg init $TESTTMP/client3_pull
247 $ hg -R $TESTTMP/client3_pull pull -q http://localhost:$HGPORT
248 $ hg -R $TESTTMP/client3_pull pull -q http://localhost:$HGPORT
248 $ grep 'lfs' $TESTTMP/client3_pull/.hg/requires $SERVER_REQUIRES || true
249 $ grep 'lfs' $TESTTMP/client3_pull/.hg/requires $SERVER_REQUIRES || true
249 $TESTTMP/client3_pull/.hg/requires:lfs (lfsremote-on !)
250 $TESTTMP/client3_pull/.hg/requires:lfs (lfsremote-on !)
250 $TESTTMP/server/.hg/requires:lfs (lfsremote-on !)
251 $TESTTMP/server/.hg/requires:lfs (lfsremote-on !)
251
252
252 Test that the commit/changegroup requirement check hook can be run multiple
253 Test that the commit/changegroup requirement check hook can be run multiple
253 times.
254 times.
254
255
255 $ hg clone -qr 0 http://localhost:$HGPORT $TESTTMP/cmdserve_client3
256 $ hg clone -qr 0 http://localhost:$HGPORT $TESTTMP/cmdserve_client3
256
257
257 $ cd ../cmdserve_client3
258 $ cd ../cmdserve_client3
258
259
259 >>> from __future__ import absolute_import
260 >>> from __future__ import absolute_import
260 >>> from hgclient import check, readchannel, runcommand
261 >>> from hgclient import check, readchannel, runcommand
261 >>> @check
262 >>> @check
262 ... def addrequirement(server):
263 ... def addrequirement(server):
263 ... readchannel(server)
264 ... readchannel(server)
264 ... # change the repo in a way that adds the lfs requirement
265 ... # change the repo in a way that adds the lfs requirement
265 ... runcommand(server, [b'pull', b'-qu'])
266 ... runcommand(server, [b'pull', b'-qu'])
266 ... # Now cause the requirement adding hook to fire again, without going
267 ... # Now cause the requirement adding hook to fire again, without going
267 ... # through reposetup() again.
268 ... # through reposetup() again.
268 ... with open('file.txt', 'wb') as fp:
269 ... with open('file.txt', 'wb') as fp:
269 ... fp.write(b'data')
270 ... fp.write(b'data')
270 ... runcommand(server, [b'ci', b'-Aqm', b'non-lfs'])
271 ... runcommand(server, [b'ci', b'-Aqm', b'non-lfs'])
271 *** runcommand pull -qu
272 *** runcommand pull -qu
272 *** runcommand ci -Aqm non-lfs
273 *** runcommand ci -Aqm non-lfs
273
274
274 $ cd ../client
275 $ cd ../client
275
276
276 The difference here is that the push above failed when the extension isn't
277 The difference here is that the push above failed when the extension isn't
277 enabled on the server.
278 enabled on the server.
278 $ hg identify http://localhost:$HGPORT
279 $ hg identify http://localhost:$HGPORT
279 8374dc4052cb (lfsremote-on !)
280 8374dc4052cb (lfsremote-on !)
280 1477875038c6 (lfsremote-off !)
281 1477875038c6 (lfsremote-off !)
281
282
282 Don't bother testing the lfsremote-off cases: the server won't be able
283 Don't bother testing the lfsremote-off cases: the server won't be able
283 to launch if there's lfs content and the extension is disabled.
284 to launch if there's lfs content and the extension is disabled.
284
285
285 #if lfsremote-on
286 #if lfsremote-on
286
287
287 --------------------------------------------------------------------------------
288 --------------------------------------------------------------------------------
288 Case #4: client with non-lfs content and the extension disabled; server with
289 Case #4: client with non-lfs content and the extension disabled; server with
289 lfs content, and the extension enabled.
290 lfs content, and the extension enabled.
290
291
291 $ cat >> $HGRCPATH <<EOF
292 $ cat >> $HGRCPATH <<EOF
292 > [extensions]
293 > [extensions]
293 > lfs = !
294 > lfs = !
294 > EOF
295 > EOF
295
296
296 $ hg init $TESTTMP/client4
297 $ hg init $TESTTMP/client4
297 $ cd $TESTTMP/client4
298 $ cd $TESTTMP/client4
298 $ cat >> .hg/hgrc <<EOF
299 $ cat >> .hg/hgrc <<EOF
299 > [paths]
300 > [paths]
300 > default = http://localhost:$HGPORT
301 > default = http://localhost:$HGPORT
301 > EOF
302 > EOF
302 $ echo 'non-lfs' > nonlfs2.txt
303 $ echo 'non-lfs' > nonlfs2.txt
303 $ hg ci -Aqm 'non-lfs'
304 $ hg ci -Aqm 'non-lfs'
304 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
305 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
305 $TESTTMP/server/.hg/requires:lfs
306 $TESTTMP/server/.hg/requires:lfs
306
307
307 $ hg push -q --force
308 $ hg push -q --force
308 warning: repository is unrelated
309 warning: repository is unrelated
309 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
310 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
310 $TESTTMP/server/.hg/requires:lfs
311 $TESTTMP/server/.hg/requires:lfs
311
312
312 $ hg clone http://localhost:$HGPORT $TESTTMP/client4_clone
313 $ hg clone http://localhost:$HGPORT $TESTTMP/client4_clone
313 (remote is using large file support (lfs), but it is explicitly disabled in the local configuration)
314 (remote is using large file support (lfs), but it is explicitly disabled in the local configuration)
314 abort: repository requires features unknown to this Mercurial: lfs!
315 abort: repository requires features unknown to this Mercurial: lfs!
315 (see https://mercurial-scm.org/wiki/MissingRequirement for more information)
316 (see https://mercurial-scm.org/wiki/MissingRequirement for more information)
316 [255]
317 [255]
317 $ grep 'lfs' $TESTTMP/client4_clone/.hg/requires $SERVER_REQUIRES
318 $ grep 'lfs' $TESTTMP/client4_clone/.hg/requires $SERVER_REQUIRES
318 grep: $TESTTMP/client4_clone/.hg/requires: $ENOENT$
319 grep: $TESTTMP/client4_clone/.hg/requires: $ENOENT$
319 $TESTTMP/server/.hg/requires:lfs
320 $TESTTMP/server/.hg/requires:lfs
320 [2]
321 [2]
321
322
322 TODO: fail more gracefully.
323 TODO: fail more gracefully.
323
324
324 $ hg init $TESTTMP/client4_pull
325 $ hg init $TESTTMP/client4_pull
325 $ hg -R $TESTTMP/client4_pull pull http://localhost:$HGPORT
326 $ hg -R $TESTTMP/client4_pull pull http://localhost:$HGPORT
326 pulling from http://localhost:$HGPORT/
327 pulling from http://localhost:$HGPORT/
327 requesting all changes
328 requesting all changes
328 remote: abort: no common changegroup version
329 remote: abort: no common changegroup version
329 abort: pull failed on remote
330 abort: pull failed on remote
330 [255]
331 [255]
331 $ grep 'lfs' $TESTTMP/client4_pull/.hg/requires $SERVER_REQUIRES
332 $ grep 'lfs' $TESTTMP/client4_pull/.hg/requires $SERVER_REQUIRES
332 $TESTTMP/server/.hg/requires:lfs
333 $TESTTMP/server/.hg/requires:lfs
333
334
334 $ hg identify http://localhost:$HGPORT
335 $ hg identify http://localhost:$HGPORT
335 03b080fa9d93
336 03b080fa9d93
336
337
337 --------------------------------------------------------------------------------
338 --------------------------------------------------------------------------------
338 Case #5: client with non-lfs content and the extension enabled; server with
339 Case #5: client with non-lfs content and the extension enabled; server with
339 lfs content, and the extension enabled.
340 lfs content, and the extension enabled.
340
341
341 $ cat >> $HGRCPATH <<EOF
342 $ cat >> $HGRCPATH <<EOF
342 > [extensions]
343 > [extensions]
343 > lfs =
344 > lfs =
344 > EOF
345 > EOF
345 $ echo 'non-lfs' > nonlfs3.txt
346 $ echo 'non-lfs' > nonlfs3.txt
346 $ hg ci -Aqm 'non-lfs file with lfs client'
347 $ hg ci -Aqm 'non-lfs file with lfs client'
347
348
348 $ hg push -q
349 $ hg push -q
349 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
350 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
350 $TESTTMP/server/.hg/requires:lfs
351 $TESTTMP/server/.hg/requires:lfs
351
352
352 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client5_clone
353 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client5_clone
353 $ grep 'lfs' $TESTTMP/client5_clone/.hg/requires $SERVER_REQUIRES
354 $ grep 'lfs' $TESTTMP/client5_clone/.hg/requires $SERVER_REQUIRES
354 $TESTTMP/client5_clone/.hg/requires:lfs
355 $TESTTMP/client5_clone/.hg/requires:lfs
355 $TESTTMP/server/.hg/requires:lfs
356 $TESTTMP/server/.hg/requires:lfs
356
357
357 $ hg init $TESTTMP/client5_pull
358 $ hg init $TESTTMP/client5_pull
358 $ hg -R $TESTTMP/client5_pull pull -q http://localhost:$HGPORT
359 $ hg -R $TESTTMP/client5_pull pull -q http://localhost:$HGPORT
359 $ grep 'lfs' $TESTTMP/client5_pull/.hg/requires $SERVER_REQUIRES
360 $ grep 'lfs' $TESTTMP/client5_pull/.hg/requires $SERVER_REQUIRES
360 $TESTTMP/client5_pull/.hg/requires:lfs
361 $TESTTMP/client5_pull/.hg/requires:lfs
361 $TESTTMP/server/.hg/requires:lfs
362 $TESTTMP/server/.hg/requires:lfs
362
363
363 $ hg identify http://localhost:$HGPORT
364 $ hg identify http://localhost:$HGPORT
364 c729025cc5e3
365 c729025cc5e3
365
366
366 $ mv $HGRCPATH $HGRCPATH.tmp
367 $ mv $HGRCPATH $HGRCPATH.tmp
367 $ cp $HGRCPATH.orig $HGRCPATH
368 $ cp $HGRCPATH.orig $HGRCPATH
368
369
369 >>> from __future__ import absolute_import
370 >>> from __future__ import absolute_import
370 >>> from hgclient import bprint, check, readchannel, runcommand, stdout
371 >>> from hgclient import bprint, check, readchannel, runcommand, stdout
371 >>> @check
372 >>> @check
372 ... def checkflags(server):
373 ... def checkflags(server):
373 ... readchannel(server)
374 ... readchannel(server)
374 ... bprint(b'')
375 ... bprint(b'')
375 ... bprint(b'# LFS required- both lfs and non-lfs revlogs have 0x2000 flag')
376 ... bprint(b'# LFS required- both lfs and non-lfs revlogs have 0x2000 flag')
376 ... stdout.flush()
377 ... stdout.flush()
377 ... runcommand(server, [b'debugprocessors', b'lfs.bin', b'-R',
378 ... runcommand(server, [b'debugprocessors', b'lfs.bin', b'-R',
378 ... b'../server'])
379 ... b'../server'])
379 ... runcommand(server, [b'debugprocessors', b'nonlfs2.txt', b'-R',
380 ... runcommand(server, [b'debugprocessors', b'nonlfs2.txt', b'-R',
380 ... b'../server'])
381 ... b'../server'])
381 ... runcommand(server, [b'config', b'extensions', b'--cwd',
382 ... runcommand(server, [b'config', b'extensions', b'--cwd',
382 ... b'../server'])
383 ... b'../server'])
383 ...
384 ...
384 ... bprint(b"\n# LFS not enabled- revlogs don't have 0x2000 flag")
385 ... bprint(b"\n# LFS not enabled- revlogs don't have 0x2000 flag")
385 ... stdout.flush()
386 ... stdout.flush()
386 ... runcommand(server, [b'debugprocessors', b'nonlfs3.txt'])
387 ... runcommand(server, [b'debugprocessors', b'nonlfs3.txt'])
387 ... runcommand(server, [b'config', b'extensions'])
388 ... runcommand(server, [b'config', b'extensions'])
388
389
389 # LFS required- both lfs and non-lfs revlogs have 0x2000 flag
390 # LFS required- both lfs and non-lfs revlogs have 0x2000 flag
390 *** runcommand debugprocessors lfs.bin -R ../server
391 *** runcommand debugprocessors lfs.bin -R ../server
391 registered processor '0x8000'
392 registered processor '0x8000'
392 registered processor '0x2000'
393 registered processor '0x2000'
393 *** runcommand debugprocessors nonlfs2.txt -R ../server
394 *** runcommand debugprocessors nonlfs2.txt -R ../server
394 registered processor '0x8000'
395 registered processor '0x8000'
395 registered processor '0x2000'
396 registered processor '0x2000'
396 *** runcommand config extensions --cwd ../server
397 *** runcommand config extensions --cwd ../server
397 extensions.debugprocessors=$TESTTMP/debugprocessors.py
398 extensions.debugprocessors=$TESTTMP/debugprocessors.py
398 extensions.lfs=
399 extensions.lfs=
399
400
400 # LFS not enabled- revlogs don't have 0x2000 flag
401 # LFS not enabled- revlogs don't have 0x2000 flag
401 *** runcommand debugprocessors nonlfs3.txt
402 *** runcommand debugprocessors nonlfs3.txt
402 registered processor '0x8000'
403 registered processor '0x8000'
403 *** runcommand config extensions
404 *** runcommand config extensions
404 extensions.debugprocessors=$TESTTMP/debugprocessors.py
405 extensions.debugprocessors=$TESTTMP/debugprocessors.py
405
406
406 $ rm $HGRCPATH
407 $ rm $HGRCPATH
407 $ mv $HGRCPATH.tmp $HGRCPATH
408 $ mv $HGRCPATH.tmp $HGRCPATH
408
409
409 $ hg clone $TESTTMP/client $TESTTMP/nonlfs -qr 0 --config extensions.lfs=
410 $ hg clone $TESTTMP/client $TESTTMP/nonlfs -qr 0 --config extensions.lfs=
410 $ cat >> $TESTTMP/nonlfs/.hg/hgrc <<EOF
411 $ cat >> $TESTTMP/nonlfs/.hg/hgrc <<EOF
411 > [extensions]
412 > [extensions]
412 > lfs = !
413 > lfs = !
413 > EOF
414 > EOF
414
415
415 >>> from __future__ import absolute_import, print_function
416 >>> from __future__ import absolute_import, print_function
416 >>> from hgclient import bprint, check, readchannel, runcommand, stdout
417 >>> from hgclient import bprint, check, readchannel, runcommand, stdout
417 >>> @check
418 >>> @check
418 ... def checkflags2(server):
419 ... def checkflags2(server):
419 ... readchannel(server)
420 ... readchannel(server)
420 ... bprint(b'')
421 ... bprint(b'')
421 ... bprint(b'# LFS enabled- both lfs and non-lfs revlogs have 0x2000 flag')
422 ... bprint(b'# LFS enabled- both lfs and non-lfs revlogs have 0x2000 flag')
422 ... stdout.flush()
423 ... stdout.flush()
423 ... runcommand(server, [b'debugprocessors', b'lfs.bin', b'-R',
424 ... runcommand(server, [b'debugprocessors', b'lfs.bin', b'-R',
424 ... b'../server'])
425 ... b'../server'])
425 ... runcommand(server, [b'debugprocessors', b'nonlfs2.txt', b'-R',
426 ... runcommand(server, [b'debugprocessors', b'nonlfs2.txt', b'-R',
426 ... b'../server'])
427 ... b'../server'])
427 ... runcommand(server, [b'config', b'extensions', b'--cwd',
428 ... runcommand(server, [b'config', b'extensions', b'--cwd',
428 ... b'../server'])
429 ... b'../server'])
429 ...
430 ...
430 ... bprint(b'\n# LFS enabled without requirement- revlogs have 0x2000 flag')
431 ... bprint(b'\n# LFS enabled without requirement- revlogs have 0x2000 flag')
431 ... stdout.flush()
432 ... stdout.flush()
432 ... runcommand(server, [b'debugprocessors', b'nonlfs3.txt'])
433 ... runcommand(server, [b'debugprocessors', b'nonlfs3.txt'])
433 ... runcommand(server, [b'config', b'extensions'])
434 ... runcommand(server, [b'config', b'extensions'])
434 ...
435 ...
435 ... bprint(b"\n# LFS disabled locally- revlogs don't have 0x2000 flag")
436 ... bprint(b"\n# LFS disabled locally- revlogs don't have 0x2000 flag")
436 ... stdout.flush()
437 ... stdout.flush()
437 ... runcommand(server, [b'debugprocessors', b'nonlfs.txt', b'-R',
438 ... runcommand(server, [b'debugprocessors', b'nonlfs.txt', b'-R',
438 ... b'../nonlfs'])
439 ... b'../nonlfs'])
439 ... runcommand(server, [b'config', b'extensions', b'--cwd',
440 ... runcommand(server, [b'config', b'extensions', b'--cwd',
440 ... b'../nonlfs'])
441 ... b'../nonlfs'])
441
442
442 # LFS enabled- both lfs and non-lfs revlogs have 0x2000 flag
443 # LFS enabled- both lfs and non-lfs revlogs have 0x2000 flag
443 *** runcommand debugprocessors lfs.bin -R ../server
444 *** runcommand debugprocessors lfs.bin -R ../server
444 registered processor '0x8000'
445 registered processor '0x8000'
445 registered processor '0x2000'
446 registered processor '0x2000'
446 *** runcommand debugprocessors nonlfs2.txt -R ../server
447 *** runcommand debugprocessors nonlfs2.txt -R ../server
447 registered processor '0x8000'
448 registered processor '0x8000'
448 registered processor '0x2000'
449 registered processor '0x2000'
449 *** runcommand config extensions --cwd ../server
450 *** runcommand config extensions --cwd ../server
450 extensions.debugprocessors=$TESTTMP/debugprocessors.py
451 extensions.debugprocessors=$TESTTMP/debugprocessors.py
451 extensions.lfs=
452 extensions.lfs=
452
453
453 # LFS enabled without requirement- revlogs have 0x2000 flag
454 # LFS enabled without requirement- revlogs have 0x2000 flag
454 *** runcommand debugprocessors nonlfs3.txt
455 *** runcommand debugprocessors nonlfs3.txt
455 registered processor '0x8000'
456 registered processor '0x8000'
456 registered processor '0x2000'
457 registered processor '0x2000'
457 *** runcommand config extensions
458 *** runcommand config extensions
458 extensions.debugprocessors=$TESTTMP/debugprocessors.py
459 extensions.debugprocessors=$TESTTMP/debugprocessors.py
459 extensions.lfs=
460 extensions.lfs=
460
461
461 # LFS disabled locally- revlogs don't have 0x2000 flag
462 # LFS disabled locally- revlogs don't have 0x2000 flag
462 *** runcommand debugprocessors nonlfs.txt -R ../nonlfs
463 *** runcommand debugprocessors nonlfs.txt -R ../nonlfs
463 registered processor '0x8000'
464 registered processor '0x8000'
464 *** runcommand config extensions --cwd ../nonlfs
465 *** runcommand config extensions --cwd ../nonlfs
465 extensions.debugprocessors=$TESTTMP/debugprocessors.py
466 extensions.debugprocessors=$TESTTMP/debugprocessors.py
466 extensions.lfs=!
467 extensions.lfs=!
467
468
468 --------------------------------------------------------------------------------
469 --------------------------------------------------------------------------------
469 Case #6: client with lfs content and the extension enabled; server with
470 Case #6: client with lfs content and the extension enabled; server with
470 lfs content, and the extension enabled.
471 lfs content, and the extension enabled.
471
472
472 $ echo 'this is another lfs file' > lfs2.txt
473 $ echo 'this is another lfs file' > lfs2.txt
473 $ hg ci -Aqm 'lfs file with lfs client'
474 $ hg ci -Aqm 'lfs file with lfs client'
474
475
475 $ hg --config paths.default= push -v http://localhost:$HGPORT
476 $ hg --config paths.default= push -v http://localhost:$HGPORT
476 pushing to http://localhost:$HGPORT/
477 pushing to http://localhost:$HGPORT/
477 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
478 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
478 searching for changes
479 searching for changes
479 remote has heads on branch 'default' that are not known locally: 8374dc4052cb
480 remote has heads on branch 'default' that are not known locally: 8374dc4052cb
480 lfs: uploading a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de (25 bytes)
481 lfs: uploading a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de (25 bytes)
481 lfs: processed: a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de
482 lfs: processed: a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de
482 lfs: uploaded 1 files (25 bytes)
483 lfs: uploaded 1 files (25 bytes)
483 1 changesets found
484 1 changesets found
484 uncompressed size of bundle content:
485 uncompressed size of bundle content:
485 206 (changelog)
486 206 (changelog)
486 172 (manifests)
487 172 (manifests)
487 275 lfs2.txt
488 275 lfs2.txt
488 remote: adding changesets
489 remote: adding changesets
489 remote: adding manifests
490 remote: adding manifests
490 remote: adding file changes
491 remote: adding file changes
491 remote: added 1 changesets with 1 changes to 1 files
492 remote: added 1 changesets with 1 changes to 1 files
492 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
493 $ grep 'lfs' .hg/requires $SERVER_REQUIRES
493 .hg/requires:lfs
494 .hg/requires:lfs
494 $TESTTMP/server/.hg/requires:lfs
495 $TESTTMP/server/.hg/requires:lfs
495
496
496 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client6_clone
497 $ hg clone -q http://localhost:$HGPORT $TESTTMP/client6_clone
497 $ grep 'lfs' $TESTTMP/client6_clone/.hg/requires $SERVER_REQUIRES
498 $ grep 'lfs' $TESTTMP/client6_clone/.hg/requires $SERVER_REQUIRES
498 $TESTTMP/client6_clone/.hg/requires:lfs
499 $TESTTMP/client6_clone/.hg/requires:lfs
499 $TESTTMP/server/.hg/requires:lfs
500 $TESTTMP/server/.hg/requires:lfs
500
501
501 $ hg init $TESTTMP/client6_pull
502 $ hg init $TESTTMP/client6_pull
502 $ hg -R $TESTTMP/client6_pull pull -u -v http://localhost:$HGPORT
503 $ hg -R $TESTTMP/client6_pull pull -u -v http://localhost:$HGPORT
503 pulling from http://localhost:$HGPORT/
504 pulling from http://localhost:$HGPORT/
504 requesting all changes
505 requesting all changes
505 adding changesets
506 adding changesets
506 adding manifests
507 adding manifests
507 adding file changes
508 adding file changes
508 calling hook pretxnchangegroup.lfs: hgext.lfs.checkrequireslfs
509 calling hook pretxnchangegroup.lfs: hgext.lfs.checkrequireslfs
509 added 6 changesets with 5 changes to 5 files (+1 heads)
510 added 6 changesets with 5 changes to 5 files (+1 heads)
510 new changesets d437e1d24fbd:d3b84d50eacb
511 new changesets d437e1d24fbd:d3b84d50eacb
511 resolving manifests
512 resolving manifests
512 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
513 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
513 lfs: downloading a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de (25 bytes)
514 lfs: downloading a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de (25 bytes)
514 lfs: processed: a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de
515 lfs: processed: a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de
515 lfs: downloaded 1 files (25 bytes)
516 lfs: downloaded 1 files (25 bytes)
516 getting lfs2.txt
517 getting lfs2.txt
517 lfs: found a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de in the local lfs store
518 lfs: found a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de in the local lfs store
518 getting nonlfs2.txt
519 getting nonlfs2.txt
519 getting nonlfs3.txt
520 getting nonlfs3.txt
520 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
521 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
521 updated to "d3b84d50eacb: lfs file with lfs client"
522 updated to "d3b84d50eacb: lfs file with lfs client"
522 1 other heads for branch "default"
523 1 other heads for branch "default"
523 (sent 3 HTTP requests and * bytes; received * bytes in responses) (glob)
524 (sent 3 HTTP requests and * bytes; received * bytes in responses) (glob)
524 $ grep 'lfs' $TESTTMP/client6_pull/.hg/requires $SERVER_REQUIRES
525 $ grep 'lfs' $TESTTMP/client6_pull/.hg/requires $SERVER_REQUIRES
525 $TESTTMP/client6_pull/.hg/requires:lfs
526 $TESTTMP/client6_pull/.hg/requires:lfs
526 $TESTTMP/server/.hg/requires:lfs
527 $TESTTMP/server/.hg/requires:lfs
527
528
528 $ hg identify http://localhost:$HGPORT
529 $ hg identify http://localhost:$HGPORT
529 d3b84d50eacb
530 d3b84d50eacb
530
531
531 --------------------------------------------------------------------------------
532 --------------------------------------------------------------------------------
532 Misc: process dies early if a requirement exists and the extension is disabled
533 Misc: process dies early if a requirement exists and the extension is disabled
533
534
534 $ hg --config extensions.lfs=! summary
535 $ hg --config extensions.lfs=! summary
535 abort: repository requires features unknown to this Mercurial: lfs!
536 abort: repository requires features unknown to this Mercurial: lfs!
536 (see https://mercurial-scm.org/wiki/MissingRequirement for more information)
537 (see https://mercurial-scm.org/wiki/MissingRequirement for more information)
537 [255]
538 [255]
538
539
539 $ echo 'this is an lfs file' > $TESTTMP/client6_clone/lfspair1.bin
540 $ echo 'this is an lfs file' > $TESTTMP/client6_clone/lfspair1.bin
540 $ echo 'this is an lfs file too' > $TESTTMP/client6_clone/lfspair2.bin
541 $ echo 'this is an lfs file too' > $TESTTMP/client6_clone/lfspair2.bin
541 $ hg -R $TESTTMP/client6_clone ci -Aqm 'add lfs pair'
542 $ hg -R $TESTTMP/client6_clone ci -Aqm 'add lfs pair'
542 $ hg -R $TESTTMP/client6_clone push -q
543 $ hg -R $TESTTMP/client6_clone push -q
543
544
544 $ hg clone -qU http://localhost:$HGPORT $TESTTMP/bulkfetch
545 $ hg clone -qU http://localhost:$HGPORT $TESTTMP/bulkfetch
545
546
546 Cat doesn't prefetch unless data is needed (e.g. '-T {rawdata}' doesn't need it)
547 Cat doesn't prefetch unless data is needed (e.g. '-T {rawdata}' doesn't need it)
547
548
548 $ hg --cwd $TESTTMP/bulkfetch cat -vr tip lfspair1.bin -T '{rawdata}\n{path}\n'
549 $ hg --cwd $TESTTMP/bulkfetch cat -vr tip lfspair1.bin -T '{rawdata}\n{path}\n'
549 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
550 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
550 version https://git-lfs.github.com/spec/v1
551 version https://git-lfs.github.com/spec/v1
551 oid sha256:cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782
552 oid sha256:cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782
552 size 20
553 size 20
553 x-is-binary 0
554 x-is-binary 0
554
555
555 lfspair1.bin
556 lfspair1.bin
556
557
557 $ hg --cwd $TESTTMP/bulkfetch cat -vr tip lfspair1.bin -T json
558 $ hg --cwd $TESTTMP/bulkfetch cat -vr tip lfspair1.bin -T json
558 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
559 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
559 [lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
560 [lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
560 lfs: downloading cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 (20 bytes)
561 lfs: downloading cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 (20 bytes)
561 lfs: processed: cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782
562 lfs: processed: cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782
562 lfs: downloaded 1 files (20 bytes)
563 lfs: downloaded 1 files (20 bytes)
563 lfs: found cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 in the local lfs store
564 lfs: found cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 in the local lfs store
564
565
565 {
566 {
566 "data": "this is an lfs file\n",
567 "data": "this is an lfs file\n",
567 "path": "lfspair1.bin",
568 "path": "lfspair1.bin",
568 "rawdata": "version https://git-lfs.github.com/spec/v1\noid sha256:cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782\nsize 20\nx-is-binary 0\n"
569 "rawdata": "version https://git-lfs.github.com/spec/v1\noid sha256:cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782\nsize 20\nx-is-binary 0\n"
569 }
570 }
570 ]
571 ]
571
572
572 $ rm -r $TESTTMP/bulkfetch/.hg/store/lfs
573 $ rm -r $TESTTMP/bulkfetch/.hg/store/lfs
573
574
574 $ hg --cwd $TESTTMP/bulkfetch cat -vr tip lfspair1.bin -T '{data}\n'
575 $ hg --cwd $TESTTMP/bulkfetch cat -vr tip lfspair1.bin -T '{data}\n'
575 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
576 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
576 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
577 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
577 lfs: downloading cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 (20 bytes)
578 lfs: downloading cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 (20 bytes)
578 lfs: processed: cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782
579 lfs: processed: cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782
579 lfs: downloaded 1 files (20 bytes)
580 lfs: downloaded 1 files (20 bytes)
580 lfs: found cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 in the local lfs store
581 lfs: found cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 in the local lfs store
581 this is an lfs file
582 this is an lfs file
582
583
583 $ hg --cwd $TESTTMP/bulkfetch cat -vr tip lfspair2.bin
584 $ hg --cwd $TESTTMP/bulkfetch cat -vr tip lfspair2.bin
584 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
585 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
585 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
586 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
586 lfs: downloading d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e (24 bytes)
587 lfs: downloading d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e (24 bytes)
587 lfs: processed: d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e
588 lfs: processed: d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e
588 lfs: downloaded 1 files (24 bytes)
589 lfs: downloaded 1 files (24 bytes)
589 lfs: found d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e in the local lfs store
590 lfs: found d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e in the local lfs store
590 this is an lfs file too
591 this is an lfs file too
591
592
592 Export will prefetch all needed files across all needed revisions
593 Export will prefetch all needed files across all needed revisions
593
594
594 $ rm -r $TESTTMP/bulkfetch/.hg/store/lfs
595 $ rm -r $TESTTMP/bulkfetch/.hg/store/lfs
595 $ hg -R $TESTTMP/bulkfetch -v export -r 0:tip -o all.export
596 $ hg -R $TESTTMP/bulkfetch -v export -r 0:tip -o all.export
596 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
597 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
597 exporting patches:
598 exporting patches:
598 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
599 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
599 lfs: need to transfer 4 objects (92 bytes)
600 lfs: need to transfer 4 objects (92 bytes)
600 lfs: downloading a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de (25 bytes)
601 lfs: downloading a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de (25 bytes)
601 lfs: processed: a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de
602 lfs: processed: a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de
602 lfs: downloading bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc (23 bytes)
603 lfs: downloading bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc (23 bytes)
603 lfs: processed: bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc
604 lfs: processed: bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc
604 lfs: downloading cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 (20 bytes)
605 lfs: downloading cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 (20 bytes)
605 lfs: processed: cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782
606 lfs: processed: cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782
606 lfs: downloading d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e (24 bytes)
607 lfs: downloading d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e (24 bytes)
607 lfs: processed: d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e
608 lfs: processed: d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e
608 lfs: downloaded 4 files (92 bytes)
609 lfs: downloaded 4 files (92 bytes)
609 all.export
610 all.export
610 lfs: found bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc in the local lfs store
611 lfs: found bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc in the local lfs store
611 lfs: found a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de in the local lfs store
612 lfs: found a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de in the local lfs store
612 lfs: found cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 in the local lfs store
613 lfs: found cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 in the local lfs store
613 lfs: found d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e in the local lfs store
614 lfs: found d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e in the local lfs store
614
615
615 Export with selected files is used with `extdiff --patch`
616 Export with selected files is used with `extdiff --patch`
616
617
617 $ rm -r $TESTTMP/bulkfetch/.hg/store/lfs
618 $ rm -r $TESTTMP/bulkfetch/.hg/store/lfs
618 $ hg --config extensions.extdiff= \
619 $ hg --config extensions.extdiff= \
619 > -R $TESTTMP/bulkfetch -v extdiff -r 2:tip --patch $TESTTMP/bulkfetch/lfs.bin
620 > -R $TESTTMP/bulkfetch -v extdiff -r 2:tip --patch $TESTTMP/bulkfetch/lfs.bin
620 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
621 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
621 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
622 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
622 lfs: downloading bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc (23 bytes)
623 lfs: downloading bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc (23 bytes)
623 lfs: processed: bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc
624 lfs: processed: bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc
624 lfs: downloaded 1 files (23 bytes)
625 lfs: downloaded 1 files (23 bytes)
625 */hg-8374dc4052cb.patch (glob)
626 */hg-8374dc4052cb.patch (glob)
626 lfs: found bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc in the local lfs store
627 lfs: found bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc in the local lfs store
627 */hg-9640b57e77b1.patch (glob)
628 */hg-9640b57e77b1.patch (glob)
628 --- */hg-8374dc4052cb.patch * (glob)
629 --- */hg-8374dc4052cb.patch * (glob)
629 +++ */hg-9640b57e77b1.patch * (glob)
630 +++ */hg-9640b57e77b1.patch * (glob)
630 @@ -2,12 +2,7 @@
631 @@ -2,12 +2,7 @@
631 # User test
632 # User test
632 # Date 0 0
633 # Date 0 0
633 # Thu Jan 01 00:00:00 1970 +0000
634 # Thu Jan 01 00:00:00 1970 +0000
634 -# Node ID 8374dc4052cbd388e79d9dc4ddb29784097aa354
635 -# Node ID 8374dc4052cbd388e79d9dc4ddb29784097aa354
635 -# Parent 1477875038c60152e391238920a16381c627b487
636 -# Parent 1477875038c60152e391238920a16381c627b487
636 -lfs
637 -lfs
637 +# Node ID 9640b57e77b14c3a0144fb4478b6cc13e13ea0d1
638 +# Node ID 9640b57e77b14c3a0144fb4478b6cc13e13ea0d1
638 +# Parent d3b84d50eacbd56638e11abce6b8616aaba54420
639 +# Parent d3b84d50eacbd56638e11abce6b8616aaba54420
639 +add lfs pair
640 +add lfs pair
640
641
641 -diff -r 1477875038c6 -r 8374dc4052cb lfs.bin
642 -diff -r 1477875038c6 -r 8374dc4052cb lfs.bin
642 ---- /dev/null Thu Jan 01 00:00:00 1970 +0000
643 ---- /dev/null Thu Jan 01 00:00:00 1970 +0000
643 -+++ b/lfs.bin Thu Jan 01 00:00:00 1970 +0000
644 -+++ b/lfs.bin Thu Jan 01 00:00:00 1970 +0000
644 -@@ -0,0 +1,1 @@
645 -@@ -0,0 +1,1 @@
645 -+this is a big lfs file
646 -+this is a big lfs file
646 cleaning up temp directory
647 cleaning up temp directory
647 [1]
648 [1]
648
649
649 Diff will prefetch files
650 Diff will prefetch files
650
651
651 $ rm -r $TESTTMP/bulkfetch/.hg/store/lfs
652 $ rm -r $TESTTMP/bulkfetch/.hg/store/lfs
652 $ hg -R $TESTTMP/bulkfetch -v diff -r 2:tip
653 $ hg -R $TESTTMP/bulkfetch -v diff -r 2:tip
653 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
654 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
654 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
655 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
655 lfs: need to transfer 4 objects (92 bytes)
656 lfs: need to transfer 4 objects (92 bytes)
656 lfs: downloading a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de (25 bytes)
657 lfs: downloading a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de (25 bytes)
657 lfs: processed: a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de
658 lfs: processed: a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de
658 lfs: downloading bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc (23 bytes)
659 lfs: downloading bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc (23 bytes)
659 lfs: processed: bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc
660 lfs: processed: bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc
660 lfs: downloading cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 (20 bytes)
661 lfs: downloading cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 (20 bytes)
661 lfs: processed: cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782
662 lfs: processed: cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782
662 lfs: downloading d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e (24 bytes)
663 lfs: downloading d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e (24 bytes)
663 lfs: processed: d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e
664 lfs: processed: d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e
664 lfs: downloaded 4 files (92 bytes)
665 lfs: downloaded 4 files (92 bytes)
665 lfs: found bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc in the local lfs store
666 lfs: found bed80f00180ac404b843628ab56a1c1984d6145c391cd1628a7dd7d2598d71fc in the local lfs store
666 lfs: found a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de in the local lfs store
667 lfs: found a82f1c5cea0d40e3bb3a849686bb4e6ae47ca27e614de55c1ed0325698ef68de in the local lfs store
667 lfs: found cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 in the local lfs store
668 lfs: found cf1b2787b74e66547d931b6ebe28ff63303e803cb2baa14a8f57c4383d875782 in the local lfs store
668 lfs: found d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e in the local lfs store
669 lfs: found d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e in the local lfs store
669 diff -r 8374dc4052cb -r 9640b57e77b1 lfs.bin
670 diff -r 8374dc4052cb -r 9640b57e77b1 lfs.bin
670 --- a/lfs.bin Thu Jan 01 00:00:00 1970 +0000
671 --- a/lfs.bin Thu Jan 01 00:00:00 1970 +0000
671 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000
672 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000
672 @@ -1,1 +0,0 @@
673 @@ -1,1 +0,0 @@
673 -this is a big lfs file
674 -this is a big lfs file
674 diff -r 8374dc4052cb -r 9640b57e77b1 lfs2.txt
675 diff -r 8374dc4052cb -r 9640b57e77b1 lfs2.txt
675 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
676 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
676 +++ b/lfs2.txt Thu Jan 01 00:00:00 1970 +0000
677 +++ b/lfs2.txt Thu Jan 01 00:00:00 1970 +0000
677 @@ -0,0 +1,1 @@
678 @@ -0,0 +1,1 @@
678 +this is another lfs file
679 +this is another lfs file
679 diff -r 8374dc4052cb -r 9640b57e77b1 lfspair1.bin
680 diff -r 8374dc4052cb -r 9640b57e77b1 lfspair1.bin
680 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
681 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
681 +++ b/lfspair1.bin Thu Jan 01 00:00:00 1970 +0000
682 +++ b/lfspair1.bin Thu Jan 01 00:00:00 1970 +0000
682 @@ -0,0 +1,1 @@
683 @@ -0,0 +1,1 @@
683 +this is an lfs file
684 +this is an lfs file
684 diff -r 8374dc4052cb -r 9640b57e77b1 lfspair2.bin
685 diff -r 8374dc4052cb -r 9640b57e77b1 lfspair2.bin
685 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
686 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
686 +++ b/lfspair2.bin Thu Jan 01 00:00:00 1970 +0000
687 +++ b/lfspair2.bin Thu Jan 01 00:00:00 1970 +0000
687 @@ -0,0 +1,1 @@
688 @@ -0,0 +1,1 @@
688 +this is an lfs file too
689 +this is an lfs file too
689 diff -r 8374dc4052cb -r 9640b57e77b1 nonlfs.txt
690 diff -r 8374dc4052cb -r 9640b57e77b1 nonlfs.txt
690 --- a/nonlfs.txt Thu Jan 01 00:00:00 1970 +0000
691 --- a/nonlfs.txt Thu Jan 01 00:00:00 1970 +0000
691 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000
692 +++ /dev/null Thu Jan 01 00:00:00 1970 +0000
692 @@ -1,1 +0,0 @@
693 @@ -1,1 +0,0 @@
693 -non-lfs
694 -non-lfs
694 diff -r 8374dc4052cb -r 9640b57e77b1 nonlfs3.txt
695 diff -r 8374dc4052cb -r 9640b57e77b1 nonlfs3.txt
695 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
696 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
696 +++ b/nonlfs3.txt Thu Jan 01 00:00:00 1970 +0000
697 +++ b/nonlfs3.txt Thu Jan 01 00:00:00 1970 +0000
697 @@ -0,0 +1,1 @@
698 @@ -0,0 +1,1 @@
698 +non-lfs
699 +non-lfs
699
700
700 Only the files required by diff are prefetched
701 Only the files required by diff are prefetched
701
702
702 $ rm -r $TESTTMP/bulkfetch/.hg/store/lfs
703 $ rm -r $TESTTMP/bulkfetch/.hg/store/lfs
703 $ hg -R $TESTTMP/bulkfetch -v diff -r 2:tip $TESTTMP/bulkfetch/lfspair2.bin
704 $ hg -R $TESTTMP/bulkfetch -v diff -r 2:tip $TESTTMP/bulkfetch/lfspair2.bin
704 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
705 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
705 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
706 lfs: assuming remote store: http://localhost:$HGPORT/.git/info/lfs
706 lfs: downloading d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e (24 bytes)
707 lfs: downloading d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e (24 bytes)
707 lfs: processed: d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e
708 lfs: processed: d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e
708 lfs: downloaded 1 files (24 bytes)
709 lfs: downloaded 1 files (24 bytes)
709 lfs: found d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e in the local lfs store
710 lfs: found d96eda2c74b56e95cfb5ffb66b6503e198cc6fc4a09dc877de925feebc65786e in the local lfs store
710 diff -r 8374dc4052cb -r 9640b57e77b1 lfspair2.bin
711 diff -r 8374dc4052cb -r 9640b57e77b1 lfspair2.bin
711 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
712 --- /dev/null Thu Jan 01 00:00:00 1970 +0000
712 +++ b/lfspair2.bin Thu Jan 01 00:00:00 1970 +0000
713 +++ b/lfspair2.bin Thu Jan 01 00:00:00 1970 +0000
713 @@ -0,0 +1,1 @@
714 @@ -0,0 +1,1 @@
714 +this is an lfs file too
715 +this is an lfs file too
715
716
716 #endif
717 #endif
717
718
718 $ "$PYTHON" $TESTDIR/killdaemons.py $DAEMON_PIDS
719 $ "$PYTHON" $TESTDIR/killdaemons.py $DAEMON_PIDS
719
720
720 $ cat $TESTTMP/errors.log
721 $ cat $TESTTMP/errors.log
@@ -1,941 +1,943 b''
1 #require no-reposimplestore no-chg
1 #require no-reposimplestore no-chg
2 #testcases git-server hg-server
2 #testcases git-server hg-server
3
3
4 #if git-server
4 #if git-server
5 #require lfs-test-server
5 #require lfs-test-server
6 #else
6 #else
7 #require serve
7 #require serve
8 #endif
8 #endif
9
9
10 #if git-server
10 #if git-server
11 $ LFS_LISTEN="tcp://:$HGPORT"
11 $ LFS_LISTEN="tcp://:$HGPORT"
12 $ LFS_HOST="localhost:$HGPORT"
12 $ LFS_HOST="localhost:$HGPORT"
13 $ LFS_PUBLIC=1
13 $ LFS_PUBLIC=1
14 $ export LFS_LISTEN LFS_HOST LFS_PUBLIC
14 $ export LFS_LISTEN LFS_HOST LFS_PUBLIC
15 #else
15 #else
16 $ LFS_HOST="localhost:$HGPORT/.git/info/lfs"
16 $ LFS_HOST="localhost:$HGPORT/.git/info/lfs"
17 #endif
17 #endif
18
18
19 #if no-windows git-server
19 #if no-windows git-server
20 $ lfs-test-server &> lfs-server.log &
20 $ lfs-test-server &> lfs-server.log &
21 $ echo $! >> $DAEMON_PIDS
21 $ echo $! >> $DAEMON_PIDS
22 #endif
22 #endif
23
23
24 #if windows git-server
24 #if windows git-server
25 $ cat >> $TESTTMP/spawn.py <<EOF
25 $ cat >> $TESTTMP/spawn.py <<EOF
26 > import os
26 > import os
27 > import subprocess
27 > import subprocess
28 > import sys
28 > import sys
29 >
29 >
30 > for path in os.environ["PATH"].split(os.pathsep):
30 > for path in os.environ["PATH"].split(os.pathsep):
31 > exe = os.path.join(path, 'lfs-test-server.exe')
31 > exe = os.path.join(path, 'lfs-test-server.exe')
32 > if os.path.exists(exe):
32 > if os.path.exists(exe):
33 > with open('lfs-server.log', 'wb') as out:
33 > with open('lfs-server.log', 'wb') as out:
34 > p = subprocess.Popen(exe, stdout=out, stderr=out)
34 > p = subprocess.Popen(exe, stdout=out, stderr=out)
35 > sys.stdout.write('%s\n' % p.pid)
35 > sys.stdout.write('%s\n' % p.pid)
36 > sys.exit(0)
36 > sys.exit(0)
37 > sys.exit(1)
37 > sys.exit(1)
38 > EOF
38 > EOF
39 $ "$PYTHON" $TESTTMP/spawn.py >> $DAEMON_PIDS
39 $ "$PYTHON" $TESTTMP/spawn.py >> $DAEMON_PIDS
40 #endif
40 #endif
41
41
42 $ cat >> $HGRCPATH <<EOF
42 $ cat >> $HGRCPATH <<EOF
43 > [experimental]
44 > lfs.worker-enable = False
43 > [extensions]
45 > [extensions]
44 > lfs=
46 > lfs=
45 > [lfs]
47 > [lfs]
46 > url=http://foo:bar@$LFS_HOST
48 > url=http://foo:bar@$LFS_HOST
47 > track=all()
49 > track=all()
48 > [web]
50 > [web]
49 > push_ssl = False
51 > push_ssl = False
50 > allow-push = *
52 > allow-push = *
51 > EOF
53 > EOF
52
54
53 Use a separate usercache; otherwise the server sees what the client commits and
55 Use a separate usercache; otherwise the server sees what the client commits and
54 never requests a transfer.
56 never requests a transfer.
55
57
56 #if hg-server
58 #if hg-server
57 $ hg init server
59 $ hg init server
58 $ hg --config "lfs.usercache=$TESTTMP/servercache" -R server serve -d \
60 $ hg --config "lfs.usercache=$TESTTMP/servercache" -R server serve -d \
59 > -p $HGPORT --pid-file=hg.pid -A $TESTTMP/access.log -E $TESTTMP/errors.log
61 > -p $HGPORT --pid-file=hg.pid -A $TESTTMP/access.log -E $TESTTMP/errors.log
60 $ cat hg.pid >> $DAEMON_PIDS
62 $ cat hg.pid >> $DAEMON_PIDS
61 #endif
63 #endif
62
64
63 $ hg init repo1
65 $ hg init repo1
64 $ cd repo1
66 $ cd repo1
65 $ echo THIS-IS-LFS > a
67 $ echo THIS-IS-LFS > a
66 $ hg commit -m a -A a
68 $ hg commit -m a -A a
67
69
68 A push can be serviced directly from the usercache if the blob isn't in the
70 A push can be serviced directly from the usercache if the blob isn't in the
69 local store.
71 local store.
70
72
71 $ hg init ../repo2
73 $ hg init ../repo2
72 $ mv .hg/store/lfs .hg/store/lfs_
74 $ mv .hg/store/lfs .hg/store/lfs_
73 $ hg push ../repo2 --debug
75 $ hg push ../repo2 --debug
74 http auth: user foo, password ***
76 http auth: user foo, password ***
75 pushing to ../repo2
77 pushing to ../repo2
76 http auth: user foo, password ***
78 http auth: user foo, password ***
77 http auth: user foo, password ***
79 http auth: user foo, password ***
78 query 1; heads
80 query 1; heads
79 searching for changes
81 searching for changes
80 1 total queries in *s (glob)
82 1 total queries in *s (glob)
81 listing keys for "phases"
83 listing keys for "phases"
82 checking for updated bookmarks
84 checking for updated bookmarks
83 listing keys for "bookmarks"
85 listing keys for "bookmarks"
84 lfs: computing set of blobs to upload
86 lfs: computing set of blobs to upload
85 Status: 200
87 Status: 200
86 Content-Length: 309 (git-server !)
88 Content-Length: 309 (git-server !)
87 Content-Length: 350 (hg-server !)
89 Content-Length: 350 (hg-server !)
88 Content-Type: application/vnd.git-lfs+json
90 Content-Type: application/vnd.git-lfs+json
89 Date: $HTTP_DATE$
91 Date: $HTTP_DATE$
90 Server: testing stub value (hg-server !)
92 Server: testing stub value (hg-server !)
91 {
93 {
92 "objects": [
94 "objects": [
93 {
95 {
94 "actions": {
96 "actions": {
95 "upload": {
97 "upload": {
96 "expires_at": "$ISO_8601_DATE_TIME$"
98 "expires_at": "$ISO_8601_DATE_TIME$"
97 "header": {
99 "header": {
98 "Accept": "application/vnd.git-lfs"
100 "Accept": "application/vnd.git-lfs"
99 }
101 }
100 "href": "http://localhost:$HGPORT/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (git-server !)
102 "href": "http://localhost:$HGPORT/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (git-server !)
101 "href": "http://localhost:$HGPORT/.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (hg-server !)
103 "href": "http://localhost:$HGPORT/.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (hg-server !)
102 }
104 }
103 }
105 }
104 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
106 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
105 "size": 12
107 "size": 12
106 }
108 }
107 ]
109 ]
108 "transfer": "basic" (hg-server !)
110 "transfer": "basic" (hg-server !)
109 }
111 }
110 lfs: uploading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
112 lfs: uploading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
111 Status: 200 (git-server !)
113 Status: 200 (git-server !)
112 Status: 201 (hg-server !)
114 Status: 201 (hg-server !)
113 Content-Length: 0
115 Content-Length: 0
114 Content-Type: text/plain; charset=utf-8
116 Content-Type: text/plain; charset=utf-8
115 Date: $HTTP_DATE$
117 Date: $HTTP_DATE$
116 Server: testing stub value (hg-server !)
118 Server: testing stub value (hg-server !)
117 lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
119 lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
118 lfs: uploaded 1 files (12 bytes)
120 lfs: uploaded 1 files (12 bytes)
119 1 changesets found
121 1 changesets found
120 list of changesets:
122 list of changesets:
121 99a7098854a3984a5c9eab0fc7a2906697b7cb5c
123 99a7098854a3984a5c9eab0fc7a2906697b7cb5c
122 bundle2-output-bundle: "HG20", 4 parts total
124 bundle2-output-bundle: "HG20", 4 parts total
123 bundle2-output-part: "replycaps" * bytes payload (glob)
125 bundle2-output-part: "replycaps" * bytes payload (glob)
124 bundle2-output-part: "check:heads" streamed payload
126 bundle2-output-part: "check:heads" streamed payload
125 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
127 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
126 bundle2-output-part: "phase-heads" 24 bytes payload
128 bundle2-output-part: "phase-heads" 24 bytes payload
127 bundle2-input-bundle: with-transaction
129 bundle2-input-bundle: with-transaction
128 bundle2-input-part: "replycaps" supported
130 bundle2-input-part: "replycaps" supported
129 bundle2-input-part: total payload size * (glob)
131 bundle2-input-part: total payload size * (glob)
130 bundle2-input-part: "check:heads" supported
132 bundle2-input-part: "check:heads" supported
131 bundle2-input-part: total payload size 20
133 bundle2-input-part: total payload size 20
132 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
134 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
133 adding changesets
135 adding changesets
134 add changeset 99a7098854a3
136 add changeset 99a7098854a3
135 adding manifests
137 adding manifests
136 adding file changes
138 adding file changes
137 adding a revisions
139 adding a revisions
138 calling hook pretxnchangegroup.lfs: hgext.lfs.checkrequireslfs
140 calling hook pretxnchangegroup.lfs: hgext.lfs.checkrequireslfs
139 bundle2-input-part: total payload size 617
141 bundle2-input-part: total payload size 617
140 bundle2-input-part: "phase-heads" supported
142 bundle2-input-part: "phase-heads" supported
141 bundle2-input-part: total payload size 24
143 bundle2-input-part: total payload size 24
142 bundle2-input-bundle: 4 parts total
144 bundle2-input-bundle: 4 parts total
143 updating the branch cache
145 updating the branch cache
144 added 1 changesets with 1 changes to 1 files
146 added 1 changesets with 1 changes to 1 files
145 bundle2-output-bundle: "HG20", 1 parts total
147 bundle2-output-bundle: "HG20", 1 parts total
146 bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
148 bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
147 bundle2-input-bundle: no-transaction
149 bundle2-input-bundle: no-transaction
148 bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported
150 bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported
149 bundle2-input-bundle: 1 parts total
151 bundle2-input-bundle: 1 parts total
150 listing keys for "phases"
152 listing keys for "phases"
151 $ mv .hg/store/lfs_ .hg/store/lfs
153 $ mv .hg/store/lfs_ .hg/store/lfs
152
154
153 Clear the cache to force a download
155 Clear the cache to force a download
154 $ rm -rf `hg config lfs.usercache`
156 $ rm -rf `hg config lfs.usercache`
155 $ cd ../repo2
157 $ cd ../repo2
156 $ hg update tip --debug
158 $ hg update tip --debug
157 http auth: user foo, password ***
159 http auth: user foo, password ***
158 resolving manifests
160 resolving manifests
159 branchmerge: False, force: False, partial: False
161 branchmerge: False, force: False, partial: False
160 ancestor: 000000000000, local: 000000000000+, remote: 99a7098854a3
162 ancestor: 000000000000, local: 000000000000+, remote: 99a7098854a3
161 http auth: user foo, password ***
163 http auth: user foo, password ***
162 Status: 200
164 Status: 200
163 Content-Length: 311 (git-server !)
165 Content-Length: 311 (git-server !)
164 Content-Length: 352 (hg-server !)
166 Content-Length: 352 (hg-server !)
165 Content-Type: application/vnd.git-lfs+json
167 Content-Type: application/vnd.git-lfs+json
166 Date: $HTTP_DATE$
168 Date: $HTTP_DATE$
167 Server: testing stub value (hg-server !)
169 Server: testing stub value (hg-server !)
168 {
170 {
169 "objects": [
171 "objects": [
170 {
172 {
171 "actions": {
173 "actions": {
172 "download": {
174 "download": {
173 "expires_at": "$ISO_8601_DATE_TIME$"
175 "expires_at": "$ISO_8601_DATE_TIME$"
174 "header": {
176 "header": {
175 "Accept": "application/vnd.git-lfs"
177 "Accept": "application/vnd.git-lfs"
176 }
178 }
177 "href": "http://localhost:$HGPORT/*/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (glob)
179 "href": "http://localhost:$HGPORT/*/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (glob)
178 }
180 }
179 }
181 }
180 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
182 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
181 "size": 12
183 "size": 12
182 }
184 }
183 ]
185 ]
184 "transfer": "basic" (hg-server !)
186 "transfer": "basic" (hg-server !)
185 }
187 }
186 lfs: downloading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
188 lfs: downloading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
187 Status: 200
189 Status: 200
188 Content-Length: 12
190 Content-Length: 12
189 Content-Type: text/plain; charset=utf-8 (git-server !)
191 Content-Type: text/plain; charset=utf-8 (git-server !)
190 Content-Type: application/octet-stream (hg-server !)
192 Content-Type: application/octet-stream (hg-server !)
191 Date: $HTTP_DATE$
193 Date: $HTTP_DATE$
192 Server: testing stub value (hg-server !)
194 Server: testing stub value (hg-server !)
193 lfs: adding 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b to the usercache
195 lfs: adding 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b to the usercache
194 lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
196 lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
195 lfs: downloaded 1 files (12 bytes)
197 lfs: downloaded 1 files (12 bytes)
196 a: remote created -> g
198 a: remote created -> g
197 getting a
199 getting a
198 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
200 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
199 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
201 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
200
202
201 When the server already has some blobs, `hg serve` doesn't offer to upload
203 When the server already has some blobs, `hg serve` doesn't offer to upload
202 blobs that it already knows about. Note that lfs-test-server simply toggles
204 blobs that it already knows about. Note that lfs-test-server simply toggles
203 the action to 'download'; the Batch API spec says it should omit the
205 the action to 'download'; the Batch API spec says it should omit the
204 actions property completely.
206 actions property completely.
205
207
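(For comparison, as we read the Batch API spec, a compliant response for an
object the server already has would list only its "oid" and "size", with no
"actions" member at all, e.g. {"oid": "...", "size": 12}, rather than the
substituted "download" action seen in the git-server output below.)
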
206 $ hg mv a b
208 $ hg mv a b
207 $ echo ANOTHER-LARGE-FILE > c
209 $ echo ANOTHER-LARGE-FILE > c
208 $ echo ANOTHER-LARGE-FILE2 > d
210 $ echo ANOTHER-LARGE-FILE2 > d
209 $ hg commit -m b-and-c -A b c d
211 $ hg commit -m b-and-c -A b c d
210 $ hg push ../repo1 --debug
212 $ hg push ../repo1 --debug
211 http auth: user foo, password ***
213 http auth: user foo, password ***
212 pushing to ../repo1
214 pushing to ../repo1
213 http auth: user foo, password ***
215 http auth: user foo, password ***
214 http auth: user foo, password ***
216 http auth: user foo, password ***
215 query 1; heads
217 query 1; heads
216 searching for changes
218 searching for changes
217 all remote heads known locally
219 all remote heads known locally
218 listing keys for "phases"
220 listing keys for "phases"
219 checking for updated bookmarks
221 checking for updated bookmarks
220 listing keys for "bookmarks"
222 listing keys for "bookmarks"
221 listing keys for "bookmarks"
223 listing keys for "bookmarks"
222 lfs: computing set of blobs to upload
224 lfs: computing set of blobs to upload
223 Status: 200
225 Status: 200
224 Content-Length: 901 (git-server !)
226 Content-Length: 901 (git-server !)
225 Content-Length: 755 (hg-server !)
227 Content-Length: 755 (hg-server !)
226 Content-Type: application/vnd.git-lfs+json
228 Content-Type: application/vnd.git-lfs+json
227 Date: $HTTP_DATE$
229 Date: $HTTP_DATE$
228 Server: testing stub value (hg-server !)
230 Server: testing stub value (hg-server !)
229 {
231 {
230 "objects": [
232 "objects": [
231 {
233 {
232 "actions": { (git-server !)
234 "actions": { (git-server !)
233 "download": { (git-server !)
235 "download": { (git-server !)
234 "expires_at": "$ISO_8601_DATE_TIME$" (git-server !)
236 "expires_at": "$ISO_8601_DATE_TIME$" (git-server !)
235 "header": { (git-server !)
237 "header": { (git-server !)
236 "Accept": "application/vnd.git-lfs" (git-server !)
238 "Accept": "application/vnd.git-lfs" (git-server !)
237 } (git-server !)
239 } (git-server !)
238 "href": "http://localhost:$HGPORT/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (git-server !)
240 "href": "http://localhost:$HGPORT/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (git-server !)
239 } (git-server !)
241 } (git-server !)
240 } (git-server !)
242 } (git-server !)
241 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
243 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
242 "size": 12
244 "size": 12
243 }
245 }
244 {
246 {
245 "actions": {
247 "actions": {
246 "upload": {
248 "upload": {
247 "expires_at": "$ISO_8601_DATE_TIME$"
249 "expires_at": "$ISO_8601_DATE_TIME$"
248 "header": {
250 "header": {
249 "Accept": "application/vnd.git-lfs"
251 "Accept": "application/vnd.git-lfs"
250 }
252 }
251 "href": "http://localhost:$HGPORT/*/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19" (glob)
253 "href": "http://localhost:$HGPORT/*/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19" (glob)
252 }
254 }
253 }
255 }
254 "oid": "37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19"
256 "oid": "37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19"
255 "size": 20
257 "size": 20
256 }
258 }
257 {
259 {
258 "actions": {
260 "actions": {
259 "upload": {
261 "upload": {
260 "expires_at": "$ISO_8601_DATE_TIME$"
262 "expires_at": "$ISO_8601_DATE_TIME$"
261 "header": {
263 "header": {
262 "Accept": "application/vnd.git-lfs"
264 "Accept": "application/vnd.git-lfs"
263 }
265 }
264 "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
266 "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
265 }
267 }
266 }
268 }
267 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
269 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
268 "size": 19
270 "size": 19
269 }
271 }
270 ]
272 ]
271 "transfer": "basic" (hg-server !)
273 "transfer": "basic" (hg-server !)
272 }
274 }
273 lfs: need to transfer 2 objects (39 bytes)
275 lfs: need to transfer 2 objects (39 bytes)
274 lfs: uploading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
276 lfs: uploading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
275 Status: 200 (git-server !)
277 Status: 200 (git-server !)
276 Status: 201 (hg-server !)
278 Status: 201 (hg-server !)
277 Content-Length: 0
279 Content-Length: 0
278 Content-Type: text/plain; charset=utf-8
280 Content-Type: text/plain; charset=utf-8
279 Date: $HTTP_DATE$
281 Date: $HTTP_DATE$
280 Server: testing stub value (hg-server !)
282 Server: testing stub value (hg-server !)
281 lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
283 lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
282 lfs: uploading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
284 lfs: uploading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
283 Status: 200 (git-server !)
285 Status: 200 (git-server !)
284 Status: 201 (hg-server !)
286 Status: 201 (hg-server !)
285 Content-Length: 0
287 Content-Length: 0
286 Content-Type: text/plain; charset=utf-8
288 Content-Type: text/plain; charset=utf-8
287 Date: $HTTP_DATE$
289 Date: $HTTP_DATE$
288 Server: testing stub value (hg-server !)
290 Server: testing stub value (hg-server !)
289 lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
291 lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
290 lfs: uploaded 2 files (39 bytes)
292 lfs: uploaded 2 files (39 bytes)
291 1 changesets found
293 1 changesets found
292 list of changesets:
294 list of changesets:
293 dfca2c9e2ef24996aa61ba2abd99277d884b3d63
295 dfca2c9e2ef24996aa61ba2abd99277d884b3d63
294 bundle2-output-bundle: "HG20", 5 parts total
296 bundle2-output-bundle: "HG20", 5 parts total
295 bundle2-output-part: "replycaps" * bytes payload (glob)
297 bundle2-output-part: "replycaps" * bytes payload (glob)
296 bundle2-output-part: "check:phases" 24 bytes payload
298 bundle2-output-part: "check:phases" 24 bytes payload
297 bundle2-output-part: "check:heads" streamed payload
299 bundle2-output-part: "check:heads" streamed payload
298 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
300 bundle2-output-part: "changegroup" (params: 1 mandatory) streamed payload
299 bundle2-output-part: "phase-heads" 24 bytes payload
301 bundle2-output-part: "phase-heads" 24 bytes payload
300 bundle2-input-bundle: with-transaction
302 bundle2-input-bundle: with-transaction
301 bundle2-input-part: "replycaps" supported
303 bundle2-input-part: "replycaps" supported
302 bundle2-input-part: total payload size * (glob)
304 bundle2-input-part: total payload size * (glob)
303 bundle2-input-part: "check:phases" supported
305 bundle2-input-part: "check:phases" supported
304 bundle2-input-part: total payload size 24
306 bundle2-input-part: total payload size 24
305 bundle2-input-part: "check:heads" supported
307 bundle2-input-part: "check:heads" supported
306 bundle2-input-part: total payload size 20
308 bundle2-input-part: total payload size 20
307 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
309 bundle2-input-part: "changegroup" (params: 1 mandatory) supported
308 adding changesets
310 adding changesets
309 add changeset dfca2c9e2ef2
311 add changeset dfca2c9e2ef2
310 adding manifests
312 adding manifests
311 adding file changes
313 adding file changes
312 adding b revisions
314 adding b revisions
313 adding c revisions
315 adding c revisions
314 adding d revisions
316 adding d revisions
315 bundle2-input-part: total payload size 1315
317 bundle2-input-part: total payload size 1315
316 bundle2-input-part: "phase-heads" supported
318 bundle2-input-part: "phase-heads" supported
317 bundle2-input-part: total payload size 24
319 bundle2-input-part: total payload size 24
318 bundle2-input-bundle: 5 parts total
320 bundle2-input-bundle: 5 parts total
319 updating the branch cache
321 updating the branch cache
320 added 1 changesets with 3 changes to 3 files
322 added 1 changesets with 3 changes to 3 files
321 bundle2-output-bundle: "HG20", 1 parts total
323 bundle2-output-bundle: "HG20", 1 parts total
322 bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
324 bundle2-output-part: "reply:changegroup" (advisory) (params: 0 advisory) empty payload
323 bundle2-input-bundle: no-transaction
325 bundle2-input-bundle: no-transaction
324 bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported
326 bundle2-input-part: "reply:changegroup" (advisory) (params: 0 advisory) supported
325 bundle2-input-bundle: 1 parts total
327 bundle2-input-bundle: 1 parts total
326 listing keys for "phases"
328 listing keys for "phases"
327
329
328 Clear the cache to force a download
330 Clear the cache to force a download
329 $ rm -rf `hg config lfs.usercache`
331 $ rm -rf `hg config lfs.usercache`
330 $ hg --repo ../repo1 update tip --debug
332 $ hg --repo ../repo1 update tip --debug
331 http auth: user foo, password ***
333 http auth: user foo, password ***
332 resolving manifests
334 resolving manifests
333 branchmerge: False, force: False, partial: False
335 branchmerge: False, force: False, partial: False
334 ancestor: 99a7098854a3, local: 99a7098854a3+, remote: dfca2c9e2ef2
336 ancestor: 99a7098854a3, local: 99a7098854a3+, remote: dfca2c9e2ef2
335 http auth: user foo, password ***
337 http auth: user foo, password ***
336 Status: 200
338 Status: 200
337 Content-Length: 608 (git-server !)
339 Content-Length: 608 (git-server !)
338 Content-Length: 670 (hg-server !)
340 Content-Length: 670 (hg-server !)
339 Content-Type: application/vnd.git-lfs+json
341 Content-Type: application/vnd.git-lfs+json
340 Date: $HTTP_DATE$
342 Date: $HTTP_DATE$
341 Server: testing stub value (hg-server !)
343 Server: testing stub value (hg-server !)
342 {
344 {
343 "objects": [
345 "objects": [
344 {
346 {
345 "actions": {
347 "actions": {
346 "download": {
348 "download": {
347 "expires_at": "$ISO_8601_DATE_TIME$"
349 "expires_at": "$ISO_8601_DATE_TIME$"
348 "header": {
350 "header": {
349 "Accept": "application/vnd.git-lfs"
351 "Accept": "application/vnd.git-lfs"
350 }
352 }
351 "href": "http://localhost:$HGPORT/*/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19" (glob)
353 "href": "http://localhost:$HGPORT/*/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19" (glob)
352 }
354 }
353 }
355 }
354 "oid": "37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19"
356 "oid": "37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19"
355 "size": 20
357 "size": 20
356 }
358 }
357 {
359 {
358 "actions": {
360 "actions": {
359 "download": {
361 "download": {
360 "expires_at": "$ISO_8601_DATE_TIME$"
362 "expires_at": "$ISO_8601_DATE_TIME$"
361 "header": {
363 "header": {
362 "Accept": "application/vnd.git-lfs"
364 "Accept": "application/vnd.git-lfs"
363 }
365 }
364 "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
366 "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
365 }
367 }
366 }
368 }
367 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
369 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
368 "size": 19
370 "size": 19
369 }
371 }
370 ]
372 ]
371 "transfer": "basic" (hg-server !)
373 "transfer": "basic" (hg-server !)
372 }
374 }
373 lfs: need to transfer 2 objects (39 bytes)
375 lfs: need to transfer 2 objects (39 bytes)
374 lfs: downloading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
376 lfs: downloading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
375 Status: 200
377 Status: 200
376 Content-Length: 20
378 Content-Length: 20
377 Content-Type: text/plain; charset=utf-8 (git-server !)
379 Content-Type: text/plain; charset=utf-8 (git-server !)
378 Content-Type: application/octet-stream (hg-server !)
380 Content-Type: application/octet-stream (hg-server !)
379 Date: $HTTP_DATE$
381 Date: $HTTP_DATE$
380 Server: testing stub value (hg-server !)
382 Server: testing stub value (hg-server !)
381 lfs: adding 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 to the usercache
383 lfs: adding 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 to the usercache
382 lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
384 lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
383 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
385 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
384 Status: 200
386 Status: 200
385 Content-Length: 19
387 Content-Length: 19
386 Content-Type: text/plain; charset=utf-8 (git-server !)
388 Content-Type: text/plain; charset=utf-8 (git-server !)
387 Content-Type: application/octet-stream (hg-server !)
389 Content-Type: application/octet-stream (hg-server !)
388 Date: $HTTP_DATE$
390 Date: $HTTP_DATE$
389 Server: testing stub value (hg-server !)
391 Server: testing stub value (hg-server !)
390 lfs: adding d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 to the usercache
392 lfs: adding d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 to the usercache
391 lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
393 lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
392 lfs: downloaded 2 files (39 bytes)
394 lfs: downloaded 2 files (39 bytes)
393 b: remote created -> g
395 b: remote created -> g
394 getting b
396 getting b
395 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
397 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
396 c: remote created -> g
398 c: remote created -> g
397 getting c
399 getting c
398 lfs: found d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 in the local lfs store
400 lfs: found d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 in the local lfs store
399 d: remote created -> g
401 d: remote created -> g
400 getting d
402 getting d
401 lfs: found 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 in the local lfs store
403 lfs: found 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 in the local lfs store
402 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
404 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
403
405
404 Test a corrupt file download, but clear the cache first to force a download.
406 Test a corrupt file download, but clear the cache first to force a download.
405 `hg serve` indicates a corrupt file without transferring it, unlike
407 `hg serve` indicates a corrupt file without transferring it, unlike
406 lfs-test-server.
408 lfs-test-server.
407
409
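(In the git-server case below, the damaged 7-byte blob is actually downloaded
and the client aborts on the checksum mismatch; in the hg-server case the
corruption is reported in the batch response itself with a 422 error and the
content is never transferred.)
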
408 $ rm -rf `hg config lfs.usercache`
410 $ rm -rf `hg config lfs.usercache`
409 #if git-server
411 #if git-server
410 $ cp $TESTTMP/lfs-content/d1/1e/1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 blob
412 $ cp $TESTTMP/lfs-content/d1/1e/1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 blob
411 $ echo 'damage' > $TESTTMP/lfs-content/d1/1e/1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
413 $ echo 'damage' > $TESTTMP/lfs-content/d1/1e/1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
412 #else
414 #else
413 $ cp $TESTTMP/server/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 blob
415 $ cp $TESTTMP/server/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 blob
414 $ echo 'damage' > $TESTTMP/server/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
416 $ echo 'damage' > $TESTTMP/server/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
415 #endif
417 #endif
416 $ rm ../repo1/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
418 $ rm ../repo1/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
417 $ rm ../repo1/*
419 $ rm ../repo1/*
418
420
419 TODO: give the proper error indication from `hg serve`
421 TODO: give the proper error indication from `hg serve`
420
422
421 $ hg --repo ../repo1 update -C tip --debug
423 $ hg --repo ../repo1 update -C tip --debug
422 http auth: user foo, password ***
424 http auth: user foo, password ***
423 resolving manifests
425 resolving manifests
424 branchmerge: False, force: True, partial: False
426 branchmerge: False, force: True, partial: False
425 ancestor: dfca2c9e2ef2+, local: dfca2c9e2ef2+, remote: dfca2c9e2ef2
427 ancestor: dfca2c9e2ef2+, local: dfca2c9e2ef2+, remote: dfca2c9e2ef2
426 http auth: user foo, password ***
428 http auth: user foo, password ***
427 Status: 200
429 Status: 200
428 Content-Length: 311 (git-server !)
430 Content-Length: 311 (git-server !)
429 Content-Length: 183 (hg-server !)
431 Content-Length: 183 (hg-server !)
430 Content-Type: application/vnd.git-lfs+json
432 Content-Type: application/vnd.git-lfs+json
431 Date: $HTTP_DATE$
433 Date: $HTTP_DATE$
432 Server: testing stub value (hg-server !)
434 Server: testing stub value (hg-server !)
433 {
435 {
434 "objects": [
436 "objects": [
435 {
437 {
436 "actions": { (git-server !)
438 "actions": { (git-server !)
437 "download": { (git-server !)
439 "download": { (git-server !)
438 "expires_at": "$ISO_8601_DATE_TIME$" (git-server !)
440 "expires_at": "$ISO_8601_DATE_TIME$" (git-server !)
439 "header": { (git-server !)
441 "header": { (git-server !)
440 "Accept": "application/vnd.git-lfs" (git-server !)
442 "Accept": "application/vnd.git-lfs" (git-server !)
441 } (git-server !)
443 } (git-server !)
442 "href": "http://localhost:$HGPORT/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (git-server !)
444 "href": "http://localhost:$HGPORT/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (git-server !)
443 } (git-server !)
445 } (git-server !)
444 "error": { (hg-server !)
446 "error": { (hg-server !)
445 "code": 422 (hg-server !)
447 "code": 422 (hg-server !)
446 "message": "The object is corrupt" (hg-server !)
448 "message": "The object is corrupt" (hg-server !)
447 }
449 }
448 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
450 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
449 "size": 19
451 "size": 19
450 }
452 }
451 ]
453 ]
452 "transfer": "basic" (hg-server !)
454 "transfer": "basic" (hg-server !)
453 }
455 }
454 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes) (git-server !)
456 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes) (git-server !)
455 Status: 200 (git-server !)
457 Status: 200 (git-server !)
456 Content-Length: 7 (git-server !)
458 Content-Length: 7 (git-server !)
457 Content-Type: text/plain; charset=utf-8 (git-server !)
459 Content-Type: text/plain; charset=utf-8 (git-server !)
458 Date: $HTTP_DATE$ (git-server !)
460 Date: $HTTP_DATE$ (git-server !)
459 abort: corrupt remote lfs object: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (git-server !)
461 abort: corrupt remote lfs object: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (git-server !)
460 abort: LFS server error for "c": Validation error! (hg-server !)
462 abort: LFS server error for "c": Validation error! (hg-server !)
461 [255]
463 [255]
462
464
463 The corrupted blob is not added to the usercache or local store
465 The corrupted blob is not added to the usercache or local store
464
466
465 $ test -f ../repo1/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
467 $ test -f ../repo1/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
466 [1]
468 [1]
467 $ test -f `hg config lfs.usercache`/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
469 $ test -f `hg config lfs.usercache`/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
468 [1]
470 [1]
469 #if git-server
471 #if git-server
470 $ cp blob $TESTTMP/lfs-content/d1/1e/1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
472 $ cp blob $TESTTMP/lfs-content/d1/1e/1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
471 #else
473 #else
472 $ cp blob $TESTTMP/server/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
474 $ cp blob $TESTTMP/server/.hg/store/lfs/objects/d1/1e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
473 #endif
475 #endif
474
476
475 Test a corrupted file upload
477 Test a corrupted file upload
476
478
477 $ echo 'another lfs blob' > b
479 $ echo 'another lfs blob' > b
478 $ hg ci -m 'another blob'
480 $ hg ci -m 'another blob'
479 $ echo 'damage' > .hg/store/lfs/objects/e6/59058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0
481 $ echo 'damage' > .hg/store/lfs/objects/e6/59058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0
480 $ hg push --debug ../repo1
482 $ hg push --debug ../repo1
481 http auth: user foo, password ***
483 http auth: user foo, password ***
482 pushing to ../repo1
484 pushing to ../repo1
483 http auth: user foo, password ***
485 http auth: user foo, password ***
484 http auth: user foo, password ***
486 http auth: user foo, password ***
485 query 1; heads
487 query 1; heads
486 searching for changes
488 searching for changes
487 all remote heads known locally
489 all remote heads known locally
488 listing keys for "phases"
490 listing keys for "phases"
489 checking for updated bookmarks
491 checking for updated bookmarks
490 listing keys for "bookmarks"
492 listing keys for "bookmarks"
491 listing keys for "bookmarks"
493 listing keys for "bookmarks"
492 lfs: computing set of blobs to upload
494 lfs: computing set of blobs to upload
493 Status: 200
495 Status: 200
494 Content-Length: 309 (git-server !)
496 Content-Length: 309 (git-server !)
495 Content-Length: 350 (hg-server !)
497 Content-Length: 350 (hg-server !)
496 Content-Type: application/vnd.git-lfs+json
498 Content-Type: application/vnd.git-lfs+json
497 Date: $HTTP_DATE$
499 Date: $HTTP_DATE$
498 Server: testing stub value (hg-server !)
500 Server: testing stub value (hg-server !)
499 {
501 {
500 "objects": [
502 "objects": [
501 {
503 {
502 "actions": {
504 "actions": {
503 "upload": {
505 "upload": {
504 "expires_at": "$ISO_8601_DATE_TIME$"
506 "expires_at": "$ISO_8601_DATE_TIME$"
505 "header": {
507 "header": {
506 "Accept": "application/vnd.git-lfs"
508 "Accept": "application/vnd.git-lfs"
507 }
509 }
508 "href": "http://localhost:$HGPORT/*/e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0" (glob)
510 "href": "http://localhost:$HGPORT/*/e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0" (glob)
509 }
511 }
510 }
512 }
511 "oid": "e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0"
513 "oid": "e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0"
512 "size": 17
514 "size": 17
513 }
515 }
514 ]
516 ]
515 "transfer": "basic" (hg-server !)
517 "transfer": "basic" (hg-server !)
516 }
518 }
517 lfs: uploading e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0 (17 bytes)
519 lfs: uploading e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0 (17 bytes)
518 abort: detected corrupt lfs object: e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0
520 abort: detected corrupt lfs object: e659058e26b07b39d2a9c7145b3f99b41f797b6621c8076600e9cb7ee88291f0
519 (run hg verify)
521 (run hg verify)
520 [255]
522 [255]
521
523
522 Archive will prefetch blobs in a group
524 Archive will prefetch blobs in a group
523
525
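(A single batch request below covers all three blobs, rather than one request
per file; the files are then written to the archive from the local store.)
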
524 $ rm -rf .hg/store/lfs `hg config lfs.usercache`
526 $ rm -rf .hg/store/lfs `hg config lfs.usercache`
525 $ hg archive --debug -r 1 ../archive
527 $ hg archive --debug -r 1 ../archive
526 http auth: user foo, password ***
528 http auth: user foo, password ***
527 http auth: user foo, password ***
529 http auth: user foo, password ***
528 Status: 200
530 Status: 200
529 Content-Length: 905 (git-server !)
531 Content-Length: 905 (git-server !)
530 Content-Length: 988 (hg-server !)
532 Content-Length: 988 (hg-server !)
531 Content-Type: application/vnd.git-lfs+json
533 Content-Type: application/vnd.git-lfs+json
532 Date: $HTTP_DATE$
534 Date: $HTTP_DATE$
533 Server: testing stub value (hg-server !)
535 Server: testing stub value (hg-server !)
534 {
536 {
535 "objects": [
537 "objects": [
536 {
538 {
537 "actions": {
539 "actions": {
538 "download": {
540 "download": {
539 "expires_at": "$ISO_8601_DATE_TIME$"
541 "expires_at": "$ISO_8601_DATE_TIME$"
540 "header": {
542 "header": {
541 "Accept": "application/vnd.git-lfs"
543 "Accept": "application/vnd.git-lfs"
542 }
544 }
543 "href": "http://localhost:$HGPORT/*/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (glob)
545 "href": "http://localhost:$HGPORT/*/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (glob)
544 }
546 }
545 }
547 }
546 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
548 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
547 "size": 12
549 "size": 12
548 }
550 }
549 {
551 {
550 "actions": {
552 "actions": {
551 "download": {
553 "download": {
552 "expires_at": "$ISO_8601_DATE_TIME$"
554 "expires_at": "$ISO_8601_DATE_TIME$"
553 "header": {
555 "header": {
554 "Accept": "application/vnd.git-lfs"
556 "Accept": "application/vnd.git-lfs"
555 }
557 }
556 "href": "http://localhost:$HGPORT/*/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19" (glob)
558 "href": "http://localhost:$HGPORT/*/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19" (glob)
557 }
559 }
558 }
560 }
559 "oid": "37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19"
561 "oid": "37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19"
560 "size": 20
562 "size": 20
561 }
563 }
562 {
564 {
563 "actions": {
565 "actions": {
564 "download": {
566 "download": {
565 "expires_at": "$ISO_8601_DATE_TIME$"
567 "expires_at": "$ISO_8601_DATE_TIME$"
566 "header": {
568 "header": {
567 "Accept": "application/vnd.git-lfs"
569 "Accept": "application/vnd.git-lfs"
568 }
570 }
569 "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
571 "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
570 }
572 }
571 }
573 }
572 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
574 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
573 "size": 19
575 "size": 19
574 }
576 }
575 ]
577 ]
576 "transfer": "basic" (hg-server !)
578 "transfer": "basic" (hg-server !)
577 }
579 }
578 lfs: need to transfer 3 objects (51 bytes)
580 lfs: need to transfer 3 objects (51 bytes)
579 lfs: downloading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
581 lfs: downloading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
580 Status: 200
582 Status: 200
581 Content-Length: 12
583 Content-Length: 12
582 Content-Type: text/plain; charset=utf-8 (git-server !)
584 Content-Type: text/plain; charset=utf-8 (git-server !)
583 Content-Type: application/octet-stream (hg-server !)
585 Content-Type: application/octet-stream (hg-server !)
584 Date: $HTTP_DATE$
586 Date: $HTTP_DATE$
585 Server: testing stub value (hg-server !)
587 Server: testing stub value (hg-server !)
586 lfs: adding 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b to the usercache
588 lfs: adding 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b to the usercache
587 lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
589 lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
588 lfs: downloading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
590 lfs: downloading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
589 Status: 200
591 Status: 200
590 Content-Length: 20
592 Content-Length: 20
591 Content-Type: text/plain; charset=utf-8 (git-server !)
593 Content-Type: text/plain; charset=utf-8 (git-server !)
592 Content-Type: application/octet-stream (hg-server !)
594 Content-Type: application/octet-stream (hg-server !)
593 Date: $HTTP_DATE$
595 Date: $HTTP_DATE$
594 Server: testing stub value (hg-server !)
596 Server: testing stub value (hg-server !)
595 lfs: adding 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 to the usercache
597 lfs: adding 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 to the usercache
596 lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
598 lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
597 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
599 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
598 Status: 200
600 Status: 200
599 Content-Length: 19
601 Content-Length: 19
600 Content-Type: text/plain; charset=utf-8 (git-server !)
602 Content-Type: text/plain; charset=utf-8 (git-server !)
601 Content-Type: application/octet-stream (hg-server !)
603 Content-Type: application/octet-stream (hg-server !)
602 Date: $HTTP_DATE$
604 Date: $HTTP_DATE$
603 Server: testing stub value (hg-server !)
605 Server: testing stub value (hg-server !)
604 lfs: adding d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 to the usercache
606 lfs: adding d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 to the usercache
605 lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
607 lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
606 lfs: downloaded 3 files (51 bytes)
608 lfs: downloaded 3 files (51 bytes)
607 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
609 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
608 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
610 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
609 lfs: found d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 in the local lfs store
611 lfs: found d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 in the local lfs store
610 lfs: found 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 in the local lfs store
612 lfs: found 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 in the local lfs store
611 $ find ../archive | sort
613 $ find ../archive | sort
612 ../archive
614 ../archive
613 ../archive/.hg_archival.txt
615 ../archive/.hg_archival.txt
614 ../archive/a
616 ../archive/a
615 ../archive/b
617 ../archive/b
616 ../archive/c
618 ../archive/c
617 ../archive/d
619 ../archive/d
618
620
619 Cat will prefetch blobs in a group
621 Cat will prefetch blobs in a group
620
622
621 $ rm -rf .hg/store/lfs `hg config lfs.usercache`
623 $ rm -rf .hg/store/lfs `hg config lfs.usercache`
622 $ hg cat --debug -r 1 a b c nonexistent
624 $ hg cat --debug -r 1 a b c nonexistent
623 http auth: user foo, password ***
625 http auth: user foo, password ***
624 http auth: user foo, password ***
626 http auth: user foo, password ***
625 Status: 200
627 Status: 200
626 Content-Length: 608 (git-server !)
628 Content-Length: 608 (git-server !)
627 Content-Length: 670 (hg-server !)
629 Content-Length: 670 (hg-server !)
628 Content-Type: application/vnd.git-lfs+json
630 Content-Type: application/vnd.git-lfs+json
629 Date: $HTTP_DATE$
631 Date: $HTTP_DATE$
630 Server: testing stub value (hg-server !)
632 Server: testing stub value (hg-server !)
631 {
633 {
632 "objects": [
634 "objects": [
633 {
635 {
634 "actions": {
636 "actions": {
635 "download": {
637 "download": {
636 "expires_at": "$ISO_8601_DATE_TIME$"
638 "expires_at": "$ISO_8601_DATE_TIME$"
637 "header": {
639 "header": {
638 "Accept": "application/vnd.git-lfs"
640 "Accept": "application/vnd.git-lfs"
639 }
641 }
640 "href": "http://localhost:$HGPORT/*/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (glob)
642 "href": "http://localhost:$HGPORT/*/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (glob)
641 }
643 }
642 }
644 }
643 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
645 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
644 "size": 12
646 "size": 12
645 }
647 }
646 {
648 {
647 "actions": {
649 "actions": {
648 "download": {
650 "download": {
649 "expires_at": "$ISO_8601_DATE_TIME$"
651 "expires_at": "$ISO_8601_DATE_TIME$"
650 "header": {
652 "header": {
651 "Accept": "application/vnd.git-lfs"
653 "Accept": "application/vnd.git-lfs"
652 }
654 }
653 "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
655 "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
654 }
656 }
655 }
657 }
656 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
658 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
657 "size": 19
659 "size": 19
658 }
660 }
659 ]
661 ]
660 "transfer": "basic" (hg-server !)
662 "transfer": "basic" (hg-server !)
661 }
663 }
662 lfs: need to transfer 2 objects (31 bytes)
664 lfs: need to transfer 2 objects (31 bytes)
663 lfs: downloading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
665 lfs: downloading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
664 Status: 200
666 Status: 200
665 Content-Length: 12
667 Content-Length: 12
666 Content-Type: text/plain; charset=utf-8 (git-server !)
668 Content-Type: text/plain; charset=utf-8 (git-server !)
667 Content-Type: application/octet-stream (hg-server !)
669 Content-Type: application/octet-stream (hg-server !)
668 Date: $HTTP_DATE$
670 Date: $HTTP_DATE$
669 Server: testing stub value (hg-server !)
671 Server: testing stub value (hg-server !)
670 lfs: adding 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b to the usercache
672 lfs: adding 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b to the usercache
671 lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
673 lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
672 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
674 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
673 Status: 200
675 Status: 200
674 Content-Length: 19
676 Content-Length: 19
675 Content-Type: text/plain; charset=utf-8 (git-server !)
677 Content-Type: text/plain; charset=utf-8 (git-server !)
676 Content-Type: application/octet-stream (hg-server !)
678 Content-Type: application/octet-stream (hg-server !)
677 Date: $HTTP_DATE$
679 Date: $HTTP_DATE$
678 Server: testing stub value (hg-server !)
680 Server: testing stub value (hg-server !)
679 lfs: adding d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 to the usercache
681 lfs: adding d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 to the usercache
680 lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
682 lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
681 lfs: downloaded 2 files (31 bytes)
683 lfs: downloaded 2 files (31 bytes)
682 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
684 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
683 THIS-IS-LFS
685 THIS-IS-LFS
684 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
686 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
685 THIS-IS-LFS
687 THIS-IS-LFS
686 lfs: found d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 in the local lfs store
688 lfs: found d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 in the local lfs store
687 ANOTHER-LARGE-FILE
689 ANOTHER-LARGE-FILE
688 nonexistent: no such file in rev dfca2c9e2ef2
690 nonexistent: no such file in rev dfca2c9e2ef2
689
691
690 Revert will prefetch blobs in a group
692 Revert will prefetch blobs in a group
691
693
692 $ rm -rf .hg/store/lfs
694 $ rm -rf .hg/store/lfs
693 $ rm -rf `hg config lfs.usercache`
695 $ rm -rf `hg config lfs.usercache`
694 $ rm *
696 $ rm *
695 $ hg revert --all -r 1 --debug
697 $ hg revert --all -r 1 --debug
696 http auth: user foo, password ***
698 http auth: user foo, password ***
697 http auth: user foo, password ***
699 http auth: user foo, password ***
698 Status: 200
700 Status: 200
699 Content-Length: 905 (git-server !)
701 Content-Length: 905 (git-server !)
700 Content-Length: 988 (hg-server !)
702 Content-Length: 988 (hg-server !)
701 Content-Type: application/vnd.git-lfs+json
703 Content-Type: application/vnd.git-lfs+json
702 Date: $HTTP_DATE$
704 Date: $HTTP_DATE$
703 Server: testing stub value (hg-server !)
705 Server: testing stub value (hg-server !)
704 {
706 {
705 "objects": [
707 "objects": [
706 {
708 {
707 "actions": {
709 "actions": {
708 "download": {
710 "download": {
709 "expires_at": "$ISO_8601_DATE_TIME$"
711 "expires_at": "$ISO_8601_DATE_TIME$"
710 "header": {
712 "header": {
711 "Accept": "application/vnd.git-lfs"
713 "Accept": "application/vnd.git-lfs"
712 }
714 }
713 "href": "http://localhost:$HGPORT/*/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (glob)
715 "href": "http://localhost:$HGPORT/*/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b" (glob)
714 }
716 }
715 }
717 }
716 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
718 "oid": "31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b"
717 "size": 12
719 "size": 12
718 }
720 }
719 {
721 {
720 "actions": {
722 "actions": {
721 "download": {
723 "download": {
722 "expires_at": "$ISO_8601_DATE_TIME$"
724 "expires_at": "$ISO_8601_DATE_TIME$"
723 "header": {
725 "header": {
724 "Accept": "application/vnd.git-lfs"
726 "Accept": "application/vnd.git-lfs"
725 }
727 }
726 "href": "http://localhost:$HGPORT/*/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19" (glob)
728 "href": "http://localhost:$HGPORT/*/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19" (glob)
727 }
729 }
728 }
730 }
729 "oid": "37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19"
731 "oid": "37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19"
730 "size": 20
732 "size": 20
731 }
733 }
732 {
734 {
733 "actions": {
735 "actions": {
734 "download": {
736 "download": {
735 "expires_at": "$ISO_8601_DATE_TIME$"
737 "expires_at": "$ISO_8601_DATE_TIME$"
736 "header": {
738 "header": {
737 "Accept": "application/vnd.git-lfs"
739 "Accept": "application/vnd.git-lfs"
738 }
740 }
739 "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
741 "href": "http://localhost:$HGPORT/*/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998" (glob)
740 }
742 }
741 }
743 }
742 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
744 "oid": "d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998"
743 "size": 19
745 "size": 19
744 }
746 }
745 ]
747 ]
746 "transfer": "basic" (hg-server !)
748 "transfer": "basic" (hg-server !)
747 }
749 }
748 lfs: need to transfer 3 objects (51 bytes)
750 lfs: need to transfer 3 objects (51 bytes)
749 lfs: downloading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
751 lfs: downloading 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b (12 bytes)
750 Status: 200
752 Status: 200
751 Content-Length: 12
753 Content-Length: 12
752 Content-Type: text/plain; charset=utf-8 (git-server !)
754 Content-Type: text/plain; charset=utf-8 (git-server !)
753 Content-Type: application/octet-stream (hg-server !)
755 Content-Type: application/octet-stream (hg-server !)
754 Date: $HTTP_DATE$
756 Date: $HTTP_DATE$
755 Server: testing stub value (hg-server !)
757 Server: testing stub value (hg-server !)
756 lfs: adding 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b to the usercache
758 lfs: adding 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b to the usercache
757 lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
759 lfs: processed: 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b
758 lfs: downloading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
760 lfs: downloading 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 (20 bytes)
759 Status: 200
761 Status: 200
760 Content-Length: 20
762 Content-Length: 20
761 Content-Type: text/plain; charset=utf-8 (git-server !)
763 Content-Type: text/plain; charset=utf-8 (git-server !)
762 Content-Type: application/octet-stream (hg-server !)
764 Content-Type: application/octet-stream (hg-server !)
763 Date: $HTTP_DATE$
765 Date: $HTTP_DATE$
764 Server: testing stub value (hg-server !)
766 Server: testing stub value (hg-server !)
765 lfs: adding 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 to the usercache
767 lfs: adding 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 to the usercache
766 lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
768 lfs: processed: 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19
767 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
769 lfs: downloading d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 (19 bytes)
768 Status: 200
770 Status: 200
769 Content-Length: 19
771 Content-Length: 19
770 Content-Type: text/plain; charset=utf-8 (git-server !)
772 Content-Type: text/plain; charset=utf-8 (git-server !)
771 Content-Type: application/octet-stream (hg-server !)
773 Content-Type: application/octet-stream (hg-server !)
772 Date: $HTTP_DATE$
774 Date: $HTTP_DATE$
773 Server: testing stub value (hg-server !)
775 Server: testing stub value (hg-server !)
774 lfs: adding d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 to the usercache
776 lfs: adding d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 to the usercache
775 lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
777 lfs: processed: d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998
776 lfs: downloaded 3 files (51 bytes)
778 lfs: downloaded 3 files (51 bytes)
777 reverting b
779 reverting b
778 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
780 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
779 reverting c
781 reverting c
780 lfs: found d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 in the local lfs store
782 lfs: found d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 in the local lfs store
781 reverting d
783 reverting d
782 lfs: found 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 in the local lfs store
784 lfs: found 37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 in the local lfs store
783 adding a
785 adding a
784 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
786 lfs: found 31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b in the local lfs store
785
787
786 Check error message when the remote is missing a blob:
788 Check error message when the remote is missing a blob:
787
789
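(The FFFFF blob below is only committed locally and never pushed, so once the
local store and usercache are removed there is no copy left anywhere, and the
update aborts because the server does not have the object either.)
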
788 $ echo FFFFF > b
790 $ echo FFFFF > b
789 $ hg commit -m b -A b
791 $ hg commit -m b -A b
790 $ echo FFFFF >> b
792 $ echo FFFFF >> b
791 $ hg commit -m b b
793 $ hg commit -m b b
792 $ rm -rf .hg/store/lfs
794 $ rm -rf .hg/store/lfs
793 $ rm -rf `hg config lfs.usercache`
795 $ rm -rf `hg config lfs.usercache`
794 $ hg update -C '.^' --debug
796 $ hg update -C '.^' --debug
795 http auth: user foo, password ***
797 http auth: user foo, password ***
796 resolving manifests
798 resolving manifests
797 branchmerge: False, force: True, partial: False
799 branchmerge: False, force: True, partial: False
798 ancestor: 62fdbaf221c6+, local: 62fdbaf221c6+, remote: ef0564edf47e
800 ancestor: 62fdbaf221c6+, local: 62fdbaf221c6+, remote: ef0564edf47e
799 http auth: user foo, password ***
801 http auth: user foo, password ***
800 Status: 200
802 Status: 200
801 Content-Length: 308 (git-server !)
803 Content-Length: 308 (git-server !)
802 Content-Length: 186 (hg-server !)
804 Content-Length: 186 (hg-server !)
803 Content-Type: application/vnd.git-lfs+json
805 Content-Type: application/vnd.git-lfs+json
804 Date: $HTTP_DATE$
806 Date: $HTTP_DATE$
805 Server: testing stub value (hg-server !)
807 Server: testing stub value (hg-server !)
806 {
808 {
807 "objects": [
809 "objects": [
808 {
810 {
809 "actions": { (git-server !)
811 "actions": { (git-server !)
810 "upload": { (git-server !)
812 "upload": { (git-server !)
811 "expires_at": "$ISO_8601_DATE_TIME$" (git-server !)
813 "expires_at": "$ISO_8601_DATE_TIME$" (git-server !)
812 "header": { (git-server !)
814 "header": { (git-server !)
813 "Accept": "application/vnd.git-lfs" (git-server !)
815 "Accept": "application/vnd.git-lfs" (git-server !)
814 } (git-server !)
816 } (git-server !)
815 "href": "http://localhost:$HGPORT/objects/8e6ea5f6c066b44a0efa43bcce86aea73f17e6e23f0663df0251e7524e140a13" (git-server !)
817 "href": "http://localhost:$HGPORT/objects/8e6ea5f6c066b44a0efa43bcce86aea73f17e6e23f0663df0251e7524e140a13" (git-server !)
816 } (git-server !)
818 } (git-server !)
817 "error": { (hg-server !)
819 "error": { (hg-server !)
818 "code": 404 (hg-server !)
820 "code": 404 (hg-server !)
819 "message": "The object does not exist" (hg-server !)
821 "message": "The object does not exist" (hg-server !)
820 }
822 }
821 "oid": "8e6ea5f6c066b44a0efa43bcce86aea73f17e6e23f0663df0251e7524e140a13"
823 "oid": "8e6ea5f6c066b44a0efa43bcce86aea73f17e6e23f0663df0251e7524e140a13"
822 "size": 6
824 "size": 6
823 }
825 }
824 ]
826 ]
825 "transfer": "basic" (hg-server !)
827 "transfer": "basic" (hg-server !)
826 }
828 }
827 abort: LFS server error for "b": The object does not exist!
829 abort: LFS server error for "b": The object does not exist!
828 [255]
830 [255]
829
831
830 Check error message when the object does not exist:
832 Check error message when the object does not exist:
831
833
832 $ cd $TESTTMP
834 $ cd $TESTTMP
833 $ hg init test && cd test
835 $ hg init test && cd test
834 $ echo "[extensions]" >> .hg/hgrc
836 $ echo "[extensions]" >> .hg/hgrc
835 $ echo "lfs=" >> .hg/hgrc
837 $ echo "lfs=" >> .hg/hgrc
836 $ echo "[lfs]" >> .hg/hgrc
838 $ echo "[lfs]" >> .hg/hgrc
837 $ echo "threshold=1" >> .hg/hgrc
839 $ echo "threshold=1" >> .hg/hgrc
838 $ echo a > a
840 $ echo a > a
839 $ hg add a
841 $ hg add a
840 $ hg commit -m 'test'
842 $ hg commit -m 'test'
841 $ echo aaaaa > a
843 $ echo aaaaa > a
842 $ hg commit -m 'largefile'
844 $ hg commit -m 'largefile'
843 $ hg debugdata a 1 # verify this is no the file content but includes "oid", the LFS "pointer".
845 $ hg debugdata a 1 # verify this is no the file content but includes "oid", the LFS "pointer".
844 version https://git-lfs.github.com/spec/v1
846 version https://git-lfs.github.com/spec/v1
845 oid sha256:bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a
847 oid sha256:bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a
846 size 6
848 size 6
847 x-is-binary 0
849 x-is-binary 0
848 $ cd ..
850 $ cd ..
849 $ rm -rf `hg config lfs.usercache`
851 $ rm -rf `hg config lfs.usercache`
850
852
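With `threshold=1` in the `[lfs]` section, even these tiny files are stored as LFS blobs, so the filelog entry printed by `hg debugdata a 1` is only the pointer: the sha256 of the content as the oid and the content length as the size, plus an additional `x-is-binary` key. A small, purely illustrative Python sketch of how the oid and size fields relate to the file contents (not the extension's internal API, and it omits the extra key):

    # Illustrative only: derive the pointer fields shown above from a payload
    # (oid = sha256 of the content, size = byte length).
    import hashlib

    def make_pointer(data: bytes) -> str:
        oid = hashlib.sha256(data).hexdigest()
        return (
            "version https://git-lfs.github.com/spec/v1\n"
            "oid sha256:%s\n"
            "size %d\n" % (oid, len(data))
        )

    # For the revision committed above (`echo aaaaa > a`, i.e. b"aaaaa\n"),
    # this reproduces the oid bdc26931... and size 6 printed by debugdata.
    print(make_pointer(b"aaaaa\n"))
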
(Restart the server in a different location so it no longer has the content)

$ "$PYTHON" $RUNTESTDIR/killdaemons.py $DAEMON_PIDS

#if hg-server
$ cat $TESTTMP/access.log $TESTTMP/errors.log
$LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
$LOCALIP - - [$LOGDATE$] "PUT /.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b HTTP/1.1" 201 - (glob)
$LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
$LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b HTTP/1.1" 200 - (glob)
$LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
$LOCALIP - - [$LOGDATE$] "PUT /.hg/lfs/objects/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 HTTP/1.1" 201 - (glob)
$LOCALIP - - [$LOGDATE$] "PUT /.hg/lfs/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 HTTP/1.1" 201 - (glob)
$LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
$LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 HTTP/1.1" 200 - (glob)
$LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 HTTP/1.1" 200 - (glob)
$LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
$LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
$LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
$LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b HTTP/1.1" 200 - (glob)
$LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 HTTP/1.1" 200 - (glob)
$LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 HTTP/1.1" 200 - (glob)
$LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
$LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b HTTP/1.1" 200 - (glob)
$LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 HTTP/1.1" 200 - (glob)
$LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
$LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/31cf46fbc4ecd458a0943c5b4881f1f5a6dd36c53d6167d5b69ac45149b38e5b HTTP/1.1" 200 - (glob)
$LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/37a65ab78d5ecda767e8622c248b5dbff1e68b1678ab0e730d5eb8601ec8ad19 HTTP/1.1" 200 - (glob)
$LOCALIP - - [$LOGDATE$] "GET /.hg/lfs/objects/d11e1a642b60813aee592094109b406089b8dff4cb157157f753418ec7857998 HTTP/1.1" 200 - (glob)
$LOCALIP - - [$LOGDATE$] "POST /.git/info/lfs/objects/batch HTTP/1.1" 200 - (glob)
#endif

$ mkdir $TESTTMP/lfs-server2
$ cd $TESTTMP/lfs-server2
#if no-windows git-server
$ lfs-test-server &> lfs-server.log &
$ echo $! >> $DAEMON_PIDS
#endif

#if windows git-server
$ "$PYTHON" $TESTTMP/spawn.py >> $DAEMON_PIDS
#endif

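Both git-server branches above start lfs-test-server in the background and record its PID in $DAEMON_PIDS so the harness can shut it down later (spawn.py, defined earlier in this test, plays that role on Windows). A generic, purely illustrative way to background a process and record its PID from Python, not the helper the test actually uses:

    # Illustrative only: background a server process and record its PID so a
    # later cleanup step can kill it.  The command and file names are placeholders.
    import subprocess

    def spawn_daemon(cmd, logfile, pidfile):
        with open(logfile, "ab") as log:
            proc = subprocess.Popen(cmd, stdout=log, stderr=subprocess.STDOUT)
        with open(pidfile, "a") as pids:
            pids.write("%d\n" % proc.pid)
        return proc.pid

    # e.g. spawn_daemon(["lfs-test-server"], "lfs-server.log", "daemon.pids")
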
#if hg-server
$ hg init server2
$ hg --config "lfs.usercache=$TESTTMP/servercache2" -R server2 serve -d \
> -p $HGPORT --pid-file=hg.pid -A $TESTTMP/access.log -E $TESTTMP/errors.log
$ cat hg.pid >> $DAEMON_PIDS
#endif

$ cd $TESTTMP
$ hg --debug clone test test2
http auth: user foo, password ***
linked 6 files
http auth: user foo, password ***
updating to branch default
resolving manifests
branchmerge: False, force: False, partial: False
ancestor: 000000000000, local: 000000000000+, remote: d2a338f184a8
http auth: user foo, password ***
Status: 200
Content-Length: 308 (git-server !)
Content-Length: 186 (hg-server !)
Content-Type: application/vnd.git-lfs+json
Date: $HTTP_DATE$
Server: testing stub value (hg-server !)
{
"objects": [
{
"actions": { (git-server !)
"upload": { (git-server !)
"expires_at": "$ISO_8601_DATE_TIME$" (git-server !)
"header": { (git-server !)
"Accept": "application/vnd.git-lfs" (git-server !)
} (git-server !)
"href": "http://localhost:$HGPORT/objects/bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a" (git-server !)
} (git-server !)
"error": { (hg-server !)
"code": 404 (hg-server !)
"message": "The object does not exist" (hg-server !)
}
"oid": "bdc26931acfb734b142a8d675f205becf27560dc461f501822de13274fe6fc8a"
"size": 6
}
]
"transfer": "basic" (hg-server !)
}
abort: LFS server error for "a": The object does not exist!
[255]

$ "$PYTHON" $RUNTESTDIR/killdaemons.py $DAEMON_PIDS