largefiles: redo heads interception...
Joerg Sonnenberger
r46816:bd31462a default
@@ -1,209 +1,202 @@
 # Copyright 2009-2010 Gregory P. Ward
 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
 # Copyright 2010-2011 Fog Creek Software
 # Copyright 2010-2011 Unity Technologies
 #
 # This software may be used and distributed according to the terms of the
 # GNU General Public License version 2 or any later version.
 
 '''track large binary files
 
 Large binary files tend to be not very compressible, not very
 diffable, and not at all mergeable. Such files are not handled
 efficiently by Mercurial's storage format (revlog), which is based on
 compressed binary deltas; storing large binary files as regular
 Mercurial files wastes bandwidth and disk space and increases
 Mercurial's memory usage. The largefiles extension addresses these
 problems by adding a centralized client-server layer on top of
 Mercurial: largefiles live in a *central store* out on the network
 somewhere, and you only fetch the revisions that you need when you
 need them.
 
 largefiles works by maintaining a "standin file" in .hglf/ for each
 largefile. The standins are small (41 bytes: an SHA-1 hash plus
 newline) and are tracked by Mercurial. Largefile revisions are
 identified by the SHA-1 hash of their contents, which is written to
 the standin. largefiles uses that revision ID to get/put largefile
 revisions from/to the central store. This saves both disk space and
 bandwidth, since you don't need to retrieve all historical revisions
 of large files when you clone or pull.
 
 To start a new repository or add new large binary files, just add
 --large to your :hg:`add` command. For example::
 
   $ dd if=/dev/urandom of=randomdata count=2000
   $ hg add --large randomdata
   $ hg commit -m "add randomdata as a largefile"
 
 When you push a changeset that adds/modifies largefiles to a remote
 repository, its largefile revisions will be uploaded along with it.
 Note that the remote Mercurial must also have the largefiles extension
 enabled for this to work.
 
 When you pull a changeset that affects largefiles from a remote
 repository, the largefiles for the changeset will by default not be
 pulled down. However, when you update to such a revision, any
 largefiles needed by that revision are downloaded and cached (if
 they have never been downloaded before). One way to pull largefiles
 when pulling is thus to use --update, which will update your working
 copy to the latest pulled revision (and thereby download any new
 largefiles).
 
 If you want to pull largefiles you don't need for update yet, then
 you can use pull with the `--lfrev` option or the :hg:`lfpull` command.
 
 If you know you are pulling from a non-default location and want to
 download all the largefiles that correspond to the new changesets at
 the same time, then you can pull with `--lfrev "pulled()"`.
 
 If you just want to ensure that you will have the largefiles needed to
 merge or rebase with new heads that you are pulling, then you can pull
 with `--lfrev "head(pulled())"` flag to pre-emptively download any largefiles
 that are new in the heads you are pulling.
 
 Keep in mind that network access may now be required to update to
 changesets that you have not previously updated to. The nature of the
 largefiles extension means that updating is no longer guaranteed to
 be a local-only operation.
 
 If you already have large files tracked by Mercurial without the
 largefiles extension, you will need to convert your repository in
 order to benefit from largefiles. This is done with the
 :hg:`lfconvert` command::
 
   $ hg lfconvert --size 10 oldrepo newrepo
 
 In repositories that already have largefiles in them, any new file
 over 10MB will automatically be added as a largefile. To change this
 threshold, set ``largefiles.minsize`` in your Mercurial config file
 to the minimum size in megabytes to track as a largefile, or use the
 --lfsize option to the add command (also in megabytes)::
 
   [largefiles]
   minsize = 2
 
   $ hg add --lfsize 2
 
 The ``largefiles.patterns`` config option allows you to specify a list
 of filename patterns (see :hg:`help patterns`) that should always be
 tracked as largefiles::
 
   [largefiles]
   patterns =
     *.jpg
     re:.*\\.(png|bmp)$
     library.zip
     content/audio/*
 
 Files that match one of these patterns will be added as largefiles
 regardless of their size.
 
 The ``largefiles.minsize`` and ``largefiles.patterns`` config options
 will be ignored for any repositories not already containing a
 largefile. To add the first largefile to a repository, you must
 explicitly do so with the --large flag passed to the :hg:`add`
 command.
 '''
 from __future__ import absolute_import
 
 from mercurial import (
     cmdutil,
     extensions,
     exthelper,
     hg,
     httppeer,
     localrepo,
     sshpeer,
     wireprotov1server,
 )
 
 from . import (
     lfcommands,
     overrides,
     proto,
     reposetup,
 )
 
 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
 testedwith = b'ships-with-hg-core'
 
 eh = exthelper.exthelper()
 eh.merge(lfcommands.eh)
 eh.merge(overrides.eh)
 eh.merge(proto.eh)
 
 eh.configitem(
     b'largefiles',
     b'minsize',
     default=eh.configitem.dynamicdefault,
 )
 eh.configitem(
     b'largefiles',
     b'patterns',
     default=list,
 )
 eh.configitem(
     b'largefiles',
     b'usercache',
     default=None,
 )
 
 cmdtable = eh.cmdtable
 configtable = eh.configtable
 extsetup = eh.finalextsetup
 reposetup = reposetup.reposetup
 uisetup = eh.finaluisetup
 
 
 def featuresetup(ui, supported):
     # don't die on seeing a repo with the largefiles requirement
     supported |= {b'largefiles'}
 
 
 @eh.uisetup
 def _uisetup(ui):
     localrepo.featuresetupfuncs.add(featuresetup)
     hg.wirepeersetupfuncs.append(proto.wirereposetup)
 
     cmdutil.outgoinghooks.add(b'largefiles', overrides.outgoinghook)
     cmdutil.summaryremotehooks.add(b'largefiles', overrides.summaryremotehook)
 
     # create the new wireproto commands ...
     wireprotov1server.wireprotocommand(b'putlfile', b'sha', permission=b'push')(
         proto.putlfile
     )
     wireprotov1server.wireprotocommand(b'getlfile', b'sha', permission=b'pull')(
         proto.getlfile
     )
     wireprotov1server.wireprotocommand(
         b'statlfile', b'sha', permission=b'pull'
     )(proto.statlfile)
     wireprotov1server.wireprotocommand(b'lheads', b'', permission=b'pull')(
         wireprotov1server.heads
     )
 
     extensions.wrapfunction(
         wireprotov1server.commands[b'heads'], b'func', proto.heads
     )
     # TODO also wrap wireproto.commandsv2 once heads is implemented there.
 
-    # can't do this in reposetup because it needs to have happened before
-    # wirerepo.__init__ is called
-    proto.ssholdcallstream = sshpeer.sshv1peer._callstream
-    proto.httpoldcallstream = httppeer.httppeer._callstream
-    sshpeer.sshv1peer._callstream = proto.sshrepocallstream
-    httppeer.httppeer._callstream = proto.httprepocallstream
-
     # override some extensions' stuff as well
     for name, module in extensions.extensions():
         if name == b'rebase':
             # TODO: teach exthelper to handle this
             extensions.wrapfunction(
                 module, b'rebase', overrides.overriderebasecmd
             )
 
 
 revsetpredicate = eh.revsetpredicate
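
The _uisetup hook above registers lheads as a new wire command and wraps the stock heads handler so that a largefiles repository rejects clients that do not know to ask for lheads. Below is a minimal, self-contained sketch of that wrapping pattern; wrapfunction and commandentry here are simplified stand-ins for extensions.wrapfunction and a wireprotov1server.commands entry, not the real Mercurial objects.

def wrapfunction(container, name, wrapper):
    """Replace container.<name> so calls become wrapper(orig, *args, **kwargs)."""
    orig = getattr(container, name)

    def wrapped(*args, **kwargs):
        return wrapper(orig, *args, **kwargs)

    setattr(container, name, wrapped)
    return orig


class commandentry(object):
    """Stand-in for a wireproto command table entry with a func() handler."""

    def func(self, repo, proto):
        return b'heads of ' + repo


def heads(orig, repo, proto):
    # mirrors the shape of proto.heads: a plain 'heads' call against a
    # largefiles repository gets an error, everything else is delegated
    if repo.startswith(b'lf-'):
        return b'error: this repository uses the largefiles extension'
    return orig(repo, proto)


entry = commandentry()
wrapfunction(entry, 'func', heads)
print(entry.func(b'plain-repo', None))  # delegates to the original handler
print(entry.func(b'lf-repo', None))     # intercepted by the wrapper

The same shape is what proto.heads relies on: the wrapper receives the original handler as its first argument and either short-circuits with an out-of-band error or delegates to it.
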
@@ -1,221 +1,218 @@
 # Copyright 2011 Fog Creek Software
 #
 # This software may be used and distributed according to the terms of the
 # GNU General Public License version 2 or any later version.
 from __future__ import absolute_import
 
 import os
 import re
 
 from mercurial.i18n import _
 from mercurial.pycompat import open
 
 from mercurial import (
     error,
     exthelper,
     httppeer,
     util,
     wireprototypes,
     wireprotov1peer,
     wireprotov1server,
 )
 
 from . import lfutil
 
 urlerr = util.urlerr
 urlreq = util.urlreq
 
 LARGEFILES_REQUIRED_MSG = (
     b'\nThis repository uses the largefiles extension.'
     b'\n\nPlease enable it in your Mercurial config '
     b'file.\n'
 )
 
 eh = exthelper.exthelper()
 
-# these will all be replaced by largefiles.uisetup
-ssholdcallstream = None
-httpoldcallstream = None
-
 
 def putlfile(repo, proto, sha):
     """Server command for putting a largefile into a repository's local store
     and into the user cache."""
     with proto.mayberedirectstdio() as output:
         path = lfutil.storepath(repo, sha)
         util.makedirs(os.path.dirname(path))
         tmpfp = util.atomictempfile(path, createmode=repo.store.createmode)
 
         try:
             for p in proto.getpayload():
                 tmpfp.write(p)
             tmpfp._fp.seek(0)
             if sha != lfutil.hexsha1(tmpfp._fp):
                 raise IOError(0, _(b'largefile contents do not match hash'))
             tmpfp.close()
             lfutil.linktousercache(repo, sha)
         except IOError as e:
             repo.ui.warn(
                 _(b'largefiles: failed to put %s into store: %s\n')
                 % (sha, e.strerror)
             )
             return wireprototypes.pushres(
                 1, output.getvalue() if output else b''
             )
         finally:
             tmpfp.discard()
 
     return wireprototypes.pushres(0, output.getvalue() if output else b'')
 
 
 def getlfile(repo, proto, sha):
     """Server command for retrieving a largefile from the repository-local
     cache or user cache."""
     filename = lfutil.findfile(repo, sha)
     if not filename:
         raise error.Abort(
             _(b'requested largefile %s not present in cache') % sha
         )
     f = open(filename, b'rb')
     length = os.fstat(f.fileno())[6]
 
     # Since we can't set an HTTP content-length header here, and
     # Mercurial core provides no way to give the length of a streamres
     # (and reading the entire file into RAM would be ill-advised), we
     # just send the length on the first line of the response, like the
     # ssh proto does for string responses.
     def generator():
         yield b'%d\n' % length
         for chunk in util.filechunkiter(f):
             yield chunk
 
     return wireprototypes.streamreslegacy(gen=generator())
 
 
 def statlfile(repo, proto, sha):
     """Server command for checking if a largefile is present - returns '2\n' if
     the largefile is missing, '0\n' if it seems to be in good condition.
 
     The value 1 is reserved for mismatched checksum, but that is too expensive
     to be verified on every stat and must be caught by running 'hg verify'
     server side."""
     filename = lfutil.findfile(repo, sha)
     if not filename:
         return wireprototypes.bytesresponse(b'2\n')
     return wireprototypes.bytesresponse(b'0\n')
 
 
 def wirereposetup(ui, repo):
+    orig_commandexecutor = repo.commandexecutor
+
     class lfileswirerepository(repo.__class__):
+        def commandexecutor(self):
+            executor = orig_commandexecutor()
+            if self.capable(b'largefiles'):
+                orig_callcommand = executor.callcommand
+
+                class lfscommandexecutor(executor.__class__):
+                    def callcommand(self, command, args):
+                        if command == b'heads':
+                            command = b'lheads'
+                        return orig_callcommand(command, args)
+
+                executor.__class__ = lfscommandexecutor
+            return executor
+
+        @wireprotov1peer.batchable
+        def lheads(self):
+            return self.heads.batchable(self)
+
         def putlfile(self, sha, fd):
             # unfortunately, httprepository._callpush tries to convert its
             # input file-like into a bundle before sending it, so we can't use
             # it ...
             if issubclass(self.__class__, httppeer.httppeer):
                 res = self._call(
                     b'putlfile',
                     data=fd,
                     sha=sha,
                     headers={'content-type': 'application/mercurial-0.1'},
                 )
                 try:
                     d, output = res.split(b'\n', 1)
                     for l in output.splitlines(True):
                         self.ui.warn(_(b'remote: '), l) # assume l ends with \n
                     return int(d)
                 except ValueError:
                     self.ui.warn(_(b'unexpected putlfile response: %r\n') % res)
                     return 1
             # ... but we can't use sshrepository._call because the data=
             # argument won't get sent, and _callpush does exactly what we want
             # in this case: send the data straight through
             else:
                 try:
                     ret, output = self._callpush(b"putlfile", fd, sha=sha)
                     if ret == b"":
                         raise error.ResponseError(
                             _(b'putlfile failed:'), output
                         )
                     return int(ret)
                 except IOError:
                     return 1
                 except ValueError:
                     raise error.ResponseError(
                         _(b'putlfile failed (unexpected response):'), ret
                     )
 
         def getlfile(self, sha):
             """returns an iterable with the chunks of the file with sha sha"""
             stream = self._callstream(b"getlfile", sha=sha)
             length = stream.readline()
             try:
                 length = int(length)
             except ValueError:
                 self._abort(
                     error.ResponseError(_(b"unexpected response:"), length)
                 )
 
             # SSH streams will block if reading more than length
             for chunk in util.filechunkiter(stream, limit=length):
                 yield chunk
             # HTTP streams must hit the end to process the last empty
             # chunk of Chunked-Encoding so the connection can be reused.
             if issubclass(self.__class__, httppeer.httppeer):
                 chunk = stream.read(1)
                 if chunk:
                     self._abort(
                         error.ResponseError(_(b"unexpected response:"), chunk)
                     )
 
         @wireprotov1peer.batchable
         def statlfile(self, sha):
             f = wireprotov1peer.future()
             result = {b'sha': sha}
             yield result, f
             try:
                 yield int(f.value)
             except (ValueError, urlerr.httperror):
                 # If the server returns anything but an integer followed by a
                 # newline, it's not speaking our language; if we get
                 # an HTTP error, we can't be sure the largefile is present;
                 # either way, consider it missing.
                 yield 2
 
     repo.__class__ = lfileswirerepository
 
 
 # advertise the largefiles=serve capability
 @eh.wrapfunction(wireprotov1server, b'_capabilities')
 def _capabilities(orig, repo, proto):
     '''announce largefile server capability'''
     caps = orig(repo, proto)
     caps.append(b'largefiles=serve')
     return caps
 
 
 def heads(orig, repo, proto):
     """Wrap server command - largefile capable clients will know to call
     lheads instead"""
     if lfutil.islfilesrepo(repo):
         return wireprototypes.ooberror(LARGEFILES_REQUIRED_MSG)
 
     return orig(repo, proto)
-
-
-def sshrepocallstream(self, cmd, **args):
-    if cmd == b'heads' and self.capable(b'largefiles'):
-        cmd = b'lheads'
-    if cmd == b'batch' and self.capable(b'largefiles'):
-        args['cmds'] = args[r'cmds'].replace(b'heads ', b'lheads ')
-    return ssholdcallstream(self, cmd, **args)
-
-
-headsre = re.compile(br'(^|;)heads\b')
-
-
-def httprepocallstream(self, cmd, **args):
-    if cmd == b'heads' and self.capable(b'largefiles'):
-        cmd = b'lheads'
-    if cmd == b'batch' and self.capable(b'largefiles'):
-        args['cmds'] = headsre.sub(b'lheads', args['cmds'])
-    return httpoldcallstream(self, cmd, **args)
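
The removed sshrepocallstream/httprepocallstream hooks rewrote heads into lheads at the _callstream layer; the change above moves that interception into the peer's command executor. The following is a minimal, self-contained sketch of that technique; toypeer and toyexecutor are illustrative stand-ins, not Mercurial's wireprotov1peer types.

class toyexecutor(object):
    """Pretend command executor: just reports which wire command ran."""

    def callcommand(self, command, args):
        return b'ran ' + command


class toypeer(object):
    """Pretend peer exposing capable() and commandexecutor()."""

    def __init__(self, caps):
        self._caps = caps

    def capable(self, name):
        return name in self._caps

    def commandexecutor(self):
        executor = toyexecutor()
        if self.capable(b'largefiles'):
            orig_callcommand = executor.callcommand

            class lfsexecutor(executor.__class__):
                def callcommand(self, command, args):
                    # reroute heads to the largefiles-aware variant
                    if command == b'heads':
                        command = b'lheads'
                    return orig_callcommand(command, args)

            executor.__class__ = lfsexecutor
        return executor


plain = toypeer(set())
lfaware = toypeer({b'largefiles'})
print(plain.commandexecutor().callcommand(b'heads', {}))    # b'ran heads'
print(lfaware.commandexecutor().callcommand(b'heads', {}))  # b'ran lheads'

Because every request, batched or not, funnels through callcommand, swapping the executor's class covers both a plain heads call and heads issued inside a batch, which the old string rewriting had to handle separately.
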