exthelper: drop the addattr() decorator...
Matt Harbison - r41315:c9e1104e default
@@ -1,380 +1,383 @@ hgext/lfs/__init__.py
# lfs - hash-preserving large file support using Git-LFS protocol
#
# Copyright 2017 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""lfs - large file support (EXPERIMENTAL)

This extension allows large files to be tracked outside of the normal
repository storage and stored on a centralized server, similar to the
``largefiles`` extension. The ``git-lfs`` protocol is used when
communicating with the server, so existing git infrastructure can be
harnessed. Even though the files are stored outside of the repository,
they are still integrity checked in the same manner as normal files.

The files stored outside of the repository are downloaded on demand,
which reduces the time to clone, and possibly the local disk usage.
This changes fundamental workflows in a DVCS, so careful thought
should be given before deploying it. :hg:`convert` can be used to
convert LFS repositories to normal repositories that no longer
require this extension, and do so without changing the commit hashes.
This allows the extension to be disabled if the centralized workflow
becomes burdensome. However, the pre and post convert clones will
not be able to communicate with each other unless the extension is
enabled on both.

To start a new repository, or to add LFS files to an existing one, just
create an ``.hglfs`` file as described below in the root directory of
the repository. Typically, this file should be put under version
control, so that the settings will propagate to other repositories with
push and pull. During any commit, Mercurial will consult this file to
determine if an added or modified file should be stored externally. The
type of storage depends on the characteristics of the file at each
commit. A file that is near a size threshold may switch back and forth
between LFS and normal storage, as needed.

Alternately, both normal repositories and largefile controlled
repositories can be converted to LFS by using :hg:`convert` and the
``lfs.track`` config option described below. The ``.hglfs`` file
should then be created and added, to control subsequent LFS selection.
The hashes are also unchanged in this case. The LFS and non-LFS
repositories can be distinguished because the LFS repository will
abort any command if this extension is disabled.

Committed LFS files are held locally, until the repository is pushed.
Prior to pushing the normal repository data, the LFS files that are
tracked by the outgoing commits are automatically uploaded to the
configured central server. No LFS files are transferred on
:hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
demand as they need to be read, if a cached copy cannot be found
locally. Both committing and downloading an LFS file will link the
file to a usercache, to speed up future access. See the `usercache`
config setting described below.

.hglfs::

    The extension reads its configuration from a versioned ``.hglfs``
    configuration file found in the root of the working directory. The
    ``.hglfs`` file uses the same syntax as all other Mercurial
    configuration files. It uses a single section, ``[track]``.

    The ``[track]`` section specifies which files are stored as LFS (or
    not). Each line is keyed by a file pattern, with a predicate value.
    The first file pattern match is used, so put more specific patterns
    first. The available predicates are ``all()``, ``none()``, and
    ``size()``. See "hg help filesets.size" for the latter.

    Example versioned ``.hglfs`` file::

        [track]
        # No Makefile or python file, anywhere, will be LFS
        **Makefile = none()
        **.py = none()

        **.zip = all()
        **.exe = size(">1MB")

        # Catchall for everything not matched above
        ** = size(">10MB")

Configs::

    [lfs]
    # Remote endpoint. Multiple protocols are supported:
    # - http(s)://user:pass@example.com/path
    #     git-lfs endpoint
    # - file:///tmp/path
    #     local filesystem, usually for testing
    # if unset, lfs will assume the remote repository also handles blob storage
    # for http(s) URLs.  Otherwise, lfs will prompt to set this when it must
    # use this value.
    # (default: unset)
    url = https://example.com/repo.git/info/lfs

    # Which files to track in LFS.  Path tests are "**.extname" for file
    # extensions, and "path:under/some/directory" for path prefix.  Both
    # are relative to the repository root.
    # File size can be tested with the "size()" fileset, and tests can be
    # joined with fileset operators.  (See "hg help filesets.operators".)
    #
    # Some examples:
    # - all()                       # everything
    # - none()                      # nothing
    # - size(">20MB")               # larger than 20MB
    # - !**.txt                     # anything not a *.txt file
    # - **.zip | **.tar.gz | **.7z  # some types of compressed files
    # - path:bin                    # files under "bin" in the project root
    # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
    #     | (path:bin & !path:/bin/README) | size(">1GB")
    # (default: none())
    #
    # This is ignored if there is a tracked '.hglfs' file, and this setting
    # will eventually be deprecated and removed.
    track = size(">10M")

    # how many times to retry before giving up on transferring an object
    retry = 5

    # the local directory to store lfs files for sharing across local clones.
    # If not set, the cache is located in an OS specific cache location.
    usercache = /path/to/global/cache
"""

from __future__ import absolute_import

import sys

from mercurial.i18n import _

from mercurial import (
    config,
    context,
    error,
    exchange,
    extensions,
    exthelper,
    filelog,
    filesetlang,
    localrepo,
    minifileset,
    node,
    pycompat,
    repository,
    revlog,
    scmutil,
    templateutil,
    util,
)

from . import (
    blobstore,
    wireprotolfsserver,
    wrapper,
)

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

eh = exthelper.exthelper()
eh.merge(wrapper.eh)
eh.merge(wireprotolfsserver.eh)

cmdtable = eh.cmdtable
configtable = eh.configtable
extsetup = eh.finalextsetup
uisetup = eh.finaluisetup
filesetpredicate = eh.filesetpredicate
reposetup = eh.finalreposetup
templatekeyword = eh.templatekeyword

eh.configitem('experimental', 'lfs.serve',
    default=True,
)
eh.configitem('experimental', 'lfs.user-agent',
    default=None,
)
eh.configitem('experimental', 'lfs.disableusercache',
    default=False,
)
eh.configitem('experimental', 'lfs.worker-enable',
    default=False,
)

eh.configitem('lfs', 'url',
    default=None,
)
eh.configitem('lfs', 'usercache',
    default=None,
)
# Deprecated
eh.configitem('lfs', 'threshold',
    default=None,
)
eh.configitem('lfs', 'track',
    default='none()',
)
eh.configitem('lfs', 'retry',
    default=5,
)

lfsprocessor = (
    wrapper.readfromstore,
    wrapper.writetostore,
    wrapper.bypasscheckhash,
)

def featuresetup(ui, supported):
    # don't die on seeing a repo with the lfs requirement
    supported |= {'lfs'}

@eh.uisetup
def _uisetup(ui):
    localrepo.featuresetupfuncs.add(featuresetup)

@eh.reposetup
def _reposetup(ui, repo):
    # Nothing to do with a remote repo
    if not repo.local():
        return

    repo.svfs.lfslocalblobstore = blobstore.local(repo)
    repo.svfs.lfsremoteblobstore = blobstore.remote(repo)

    class lfsrepo(repo.__class__):
        @localrepo.unfilteredmethod
        def commitctx(self, ctx, error=False):
            repo.svfs.options['lfstrack'] = _trackedmatcher(self)
            return super(lfsrepo, self).commitctx(ctx, error)

    repo.__class__ = lfsrepo

    if 'lfs' not in repo.requirements:
        def checkrequireslfs(ui, repo, **kwargs):
            if 'lfs' in repo.requirements:
                return 0

            last = kwargs.get(r'node_last')
            _bin = node.bin
            if last:
                s = repo.set('%n:%n', _bin(kwargs[r'node']), _bin(last))
            else:
                s = repo.set('%n', _bin(kwargs[r'node']))
            match = repo._storenarrowmatch
            for ctx in s:
                # TODO: is there a way to just walk the files in the commit?
                if any(ctx[f].islfs() for f in ctx.files()
                       if f in ctx and match(f)):
                    repo.requirements.add('lfs')
                    repo.features.add(repository.REPO_FEATURE_LFS)
                    repo._writerequirements()
                    repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
                    break

        ui.setconfig('hooks', 'commit.lfs', checkrequireslfs, 'lfs')
        ui.setconfig('hooks', 'pretxnchangegroup.lfs', checkrequireslfs, 'lfs')
    else:
        repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)

def _trackedmatcher(repo):
    """Return a function (path, size) -> bool indicating whether or not to
    track a given file with lfs."""
    if not repo.wvfs.exists('.hglfs'):
        # No '.hglfs' in wdir.  Fallback to config for now.
        trackspec = repo.ui.config('lfs', 'track')

        # deprecated config: lfs.threshold
        threshold = repo.ui.configbytes('lfs', 'threshold')
        if threshold:
            filesetlang.parse(trackspec)  # make sure syntax errors are confined
            trackspec = "(%s) | size('>%d')" % (trackspec, threshold)

        return minifileset.compile(trackspec)

    data = repo.wvfs.tryread('.hglfs')
    if not data:
        return lambda p, s: False

    # Parse errors here will abort with a message that points to the .hglfs file
    # and line number.
    cfg = config.config()
    cfg.parse('.hglfs', data)

    try:
        rules = [(minifileset.compile(pattern), minifileset.compile(rule))
                 for pattern, rule in cfg.items('track')]
    except error.ParseError as e:
        # The original exception gives no indicator that the error is in the
        # .hglfs file, so add that.

        # TODO: See if the line number of the file can be made available.
        raise error.Abort(_('parse error in .hglfs: %s') % e)

    def _match(path, size):
        for pat, rule in rules:
            if pat(path, size):
                return rule(path, size)

        return False

    return _match

# Called by remotefilelog
def wrapfilelog(filelog):
    wrapfunction = extensions.wrapfunction

    wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
    wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
    wrapfunction(filelog, 'size', wrapper.filelogsize)

@eh.wrapfunction(localrepo, 'resolverevlogstorevfsoptions')
def _resolverevlogstorevfsoptions(orig, ui, requirements, features):
    opts = orig(ui, requirements, features)
    for name, module in extensions.extensions(ui):
        if module is sys.modules[__name__]:
            if revlog.REVIDX_EXTSTORED in opts[b'flagprocessors']:
                msg = (_(b"cannot register multiple processors on flag '%#x'.")
                       % revlog.REVIDX_EXTSTORED)
                raise error.Abort(msg)

            opts[b'flagprocessors'][revlog.REVIDX_EXTSTORED] = lfsprocessor
            break

    return opts

@eh.extsetup
def _extsetup(ui):
    wrapfilelog(filelog.filelog)

    context.basefilectx.islfs = wrapper.filectxislfs

    scmutil.fileprefetchhooks.add('lfs', wrapper._prefetchfiles)

    # Make bundle choose changegroup3 instead of changegroup2. This affects
    # "hg bundle" command. Note: it does not cover all bundle formats like
    # "packed1". Using "packed1" with lfs will likely cause trouble.
    exchange._bundlespeccontentopts["v2"]["cg.version"] = "03"

@eh.filesetpredicate('lfs()')
def lfsfileset(mctx, x):
    """File that uses LFS storage."""
    # i18n: "lfs" is a keyword
    filesetlang.getargs(x, 0, 0, _("lfs takes no arguments"))
    ctx = mctx.ctx
    def lfsfilep(f):
        return wrapper.pointerfromctx(ctx, f, removed=True) is not None
    return mctx.predicate(lfsfilep, predrepr='<lfs>')

@eh.templatekeyword('lfs_files', requires={'ctx'})
def lfsfiles(context, mapping):
    """List of strings. All files modified, added, or removed by this
    changeset."""
    ctx = context.resource(mapping, 'ctx')

    pointers = wrapper.pointersfromctx(ctx, removed=True)  # {path: pointer}
    files = sorted(pointers.keys())

    def pointer(v):
        # In the file spec, version is first and the other keys are sorted.
        sortkeyfunc = lambda x: (x[0] != 'version', x)
        items = sorted(pointers[v].iteritems(), key=sortkeyfunc)
        return util.sortdict(items)

    makemap = lambda v: {
        'file': v,
        'lfsoid': pointers[v].oid() if pointers[v] else None,
        'lfspointer': templateutil.hybriddict(pointer(v)),
    }

    # TODO: make the separator ', '?
    f = templateutil._showcompatlist(context, mapping, 'lfs_file', files)
    return templateutil.hybrid(f, files, makemap, pycompat.identity)

@eh.command('debuglfsupload',
            [('r', 'rev', [], _('upload large files introduced by REV'))])
def debuglfsupload(ui, repo, **opts):
    """upload lfs blobs added by the working copy parent or given revisions"""
    revs = opts.get(r'rev', [])
    pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
    wrapper.uploadblobs(repo, pointers)
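The first-match semantics that `_trackedmatcher` builds from a `[track]` section can be sketched in plain Python. This is a minimal illustration, not Mercurial's actual `minifileset` compiler: the rule pairs and the `compile_rules`/`is_lfs` names below are hypothetical stand-ins for compiled fileset predicates.

```python
# Each rule is a (pattern-predicate, storage-predicate) pair over (path, size).
# The first pattern that matches decides whether the file is stored in LFS;
# unmatched files fall through to normal storage, mirroring _match() above.

def compile_rules(rules):
    """rules: list of (pattern_fn, decision_fn) pairs, most specific first."""
    def match(path, size):
        for pattern, decision in rules:
            if pattern(path, size):
                return decision(path, size)
        return False  # no pattern matched: use normal storage
    return match

# Hypothetical predicates standing in for "**.py = none()" and
# "** = size(\">10MB\")" from the example .hglfs file:
rules = [
    (lambda p, s: p.endswith('.py'), lambda p, s: False),    # **.py = none()
    (lambda p, s: True, lambda p, s: s > 10 * 1024 * 1024),  # ** = size(">10MB")
]
is_lfs = compile_rules(rules)

print(is_lfs('setup.py', 50 * 1024 * 1024))  # False: the .py rule wins first
print(is_lfs('big.bin', 50 * 1024 * 1024))   # True: catchall size rule
```

Because evaluation stops at the first matching pattern, a large `.py` file stays in normal storage even though it exceeds the catchall threshold, which is why the docstring advises putting more specific patterns first.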
@@ -1,447 +1,446 @@ hgext/lfs/wrapper.py
1 # wrapper.py - methods wrapping core mercurial logic
1 # wrapper.py - methods wrapping core mercurial logic
2 #
2 #
3 # Copyright 2017 Facebook, Inc.
3 # Copyright 2017 Facebook, Inc.
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import hashlib
10 import hashlib
11
11
12 from mercurial.i18n import _
12 from mercurial.i18n import _
13 from mercurial.node import bin, hex, nullid, short
13 from mercurial.node import bin, hex, nullid, short
14
14
15 from mercurial import (
15 from mercurial import (
16 bundle2,
16 bundle2,
17 changegroup,
17 changegroup,
18 cmdutil,
18 cmdutil,
19 context,
19 context,
20 error,
20 error,
21 exchange,
21 exchange,
22 exthelper,
22 exthelper,
23 localrepo,
23 localrepo,
24 repository,
24 repository,
25 revlog,
25 revlog,
26 scmutil,
26 scmutil,
27 upgrade,
27 upgrade,
28 util,
28 util,
29 vfs as vfsmod,
29 vfs as vfsmod,
30 wireprotov1server,
30 wireprotov1server,
31 )
31 )
32
32
33 from mercurial.utils import (
33 from mercurial.utils import (
34 storageutil,
34 storageutil,
35 stringutil,
35 stringutil,
36 )
36 )
37
37
38 from ..largefiles import lfutil
38 from ..largefiles import lfutil
39
39
40 from . import (
40 from . import (
41 blobstore,
41 blobstore,
42 pointer,
42 pointer,
43 )
43 )
44
44
45 eh = exthelper.exthelper()
45 eh = exthelper.exthelper()
46
46
47 @eh.wrapfunction(localrepo, 'makefilestorage')
47 @eh.wrapfunction(localrepo, 'makefilestorage')
48 def localrepomakefilestorage(orig, requirements, features, **kwargs):
48 def localrepomakefilestorage(orig, requirements, features, **kwargs):
49 if b'lfs' in requirements:
49 if b'lfs' in requirements:
50 features.add(repository.REPO_FEATURE_LFS)
50 features.add(repository.REPO_FEATURE_LFS)
51
51
52 return orig(requirements=requirements, features=features, **kwargs)
52 return orig(requirements=requirements, features=features, **kwargs)
53
53
54 @eh.wrapfunction(changegroup, 'allsupportedversions')
54 @eh.wrapfunction(changegroup, 'allsupportedversions')
55 def allsupportedversions(orig, ui):
55 def allsupportedversions(orig, ui):
56 versions = orig(ui)
56 versions = orig(ui)
57 versions.add('03')
57 versions.add('03')
58 return versions
58 return versions
59
59
60 @eh.wrapfunction(wireprotov1server, '_capabilities')
60 @eh.wrapfunction(wireprotov1server, '_capabilities')
61 def _capabilities(orig, repo, proto):
61 def _capabilities(orig, repo, proto):
62 '''Wrap server command to announce lfs server capability'''
62 '''Wrap server command to announce lfs server capability'''
63 caps = orig(repo, proto)
63 caps = orig(repo, proto)
64 if util.safehasattr(repo.svfs, 'lfslocalblobstore'):
64 if util.safehasattr(repo.svfs, 'lfslocalblobstore'):
65 # Advertise a slightly different capability when lfs is *required*, so
65 # Advertise a slightly different capability when lfs is *required*, so
66 # that the client knows it MUST load the extension. If lfs is not
66 # that the client knows it MUST load the extension. If lfs is not
67 # required on the server, there's no reason to autoload the extension
67 # required on the server, there's no reason to autoload the extension
68 # on the client.
68 # on the client.
69 if b'lfs' in repo.requirements:
69 if b'lfs' in repo.requirements:
70 caps.append('lfs-serve')
70 caps.append('lfs-serve')
71
71
72 caps.append('lfs')
72 caps.append('lfs')
73 return caps
73 return caps
74
74
75 def bypasscheckhash(self, text):
75 def bypasscheckhash(self, text):
76 return False
76 return False
77
77
78 def readfromstore(self, text):
78 def readfromstore(self, text):
79 """Read filelog content from local blobstore transform for flagprocessor.
79 """Read filelog content from local blobstore transform for flagprocessor.
80
80
81 Default tranform for flagprocessor, returning contents from blobstore.
81 Default tranform for flagprocessor, returning contents from blobstore.
82 Returns a 2-typle (text, validatehash) where validatehash is True as the
82 Returns a 2-typle (text, validatehash) where validatehash is True as the
83 contents of the blobstore should be checked using checkhash.
83 contents of the blobstore should be checked using checkhash.
84 """
84 """
    p = pointer.deserialize(text)
    oid = p.oid()
    store = self.opener.lfslocalblobstore
    if not store.has(oid):
        p.filename = self.filename
        self.opener.lfsremoteblobstore.readbatch([p], store)

    # The caller will validate the content
    text = store.read(oid, verify=False)

    # pack hg filelog metadata
    hgmeta = {}
    for k in p.keys():
        if k.startswith('x-hg-'):
            name = k[len('x-hg-'):]
            hgmeta[name] = p[k]
    if hgmeta or text.startswith('\1\n'):
        text = storageutil.packmeta(hgmeta, text)

    return (text, True)

def writetostore(self, text):
    # hg filelog metadata (includes rename, etc)
    hgmeta, offset = storageutil.parsemeta(text)
    if offset and offset > 0:
        # lfs blob does not contain hg filelog metadata
        text = text[offset:]

    # git-lfs only supports sha256
    oid = hex(hashlib.sha256(text).digest())
    self.opener.lfslocalblobstore.write(oid, text)

    # replace contents with metadata
    longoid = 'sha256:%s' % oid
    metadata = pointer.gitlfspointer(oid=longoid, size='%d' % len(text))

    # by default, we expect the content to be binary. however, LFS could also
    # be used for non-binary content. add a special entry for non-binary data.
    # this will be used by filectx.isbinary().
    if not stringutil.binary(text):
        # not hg filelog metadata (affecting commit hash), no "x-hg-" prefix
        metadata['x-is-binary'] = '0'

    # translate hg filelog metadata to lfs metadata with "x-hg-" prefix
    if hgmeta is not None:
        for k, v in hgmeta.iteritems():
            metadata['x-hg-%s' % k] = v

    rawtext = metadata.serialize()
    return (rawtext, False)

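The pointer produced by writetostore() above follows the git-lfs pointer format: the filelog revision keeps only a small text stanza with a 'sha256:<oid>' key and the payload size. As a standalone sketch (a hypothetical helper, not Mercurial's pointer module):

```python
import hashlib

def makepointer(data):
    # Build a minimal git-lfs style pointer for the given bytes, mirroring
    # the oid/size fields that writetostore() serializes via gitlfspointer.
    oid = hashlib.sha256(data).hexdigest()
    return ('version https://git-lfs.github.com/spec/v1\n'
            'oid sha256:%s\n'
            'size %d\n' % (oid, len(data)))

print(makepointer(b'large file payload'))
```

Deserializing reverses this: parse the key/value lines and fetch the blob by its oid from the local or remote store, which is what readfromstore() does above.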
def _islfs(rlog, node=None, rev=None):
    if rev is None:
        if node is None:
            # both None - likely working copy content where node is not ready
            return False
        rev = rlog._revlog.rev(node)
    else:
        node = rlog._revlog.node(rev)
    if node == nullid:
        return False
    flags = rlog._revlog.flags(rev)
    return bool(flags & revlog.REVIDX_EXTSTORED)

# Wrapping may also be applied by remotefilelog
def filelogaddrevision(orig, self, text, transaction, link, p1, p2,
                       cachedelta=None, node=None,
                       flags=revlog.REVIDX_DEFAULT_FLAGS, **kwds):
    # The matcher isn't available if reposetup() wasn't called.
    lfstrack = self._revlog.opener.options.get('lfstrack')

    if lfstrack:
        textlen = len(text)
        # exclude hg rename meta from file size
        meta, offset = storageutil.parsemeta(text)
        if offset:
            textlen -= offset

        if lfstrack(self._revlog.filename, textlen):
            flags |= revlog.REVIDX_EXTSTORED

    return orig(self, text, transaction, link, p1, p2, cachedelta=cachedelta,
                node=node, flags=flags, **kwds)

# Wrapping may also be applied by remotefilelog
def filelogrenamed(orig, self, node):
    if _islfs(self, node):
        rawtext = self._revlog.revision(node, raw=True)
        if not rawtext:
            return False
        metadata = pointer.deserialize(rawtext)
        if 'x-hg-copy' in metadata and 'x-hg-copyrev' in metadata:
            return metadata['x-hg-copy'], bin(metadata['x-hg-copyrev'])
        else:
            return False
    return orig(self, node)

# Wrapping may also be applied by remotefilelog
def filelogsize(orig, self, rev):
    if _islfs(self, rev=rev):
        # fast path: use lfs metadata to answer size
        rawtext = self._revlog.revision(rev, raw=True)
        metadata = pointer.deserialize(rawtext)
        return int(metadata['size'])
    return orig(self, rev)

@eh.wrapfunction(context.basefilectx, 'cmp')
def filectxcmp(orig, self, fctx):
    """returns True if text is different than fctx"""
    # some fctx (ex. hg-git) is not based on basefilectx and do not have islfs
    if self.islfs() and getattr(fctx, 'islfs', lambda: False)():
        # fast path: check LFS oid
        p1 = pointer.deserialize(self.rawdata())
        p2 = pointer.deserialize(fctx.rawdata())
        return p1.oid() != p2.oid()
    return orig(self, fctx)

@eh.wrapfunction(context.basefilectx, 'isbinary')
def filectxisbinary(orig, self):
    if self.islfs():
        # fast path: use lfs metadata to answer isbinary
        metadata = pointer.deserialize(self.rawdata())
        # if lfs metadata says nothing, assume it's binary by default
        return bool(int(metadata.get('x-is-binary', 1)))
    return orig(self)

-@eh.addattr(context.basefilectx, 'islfs')
def filectxislfs(self):
    return _islfs(self.filelog(), self.filenode())

@eh.wrapfunction(cmdutil, '_updatecatformatter')
def _updatecatformatter(orig, fm, ctx, matcher, path, decode):
    orig(fm, ctx, matcher, path, decode)
    fm.data(rawdata=ctx[path].rawdata())

@eh.wrapfunction(scmutil, 'wrapconvertsink')
def convertsink(orig, sink):
    sink = orig(sink)
    if sink.repotype == 'hg':
        class lfssink(sink.__class__):
            def putcommit(self, files, copies, parents, commit, source, revmap,
                          full, cleanp2):
                pc = super(lfssink, self).putcommit
                node = pc(files, copies, parents, commit, source, revmap, full,
                          cleanp2)

                if 'lfs' not in self.repo.requirements:
                    ctx = self.repo[node]

                    # The file list may contain removed files, so check for
                    # membership before assuming it is in the context.
                    if any(f in ctx and ctx[f].islfs() for f, n in files):
                        self.repo.requirements.add('lfs')
                        self.repo._writerequirements()

                return node

        sink.__class__ = lfssink

    return sink

# bundlerepo uses "vfsmod.readonlyvfs(othervfs)", we need to make sure lfs
# options and blob stores are passed from othervfs to the new readonlyvfs.
@eh.wrapfunction(vfsmod.readonlyvfs, '__init__')
def vfsinit(orig, self, othervfs):
    orig(self, othervfs)
    # copy lfs related options
    for k, v in othervfs.options.items():
        if k.startswith('lfs'):
            self.options[k] = v
    # also copy lfs blobstores. note: this can run before reposetup, so lfs
    # blobstore attributes are not always ready at this time.
    for name in ['lfslocalblobstore', 'lfsremoteblobstore']:
        if util.safehasattr(othervfs, name):
            setattr(self, name, getattr(othervfs, name))

def _prefetchfiles(repo, revs, match):
    """Ensure that required LFS blobs are present, fetching them as a group if
    needed."""
    if not util.safehasattr(repo.svfs, 'lfslocalblobstore'):
        return

    pointers = []
    oids = set()
    localstore = repo.svfs.lfslocalblobstore

    for rev in revs:
        ctx = repo[rev]
        for f in ctx.walk(match):
            p = pointerfromctx(ctx, f)
            if p and p.oid() not in oids and not localstore.has(p.oid()):
                p.filename = f
                pointers.append(p)
                oids.add(p.oid())

    if pointers:
        # Recalculating the repo store here allows 'paths.default' that is set
        # on the repo by a clone command to be used for the update.
        blobstore.remote(repo).readbatch(pointers, localstore)

def _canskipupload(repo):
    # Skip if this hasn't been passed to reposetup()
    if not util.safehasattr(repo.svfs, 'lfsremoteblobstore'):
        return True

    # if remotestore is a null store, upload is a no-op and can be skipped
    return isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)

def candownload(repo):
    # Skip if this hasn't been passed to reposetup()
    if not util.safehasattr(repo.svfs, 'lfsremoteblobstore'):
        return False

    # if remotestore is a null store, downloads will lead to nothing
    return not isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)

def uploadblobsfromrevs(repo, revs):
    '''upload lfs blobs introduced by revs

    Note: also used by other extensions, e.g. infinitepush. Avoid renaming.
    '''
    if _canskipupload(repo):
        return
    pointers = extractpointers(repo, revs)
    uploadblobs(repo, pointers)

def prepush(pushop):
    """Prepush hook.

    Read through the revisions to push, looking for filelog entries that can be
    deserialized into metadata so that we can block the push on their upload to
    the remote blobstore.
    """
    return uploadblobsfromrevs(pushop.repo, pushop.outgoing.missing)

@eh.wrapfunction(exchange, 'push')
def push(orig, repo, remote, *args, **kwargs):
    """bail on push if the extension isn't enabled on remote when needed, and
    update the remote store based on the destination path."""
    if 'lfs' in repo.requirements:
        # If the remote peer is for a local repo, the requirement tests in the
        # base class method enforce lfs support. Otherwise, some revisions in
        # this repo use lfs, and the remote repo needs the extension loaded.
        if not remote.local() and not remote.capable('lfs'):
            # This is a copy of the message in exchange.push() when requirements
            # are missing between local repos.
            m = _("required features are not supported in the destination: %s")
            raise error.Abort(m % 'lfs',
                              hint=_('enable the lfs extension on the server'))

        # Repositories where this extension is disabled won't have the field.
        # But if there's a requirement, then the extension must be loaded AND
        # there may be blobs to push.
        remotestore = repo.svfs.lfsremoteblobstore
        try:
            repo.svfs.lfsremoteblobstore = blobstore.remote(repo, remote.url())
            return orig(repo, remote, *args, **kwargs)
        finally:
            repo.svfs.lfsremoteblobstore = remotestore
    else:
        return orig(repo, remote, *args, **kwargs)

# when writing a bundle via "hg bundle" command, upload related LFS blobs
@eh.wrapfunction(bundle2, 'writenewbundle')
def writenewbundle(orig, ui, repo, source, filename, bundletype, outgoing,
                   *args, **kwargs):
    """upload LFS blobs added by outgoing revisions on 'hg bundle'"""
    uploadblobsfromrevs(repo, outgoing.missing)
    return orig(ui, repo, source, filename, bundletype, outgoing, *args,
                **kwargs)

def extractpointers(repo, revs):
    """return a list of lfs pointers added by given revs"""
    repo.ui.debug('lfs: computing set of blobs to upload\n')
    pointers = {}

    makeprogress = repo.ui.makeprogress
    with makeprogress(_('lfs search'), _('changesets'), len(revs)) as progress:
        for r in revs:
            ctx = repo[r]
            for p in pointersfromctx(ctx).values():
                pointers[p.oid()] = p
            progress.increment()
    return sorted(pointers.values(), key=lambda p: p.oid())

def pointerfromctx(ctx, f, removed=False):
    """return a pointer for the named file from the given changectx, or None if
    the file isn't LFS.

    Optionally, the pointer for a file deleted from the context can be returned.
    Since no such pointer is actually stored, and to distinguish from a non LFS
    file, this pointer is represented by an empty dict.
    """
    _ctx = ctx
    if f not in ctx:
        if not removed:
            return None
        if f in ctx.p1():
            _ctx = ctx.p1()
        elif f in ctx.p2():
            _ctx = ctx.p2()
        else:
            return None
    fctx = _ctx[f]
    if not _islfs(fctx.filelog(), fctx.filenode()):
        return None
    try:
        p = pointer.deserialize(fctx.rawdata())
        if ctx == _ctx:
            return p
        return {}
    except pointer.InvalidPointer as ex:
        raise error.Abort(_('lfs: corrupted pointer (%s@%s): %s\n')
                          % (f, short(_ctx.node()), ex))

def pointersfromctx(ctx, removed=False):
    """return a dict {path: pointer} for given single changectx.

    If ``removed`` == True and the LFS file was removed from ``ctx``, the value
    stored for the path is an empty dict.
    """
    result = {}
    m = ctx.repo().narrowmatch()

    # TODO: consider manifest.fastread() instead
    for f in ctx.files():
        if not m(f):
            continue
        p = pointerfromctx(ctx, f, removed=removed)
        if p is not None:
            result[f] = p
    return result

def uploadblobs(repo, pointers):
    """upload given pointers from local blobstore"""
    if not pointers:
        return

    remoteblob = repo.svfs.lfsremoteblobstore
    remoteblob.writebatch(pointers, repo.svfs.lfslocalblobstore)

@eh.wrapfunction(upgrade, '_finishdatamigration')
def upgradefinishdatamigration(orig, ui, srcrepo, dstrepo, requirements):
    orig(ui, srcrepo, dstrepo, requirements)

    # Skip if this hasn't been passed to reposetup()
    if (util.safehasattr(srcrepo.svfs, 'lfslocalblobstore') and
        util.safehasattr(dstrepo.svfs, 'lfslocalblobstore')):
        srclfsvfs = srcrepo.svfs.lfslocalblobstore.vfs
        dstlfsvfs = dstrepo.svfs.lfslocalblobstore.vfs

        for dirpath, dirs, files in srclfsvfs.walk():
            for oid in files:
                ui.write(_('copying lfs blob %s\n') % oid)
                lfutil.link(srclfsvfs.join(oid), dstlfsvfs.join(oid))

@eh.wrapfunction(upgrade, 'preservedrequirements')
@eh.wrapfunction(upgrade, 'supporteddestrequirements')
def upgraderequirements(orig, repo):
    reqs = orig(repo)
    if 'lfs' in repo.requirements:
        reqs.add('lfs')
    return reqs

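Several of the hooks above rely on `extensions.wrapfunction`-style wrapping, where the replacement receives the original callable as `orig` (the `push()` wrapper above is a typical example). A toy, Mercurial-free sketch of that mechanism; the names here are illustrative, not hg's actual implementation:

```python
def wrapfunction(container, funcname, wrapper):
    # Replace container.funcname with a closure that passes the original
    # callable to the wrapper as its first argument, and return the original.
    origfn = getattr(container, funcname)
    def wrap(*args, **kwargs):
        return wrapper(origfn, *args, **kwargs)
    setattr(container, funcname, wrap)
    return origfn

class Repo(object):  # stand-in for the wrapped container
    def push(self, dest):
        return 'pushed to %s' % dest

def loggingpush(orig, self, dest):
    # pre/post logic around the original call, like the lfs push() wrapper
    return 'lfs-checked ' + orig(self, dest)

wrapfunction(Repo, 'push', loggingpush)
print(Repo().push('server'))
```

Because each wrapper receives `orig`, multiple extensions (e.g. lfs and remotefilelog, as noted in the comments above) can stack their wraps on the same function.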
@@ -1,330 +1,300 @@
1 # Copyright 2012 Logilab SA <contact@logilab.fr>
1 # Copyright 2012 Logilab SA <contact@logilab.fr>
2 # Pierre-Yves David <pierre-yves.david@ens-lyon.org>
2 # Pierre-Yves David <pierre-yves.david@ens-lyon.org>
3 # Octobus <contact@octobus.net>
3 # Octobus <contact@octobus.net>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 #####################################################################
8 #####################################################################
9 ### Extension helper ###
9 ### Extension helper ###
10 #####################################################################
10 #####################################################################
11
11
12 from __future__ import absolute_import
12 from __future__ import absolute_import
13
13
14 from . import (
14 from . import (
15 commands,
15 commands,
16 error,
16 error,
17 extensions,
17 extensions,
18 registrar,
18 registrar,
19 )
19 )
20
20
21 class exthelper(object):
21 class exthelper(object):
22 """Helper for modular extension setup
22 """Helper for modular extension setup
23
23
24 A single helper should be instantiated for each module of an
24 A single helper should be instantiated for each module of an
25 extension, where a command or function needs to be wrapped, or a
25 extension, where a command or function needs to be wrapped, or a
26 command, extension hook, fileset, revset or template needs to be
26 command, extension hook, fileset, revset or template needs to be
27 registered. Helper methods are then used as decorators for
27 registered. Helper methods are then used as decorators for
28 these various purposes. If an extension spans multiple modules,
28 these various purposes. If an extension spans multiple modules,
29 all helper instances should be merged in the main module.
29 all helper instances should be merged in the main module.
30
30
31 All decorators return the original function and may be chained.
31 All decorators return the original function and may be chained.
32
32
33 Aside from the helper functions with examples below, several
33 Aside from the helper functions with examples below, several
34 registrar method aliases are available for adding commands,
34 registrar method aliases are available for adding commands,
35 configitems, filesets, revsets, and templates. Simply decorate
35 configitems, filesets, revsets, and templates. Simply decorate
36 the appropriate methods, and assign the corresponding exthelper
36 the appropriate methods, and assign the corresponding exthelper
37 variable to a module level variable of the extension. The
37 variable to a module level variable of the extension. The
38 extension loading mechanism will handle the rest.
38 extension loading mechanism will handle the rest.
39
39
40 example::
40 example::
41
41
42 # ext.py
42 # ext.py
43 eh = exthelper.exthelper()
43 eh = exthelper.exthelper()
44
44
45 # As needed:
45 # As needed:
46 cmdtable = eh.cmdtable
46 cmdtable = eh.cmdtable
47 configtable = eh.configtable
47 configtable = eh.configtable
48 filesetpredicate = eh.filesetpredicate
48 filesetpredicate = eh.filesetpredicate
49 revsetpredicate = eh.revsetpredicate
49 revsetpredicate = eh.revsetpredicate
50 templatekeyword = eh.templatekeyword
50 templatekeyword = eh.templatekeyword
51
51
52 @eh.command('mynewcommand',
52 @eh.command('mynewcommand',
53 [('r', 'rev', [], _('operate on these revisions'))],
53 [('r', 'rev', [], _('operate on these revisions'))],
54 _('-r REV...'),
54 _('-r REV...'),
55 helpcategory=command.CATEGORY_XXX)
55 helpcategory=command.CATEGORY_XXX)
56 def newcommand(ui, repo, *revs, **opts):
56 def newcommand(ui, repo, *revs, **opts):
57 # implementation goes here
57 # implementation goes here
58
58
59 eh.configitem('experimental', 'foo',
59 eh.configitem('experimental', 'foo',
60 default=False,
60 default=False,
61 )
61 )
62
62
63 @eh.filesetpredicate('lfs()')
63 @eh.filesetpredicate('lfs()')
64 def filesetbabar(mctx, x):
64 def filesetbabar(mctx, x):
65 return mctx.predicate(...)
65 return mctx.predicate(...)
66
66
67 @eh.revsetpredicate('hidden')
67 @eh.revsetpredicate('hidden')
68 def revsetbabar(repo, subset, x):
68 def revsetbabar(repo, subset, x):
69 args = revset.getargs(x, 0, 0, 'babar accept no argument')
69 args = revset.getargs(x, 0, 0, 'babar accept no argument')
70 return [r for r in subset if 'babar' in repo[r].description()]
70 return [r for r in subset if 'babar' in repo[r].description()]
71
71
72 @eh.templatekeyword('babar')
72 @eh.templatekeyword('babar')
73 def kwbabar(ctx):
73 def kwbabar(ctx):
74 return 'babar'
74 return 'babar'
75 """
75 """
76
76
77 def __init__(self):
77 def __init__(self):
78 self._uipopulatecallables = []
78 self._uipopulatecallables = []
79 self._uicallables = []
79 self._uicallables = []
80 self._extcallables = []
80 self._extcallables = []
81 self._repocallables = []
81 self._repocallables = []
82 self._commandwrappers = []
82 self._commandwrappers = []
83 self._extcommandwrappers = []
83 self._extcommandwrappers = []
84 self._functionwrappers = []
84 self._functionwrappers = []
85 self._duckpunchers = []
86 self.cmdtable = {}
85 self.cmdtable = {}
87 self.command = registrar.command(self.cmdtable)
86 self.command = registrar.command(self.cmdtable)
88 self.configtable = {}
87 self.configtable = {}
89 self.configitem = registrar.configitem(self.configtable)
88 self.configitem = registrar.configitem(self.configtable)
90 self.filesetpredicate = registrar.filesetpredicate()
89 self.filesetpredicate = registrar.filesetpredicate()
91 self.revsetpredicate = registrar.revsetpredicate()
90 self.revsetpredicate = registrar.revsetpredicate()
92 self.templatekeyword = registrar.templatekeyword()
91 self.templatekeyword = registrar.templatekeyword()
93
92
    def merge(self, other):
        self._uicallables.extend(other._uicallables)
        self._uipopulatecallables.extend(other._uipopulatecallables)
        self._extcallables.extend(other._extcallables)
        self._repocallables.extend(other._repocallables)
        self.filesetpredicate._merge(other.filesetpredicate)
        self.revsetpredicate._merge(other.revsetpredicate)
        self.templatekeyword._merge(other.templatekeyword)
        self._commandwrappers.extend(other._commandwrappers)
        self._extcommandwrappers.extend(other._extcommandwrappers)
        self._functionwrappers.extend(other._functionwrappers)
        self._duckpunchers.extend(other._duckpunchers)
        self.cmdtable.update(other.cmdtable)
        for section, items in other.configtable.iteritems():
            if section in self.configtable:
                self.configtable[section].update(items)
            else:
                self.configtable[section] = items

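The configtable handling at the end of `merge()` can be sketched outside of Mercurial like this, with plain dicts standing in for the real config tables (the section and item names are made up):

```python
# Sketch of the configtable merge in exthelper.merge(): sections present in
# both handlers are merged item by item; new sections are adopted wholesale.
def merge_configtables(mine, other):
    for section, items in other.items():
        if section in mine:
            mine[section].update(items)
        else:
            mine[section] = items
    return mine

a = {'babar': {'color': 'grey'}}
b = {'babar': {'size': 'large'}, 'celestine': {'crown': True}}
merged = merge_configtables(a, b)
# 'babar' gains a key; 'celestine' is added as a whole new section
```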
    def finaluisetup(self, ui):
        """Method to be used as the extension uisetup

        The following operations belong here:

        - Changes to ui.__class__ . The ui object that will be used to run
          the command has not yet been created. Changes made here will affect
          ui objects created after this, and in particular the ui that will
          be passed to runcommand
        - Command wraps (extensions.wrapcommand)
        - Changes that need to be visible to other extensions: because
          initialization occurs in phases (all extensions run uisetup, then
          all run extsetup), a change made here will be visible to other
          extensions during extsetup
        - Monkeypatch or wrap function (extensions.wrapfunction) of dispatch
          module members
        - Setup of pre-* and post-* hooks
        - pushkey setup
        """
        for cont, funcname, func in self._duckpunchers:
            setattr(cont, funcname, func)
        for command, wrapper, opts in self._commandwrappers:
            entry = extensions.wrapcommand(commands.table, command, wrapper)
            if opts:
                for opt in opts:
                    entry[1].append(opt)
        for cont, funcname, wrapper in self._functionwrappers:
            extensions.wrapfunction(cont, funcname, wrapper)
        for c in self._uicallables:
            c(ui)

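The wrapping loops above delegate to `extensions.wrapcommand` and `extensions.wrapfunction`. What the latter does can be approximated in a few lines; `fakemodule` and the wrapper below are purely illustrative stand-ins:

```python
import functools

# Rough sketch of extensions.wrapfunction(): capture the original attribute
# and replace it with a callable that passes the original in as `orig`.
def wrapfunction(container, funcname, wrapper):
    orig = getattr(container, funcname)
    setattr(container, funcname, functools.partial(wrapper, orig))

class fakemodule(object):
    @staticmethod
    def greet(name):
        return 'hello ' + name

def loudgreet(orig, name):
    # the wrapper receives the original function first, like hg wrappers do
    return orig(name).upper()

wrapfunction(fakemodule, 'greet', loudgreet)
result = fakemodule.greet('babar')  # 'HELLO BABAR'
```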
    def finaluipopulate(self, ui):
        """Method to be used as the extension uipopulate

        This is called once per ui instance to:

        - Set up additional ui members
        - Update configuration by ``ui.setconfig()``
        - Extend the class dynamically
        """
        for c in self._uipopulatecallables:
            c(ui)

    def finalextsetup(self, ui):
        """Method to be used as the extension extsetup

        The following operations belong here:

        - Changes depending on the status of other extensions
          (``extensions.find('mq')``)
        - Add a global option to all commands
        """
        knownexts = {}

        for ext, command, wrapper, opts in self._extcommandwrappers:
            if ext not in knownexts:
                try:
                    e = extensions.find(ext)
                except KeyError:
                    # Extension isn't enabled, so don't bother trying to wrap
                    # it.
                    continue
                knownexts[ext] = e.cmdtable
            entry = extensions.wrapcommand(knownexts[ext], command, wrapper)
            if opts:
                for opt in opts:
                    entry[1].append(opt)

        for c in self._extcallables:
            c(ui)

    def finalreposetup(self, ui, repo):
        """Method to be used as the extension reposetup

        The following operations belong here:

        - All hooks but pre-* and post-*
        - Modify configuration variables
        - Changes to repo.__class__, repo.dirstate.__class__
        """
        for c in self._repocallables:
            c(ui, repo)

    def uisetup(self, call):
        """Decorated function will be executed during uisetup

        example::

            @eh.uisetup
            def setupbabar(ui):
                print 'this is uisetup!'
        """
        self._uicallables.append(call)
        return call

    def uipopulate(self, call):
        """Decorated function will be executed during uipopulate

        example::

            @eh.uipopulate
            def setupfoo(ui):
                print 'this is uipopulate!'
        """
        self._uipopulatecallables.append(call)
        return call

    def extsetup(self, call):
        """Decorated function will be executed during extsetup

        example::

            @eh.extsetup
            def setupcelestine(ui):
                print 'this is extsetup!'
        """
        self._extcallables.append(call)
        return call

    def reposetup(self, call):
        """Decorated function will be executed during reposetup

        example::

            @eh.reposetup
            def setupzephir(ui, repo):
                print 'this is reposetup!'
        """
        self._repocallables.append(call)
        return call

    def wrapcommand(self, command, extension=None, opts=None):
        """Decorated function is a command wrapper

        The name of the command must be given as the decorator argument.
        The wrapping is installed during `uisetup`.

        If the optional `extension` argument is provided, the wrapping is
        applied in that extension's command table instead. The argument must
        be the extension name as a string; it is looked up with
        `extensions.find`, and an Abort error is raised if the extension is
        not found. Wrappings that apply to an extension are installed during
        `extsetup`.

        example::

            @eh.wrapcommand('summary')
            def wrapsummary(orig, ui, repo, *args, **kwargs):
                ui.note('Barry!')
                return orig(ui, repo, *args, **kwargs)

        The `opts` argument allows specifying a list of tuples for additional
        arguments for the command. See ``mercurial.fancyopts.fancyopts()``
        for the format of the tuple.
        """
        if opts is None:
            opts = []
        else:
            for opt in opts:
                if not isinstance(opt, tuple):
                    raise error.ProgrammingError('opts must be list of tuples')
                if len(opt) not in (4, 5):
                    msg = 'each opt tuple must contain 4 or 5 values'
                    raise error.ProgrammingError(msg)

        def dec(wrapper):
            if extension is None:
                self._commandwrappers.append((command, wrapper, opts))
            else:
                self._extcommandwrappers.append((extension, command, wrapper,
                                                 opts))
            return wrapper
        return dec

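The opts validation above can be exercised on its own. The sketch below substitutes `ValueError` for Mercurial's `error.ProgrammingError` and uses a made-up `--babar` flag in fancyopts tuple form (shortname, longname, default, help):

```python
# Sketch of wrapcommand()'s opts validation, with ValueError standing in for
# mercurial's error.ProgrammingError.
def checkopts(opts):
    for opt in opts:
        if not isinstance(opt, tuple):
            raise ValueError('opts must be list of tuples')
        if len(opt) not in (4, 5):
            raise ValueError('each opt tuple must contain 4 or 5 values')

# A well-formed fancyopts-style entry: (shortname, longname, default, help)
checkopts([('', 'babar', False, 'enable the hypothetical babar mode')])

try:
    checkopts([('', 'babar', False)])  # too short: only 3 values
    failed = False
except ValueError as exc:
    failed = True
    message = str(exc)
```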
    def wrapfunction(self, container, funcname):
        """Decorated function is a function wrapper

        This function takes two arguments, the container and the name of the
        function to wrap. The wrapping is performed during `uisetup`.
        (there is no extension support)

        example::

            @eh.wrapfunction(discovery, 'checkheads')
            def wrapcheckheads(orig, *args, **kwargs):
                ui.note('His head smashed in and his heart cut out')
                return orig(*args, **kwargs)
        """
        def dec(wrapper):
            self._functionwrappers.append((container, funcname, wrapper))
            return wrapper
        return dec

    def addattr(self, container, funcname):
        """Decorated function is to be added to the container

        This function takes two arguments, the container and the name of the
        function to wrap. The wrapping is performed during `uisetup`.

        Adding attributes to a container like this is discouraged, because
        the container modification is visible even in repositories that do
        not have the extension loaded. Therefore, care must be taken that
        the function doesn't make assumptions that the extension was loaded
        for the current repository. For `ui` and `repo` instances, a better
        option is to subclass the instance in `uipopulate` and `reposetup`
        respectively.

        https://www.mercurial-scm.org/wiki/WritingExtensions

        example::

            @eh.addattr(context.changectx, 'babar')
            def babar(ctx):
                return 'babar' in ctx.description()
        """
        def dec(func):
            self._duckpunchers.append((container, funcname, func))
            return func
        return dec
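
The duck punching that `addattr()` registers amounts to a `setattr()` on the class at `uisetup` time, which is exactly why the change leaks into repositories that never loaded the extension. A standalone sketch, with `fakectx` standing in for something like `context.changectx`:

```python
# Sketch of the duck punching performed in finaluisetup(): attaching a
# function to a class with setattr() makes it a method on every instance,
# whether or not "the extension" is loaded for that instance.
class fakectx(object):
    def __init__(self, text):
        self._text = text

    def description(self):
        return self._text

def babar(ctx):
    return 'babar' in ctx.description()

setattr(fakectx, 'babar', babar)

# The attribute is now visible on any fakectx, old or new.
hit = fakectx('babar was here').babar()
miss = fakectx('nothing to see').babar()
```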