lfs: add repository feature denoting the use of LFS...
Gregory Szorc -
r39887:1f7b3b98 default
@@ -1,401 +1,405
# lfs - hash-preserving large file support using Git-LFS protocol
#
# Copyright 2017 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""lfs - large file support (EXPERIMENTAL)

This extension allows large files to be tracked outside of the normal
repository storage and stored on a centralized server, similar to the
``largefiles`` extension. The ``git-lfs`` protocol is used when
communicating with the server, so existing git infrastructure can be
harnessed. Even though the files are stored outside of the repository,
they are still integrity checked in the same manner as normal files.

The files stored outside of the repository are downloaded on demand,
which reduces the time to clone, and possibly the local disk usage.
This changes fundamental workflows in a DVCS, so careful thought
should be given before deploying it. :hg:`convert` can be used to
convert LFS repositories to normal repositories that no longer
require this extension, and do so without changing the commit hashes.
This allows the extension to be disabled if the centralized workflow
becomes burdensome. However, the pre and post convert clones will
not be able to communicate with each other unless the extension is
enabled on both.

To start a new repository, or to add LFS files to an existing one, just
create an ``.hglfs`` file as described below in the root directory of
the repository. Typically, this file should be put under version
control, so that the settings will propagate to other repositories with
push and pull. During any commit, Mercurial will consult this file to
determine if an added or modified file should be stored externally. The
type of storage depends on the characteristics of the file at each
commit. A file that is near a size threshold may switch back and forth
between LFS and normal storage, as needed.

Alternately, both normal repositories and largefile controlled
repositories can be converted to LFS by using :hg:`convert` and the
``lfs.track`` config option described below. The ``.hglfs`` file
should then be created and added, to control subsequent LFS selection.
The hashes are also unchanged in this case. The LFS and non-LFS
repositories can be distinguished because the LFS repository will
abort any command if this extension is disabled.

Committed LFS files are held locally, until the repository is pushed.
Prior to pushing the normal repository data, the LFS files that are
tracked by the outgoing commits are automatically uploaded to the
configured central server. No LFS files are transferred on
:hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
demand as they need to be read, if a cached copy cannot be found
locally. Both committing and downloading an LFS file will link the
file to a usercache, to speed up future access. See the `usercache`
config setting described below.

.hglfs::

    The extension reads its configuration from a versioned ``.hglfs``
    configuration file found in the root of the working directory. The
    ``.hglfs`` file uses the same syntax as all other Mercurial
    configuration files. It uses a single section, ``[track]``.

    The ``[track]`` section specifies which files are stored as LFS (or
    not). Each line is keyed by a file pattern, with a predicate value.
    The first file pattern match is used, so put more specific patterns
    first. The available predicates are ``all()``, ``none()``, and
    ``size()``. See "hg help filesets.size" for the latter.

    Example versioned ``.hglfs`` file::

        [track]
        # No Makefile or python file, anywhere, will be LFS
        **Makefile = none()
        **.py = none()

        **.zip = all()
        **.exe = size(">1MB")

        # Catchall for everything not matched above
        ** = size(">10MB")

Configs::

    [lfs]
    # Remote endpoint. Multiple protocols are supported:
    # - http(s)://user:pass@example.com/path
    #   git-lfs endpoint
    # - file:///tmp/path
    #   local filesystem, usually for testing
    # if unset, lfs will assume the remote repository also handles blob storage
    # for http(s) URLs. Otherwise, lfs will prompt to set this when it must
    # use this value.
    # (default: unset)
    url = https://example.com/repo.git/info/lfs

    # Which files to track in LFS. Path tests are "**.extname" for file
    # extensions, and "path:under/some/directory" for path prefix. Both
    # are relative to the repository root.
    # File size can be tested with the "size()" fileset, and tests can be
    # joined with fileset operators. (See "hg help filesets.operators".)
    #
    # Some examples:
    # - all()                       # everything
    # - none()                      # nothing
    # - size(">20MB")               # larger than 20MB
    # - !**.txt                     # anything not a *.txt file
    # - **.zip | **.tar.gz | **.7z  # some types of compressed files
    # - path:bin                    # files under "bin" in the project root
    # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
    #     | (path:bin & !path:/bin/README) | size(">1GB")
    # (default: none())
    #
    # This is ignored if there is a tracked '.hglfs' file, and this setting
    # will eventually be deprecated and removed.
    track = size(">10M")

    # how many times to retry before giving up on transferring an object
    retry = 5

    # the local directory to store lfs files for sharing across local clones.
    # If not set, the cache is located in an OS specific cache location.
    usercache = /path/to/global/cache
"""
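The first-match-wins semantics of the ``[track]`` section can be sketched in plain Python. This is an illustrative model only: the suffix matching and predicate helpers below are simplified stand-ins for the extension's real ``minifileset`` pattern compiler, not its API.

```python
# Simplified model of .hglfs [track] evaluation: the first pattern that
# matches a path decides, so specific rules must come before the catchall.
def none():
    return lambda path, size: False

def all_():
    return lambda path, size: True

def size_over(limit):
    return lambda path, size: size > limit

MB = 1024 * 1024

# Mirrors the example .hglfs file above: (pattern suffix, predicate) pairs,
# with '' standing in for the '**' catchall that matches everything.
rules = [
    ('Makefile', none()),
    ('.py', none()),
    ('.zip', all_()),
    ('.exe', size_over(1 * MB)),
    ('', size_over(10 * MB)),
]

def tracked_by_lfs(path, size):
    # First matching pattern wins; its predicate gives the final answer.
    for suffix, predicate in rules:
        if path.endswith(suffix):
            return predicate(path, size)
    return False
```

Note how a huge ``.py`` file is still excluded: its ``none()`` rule matches before the size catchall is ever consulted.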

from __future__ import absolute_import

from mercurial.i18n import _

from mercurial import (
    bundle2,
    changegroup,
    cmdutil,
    config,
    context,
    error,
    exchange,
    extensions,
    filelog,
    filesetlang,
    hg,
    localrepo,
    minifileset,
    node,
    pycompat,
    registrar,
    repository,
    revlog,
    scmutil,
    templateutil,
    upgrade,
    util,
    vfs as vfsmod,
    wireprotoserver,
    wireprotov1server,
)

from . import (
    blobstore,
    wireprotolfsserver,
    wrapper,
)

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

configtable = {}
configitem = registrar.configitem(configtable)

configitem('experimental', 'lfs.serve',
    default=True,
)
configitem('experimental', 'lfs.user-agent',
    default=None,
)
configitem('experimental', 'lfs.disableusercache',
    default=False,
)
configitem('experimental', 'lfs.worker-enable',
    default=False,
)

configitem('lfs', 'url',
    default=None,
)
configitem('lfs', 'usercache',
    default=None,
)
# Deprecated
configitem('lfs', 'threshold',
    default=None,
)
configitem('lfs', 'track',
    default='none()',
)
configitem('lfs', 'retry',
    default=5,
)

cmdtable = {}
command = registrar.command(cmdtable)

templatekeyword = registrar.templatekeyword()
filesetpredicate = registrar.filesetpredicate()

def featuresetup(ui, supported):
    # don't die on seeing a repo with the lfs requirement
    supported |= {'lfs'}

def uisetup(ui):
    localrepo.featuresetupfuncs.add(featuresetup)

def reposetup(ui, repo):
    # Nothing to do with a remote repo
    if not repo.local():
        return

    repo.svfs.lfslocalblobstore = blobstore.local(repo)
    repo.svfs.lfsremoteblobstore = blobstore.remote(repo)

    class lfsrepo(repo.__class__):
        @localrepo.unfilteredmethod
        def commitctx(self, ctx, error=False):
            repo.svfs.options['lfstrack'] = _trackedmatcher(self)
            return super(lfsrepo, self).commitctx(ctx, error)

    repo.__class__ = lfsrepo

    if 'lfs' not in repo.requirements:
        def checkrequireslfs(ui, repo, **kwargs):
            if 'lfs' not in repo.requirements:
                last = kwargs.get(r'node_last')
                _bin = node.bin
                if last:
                    s = repo.set('%n:%n', _bin(kwargs[r'node']), _bin(last))
                else:
                    s = repo.set('%n', _bin(kwargs[r'node']))
                match = repo.narrowmatch()
                for ctx in s:
                    # TODO: is there a way to just walk the files in the commit?
                    if any(ctx[f].islfs() for f in ctx.files()
                           if f in ctx and match(f)):
                        repo.requirements.add('lfs')
                        repo.features.add(repository.REPO_FEATURE_LFS)
                        repo._writerequirements()
                        repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
                        break

        ui.setconfig('hooks', 'commit.lfs', checkrequireslfs, 'lfs')
        ui.setconfig('hooks', 'pretxnchangegroup.lfs', checkrequireslfs, 'lfs')
    else:
        repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)

def _trackedmatcher(repo):
    """Return a function (path, size) -> bool indicating whether or not to
    track a given file with lfs."""
    if not repo.wvfs.exists('.hglfs'):
        # No '.hglfs' in wdir. Fallback to config for now.
        trackspec = repo.ui.config('lfs', 'track')

        # deprecated config: lfs.threshold
        threshold = repo.ui.configbytes('lfs', 'threshold')
        if threshold:
            filesetlang.parse(trackspec)  # make sure syntax errors are confined
            trackspec = "(%s) | size('>%d')" % (trackspec, threshold)

        return minifileset.compile(trackspec)

    data = repo.wvfs.tryread('.hglfs')
    if not data:
        return lambda p, s: False

    # Parse errors here will abort with a message that points to the .hglfs file
    # and line number.
    cfg = config.config()
    cfg.parse('.hglfs', data)

    try:
        rules = [(minifileset.compile(pattern), minifileset.compile(rule))
                 for pattern, rule in cfg.items('track')]
    except error.ParseError as e:
        # The original exception gives no indicator that the error is in the
        # .hglfs file, so add that.

        # TODO: See if the line number of the file can be made available.
        raise error.Abort(_('parse error in .hglfs: %s') % e)

    def _match(path, size):
        for pat, rule in rules:
            if pat(path, size):
                return rule(path, size)

        return False

    return _match

def wrapfilelog(filelog):
    wrapfunction = extensions.wrapfunction

    wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
    wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
    wrapfunction(filelog, 'size', wrapper.filelogsize)

def extsetup(ui):
    wrapfilelog(filelog.filelog)

    wrapfunction = extensions.wrapfunction

    wrapfunction(localrepo, 'makefilestorage', wrapper.localrepomakefilestorage)

    wrapfunction(cmdutil, '_updatecatformatter', wrapper._updatecatformatter)
    wrapfunction(scmutil, 'wrapconvertsink', wrapper.convertsink)

    wrapfunction(upgrade, '_finishdatamigration',
                 wrapper.upgradefinishdatamigration)

    wrapfunction(upgrade, 'preservedrequirements',
                 wrapper.upgraderequirements)

    wrapfunction(upgrade, 'supporteddestrequirements',
                 wrapper.upgraderequirements)

    wrapfunction(changegroup,
                 'allsupportedversions',
                 wrapper.allsupportedversions)

    wrapfunction(exchange, 'push', wrapper.push)
    wrapfunction(wireprotov1server, '_capabilities', wrapper._capabilities)
    wrapfunction(wireprotoserver, 'handlewsgirequest',
                 wireprotolfsserver.handlewsgirequest)

    wrapfunction(context.basefilectx, 'cmp', wrapper.filectxcmp)
    wrapfunction(context.basefilectx, 'isbinary', wrapper.filectxisbinary)
    context.basefilectx.islfs = wrapper.filectxislfs

    revlog.addflagprocessor(
        revlog.REVIDX_EXTSTORED,
        (
            wrapper.readfromstore,
            wrapper.writetostore,
            wrapper.bypasscheckhash,
        ),
    )

    wrapfunction(hg, 'clone', wrapper.hgclone)
    wrapfunction(hg, 'postshare', wrapper.hgpostshare)

    scmutil.fileprefetchhooks.add('lfs', wrapper._prefetchfiles)

    # Make bundle choose changegroup3 instead of changegroup2. This affects
    # "hg bundle" command. Note: it does not cover all bundle formats like
    # "packed1". Using "packed1" with lfs will likely cause trouble.
    exchange._bundlespeccontentopts["v2"]["cg.version"] = "03"

    # bundlerepo uses "vfsmod.readonlyvfs(othervfs)", we need to make sure lfs
    # options and blob stores are passed from othervfs to the new readonlyvfs.
    wrapfunction(vfsmod.readonlyvfs, '__init__', wrapper.vfsinit)

    # when writing a bundle via "hg bundle" command, upload related LFS blobs
    wrapfunction(bundle2, 'writenewbundle', wrapper.writenewbundle)

@filesetpredicate('lfs()')
def lfsfileset(mctx, x):
    """File that uses LFS storage."""
    # i18n: "lfs" is a keyword
    filesetlang.getargs(x, 0, 0, _("lfs takes no arguments"))
    ctx = mctx.ctx
    def lfsfilep(f):
        return wrapper.pointerfromctx(ctx, f, removed=True) is not None
    return mctx.predicate(lfsfilep, predrepr='<lfs>')

@templatekeyword('lfs_files', requires={'ctx'})
def lfsfiles(context, mapping):
    """List of strings. All files modified, added, or removed by this
    changeset."""
    ctx = context.resource(mapping, 'ctx')

    pointers = wrapper.pointersfromctx(ctx, removed=True)  # {path: pointer}
    files = sorted(pointers.keys())

    def pointer(v):
        # In the file spec, version is first and the other keys are sorted.
        sortkeyfunc = lambda x: (x[0] != 'version', x)
        items = sorted(pointers[v].iteritems(), key=sortkeyfunc)
        return util.sortdict(items)

    makemap = lambda v: {
        'file': v,
        'lfsoid': pointers[v].oid() if pointers[v] else None,
        'lfspointer': templateutil.hybriddict(pointer(v)),
    }

    # TODO: make the separator ', '?
    f = templateutil._showcompatlist(context, mapping, 'lfs_file', files)
    return templateutil.hybrid(f, files, makemap, pycompat.identity)

@command('debuglfsupload',
         [('r', 'rev', [], _('upload large files introduced by REV'))])
def debuglfsupload(ui, repo, **opts):
    """upload lfs blobs added by the working copy parent or given revisions"""
    revs = opts.get(r'rev', [])
    pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
    wrapper.uploadblobs(repo, pointers)
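The core idea of this commit is the split it relies on: the 'lfs' entry in `repo.requirements` is a hard on-disk gate (clients that don't understand it must refuse to open the repo), while the new `repository.REPO_FEATURE_LFS` entry in `repo.features` is a runtime capability other code can probe. A minimal standalone model of that distinction, with illustrative names rather than Mercurial's actual classes:

```python
# Minimal model of the requirements/features split.
# 'requirements' gates whether the repo may be opened at all;
# 'features' advertises capabilities that code can probe at runtime.
SUPPORTED_REQUIREMENTS = {'revlogv1', 'store', 'lfs'}
REPO_FEATURE_LFS = 'lfs'

class Repo:
    def __init__(self, requirements):
        # Unknown requirements abort, mirroring Mercurial's behavior when
        # an extension-provided requirement is present but not supported.
        unknown = set(requirements) - SUPPORTED_REQUIREMENTS
        if unknown:
            raise RuntimeError('abort: unknown requirements: %s'
                               % ', '.join(sorted(unknown)))
        self.requirements = set(requirements)
        self.features = set()
        # Mirror the commit: the on-disk requirement implies the feature.
        if 'lfs' in self.requirements:
            self.features.add(REPO_FEATURE_LFS)

repo = Repo({'revlogv1', 'store', 'lfs'})
```

Code that wants to know "does this repo use LFS?" can then test membership in `features` without caring how the requirement got there.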
@@ -1,425 +1,432
# wrapper.py - methods wrapping core mercurial logic
#
# Copyright 2017 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import hashlib

from mercurial.i18n import _
from mercurial.node import bin, hex, nullid, short

from mercurial import (
    error,
    repository,
    revlog,
    util,
)

from mercurial.utils import (
    stringutil,
)

from ..largefiles import lfutil

from . import (
    blobstore,
    pointer,
)

def localrepomakefilestorage(orig, requirements, features, **kwargs):
    if b'lfs' in requirements:
        features.add(repository.REPO_FEATURE_LFS)

    return orig(requirements=requirements, features=features, **kwargs)

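The new function above follows Mercurial's `extensions.wrapfunction` convention: the replacement receives the original callable as `orig`, adjusts one argument (the features set), and delegates the rest. A self-contained sketch of that wrapping pattern, using a plain dict as the "module" and generic names rather than the hg API:

```python
# Generic sketch of the orig-delegation wrapper pattern used above.
def makefilestorage(requirements, features):
    # Stand-in for the core factory being wrapped: just records its inputs.
    return {'requirements': set(requirements), 'features': set(features)}

def lfswrapper(orig, requirements, features):
    # The wrapper adds the LFS feature when the requirement is present,
    # then calls through to the wrapped original.
    if 'lfs' in requirements:
        features = set(features) | {'REPO_FEATURE_LFS'}
    return orig(requirements, features)

def wrapfunction(container, name, wrapper):
    # Minimal re-implementation of the wrapping helper: capture the
    # original and install a closure that prepends it as 'orig'.
    orig = container[name]
    container[name] = lambda *args, **kwargs: wrapper(orig, *args, **kwargs)

registry = {'makefilestorage': makefilestorage}
wrapfunction(registry, 'makefilestorage', lfswrapper)

storage = registry['makefilestorage']({'lfs', 'store'}, set())
```

Because the wrapper always ends by calling `orig`, several extensions can stack wrappers on the same function and each sees the composition of the ones installed after it.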
def allsupportedversions(orig, ui):
    versions = orig(ui)
    versions.add('03')
    return versions

def _capabilities(orig, repo, proto):
    '''Wrap server command to announce lfs server capability'''
    caps = orig(repo, proto)
    if util.safehasattr(repo.svfs, 'lfslocalblobstore'):
        # XXX: change to 'lfs=serve' when separate git server isn't required?
        caps.append('lfs')
    return caps

def bypasscheckhash(self, text):
    return False

def readfromstore(self, text):
    """Read filelog content from local blobstore transform for flagprocessor.

    Default transform for flagprocessor, returning contents from blobstore.
    Returns a 2-tuple (text, validatehash) where validatehash is True as the
    contents of the blobstore should be checked using checkhash.
    """
    p = pointer.deserialize(text)
    oid = p.oid()
    store = self.opener.lfslocalblobstore
    if not store.has(oid):
        p.filename = self.filename
        self.opener.lfsremoteblobstore.readbatch([p], store)

    # The caller will validate the content
    text = store.read(oid, verify=False)

    # pack hg filelog metadata
    hgmeta = {}
    for k in p.keys():
        if k.startswith('x-hg-'):
            name = k[len('x-hg-'):]
            hgmeta[name] = p[k]
    if hgmeta or text.startswith('\1\n'):
        text = revlog.packmeta(hgmeta, text)

    return (text, True)

def writetostore(self, text):
    # hg filelog metadata (includes rename, etc)
    hgmeta, offset = revlog.parsemeta(text)
    if offset and offset > 0:
        # lfs blob does not contain hg filelog metadata
        text = text[offset:]

    # git-lfs only supports sha256
    oid = hex(hashlib.sha256(text).digest())
    self.opener.lfslocalblobstore.write(oid, text)

    # replace contents with metadata
    longoid = 'sha256:%s' % oid
    metadata = pointer.gitlfspointer(oid=longoid, size='%d' % len(text))

    # by default, we expect the content to be binary. however, LFS could also
91 # by default, we expect the content to be binary. however, LFS could also
98 # by default, we expect the content to be binary. however, LFS could also
92 # be used for non-binary content. add a special entry for non-binary data.
99 # be used for non-binary content. add a special entry for non-binary data.
93 # this will be used by filectx.isbinary().
100 # this will be used by filectx.isbinary().
94 if not stringutil.binary(text):
101 if not stringutil.binary(text):
95 # not hg filelog metadata (affecting commit hash), no "x-hg-" prefix
102 # not hg filelog metadata (affecting commit hash), no "x-hg-" prefix
96 metadata['x-is-binary'] = '0'
103 metadata['x-is-binary'] = '0'
97
104
98 # translate hg filelog metadata to lfs metadata with "x-hg-" prefix
105 # translate hg filelog metadata to lfs metadata with "x-hg-" prefix
99 if hgmeta is not None:
106 if hgmeta is not None:
100 for k, v in hgmeta.iteritems():
107 for k, v in hgmeta.iteritems():
101 metadata['x-hg-%s' % k] = v
108 metadata['x-hg-%s' % k] = v
102
109
103 rawtext = metadata.serialize()
110 rawtext = metadata.serialize()
104 return (rawtext, False)
111 return (rawtext, False)
105
112
106 def _islfs(rlog, node=None, rev=None):
113 def _islfs(rlog, node=None, rev=None):
107 if rev is None:
114 if rev is None:
108 if node is None:
115 if node is None:
109 # both None - likely working copy content where node is not ready
116 # both None - likely working copy content where node is not ready
110 return False
117 return False
111 rev = rlog.rev(node)
118 rev = rlog.rev(node)
112 else:
119 else:
113 node = rlog.node(rev)
120 node = rlog.node(rev)
114 if node == nullid:
121 if node == nullid:
115 return False
122 return False
116 flags = rlog.flags(rev)
123 flags = rlog.flags(rev)
117 return bool(flags & revlog.REVIDX_EXTSTORED)
124 return bool(flags & revlog.REVIDX_EXTSTORED)
118
125
119 def filelogaddrevision(orig, self, text, transaction, link, p1, p2,
126 def filelogaddrevision(orig, self, text, transaction, link, p1, p2,
120 cachedelta=None, node=None,
127 cachedelta=None, node=None,
121 flags=revlog.REVIDX_DEFAULT_FLAGS, **kwds):
128 flags=revlog.REVIDX_DEFAULT_FLAGS, **kwds):
122 # The matcher isn't available if reposetup() wasn't called.
129 # The matcher isn't available if reposetup() wasn't called.
123 lfstrack = self.opener.options.get('lfstrack')
130 lfstrack = self.opener.options.get('lfstrack')
124
131
125 if lfstrack:
132 if lfstrack:
126 textlen = len(text)
133 textlen = len(text)
127 # exclude hg rename meta from file size
134 # exclude hg rename meta from file size
128 meta, offset = revlog.parsemeta(text)
135 meta, offset = revlog.parsemeta(text)
129 if offset:
136 if offset:
130 textlen -= offset
137 textlen -= offset
131
138
132 if lfstrack(self.filename, textlen):
139 if lfstrack(self.filename, textlen):
133 flags |= revlog.REVIDX_EXTSTORED
140 flags |= revlog.REVIDX_EXTSTORED
134
141
135 return orig(self, text, transaction, link, p1, p2, cachedelta=cachedelta,
142 return orig(self, text, transaction, link, p1, p2, cachedelta=cachedelta,
136 node=node, flags=flags, **kwds)
143 node=node, flags=flags, **kwds)
137
144
138 def filelogrenamed(orig, self, node):
145 def filelogrenamed(orig, self, node):
139 if _islfs(self, node):
146 if _islfs(self, node):
140 rawtext = self.revision(node, raw=True)
147 rawtext = self.revision(node, raw=True)
141 if not rawtext:
148 if not rawtext:
142 return False
149 return False
143 metadata = pointer.deserialize(rawtext)
150 metadata = pointer.deserialize(rawtext)
144 if 'x-hg-copy' in metadata and 'x-hg-copyrev' in metadata:
151 if 'x-hg-copy' in metadata and 'x-hg-copyrev' in metadata:
145 return metadata['x-hg-copy'], bin(metadata['x-hg-copyrev'])
152 return metadata['x-hg-copy'], bin(metadata['x-hg-copyrev'])
146 else:
153 else:
147 return False
154 return False
148 return orig(self, node)
155 return orig(self, node)
149
156
150 def filelogsize(orig, self, rev):
157 def filelogsize(orig, self, rev):
151 if _islfs(self, rev=rev):
158 if _islfs(self, rev=rev):
152 # fast path: use lfs metadata to answer size
159 # fast path: use lfs metadata to answer size
153 rawtext = self.revision(rev, raw=True)
160 rawtext = self.revision(rev, raw=True)
154 metadata = pointer.deserialize(rawtext)
161 metadata = pointer.deserialize(rawtext)
155 return int(metadata['size'])
162 return int(metadata['size'])
156 return orig(self, rev)
163 return orig(self, rev)
157
164
158 def filectxcmp(orig, self, fctx):
165 def filectxcmp(orig, self, fctx):
159 """returns True if text is different than fctx"""
166 """returns True if text is different than fctx"""
160 # some fctx (ex. hg-git) is not based on basefilectx and do not have islfs
167 # some fctx (ex. hg-git) is not based on basefilectx and do not have islfs
161 if self.islfs() and getattr(fctx, 'islfs', lambda: False)():
168 if self.islfs() and getattr(fctx, 'islfs', lambda: False)():
162 # fast path: check LFS oid
169 # fast path: check LFS oid
163 p1 = pointer.deserialize(self.rawdata())
170 p1 = pointer.deserialize(self.rawdata())
164 p2 = pointer.deserialize(fctx.rawdata())
171 p2 = pointer.deserialize(fctx.rawdata())
165 return p1.oid() != p2.oid()
172 return p1.oid() != p2.oid()
166 return orig(self, fctx)
173 return orig(self, fctx)
167
174
168 def filectxisbinary(orig, self):
175 def filectxisbinary(orig, self):
169 if self.islfs():
176 if self.islfs():
170 # fast path: use lfs metadata to answer isbinary
177 # fast path: use lfs metadata to answer isbinary
171 metadata = pointer.deserialize(self.rawdata())
178 metadata = pointer.deserialize(self.rawdata())
172 # if lfs metadata says nothing, assume it's binary by default
179 # if lfs metadata says nothing, assume it's binary by default
173 return bool(int(metadata.get('x-is-binary', 1)))
180 return bool(int(metadata.get('x-is-binary', 1)))
174 return orig(self)
181 return orig(self)
175
182
176 def filectxislfs(self):
183 def filectxislfs(self):
177 return _islfs(self.filelog(), self.filenode())
184 return _islfs(self.filelog(), self.filenode())
178
185
179 def _updatecatformatter(orig, fm, ctx, matcher, path, decode):
186 def _updatecatformatter(orig, fm, ctx, matcher, path, decode):
180 orig(fm, ctx, matcher, path, decode)
187 orig(fm, ctx, matcher, path, decode)
181 fm.data(rawdata=ctx[path].rawdata())
188 fm.data(rawdata=ctx[path].rawdata())
182
189
183 def convertsink(orig, sink):
190 def convertsink(orig, sink):
184 sink = orig(sink)
191 sink = orig(sink)
185 if sink.repotype == 'hg':
192 if sink.repotype == 'hg':
186 class lfssink(sink.__class__):
193 class lfssink(sink.__class__):
187 def putcommit(self, files, copies, parents, commit, source, revmap,
194 def putcommit(self, files, copies, parents, commit, source, revmap,
188 full, cleanp2):
195 full, cleanp2):
189 pc = super(lfssink, self).putcommit
196 pc = super(lfssink, self).putcommit
190 node = pc(files, copies, parents, commit, source, revmap, full,
197 node = pc(files, copies, parents, commit, source, revmap, full,
191 cleanp2)
198 cleanp2)
192
199
193 if 'lfs' not in self.repo.requirements:
200 if 'lfs' not in self.repo.requirements:
194 ctx = self.repo[node]
201 ctx = self.repo[node]
195
202
196 # The file list may contain removed files, so check for
203 # The file list may contain removed files, so check for
197 # membership before assuming it is in the context.
204 # membership before assuming it is in the context.
198 if any(f in ctx and ctx[f].islfs() for f, n in files):
205 if any(f in ctx and ctx[f].islfs() for f, n in files):
199 self.repo.requirements.add('lfs')
206 self.repo.requirements.add('lfs')
200 self.repo._writerequirements()
207 self.repo._writerequirements()
201
208
202 # Permanently enable lfs locally
209 # Permanently enable lfs locally
203 self.repo.vfs.append(
210 self.repo.vfs.append(
204 'hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))
211 'hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))
205
212
206 return node
213 return node
207
214
208 sink.__class__ = lfssink
215 sink.__class__ = lfssink
209
216
210 return sink
217 return sink
211
218
212 def vfsinit(orig, self, othervfs):
219 def vfsinit(orig, self, othervfs):
213 orig(self, othervfs)
220 orig(self, othervfs)
214 # copy lfs related options
221 # copy lfs related options
215 for k, v in othervfs.options.items():
222 for k, v in othervfs.options.items():
216 if k.startswith('lfs'):
223 if k.startswith('lfs'):
217 self.options[k] = v
224 self.options[k] = v
218 # also copy lfs blobstores. note: this can run before reposetup, so lfs
225 # also copy lfs blobstores. note: this can run before reposetup, so lfs
219 # blobstore attributes are not always ready at this time.
226 # blobstore attributes are not always ready at this time.
220 for name in ['lfslocalblobstore', 'lfsremoteblobstore']:
227 for name in ['lfslocalblobstore', 'lfsremoteblobstore']:
221 if util.safehasattr(othervfs, name):
228 if util.safehasattr(othervfs, name):
222 setattr(self, name, getattr(othervfs, name))
229 setattr(self, name, getattr(othervfs, name))
223
230
224 def hgclone(orig, ui, opts, *args, **kwargs):
231 def hgclone(orig, ui, opts, *args, **kwargs):
225 result = orig(ui, opts, *args, **kwargs)
232 result = orig(ui, opts, *args, **kwargs)
226
233
227 if result is not None:
234 if result is not None:
228 sourcerepo, destrepo = result
235 sourcerepo, destrepo = result
229 repo = destrepo.local()
236 repo = destrepo.local()
230
237
231 # When cloning to a remote repo (like through SSH), no repo is available
238 # When cloning to a remote repo (like through SSH), no repo is available
232 # from the peer. Therefore the hgrc can't be updated.
239 # from the peer. Therefore the hgrc can't be updated.
233 if not repo:
240 if not repo:
234 return result
241 return result
235
242
236 # If lfs is required for this repo, permanently enable it locally
243 # If lfs is required for this repo, permanently enable it locally
237 if 'lfs' in repo.requirements:
244 if 'lfs' in repo.requirements:
238 repo.vfs.append('hgrc',
245 repo.vfs.append('hgrc',
239 util.tonativeeol('\n[extensions]\nlfs=\n'))
246 util.tonativeeol('\n[extensions]\nlfs=\n'))
240
247
241 return result
248 return result
242
249
243 def hgpostshare(orig, sourcerepo, destrepo, defaultpath=None):
250 def hgpostshare(orig, sourcerepo, destrepo, defaultpath=None):
244 orig(sourcerepo, destrepo, defaultpath=defaultpath)
251 orig(sourcerepo, destrepo, defaultpath=defaultpath)
245
252
246 # If lfs is required for this repo, permanently enable it locally
253 # If lfs is required for this repo, permanently enable it locally
247 if 'lfs' in destrepo.requirements:
254 if 'lfs' in destrepo.requirements:
248 destrepo.vfs.append('hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))
255 destrepo.vfs.append('hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))
249
256
250 def _prefetchfiles(repo, revs, match):
257 def _prefetchfiles(repo, revs, match):
251 """Ensure that required LFS blobs are present, fetching them as a group if
258 """Ensure that required LFS blobs are present, fetching them as a group if
252 needed."""
259 needed."""
253 if not util.safehasattr(repo.svfs, 'lfslocalblobstore'):
260 if not util.safehasattr(repo.svfs, 'lfslocalblobstore'):
254 return
261 return
255
262
256 pointers = []
263 pointers = []
257 oids = set()
264 oids = set()
258 localstore = repo.svfs.lfslocalblobstore
265 localstore = repo.svfs.lfslocalblobstore
259
266
260 for rev in revs:
267 for rev in revs:
261 ctx = repo[rev]
268 ctx = repo[rev]
262 for f in ctx.walk(match):
269 for f in ctx.walk(match):
263 p = pointerfromctx(ctx, f)
270 p = pointerfromctx(ctx, f)
264 if p and p.oid() not in oids and not localstore.has(p.oid()):
271 if p and p.oid() not in oids and not localstore.has(p.oid()):
265 p.filename = f
272 p.filename = f
266 pointers.append(p)
273 pointers.append(p)
267 oids.add(p.oid())
274 oids.add(p.oid())
268
275
269 if pointers:
276 if pointers:
270 # Recalculating the repo store here allows 'paths.default' that is set
277 # Recalculating the repo store here allows 'paths.default' that is set
271 # on the repo by a clone command to be used for the update.
278 # on the repo by a clone command to be used for the update.
272 blobstore.remote(repo).readbatch(pointers, localstore)
279 blobstore.remote(repo).readbatch(pointers, localstore)
273
280
274 def _canskipupload(repo):
281 def _canskipupload(repo):
275 # Skip if this hasn't been passed to reposetup()
282 # Skip if this hasn't been passed to reposetup()
276 if not util.safehasattr(repo.svfs, 'lfsremoteblobstore'):
283 if not util.safehasattr(repo.svfs, 'lfsremoteblobstore'):
277 return True
284 return True
278
285
279 # if remotestore is a null store, upload is a no-op and can be skipped
286 # if remotestore is a null store, upload is a no-op and can be skipped
280 return isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
287 return isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
281
288
282 def candownload(repo):
289 def candownload(repo):
283 # Skip if this hasn't been passed to reposetup()
290 # Skip if this hasn't been passed to reposetup()
284 if not util.safehasattr(repo.svfs, 'lfsremoteblobstore'):
291 if not util.safehasattr(repo.svfs, 'lfsremoteblobstore'):
285 return False
292 return False
286
293
287 # if remotestore is a null store, downloads will lead to nothing
294 # if remotestore is a null store, downloads will lead to nothing
288 return not isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
295 return not isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
289
296
290 def uploadblobsfromrevs(repo, revs):
297 def uploadblobsfromrevs(repo, revs):
291 '''upload lfs blobs introduced by revs
298 '''upload lfs blobs introduced by revs
292
299
293 Note: also used by other extensions e. g. infinitepush. avoid renaming.
300 Note: also used by other extensions e. g. infinitepush. avoid renaming.
294 '''
301 '''
295 if _canskipupload(repo):
302 if _canskipupload(repo):
296 return
303 return
297 pointers = extractpointers(repo, revs)
304 pointers = extractpointers(repo, revs)
298 uploadblobs(repo, pointers)
305 uploadblobs(repo, pointers)
299
306
300 def prepush(pushop):
307 def prepush(pushop):
301 """Prepush hook.
308 """Prepush hook.
302
309
303 Read through the revisions to push, looking for filelog entries that can be
310 Read through the revisions to push, looking for filelog entries that can be
304 deserialized into metadata so that we can block the push on their upload to
311 deserialized into metadata so that we can block the push on their upload to
305 the remote blobstore.
312 the remote blobstore.
306 """
313 """
307 return uploadblobsfromrevs(pushop.repo, pushop.outgoing.missing)
314 return uploadblobsfromrevs(pushop.repo, pushop.outgoing.missing)
308
315
309 def push(orig, repo, remote, *args, **kwargs):
316 def push(orig, repo, remote, *args, **kwargs):
310 """bail on push if the extension isn't enabled on remote when needed, and
317 """bail on push if the extension isn't enabled on remote when needed, and
311 update the remote store based on the destination path."""
318 update the remote store based on the destination path."""
312 if 'lfs' in repo.requirements:
319 if 'lfs' in repo.requirements:
313 # If the remote peer is for a local repo, the requirement tests in the
320 # If the remote peer is for a local repo, the requirement tests in the
314 # base class method enforce lfs support. Otherwise, some revisions in
321 # base class method enforce lfs support. Otherwise, some revisions in
315 # this repo use lfs, and the remote repo needs the extension loaded.
322 # this repo use lfs, and the remote repo needs the extension loaded.
316 if not remote.local() and not remote.capable('lfs'):
323 if not remote.local() and not remote.capable('lfs'):
317 # This is a copy of the message in exchange.push() when requirements
324 # This is a copy of the message in exchange.push() when requirements
318 # are missing between local repos.
325 # are missing between local repos.
319 m = _("required features are not supported in the destination: %s")
326 m = _("required features are not supported in the destination: %s")
320 raise error.Abort(m % 'lfs',
327 raise error.Abort(m % 'lfs',
321 hint=_('enable the lfs extension on the server'))
328 hint=_('enable the lfs extension on the server'))
322
329
323 # Repositories where this extension is disabled won't have the field.
330 # Repositories where this extension is disabled won't have the field.
324 # But if there's a requirement, then the extension must be loaded AND
331 # But if there's a requirement, then the extension must be loaded AND
325 # there may be blobs to push.
332 # there may be blobs to push.
326 remotestore = repo.svfs.lfsremoteblobstore
333 remotestore = repo.svfs.lfsremoteblobstore
327 try:
334 try:
328 repo.svfs.lfsremoteblobstore = blobstore.remote(repo, remote.url())
335 repo.svfs.lfsremoteblobstore = blobstore.remote(repo, remote.url())
329 return orig(repo, remote, *args, **kwargs)
336 return orig(repo, remote, *args, **kwargs)
330 finally:
337 finally:
331 repo.svfs.lfsremoteblobstore = remotestore
338 repo.svfs.lfsremoteblobstore = remotestore
332 else:
339 else:
333 return orig(repo, remote, *args, **kwargs)
340 return orig(repo, remote, *args, **kwargs)
334
341
335 def writenewbundle(orig, ui, repo, source, filename, bundletype, outgoing,
342 def writenewbundle(orig, ui, repo, source, filename, bundletype, outgoing,
336 *args, **kwargs):
343 *args, **kwargs):
337 """upload LFS blobs added by outgoing revisions on 'hg bundle'"""
344 """upload LFS blobs added by outgoing revisions on 'hg bundle'"""
338 uploadblobsfromrevs(repo, outgoing.missing)
345 uploadblobsfromrevs(repo, outgoing.missing)
339 return orig(ui, repo, source, filename, bundletype, outgoing, *args,
346 return orig(ui, repo, source, filename, bundletype, outgoing, *args,
340 **kwargs)
347 **kwargs)
341
348
342 def extractpointers(repo, revs):
349 def extractpointers(repo, revs):
343 """return a list of lfs pointers added by given revs"""
350 """return a list of lfs pointers added by given revs"""
344 repo.ui.debug('lfs: computing set of blobs to upload\n')
351 repo.ui.debug('lfs: computing set of blobs to upload\n')
345 pointers = {}
352 pointers = {}
346
353
347 makeprogress = repo.ui.makeprogress
354 makeprogress = repo.ui.makeprogress
348 with makeprogress(_('lfs search'), _('changesets'), len(revs)) as progress:
355 with makeprogress(_('lfs search'), _('changesets'), len(revs)) as progress:
349 for r in revs:
356 for r in revs:
350 ctx = repo[r]
357 ctx = repo[r]
351 for p in pointersfromctx(ctx).values():
358 for p in pointersfromctx(ctx).values():
352 pointers[p.oid()] = p
359 pointers[p.oid()] = p
353 progress.increment()
360 progress.increment()
354 return sorted(pointers.values())
361 return sorted(pointers.values())
355
362
356 def pointerfromctx(ctx, f, removed=False):
363 def pointerfromctx(ctx, f, removed=False):
357 """return a pointer for the named file from the given changectx, or None if
364 """return a pointer for the named file from the given changectx, or None if
358 the file isn't LFS.
365 the file isn't LFS.
359
366
360 Optionally, the pointer for a file deleted from the context can be returned.
367 Optionally, the pointer for a file deleted from the context can be returned.
361 Since no such pointer is actually stored, and to distinguish from a non LFS
368 Since no such pointer is actually stored, and to distinguish from a non LFS
362 file, this pointer is represented by an empty dict.
369 file, this pointer is represented by an empty dict.
363 """
370 """
364 _ctx = ctx
371 _ctx = ctx
365 if f not in ctx:
372 if f not in ctx:
366 if not removed:
373 if not removed:
367 return None
374 return None
368 if f in ctx.p1():
375 if f in ctx.p1():
369 _ctx = ctx.p1()
376 _ctx = ctx.p1()
370 elif f in ctx.p2():
377 elif f in ctx.p2():
371 _ctx = ctx.p2()
378 _ctx = ctx.p2()
372 else:
379 else:
373 return None
380 return None
374 fctx = _ctx[f]
381 fctx = _ctx[f]
375 if not _islfs(fctx.filelog(), fctx.filenode()):
382 if not _islfs(fctx.filelog(), fctx.filenode()):
376 return None
383 return None
377 try:
384 try:
378 p = pointer.deserialize(fctx.rawdata())
385 p = pointer.deserialize(fctx.rawdata())
379 if ctx == _ctx:
386 if ctx == _ctx:
380 return p
387 return p
381 return {}
388 return {}
382 except pointer.InvalidPointer as ex:
389 except pointer.InvalidPointer as ex:
383 raise error.Abort(_('lfs: corrupted pointer (%s@%s): %s\n')
390 raise error.Abort(_('lfs: corrupted pointer (%s@%s): %s\n')
384 % (f, short(_ctx.node()), ex))
391 % (f, short(_ctx.node()), ex))
385
392
386 def pointersfromctx(ctx, removed=False):
393 def pointersfromctx(ctx, removed=False):
387 """return a dict {path: pointer} for given single changectx.
394 """return a dict {path: pointer} for given single changectx.
388
395
389 If ``removed`` == True and the LFS file was removed from ``ctx``, the value
396 If ``removed`` == True and the LFS file was removed from ``ctx``, the value
390 stored for the path is an empty dict.
397 stored for the path is an empty dict.
391 """
398 """
392 result = {}
399 result = {}
393 for f in ctx.files():
400 for f in ctx.files():
394 p = pointerfromctx(ctx, f, removed=removed)
401 p = pointerfromctx(ctx, f, removed=removed)
395 if p is not None:
402 if p is not None:
396 result[f] = p
403 result[f] = p
397 return result
404 return result
398
405
399 def uploadblobs(repo, pointers):
406 def uploadblobs(repo, pointers):
400 """upload given pointers from local blobstore"""
407 """upload given pointers from local blobstore"""
401 if not pointers:
408 if not pointers:
402 return
409 return
403
410
404 remoteblob = repo.svfs.lfsremoteblobstore
411 remoteblob = repo.svfs.lfsremoteblobstore
405 remoteblob.writebatch(pointers, repo.svfs.lfslocalblobstore)
412 remoteblob.writebatch(pointers, repo.svfs.lfslocalblobstore)
406
413
407 def upgradefinishdatamigration(orig, ui, srcrepo, dstrepo, requirements):
414 def upgradefinishdatamigration(orig, ui, srcrepo, dstrepo, requirements):
408 orig(ui, srcrepo, dstrepo, requirements)
415 orig(ui, srcrepo, dstrepo, requirements)
409
416
410 # Skip if this hasn't been passed to reposetup()
417 # Skip if this hasn't been passed to reposetup()
411 if (util.safehasattr(srcrepo.svfs, 'lfslocalblobstore') and
418 if (util.safehasattr(srcrepo.svfs, 'lfslocalblobstore') and
412 util.safehasattr(dstrepo.svfs, 'lfslocalblobstore')):
419 util.safehasattr(dstrepo.svfs, 'lfslocalblobstore')):
413 srclfsvfs = srcrepo.svfs.lfslocalblobstore.vfs
420 srclfsvfs = srcrepo.svfs.lfslocalblobstore.vfs
414 dstlfsvfs = dstrepo.svfs.lfslocalblobstore.vfs
421 dstlfsvfs = dstrepo.svfs.lfslocalblobstore.vfs
415
422
416 for dirpath, dirs, files in srclfsvfs.walk():
423 for dirpath, dirs, files in srclfsvfs.walk():
417 for oid in files:
424 for oid in files:
418 ui.write(_('copying lfs blob %s\n') % oid)
425 ui.write(_('copying lfs blob %s\n') % oid)
419 lfutil.link(srclfsvfs.join(oid), dstlfsvfs.join(oid))
426 lfutil.link(srclfsvfs.join(oid), dstlfsvfs.join(oid))
420
427
421 def upgraderequirements(orig, repo):
428 def upgraderequirements(orig, repo):
422 reqs = orig(repo)
429 reqs = orig(repo)
423 if 'lfs' in repo.requirements:
430 if 'lfs' in repo.requirements:
424 reqs.add('lfs')
431 reqs.add('lfs')
425 return reqs
432 return reqs
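The `writetostore()` wrapper above replaces filelog content with a git-lfs pointer keyed by the blob's sha256 oid. As a rough illustration of the resulting on-disk pointer shape, here is a minimal standalone sketch; it is not the extension's own `pointer.gitlfspointer` implementation, and the key layout follows the published git-lfs pointer format:

```python
import hashlib

def makepointer(data):
    """Build a git-lfs style pointer for raw bytes (hypothetical helper)."""
    # git-lfs only supports sha256 oids, mirroring writetostore() above
    oid = hashlib.sha256(data).hexdigest()
    return (b'version https://git-lfs.github.com/spec/v1\n'
            b'oid sha256:' + oid.encode('ascii') +
            b'\nsize %d\n' % len(data))

# Example: an 11-byte blob yields a pointer naming its sha256 and size
print(makepointer(b'hello world').decode('ascii'))
```

The extension additionally serializes `x-hg-*` keys into this pointer to preserve hg filelog metadata such as copy/rename information.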
@@ -1,1632 +1,1634
1 # repository.py - Interfaces and base classes for repositories and peers.
1 # repository.py - Interfaces and base classes for repositories and peers.
2 #
2 #
3 # Copyright 2017 Gregory Szorc <gregory.szorc@gmail.com>
3 # Copyright 2017 Gregory Szorc <gregory.szorc@gmail.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 from .i18n import _
10 from .i18n import _
11 from . import (
11 from . import (
12 error,
12 error,
13 )
13 )
14 from .utils import (
14 from .utils import (
15 interfaceutil,
15 interfaceutil,
16 )
16 )
17
17
18 # When narrowing is finalized and no longer subject to format changes,
18 # When narrowing is finalized and no longer subject to format changes,
19 # we should move this to just "narrow" or similar.
19 # we should move this to just "narrow" or similar.
20 NARROW_REQUIREMENT = 'narrowhg-experimental'
20 NARROW_REQUIREMENT = 'narrowhg-experimental'
21
21
22 # Local repository feature string.
22 # Local repository feature string.
23
23
24 # Revlogs are being used for file storage.
24 # Revlogs are being used for file storage.
25 REPO_FEATURE_REVLOG_FILE_STORAGE = b'revlogfilestorage'
25 REPO_FEATURE_REVLOG_FILE_STORAGE = b'revlogfilestorage'
26 # The storage part of the repository is shared from an external source.
26 # The storage part of the repository is shared from an external source.
27 REPO_FEATURE_SHARED_STORAGE = b'sharedstore'
27 REPO_FEATURE_SHARED_STORAGE = b'sharedstore'
28 # LFS supported for backing file storage.
29 REPO_FEATURE_LFS = b'lfs'
28
30
29 class ipeerconnection(interfaceutil.Interface):
31 class ipeerconnection(interfaceutil.Interface):
30 """Represents a "connection" to a repository.
32 """Represents a "connection" to a repository.
31
33
32 This is the base interface for representing a connection to a repository.
34 This is the base interface for representing a connection to a repository.
33 It holds basic properties and methods applicable to all peer types.
35 It holds basic properties and methods applicable to all peer types.
34
36
35 This is not a complete interface definition and should not be used
37 This is not a complete interface definition and should not be used
36 outside of this module.
38 outside of this module.
37 """
39 """
38 ui = interfaceutil.Attribute("""ui.ui instance""")
40 ui = interfaceutil.Attribute("""ui.ui instance""")
39
41
40 def url():
42 def url():
41 """Returns a URL string representing this peer.
43 """Returns a URL string representing this peer.
42
44
43 Currently, implementations expose the raw URL used to construct the
45 Currently, implementations expose the raw URL used to construct the
44 instance. It may contain credentials as part of the URL. The
46 instance. It may contain credentials as part of the URL. The
45 expectations of the value aren't well-defined and this could lead to
47 expectations of the value aren't well-defined and this could lead to
46 data leakage.
48 data leakage.
47
49
48 TODO audit/clean consumers and more clearly define the contents of this
50 TODO audit/clean consumers and more clearly define the contents of this
49 value.
51 value.
50 """
52 """
51
53
52 def local():
54 def local():
53 """Returns a local repository instance.
55 """Returns a local repository instance.
54
56
55 If the peer represents a local repository, returns an object that
57 If the peer represents a local repository, returns an object that
56 can be used to interface with it. Otherwise returns ``None``.
58 can be used to interface with it. Otherwise returns ``None``.
57 """
59 """
58
60
59 def peer():
61 def peer():
60 """Returns an object conforming to this interface.
62 """Returns an object conforming to this interface.
61
63
62 Most implementations will ``return self``.
64 Most implementations will ``return self``.
63 """
65 """
64
66
65 def canpush():
67 def canpush():
66 """Returns a boolean indicating if this peer can be pushed to."""
68 """Returns a boolean indicating if this peer can be pushed to."""
67
69
68 def close():
70 def close():
69 """Close the connection to this peer.
71 """Close the connection to this peer.
70
72
71 This is called when the peer will no longer be used. Resources
73 This is called when the peer will no longer be used. Resources
72 associated with the peer should be cleaned up.
74 associated with the peer should be cleaned up.
73 """
75 """
74
76
75 class ipeercapabilities(interfaceutil.Interface):
77 class ipeercapabilities(interfaceutil.Interface):
76 """Peer sub-interface related to capabilities."""
78 """Peer sub-interface related to capabilities."""
77
79
78 def capable(name):
80 def capable(name):
79 """Determine support for a named capability.
81 """Determine support for a named capability.
80
82
81 Returns ``False`` if capability not supported.
83 Returns ``False`` if capability not supported.
82
84
83 Returns ``True`` if boolean capability is supported. Returns a string
85 Returns ``True`` if boolean capability is supported. Returns a string
84 if capability support is non-boolean.
86 if capability support is non-boolean.
85
87
86 Capability strings may or may not map to wire protocol capabilities.
88 Capability strings may or may not map to wire protocol capabilities.
87 """
89 """
88
90
89 def requirecap(name, purpose):
91 def requirecap(name, purpose):
90 """Require a capability to be present.
92 """Require a capability to be present.
91
93
92 Raises a ``CapabilityError`` if the capability isn't present.
94 Raises a ``CapabilityError`` if the capability isn't present.
93 """
95 """
94
96
class ipeercommands(interfaceutil.Interface):
    """Client-side interface for communicating over the wire protocol.

    This interface is used as a gateway to the Mercurial wire protocol.
    Methods commonly call wire protocol commands of the same name.
    """

    def branchmap():
        """Obtain heads in named branches.

        Returns a dict mapping branch name to an iterable of nodes that are
        heads on that branch.
        """

    def capabilities():
        """Obtain capabilities of the peer.

        Returns a set of string capabilities.
        """

    def clonebundles():
        """Obtains the clone bundles manifest for the repo.

        Returns the manifest as unparsed bytes.
        """

    def debugwireargs(one, two, three=None, four=None, five=None):
        """Used to facilitate debugging of arguments passed over the wire."""

    def getbundle(source, **kwargs):
        """Obtain remote repository data as a bundle.

        This command is how the bulk of repository data is transferred from
        the peer to the local repository.

        Returns a generator of bundle data.
        """

    def heads():
        """Determine all known head revisions in the peer.

        Returns an iterable of binary nodes.
        """

    def known(nodes):
        """Determine whether multiple nodes are known.

        Accepts an iterable of nodes whose presence to check for.

        Returns an iterable of booleans indicating whether the corresponding
        node at that index is known to the peer.
        """

    def listkeys(namespace):
        """Obtain all keys in a pushkey namespace.

        Returns an iterable of key names.
        """

    def lookup(key):
        """Resolve a value to a known revision.

        Returns a binary node of the resolved revision on success.
        """

    def pushkey(namespace, key, old, new):
        """Set a value using the ``pushkey`` protocol.

        Arguments correspond to the pushkey namespace and key to operate on
        and the old and new values for that key.

        Returns a string with the peer result. The value inside varies by the
        namespace.
        """

    def stream_out():
        """Obtain streaming clone data.

        A successful result should be a generator of data chunks.
        """

    def unbundle(bundle, heads, url):
        """Transfer repository data to the peer.

        This is how the bulk of data during a push is transferred.

        Returns the integer number of heads added to the peer.
        """
class ipeerlegacycommands(interfaceutil.Interface):
    """Interface for implementing support for legacy wire protocol commands.

    Wire protocol commands transition to legacy status when they are no
    longer used by modern clients. To facilitate identifying which commands
    are legacy, the interfaces are split.
    """

    def between(pairs):
        """Obtain nodes between pairs of nodes.

        ``pairs`` is an iterable of node pairs.

        Returns an iterable of iterables of nodes corresponding to each
        requested pair.
        """

    def branches(nodes):
        """Obtain ancestor changesets of specific nodes back to a branch
        point.

        For each requested node, the peer finds the first ancestor node
        that is a DAG root or is a merge.

        Returns an iterable of iterables with the resolved values for each
        node.
        """

    def changegroup(nodes, source):
        """Obtain a changegroup with data for descendants of specified
        nodes."""

    def changegroupsubset(bases, heads, source):
        """Obtain a changegroup with data for revisions between ``bases``
        and ``heads``."""
class ipeercommandexecutor(interfaceutil.Interface):
    """Represents a mechanism to execute remote commands.

    This is the primary interface for requesting that wire protocol commands
    be executed. Instances of this interface are active in a context manager
    and have a well-defined lifetime. When the context manager exits, all
    outstanding requests are waited on.
    """

    def callcommand(name, args):
        """Request that a named command be executed.

        Receives the command name and a dictionary of command arguments.

        Returns a ``concurrent.futures.Future`` that will resolve to the
        result of that command request. The exact value is left up to
        the implementation and possibly varies by command.

        Not all commands can coexist with other commands in an executor
        instance: it depends on the underlying wire protocol transport being
        used and the command itself.

        Implementations MAY call ``sendcommands()`` automatically if the
        requested command cannot coexist with other commands in this
        executor.

        Implementations MAY call ``sendcommands()`` automatically when the
        future's ``result()`` is called. So, consumers using multiple
        commands with an executor MUST ensure that ``result()`` is not called
        until all command requests have been issued.
        """

    def sendcommands():
        """Trigger submission of queued command requests.

        Not all transports submit commands as soon as they are requested to
        run. When called, this method forces queued command requests to be
        issued. It will no-op if all commands have already been sent.

        When called, no more new commands may be issued with this executor.
        """

    def close():
        """Signal that this command request is finished.

        When called, no more new commands may be issued. All outstanding
        commands that have previously been issued are waited on before
        returning. This not only includes waiting for the futures to resolve,
        but also waiting for all response data to arrive. In other words,
        calling this waits for all on-wire state for issued command requests
        to finish.

        When used as a context manager, this method is called when exiting
        the context manager.

        This method may call ``sendcommands()`` if there are buffered
        commands.
        """
class ipeerrequests(interfaceutil.Interface):
    """Interface for executing commands on a peer."""

    def commandexecutor():
        """A context manager that resolves to an ipeercommandexecutor.

        The object this resolves to can be used to issue command requests
        to the peer.

        Callers should call its ``callcommand`` method to issue command
        requests.

        A new executor should be obtained for each distinct set of commands
        (possibly just a single command) that the consumer wants to execute
        as part of a single operation or round trip. This is because some
        peers are half-duplex and/or don't support persistent connections;
        e.g. in the case of HTTP peers, commands sent to an executor
        represent a single HTTP request. While some peers may support
        multiple command sends over the wire per executor, consumers need to
        code to the least capable peer. So it should be assumed that command
        executors buffer called commands until they are told to send them
        and that each command executor could result in a new connection or
        wire-level request being issued.
        """

class ipeerbase(ipeerconnection, ipeercapabilities, ipeerrequests):
    """Unified interface for peer repositories.

    All peer instances must conform to this interface.
    """
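The executor contract above can be exercised with a minimal in-memory stand-in. The ``dummyexecutor`` class and the values it resolves futures to are purely hypothetical, not part of Mercurial's API; the sketch only shows the buffer-then-send pattern consumers must code against:

```python
# Hypothetical in-memory executor sketch; a real peer would issue the
# buffered requests over the wire in sendcommands().
from concurrent import futures
import contextlib

class dummyexecutor(object):
    """Buffers command requests and resolves them all on sendcommands()."""

    def __init__(self):
        self._pending = []  # (future, name, args) awaiting submission
        self._sent = False

    def callcommand(self, name, args):
        if self._sent:
            raise Exception('callcommand() after sendcommands()')
        f = futures.Future()
        self._pending.append((f, name, args))
        return f

    def sendcommands(self):
        # Resolve every buffered request; the result value here is made up.
        self._sent = True
        for f, name, args in self._pending:
            f.set_result((name, sorted(args)))
        del self._pending[:]

    def close(self):
        self.sendcommands()

@contextlib.contextmanager
def commandexecutor():
    e = dummyexecutor()
    try:
        yield e
    finally:
        e.close()

# Per the interface contract, queue all commands before calling result().
with commandexecutor() as e:
    fheads = e.callcommand('heads', {})
    fknown = e.callcommand('known', {'nodes': [b'\x00' * 20]})

print(fheads.result())  # ('heads', [])
print(fknown.result())  # ('known', ['nodes'])
```

Note that ``result()`` is only called after the ``with`` block exits, so both requests were issued before anything was sent.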
@interfaceutil.implementer(ipeerbase)
class peer(object):
    """Base class for peer repositories."""

    def capable(self, name):
        caps = self.capabilities()
        if name in caps:
            return True

        name = '%s=' % name
        for cap in caps:
            if cap.startswith(name):
                return cap[len(name):]

        return False

    def requirecap(self, name, purpose):
        if self.capable(name):
            return

        raise error.CapabilityError(
            _('cannot %s; remote repository does not support the %r '
              'capability') % (purpose, name))
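The capability lookup in ``peer.capable`` can be sketched standalone: boolean capabilities appear as bare names in the capability set, while non-boolean ones use a ``name=value`` string form. The ``caps`` set below is illustrative, not taken from a real peer:

```python
# Standalone sketch of the capability-string parsing used by peer.capable().
def capable(caps, name):
    if name in caps:
        return True  # boolean capability, present as a bare name
    prefix = '%s=' % name
    for cap in caps:
        if cap.startswith(prefix):
            return cap[len(prefix):]  # non-boolean: return the value part
    return False  # capability not advertised at all

caps = {'lookup', 'branchmap', 'bundle2=HG20%0A'}
print(capable(caps, 'lookup'))           # True
print(capable(caps, 'bundle2'))          # 'HG20%0A'
print(capable(caps, 'rev-branch-cache')) # False
```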
class iverifyproblem(interfaceutil.Interface):
    """Represents a problem with the integrity of the repository.

    Instances of this interface are emitted to describe an integrity issue
    with a repository (e.g. corrupt storage, missing data, etc).

    Instances are essentially messages associated with severity.
    """
    warning = interfaceutil.Attribute(
        """Message indicating a non-fatal problem.""")

    error = interfaceutil.Attribute(
        """Message indicating a fatal problem.""")
class irevisiondelta(interfaceutil.Interface):
    """Represents a delta between one revision and another.

    Instances convey enough information to allow a revision to be exchanged
    with another repository.

    Instances represent the fulltext revision data or a delta against
    another revision. Therefore the ``revision`` and ``delta`` attributes
    are mutually exclusive.

    Typically used for changegroup generation.
    """

    node = interfaceutil.Attribute(
        """20 byte node of this revision.""")

    p1node = interfaceutil.Attribute(
        """20 byte node of 1st parent of this revision.""")

    p2node = interfaceutil.Attribute(
        """20 byte node of 2nd parent of this revision.""")

    linknode = interfaceutil.Attribute(
        """20 byte node of the changelog revision this node is linked to.""")

    flags = interfaceutil.Attribute(
        """2 bytes of integer flags that apply to this revision.""")

    basenode = interfaceutil.Attribute(
        """20 byte node of the revision this data is a delta against.

        ``nullid`` indicates that the revision is a full revision and not
        a delta.
        """)

    baserevisionsize = interfaceutil.Attribute(
        """Size of base revision this delta is against.

        May be ``None`` if ``basenode`` is ``nullid``.
        """)

    revision = interfaceutil.Attribute(
        """Raw fulltext of revision data for this node.""")

    delta = interfaceutil.Attribute(
        """Delta between ``basenode`` and ``node``.

        Stored in the bdiff delta format.
        """)
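A rough sketch of how a consumer might distinguish the mutually exclusive ``revision``/``delta`` cases described above, using a stand-in namedtuple rather than a real ``irevisiondelta`` implementation:

```python
# Stand-in for an irevisiondelta-like object; the namedtuple and the
# sample byte values are illustrative only.
import collections

revisiondelta = collections.namedtuple(
    'revisiondelta', ['node', 'basenode', 'revision', 'delta'])

nullid = b'\x00' * 20

def isfulltext(rd):
    # A nullid basenode signals a full revision rather than a delta.
    return rd.basenode == nullid

full = revisiondelta(b'\x01' * 20, nullid, b'file contents', None)
delta = revisiondelta(b'\x02' * 20, b'\x01' * 20, None, b'<bdiff data>')

print(isfulltext(full))   # True
print(isfulltext(delta))  # False
```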
class irevisiondeltarequest(interfaceutil.Interface):
    """Represents a request to generate an ``irevisiondelta``."""

    node = interfaceutil.Attribute(
        """20 byte node of revision being requested.""")

    p1node = interfaceutil.Attribute(
        """20 byte node of 1st parent of revision.""")

    p2node = interfaceutil.Attribute(
        """20 byte node of 2nd parent of revision.""")

    linknode = interfaceutil.Attribute(
        """20 byte node to store in ``linknode`` attribute.""")

    basenode = interfaceutil.Attribute(
        """Base revision that delta should be generated against.

        If ``nullid``, the derived ``irevisiondelta`` should have its
        ``revision`` field populated and no delta should be generated.

        If ``None``, the delta may be generated against any revision that
        is an ancestor of this revision. Or a full revision may be used.

        If any other value, the delta should be produced against that
        revision.
        """)

    ellipsis = interfaceutil.Attribute(
        """Boolean indicating whether the ellipsis flag should be set.""")
class ifilerevisionssequence(interfaceutil.Interface):
    """Contains index data for all revisions of a file.

    Types implementing this behave like lists of tuples. The index
    in the list corresponds to the revision number. The values contain
    index metadata.

    The *null* revision (revision number -1) is always the last item
    in the index.
    """

    def __len__():
        """The total number of revisions."""

    def __getitem__(rev):
        """Returns the object having a specific revision number.

        Returns an 8-tuple with the following fields:

        offset+flags
           Contains the offset and flags for the revision. 64-bit unsigned
           integer where the first 6 bytes are the offset and the next
           2 bytes are flags. The offset can be 0 if it is not used by
           the store.
        compressed size
           Size of the revision data in the store. It can be 0 if it isn't
           needed by the store.
        uncompressed size
           Fulltext size. It can be 0 if it isn't needed by the store.
        base revision
           Revision number of revision the delta for storage is encoded
           against. -1 indicates not encoded against a base revision.
        link revision
           Revision number of changelog revision this entry is related to.
        p1 revision
           Revision number of 1st parent. -1 if no 1st parent.
        p2 revision
           Revision number of 2nd parent. -1 if no 2nd parent.
        node
           Binary node value for this revision number.

        Negative values should index off the end of the sequence. ``-1``
        should return the null revision. ``-2`` should return the most
        recent revision.
        """

    def __contains__(rev):
        """Whether a revision number exists."""

    def insert(self, i, entry):
        """Add an item to the index at specific revision."""
class ifileindex(interfaceutil.Interface):
    """Storage interface for index data of a single file.

    File storage data is divided into index metadata and data storage.
    This interface defines the index portion of the interface.

    The index logically consists of:

    * A mapping between revision numbers and nodes.
    * DAG data (storing and querying the relationship between nodes).
    * Metadata to facilitate storage.
    """
    index = interfaceutil.Attribute(
        """An ``ifilerevisionssequence`` instance.""")

    def __len__():
        """Obtain the number of revisions stored for this file."""

    def __iter__():
        """Iterate over revision numbers for this file."""

    def revs(start=0, stop=None):
        """Iterate over revision numbers for this file, with control."""

    def parents(node):
        """Returns a 2-tuple of parent nodes for a revision.

        Values will be ``nullid`` if the parent is empty.
        """

    def parentrevs(rev):
        """Like parents() but operates on revision numbers."""

    def rev(node):
        """Obtain the revision number given a node.

        Raises ``error.LookupError`` if the node is not known.
        """

    def node(rev):
        """Obtain the node value given a revision number.

        Raises ``IndexError`` if the node is not known.
        """

    def lookup(node):
        """Attempt to resolve a value to a node.

        Value can be a binary node, hex node, revision number, or a string
        that can be converted to an integer.

        Raises ``error.LookupError`` if a node could not be resolved.
        """

    def linkrev(rev):
        """Obtain the changeset revision number a revision is linked to."""

    def flags(rev):
        """Obtain flags used to affect storage of a revision."""

    def iscensored(rev):
        """Return whether a revision's content has been censored."""

    def commonancestorsheads(node1, node2):
        """Obtain an iterable of nodes containing heads of common ancestors.

        See ``ancestor.commonancestorsheads()``.
        """

    def descendants(revs):
        """Obtain descendant revision numbers for a set of revision numbers.

        If ``nullrev`` is in the set, this is equivalent to ``revs()``.
        """

    def heads(start=None, stop=None):
        """Obtain a list of nodes that are DAG heads, with control.

        The set of revisions examined can be limited by specifying
        ``start`` and ``stop``. ``start`` is a node. ``stop`` is an
        iterable of nodes. DAG traversal starts at earlier revision
        ``start`` and iterates forward until any node in ``stop`` is
        encountered.
        """

    def children(node):
        """Obtain nodes that are children of a node.

        Returns a list of nodes.
        """

    def deltaparent(rev):
        """Return the revision that is a suitable parent to delta against."""
class ifiledata(interfaceutil.Interface):
    """Storage interface for data storage of a specific file.

    This complements ``ifileindex`` and provides an interface for accessing
    data for a tracked file.
    """
    def rawsize(rev):
        """The size of the fulltext data for a revision as stored."""

    def size(rev):
        """Obtain the fulltext size of file data.

        Any metadata is excluded from size measurements. Use ``rawsize()`` if
        metadata size is important.
        """

    def checkhash(fulltext, node, p1=None, p2=None, rev=None):
        """Validate the stored hash of a given fulltext and node.

        Raises ``error.StorageError`` if hash validation fails.
        """

    def revision(node, raw=False):
        """Obtain fulltext data for a node.

        By default, any storage transformations are applied before the data
        is returned. If ``raw`` is True, non-raw storage transformations
        are not applied.

        The fulltext data may contain a header containing metadata. Most
        consumers should use ``read()`` to obtain the actual file data.
        """

    def read(node):
        """Resolve file fulltext data.

        This is similar to ``revision()`` except any metadata in the data
        headers is stripped.
        """

    def renamed(node):
        """Obtain copy metadata for a node.

        Returns ``False`` if no copy metadata is stored or a 2-tuple of
        (path, node) from which this revision was copied.
        """

    def cmp(node, fulltext):
        """Compare fulltext to another revision.

        Returns True if the fulltext is different from what is stored.

        This takes copy metadata into account.

        TODO better document the copy metadata and censoring logic.
        """

    def revdiff(rev1, rev2):
        """Obtain a delta between two revision numbers.

        Operates on raw data in the store (``revision(node, raw=True)``).

        The returned data is the result of ``bdiff.bdiff`` on the raw
        revision data.
        """

    def emitrevisiondeltas(requests):
        """Produce ``irevisiondelta`` from ``irevisiondeltarequest``s.

        Given an iterable of objects conforming to the
        ``irevisiondeltarequest`` interface, emits objects conforming to the
        ``irevisiondelta`` interface.

        This method is a generator.

        ``irevisiondelta`` objects should be emitted in the same order as the
        ``irevisiondeltarequest`` objects that were passed in.

        The emitted objects MUST conform to the results of
        ``irevisiondeltarequest``. Namely, they must respect any requests
        for building a delta from a specific ``basenode`` if defined.

        When sending deltas, implementations must take into account whether
        the client has the base delta before encoding a delta against that
        revision. A revision encountered previously in ``requests`` is
        always a suitable base revision. An example of a bad delta is a delta
        against a non-ancestor revision. Another example of a bad delta is a
        delta against a censored revision.
        """

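The integrity check behind ``checkhash()`` can be sketched in a few lines. This is a minimal illustration rather than the storage implementation itself, but the digest scheme is Mercurial's documented one: a file node is the SHA-1 of the two parent nodes (sorted) followed by the revision fulltext, so validation just recomputes that digest. The names ``hashrevision`` and ``stored`` are illustrative only.

```python
# Sketch: recompute a Mercurial-style file node to validate stored data.
import hashlib

nullid = b'\x00' * 20  # the "null" parent node

def hashrevision(p1, p2, fulltext):
    # Parents are hashed in sorted order so (p1, p2) and (p2, p1) agree.
    s = hashlib.sha1(min(p1, p2) + max(p1, p2))
    s.update(fulltext)
    return s.digest()

# A stored node for a root revision (both parents null).
stored = hashrevision(nullid, nullid, b'file contents\n')
```

A ``checkhash()``-style validation would then compare ``hashrevision(...)`` against the node recorded in the index and raise on mismatch.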
class ifilemutation(interfaceutil.Interface):
    """Storage interface for mutation events of a tracked file."""

    def add(filedata, meta, transaction, linkrev, p1, p2):
        """Add a new revision to the store.

        Takes file data, a dictionary of metadata, a transaction, a linkrev,
        and the parent nodes.

        Returns the node that was added.

        May no-op if a revision matching the supplied data is already stored.
        """

    def addrevision(revisiondata, transaction, linkrev, p1, p2, node=None,
                    flags=0, cachedelta=None):
        """Add a new revision to the store.

        This is similar to ``add()`` except it operates at a lower level.

        The data passed in already contains a metadata header, if any.

        ``node`` and ``flags`` can be used to define the expected node and
        the flags to use with storage.

        ``add()`` is usually called when adding files from e.g. the working
        directory. ``addrevision()`` is often called by ``add()`` and for
        scenarios where revision data has already been computed, such as when
        applying raw data from a peer repo.
        """

    def addgroup(deltas, linkmapper, transaction, addrevisioncb=None):
        """Process a series of deltas for storage.

        ``deltas`` is an iterable of 7-tuples of
        (node, p1, p2, linknode, deltabase, delta, flags) defining revisions
        to add.

        The ``delta`` field contains ``mpatch`` data to apply to a base
        revision, identified by ``deltabase``. The base node can be
        ``nullid``, in which case the header from the delta can be ignored
        and the delta used as the fulltext.

        ``addrevisioncb`` should be called for each node as it is committed.

        Returns a list of nodes that were processed. A node will be in the
        list even if it existed in the store previously.
        """

    def censorrevision(tr, node, tombstone=b''):
        """Remove the content of a single revision.

        The specified ``node`` will have its content purged from storage.
        Future attempts to access the revision data for this node will
        result in failure.

        A ``tombstone`` message can optionally be stored. This message may be
        displayed to users when they attempt to access the missing revision
        data.

        Storage backends may have stored deltas against the previous content
        in this revision. As part of censoring a revision, these storage
        backends are expected to rewrite any internally stored deltas such
        that they no longer reference the deleted content.
        """

    def getstrippoint(minlink):
        """Find the minimum revision that must be stripped to strip a linkrev.

        Returns a 2-tuple containing the minimum revision number and a set
        of all revision numbers that would be broken by this strip.

        TODO this is highly revlog centric and should be abstracted into
        a higher-level deletion API. ``repair.strip()`` relies on this.
        """

    def strip(minlink, transaction):
        """Remove storage of items starting at a linkrev.

        This uses ``getstrippoint()`` to determine the first node to remove.
        Then it effectively truncates storage for all revisions after that.

        TODO this is highly revlog centric and should be abstracted into a
        higher-level deletion API.
        """

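The ``addgroup()`` contract can be illustrated with a toy consumer. This is a hypothetical dict-backed store, not a real storage backend: it only accepts fulltext "deltas" whose base is ``nullid``, whereas a real implementation would apply ``mpatch`` data against ``deltabase``. All names here (``toystore``, ``toyaddgroup``) are invented for the sketch.

```python
nullid = b'\x00' * 20

def toyaddgroup(store, deltas, linkmapper, addrevisioncb=None):
    # ``deltas`` yields the 7-tuples described by the interface.
    nodes = []
    for node, p1, p2, linknode, deltabase, delta, flags in deltas:
        if node not in store:
            if deltabase != nullid:
                # A real store would apply mpatch data to the base here.
                raise NotImplementedError('only fulltexts are handled')
            store[node] = {'data': delta, 'parents': (p1, p2),
                           'linkrev': linkmapper(linknode), 'flags': flags}
            if addrevisioncb:
                addrevisioncb(node)
        # Per the contract, the node is returned even if it already existed.
        nodes.append(node)
    return nodes
```

Note that the callback fires only for newly committed revisions, while the returned list covers every node seen, matching the docstring above.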
class ifilestorage(ifileindex, ifiledata, ifilemutation):
    """Complete storage interface for a single tracked file."""

    _generaldelta = interfaceutil.Attribute(
        """Whether deltas can be against any parent revision.

        TODO this is used by changegroup code and it could probably be
        folded into another API.
        """)

    def files():
        """Obtain paths that are backing storage for this file.

        TODO this is used heavily by verify code and there should probably
        be a better API for that.
        """

    def verifyintegrity(state):
        """Verifies the integrity of file storage.

        ``state`` is a dict holding state of the verifier process. It can be
        used to communicate data between invocations of multiple storage
        primitives.

        The method yields objects conforming to the ``iverifyproblem``
        interface.
        """

class idirs(interfaceutil.Interface):
    """Interface representing a collection of directories from paths.

    This interface is essentially a derived data structure representing
    directories from a collection of paths.
    """

    def addpath(path):
        """Add a path to the collection.

        All directories in the path will be added to the collection.
        """

    def delpath(path):
        """Remove a path from the collection.

        If the removed path was the last path in a particular directory, the
        directory is removed from the collection.
        """

    def __iter__():
        """Iterate over the directories in this collection of paths."""

    def __contains__(path):
        """Whether a specific directory is in this collection."""

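One way to satisfy the ``idirs`` contract is reference counting: each directory tracks how many paths live beneath it, and it drops out of the collection when the count reaches zero. The sketch below is a hypothetical pure-Python helper (Mercurial ships its own optimized implementation); the class name ``simpledirs`` is invented.

```python
import posixpath

class simpledirs(object):
    """Refcounted collection of directories derived from a set of paths."""

    def __init__(self):
        self._dirs = {}  # directory -> number of paths beneath it

    def _parents(self, path):
        # Yield every ancestor directory of ``path``, ending with the root ''.
        d = posixpath.dirname(path)
        while d:
            yield d
            d = posixpath.dirname(d)
        yield ''

    def addpath(self, path):
        for d in self._parents(path):
            self._dirs[d] = self._dirs.get(d, 0) + 1

    def delpath(self, path):
        for d in self._parents(path):
            if self._dirs[d] == 1:
                del self._dirs[d]  # last path in this directory
            else:
                self._dirs[d] -= 1

    def __iter__(self):
        return iter(self._dirs)

    def __contains__(self, path):
        return path in self._dirs
```

With this design, ``delpath()`` naturally implements the "directory is removed when its last path is removed" rule in the docstring above.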
class imanifestdict(interfaceutil.Interface):
    """Interface representing a manifest data structure.

    A manifest is effectively a dict mapping paths to entries. Each entry
    consists of a binary node and extra flags affecting that entry.
    """

    def __getitem__(path):
        """Returns the binary node value for a path in the manifest.

        Raises ``KeyError`` if the path does not exist in the manifest.

        Equivalent to ``self.find(path)[0]``.
        """

    def find(path):
        """Returns the entry for a path in the manifest.

        Returns a 2-tuple of (node, flags).

        Raises ``KeyError`` if the path does not exist in the manifest.
        """

    def __len__():
        """Return the number of entries in the manifest."""

    def __nonzero__():
        """Returns True if the manifest has entries, False otherwise."""

    __bool__ = __nonzero__

    def __setitem__(path, node):
        """Define the node value for a path in the manifest.

        If the path is already in the manifest, its flags will be copied to
        the new entry.
        """

    def __contains__(path):
        """Whether a path exists in the manifest."""

    def __delitem__(path):
        """Remove a path from the manifest.

        Raises ``KeyError`` if the path is not in the manifest.
        """

    def __iter__():
        """Iterate over paths in the manifest."""

    def iterkeys():
        """Iterate over paths in the manifest."""

    def keys():
        """Obtain a list of paths in the manifest."""

    def filesnotin(other, match=None):
        """Obtain the set of paths in this manifest but not in another.

        ``match`` is an optional matcher function to be applied to both
        manifests.

        Returns a set of paths.
        """

    def dirs():
        """Returns an object implementing the ``idirs`` interface."""

    def hasdir(dir):
        """Returns a bool indicating if a directory is in this manifest."""

    def matches(match):
        """Generate a new manifest filtered through a matcher.

        Returns an object conforming to the ``imanifestdict`` interface.
        """

    def walk(match):
        """Generator of paths in manifest satisfying a matcher.

        This is equivalent to ``self.matches(match).iterkeys()`` except a new
        manifest object is not created.

        If the matcher has explicit files listed and they don't exist in
        the manifest, ``match.bad()`` is called for each missing file.
        """

    def diff(other, match=None, clean=False):
        """Find differences between this manifest and another.

        This manifest is compared to ``other``.

        If ``match`` is provided, the two manifests are filtered against this
        matcher and only entries satisfying the matcher are compared.

        If ``clean`` is True, unchanged files are included in the returned
        object.

        Returns a dict with paths as keys and values of 2-tuples of 2-tuples
        of the form ``((node1, flag1), (node2, flag2))`` where
        ``(node1, flag1)`` represents the node and flags for this manifest and
        ``(node2, flag2)`` are the same for the other manifest.
        """

    def setflag(path, flag):
        """Set the flag value for a given path.

        Raises ``KeyError`` if the path is not already in the manifest.
        """

    def get(path, default=None):
        """Obtain the node value for a path or a default value if missing."""

    def flags(path, default=''):
        """Return the flags value for a path or a default value if missing."""

    def copy():
        """Return a copy of this manifest."""

    def items():
        """Returns an iterable of (path, node) for items in this manifest."""

    def iteritems():
        """Identical to items()."""

    def iterentries():
        """Returns an iterable of (path, node, flags) for this manifest.

        Similar to ``iteritems()`` except items are a 3-tuple and include
        flags.
        """

    def text():
        """Obtain the raw data representation for this manifest.

        Result is used to create a manifest revision.
        """

    def fastdelta(base, changes):
        """Obtain a delta between this manifest and another given changes.

        ``base`` is the raw data representation for another manifest.

        ``changes`` is an iterable of ``(path, to_delete)``.

        Returns a 2-tuple containing ``bytearray(self.text())`` and the
        delta between ``base`` and this manifest.
        """

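The shape of the value returned by ``diff()`` can be demonstrated with plain dicts standing in for real manifest objects. This is a hedged sketch of the documented contract only: ``manifestdiff`` is an invented name, and ``None`` stands in for a missing side, whereas a concrete implementation may use other sentinels for absent entries.

```python
def manifestdiff(m1, m2, clean=False):
    """Diff two toy manifests mapping path -> (node, flag).

    Differing paths map to ((node1, flag1), (node2, flag2)); a side where
    the path is absent is represented as None. With clean=True, unchanged
    paths are included and map to None.
    """
    result = {}
    for path, entry in m1.items():
        other = m2.get(path)
        if entry != other:
            result[path] = (entry, other)
        elif clean:
            result[path] = None
    for path, entry in m2.items():
        if path not in m1:
            # Present only in the other manifest.
            result[path] = (None, entry)
    return result
```

Unchanged entries are omitted by default, which is what makes ``diff()`` useful for computing status between two revisions.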
class imanifestrevisionbase(interfaceutil.Interface):
    """Base interface representing a single revision of a manifest.

    Should not be used as a primary interface: should always be inherited
    as part of a larger interface.
    """

    def new():
        """Obtain a new manifest instance.

        Returns an object conforming to the ``imanifestrevisionwritable``
        interface. The instance will be associated with the same
        ``imanifestlog`` collection as this instance.
        """

    def copy():
        """Obtain a copy of this manifest instance.

        Returns an object conforming to the ``imanifestrevisionwritable``
        interface. The instance will be associated with the same
        ``imanifestlog`` collection as this instance.
        """

    def read():
        """Obtain the parsed manifest data structure.

        The returned object conforms to the ``imanifestdict`` interface.
        """

class imanifestrevisionstored(imanifestrevisionbase):
    """Interface representing a manifest revision committed to storage."""

    def node():
        """The binary node for this manifest."""

    parents = interfaceutil.Attribute(
        """List of binary nodes that are parents for this manifest revision."""
    )

    def readdelta(shallow=False):
        """Obtain the manifest data structure representing changes from parent.

        This manifest is compared to its 1st parent. A new manifest
        representing those differences is constructed.

        The returned object conforms to the ``imanifestdict`` interface.
        """

    def readfast(shallow=False):
        """Calls either ``read()`` or ``readdelta()``.

        The faster of the two options is called.
        """

    def find(key):
        """Calls ``self.read().find(key)``.

        Returns a 2-tuple of ``(node, flags)`` or raises ``KeyError``.
        """

class imanifestrevisionwritable(imanifestrevisionbase):
    """Interface representing a manifest revision that can be committed."""

    def write(transaction, linkrev, p1node, p2node, added, removed, match=None):
        """Add this revision to storage.

        Takes a transaction object, the changeset revision number it will
        be associated with, its parent nodes, and lists of added and
        removed paths.

        If match is provided, storage can choose not to inspect or write out
        items that do not match. Storage is still required to be able to
        provide the full manifest in the future for any directories written
        (these manifests should not be "narrowed on disk").

        Returns the binary node of the created revision.
        """

class imanifeststorage(interfaceutil.Interface):
    """Storage interface for manifest data."""

    tree = interfaceutil.Attribute(
        """The path to the directory this manifest tracks.

        The empty bytestring represents the root manifest.
        """)

    index = interfaceutil.Attribute(
        """An ``ifilerevisionssequence`` instance.""")

    indexfile = interfaceutil.Attribute(
        """Path of revlog index file.

        TODO this is revlog specific and should not be exposed.
        """)

    opener = interfaceutil.Attribute(
        """VFS opener to use to access underlying files used for storage.

        TODO this is revlog specific and should not be exposed.
        """)

    version = interfaceutil.Attribute(
        """Revlog version number.

        TODO this is revlog specific and should not be exposed.
        """)

    _generaldelta = interfaceutil.Attribute(
        """Whether generaldelta storage is being used.

        TODO this is revlog specific and should not be exposed.
        """)

    fulltextcache = interfaceutil.Attribute(
        """Dict with cache of fulltexts.

        TODO this doesn't feel appropriate for the storage interface.
        """)

    def __len__():
        """Obtain the number of revisions stored for this manifest."""

    def __iter__():
        """Iterate over revision numbers for this manifest."""

    def rev(node):
        """Obtain the revision number given a binary node.

        Raises ``error.LookupError`` if the node is not known.
        """

    def node(rev):
        """Obtain the node value given a revision number.

        Raises ``error.LookupError`` if the revision is not known.
        """

    def lookup(value):
        """Attempt to resolve a value to a node.

        Value can be a binary node, hex node, revision number, or a bytes
        that can be converted to an integer.

        Raises ``error.LookupError`` if a node could not be resolved.

        TODO this is only used by debug* commands and can probably be deleted
        easily.
        """

    def parents(node):
        """Returns a 2-tuple of parent nodes for a node.

        Values will be ``nullid`` if the parent is empty.
        """

    def parentrevs(rev):
        """Like parents() but operates on revision numbers."""

    def linkrev(rev):
        """Obtain the changeset revision number a revision is linked to."""

    def revision(node, _df=None, raw=False):
        """Obtain fulltext data for a node."""

    def revdiff(rev1, rev2):
        """Obtain a delta between two revision numbers.

        The returned data is the result of ``bdiff.bdiff()`` on the raw
        revision data.
        """

    def cmp(node, fulltext):
        """Compare fulltext to another revision.

        Returns True if the fulltext is different from what is stored.
1123 """
1125 """
1124
1126
1125 def emitrevisiondeltas(requests):
1127 def emitrevisiondeltas(requests):
1126 """Produce ``irevisiondelta`` from ``irevisiondeltarequest``s.
1128 """Produce ``irevisiondelta`` from ``irevisiondeltarequest``s.
1127
1129
1128 See the documentation for ``ifiledata`` for more.
1130 See the documentation for ``ifiledata`` for more.
1129 """
1131 """
1130
1132
1131 def addgroup(deltas, linkmapper, transaction, addrevisioncb=None):
1133 def addgroup(deltas, linkmapper, transaction, addrevisioncb=None):
1132 """Process a series of deltas for storage.
1134 """Process a series of deltas for storage.
1133
1135
1134 See the documentation in ``ifilemutation`` for more.
1136 See the documentation in ``ifilemutation`` for more.
1135 """
1137 """
1136
1138
1137 def getstrippoint(minlink):
1139 def getstrippoint(minlink):
1138 """Find minimum revision that must be stripped to strip a linkrev.
1140 """Find minimum revision that must be stripped to strip a linkrev.
1139
1141
1140 See the documentation in ``ifilemutation`` for more.
1142 See the documentation in ``ifilemutation`` for more.
1141 """
1143 """
1142
1144
1143 def strip(minlink, transaction):
1145 def strip(minlink, transaction):
1144 """Remove storage of items starting at a linkrev.
1146 """Remove storage of items starting at a linkrev.
1145
1147
1146 See the documentation in ``ifilemutation`` for more.
1148 See the documentation in ``ifilemutation`` for more.
1147 """
1149 """
1148
1150
1149 def checksize():
1151 def checksize():
1150 """Obtain the expected sizes of backing files.
1152 """Obtain the expected sizes of backing files.
1151
1153
1152 TODO this is used by verify and it should not be part of the interface.
1154 TODO this is used by verify and it should not be part of the interface.
1153 """
1155 """
1154
1156
1155 def files():
1157 def files():
1156 """Obtain paths that are backing storage for this manifest.
1158 """Obtain paths that are backing storage for this manifest.
1157
1159
1158 TODO this is used by verify and there should probably be a better API
1160 TODO this is used by verify and there should probably be a better API
1159 for this functionality.
1161 for this functionality.
1160 """
1162 """
1161
1163
1162 def deltaparent(rev):
1164 def deltaparent(rev):
1163 """Obtain the revision that a revision is delta'd against.
1165 """Obtain the revision that a revision is delta'd against.
1164
1166
1165 TODO delta encoding is an implementation detail of storage and should
1167 TODO delta encoding is an implementation detail of storage and should
1166 not be exposed to the storage interface.
1168 not be exposed to the storage interface.
1167 """
1169 """
1168
1170
1169 def clone(tr, dest, **kwargs):
1171 def clone(tr, dest, **kwargs):
1170 """Clone this instance to another."""
1172 """Clone this instance to another."""
1171
1173
1172 def clearcaches(clear_persisted_data=False):
1174 def clearcaches(clear_persisted_data=False):
1173 """Clear any caches associated with this instance."""
1175 """Clear any caches associated with this instance."""
1174
1176
1175 def dirlog(d):
1177 def dirlog(d):
1176 """Obtain a manifest storage instance for a tree."""
1178 """Obtain a manifest storage instance for a tree."""
1177
1179
1178 def add(m, transaction, link, p1, p2, added, removed, readtree=None,
1180 def add(m, transaction, link, p1, p2, added, removed, readtree=None,
1179 match=None):
1181 match=None):
1180 """Add a revision to storage.
1182 """Add a revision to storage.
1181
1183
1182 ``m`` is an object conforming to ``imanifestdict``.
1184 ``m`` is an object conforming to ``imanifestdict``.
1183
1185
1184 ``link`` is the linkrev revision number.
1186 ``link`` is the linkrev revision number.
1185
1187
1186 ``p1`` and ``p2`` are the parent revision numbers.
1188 ``p1`` and ``p2`` are the parent revision numbers.
1187
1189
1188 ``added`` and ``removed`` are iterables of added and removed paths,
1190 ``added`` and ``removed`` are iterables of added and removed paths,
1189 respectively.
1191 respectively.
1190
1192
1191 ``readtree`` is a function that can be used to read the child tree(s)
1193 ``readtree`` is a function that can be used to read the child tree(s)
1192 when recursively writing the full tree structure when using
1194 when recursively writing the full tree structure when using
1193 treemanifets.
1195 treemanifets.
1194
1196
1195 ``match`` is a matcher that can be used to hint to storage that not all
1197 ``match`` is a matcher that can be used to hint to storage that not all
1196 paths must be inspected; this is an optimization and can be safely
1198 paths must be inspected; this is an optimization and can be safely
1197 ignored. Note that the storage must still be able to reproduce a full
1199 ignored. Note that the storage must still be able to reproduce a full
1198 manifest including files that did not match.
1200 manifest including files that did not match.
1199 """
1201 """
1200
1202
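The ``rev()``/``node()`` pair described above form an inverse mapping between revision numbers and binary node hashes, with ``error.LookupError`` raised for unknown values. A minimal toy sketch of that contract (hypothetical code, not Mercurial's implementation, using the built-in ``LookupError``):

```python
class toymanifestlog(object):
    """Toy stand-in honoring the rev()/node() lookup contract."""

    def __init__(self):
        self._nodes = []  # position in the list is the revision number

    def add(self, node):
        self._nodes.append(node)

    def rev(self, node):
        """Obtain the revision number given a binary node."""
        try:
            return self._nodes.index(node)
        except ValueError:
            raise LookupError(node)

    def node(self, rev):
        """Obtain the node value given a revision number."""
        try:
            return self._nodes[rev]
        except IndexError:
            raise LookupError(rev)

log = toymanifestlog()
log.add(b'\x00' * 20)  # register one revision; it becomes rev 0
```

The point is only that the two accessors are inverses and that misses surface as ``LookupError`` rather than ``None``, which is what the docstrings above promise.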
class imanifestlog(interfaceutil.Interface):
    """Interface representing a collection of manifest snapshots.

    Represents the root manifest in a repository.

    Also serves as a means to access nested tree manifests and to cache
    tree manifests.
    """

    def __getitem__(node):
        """Obtain a manifest instance for a given binary node.

        Equivalent to calling ``self.get('', node)``.

        The returned object conforms to the ``imanifestrevisionstored``
        interface.
        """

    def get(tree, node, verify=True):
        """Retrieve the manifest instance for a given directory and binary node.

        ``node`` always refers to the node of the root manifest (which will be
        the only manifest if flat manifests are being used).

        If ``tree`` is the empty string, the root manifest is returned.
        Otherwise the manifest for the specified directory will be returned
        (requires tree manifests).

        If ``verify`` is True, ``LookupError`` is raised if the node is not
        known.

        The returned object conforms to the ``imanifestrevisionstored``
        interface.
        """

    def getstorage(tree):
        """Retrieve an interface to storage for a particular tree.

        If ``tree`` is the empty bytestring, storage for the root manifest will
        be returned. Otherwise storage for a tree manifest is returned.

        TODO formalize interface for returned object.
        """

    def clearcaches():
        """Clear caches associated with this collection."""

    def rev(node):
        """Obtain the revision number for a binary node.

        Raises ``error.LookupError`` if the node is not known.
        """

class ilocalrepositoryfilestorage(interfaceutil.Interface):
    """Local repository sub-interface providing access to tracked file storage.

    This interface defines how a repository accesses storage for a single
    tracked file path.
    """

    def file(f):
        """Obtain a filelog for a tracked path.

        The returned type conforms to the ``ifilestorage`` interface.
        """

class ilocalrepositorymain(interfaceutil.Interface):
    """Main interface for local repositories.

    This currently captures the reality of things - not how things should be.
    """

    supportedformats = interfaceutil.Attribute(
        """Set of requirements that apply to stream clone.

        This is actually a class attribute and is shared among all instances.
        """)

    supported = interfaceutil.Attribute(
        """Set of requirements that this repo is capable of opening.""")

    requirements = interfaceutil.Attribute(
        """Set of requirements this repo uses.""")

    features = interfaceutil.Attribute(
        """Set of "features" this repository supports.

        A "feature" is a loosely-defined term. It can refer to a feature
        in the classical sense or can describe an implementation detail
        of the repository. For example, a ``readonly`` feature may denote
        the repository as read-only. Or a ``revlogfilestore`` feature may
        denote that the repository is using revlogs for file storage.

        The intent of features is to provide a machine-queryable mechanism
        for repo consumers to test for various repository characteristics.

        Features are similar to ``requirements``. The main difference is that
        requirements are stored on-disk and represent requirements to open the
        repository. Features are more run-time capabilities of the repository
        and more granular capabilities (which may be derived from requirements).
        """)

    filtername = interfaceutil.Attribute(
        """Name of the repoview that is active on this repo.""")

    wvfs = interfaceutil.Attribute(
        """VFS used to access the working directory.""")

    vfs = interfaceutil.Attribute(
        """VFS rooted at the .hg directory.

        Used to access repository data not in the store.
        """)

    svfs = interfaceutil.Attribute(
        """VFS rooted at the store.

        Used to access repository data in the store. Typically .hg/store.
        But can point elsewhere if the store is shared.
        """)

    root = interfaceutil.Attribute(
        """Path to the root of the working directory.""")

    path = interfaceutil.Attribute(
        """Path to the .hg directory.""")

    origroot = interfaceutil.Attribute(
        """The filesystem path that was used to construct the repo.""")

    auditor = interfaceutil.Attribute(
        """A pathauditor for the working directory.

        This checks if a path refers to a nested repository.

        Operates on the filesystem.
        """)

    nofsauditor = interfaceutil.Attribute(
        """A pathauditor for the working directory.

        This is like ``auditor`` except it doesn't do filesystem checks.
        """)

    baseui = interfaceutil.Attribute(
        """Original ui instance passed into constructor.""")

    ui = interfaceutil.Attribute(
        """Main ui instance for this instance.""")

    sharedpath = interfaceutil.Attribute(
        """Path to the .hg directory of the repo this repo was shared from.""")

    store = interfaceutil.Attribute(
        """A store instance.""")

    spath = interfaceutil.Attribute(
        """Path to the store.""")

    sjoin = interfaceutil.Attribute(
        """Alias to self.store.join.""")

    cachevfs = interfaceutil.Attribute(
        """A VFS used to access the cache directory.

        Typically .hg/cache.
        """)

    filteredrevcache = interfaceutil.Attribute(
        """Holds sets of revisions to be filtered.""")

    names = interfaceutil.Attribute(
        """A ``namespaces`` instance.""")

    def close():
        """Close the handle on this repository."""

    def peer():
        """Obtain an object conforming to the ``peer`` interface."""

    def unfiltered():
        """Obtain an unfiltered/raw view of this repo."""

    def filtered(name, visibilityexceptions=None):
        """Obtain a named view of this repository."""

    obsstore = interfaceutil.Attribute(
        """A store of obsolescence data.""")

    changelog = interfaceutil.Attribute(
        """A handle on the changelog revlog.""")

    manifestlog = interfaceutil.Attribute(
        """An instance conforming to the ``imanifestlog`` interface.

        Provides access to manifests for the repository.
        """)

    dirstate = interfaceutil.Attribute(
        """Working directory state.""")

    narrowpats = interfaceutil.Attribute(
        """Matcher patterns for this repository's narrowspec.""")

    def narrowmatch():
        """Obtain a matcher for the narrowspec."""

    def setnarrowpats(newincludes, newexcludes):
        """Define the narrowspec for this repository."""

    def __getitem__(changeid):
        """Try to resolve a changectx."""

    def __contains__(changeid):
        """Whether a changeset exists."""

    def __nonzero__():
        """Always returns True."""
        return True

    __bool__ = __nonzero__

    def __len__():
        """Returns the number of changesets in the repo."""

    def __iter__():
        """Iterate over revisions in the changelog."""

    def revs(expr, *args):
        """Evaluate a revset.

        Emits revisions.
        """

    def set(expr, *args):
        """Evaluate a revset.

        Emits changectx instances.
        """

    def anyrevs(specs, user=False, localalias=None):
        """Find revisions matching one of the given revsets."""

    def url():
        """Returns a string representing the location of this repo."""

    def hook(name, throw=False, **args):
        """Call a hook."""

    def tags():
        """Return a mapping of tag to node."""

    def tagtype(tagname):
        """Return the type of a given tag."""

    def tagslist():
        """Return a list of tags ordered by revision."""

    def nodetags(node):
        """Return the tags associated with a node."""

    def nodebookmarks(node):
        """Return the list of bookmarks pointing to the specified node."""

    def branchmap():
        """Return a mapping of branch to heads in that branch."""

    def revbranchcache():
        pass

    def branchtip(branchtip, ignoremissing=False):
        """Return the tip node for a given branch."""

    def lookup(key):
        """Resolve the node for a revision."""

    def lookupbranch(key):
        """Look up the branch name of the given revision or branch name."""

    def known(nodes):
        """Determine whether a series of nodes is known.

        Returns a list of bools.
        """

    def local():
        """Whether the repository is local."""
        return True

    def publishing():
        """Whether the repository is a publishing repository."""

    def cancopy():
        pass

    def shared():
        """The type of shared repository or None."""

    def wjoin(f, *insidef):
        """Calls self.vfs.reljoin(self.root, f, *insidef)"""

    def setparents(p1, p2):
        """Set the parent nodes of the working directory."""

    def filectx(path, changeid=None, fileid=None):
        """Obtain a filectx for the given file revision."""

    def getcwd():
        """Obtain the current working directory from the dirstate."""

    def pathto(f, cwd=None):
        """Obtain the relative path to a file."""

    def adddatafilter(name, fltr):
        pass

    def wread(filename):
        """Read a file from wvfs, using data filters."""

    def wwrite(filename, data, flags, backgroundclose=False, **kwargs):
        """Write data to a file in the wvfs, using data filters."""

    def wwritedata(filename, data):
        """Resolve data for writing to the wvfs, using data filters."""

    def currenttransaction():
        """Obtain the current transaction instance or None."""

    def transaction(desc, report=None):
        """Open a new transaction to write to the repository."""

    def undofiles():
        """Returns a list of (vfs, path) for files to undo transactions."""

    def recover():
        """Roll back an interrupted transaction."""

    def rollback(dryrun=False, force=False):
        """Undo the last transaction.

        DANGEROUS.
        """

    def updatecaches(tr=None, full=False):
        """Warm repo caches."""

    def invalidatecaches():
        """Invalidate cached data due to the repository mutating."""

    def invalidatevolatilesets():
        pass

    def invalidatedirstate():
        """Invalidate the dirstate."""

    def invalidate(clearfilecache=False):
        pass

    def invalidateall():
        pass

    def lock(wait=True):
        """Lock the repository store and return a lock instance."""

    def wlock(wait=True):
        """Lock the non-store parts of the repository."""

    def currentwlock():
        """Return the wlock if it's held or None."""

    def checkcommitpatterns(wctx, vdirs, match, status, fail):
        pass

    def commit(text='', user=None, date=None, match=None, force=False,
               editor=False, extra=None):
        """Add a new revision to the repository."""

    def commitctx(ctx, error=False):
        """Commit a commitctx instance to the repository."""

    def destroying():
        """Inform the repository that nodes are about to be destroyed."""

    def destroyed():
        """Inform the repository that nodes have been destroyed."""

    def status(node1='.', node2=None, match=None, ignored=False,
               clean=False, unknown=False, listsubrepos=False):
        """Convenience method to call repo[x].status()."""

    def addpostdsstatus(ps):
        pass

    def postdsstatus():
        pass

    def clearpostdsstatus():
        pass

    def heads(start=None):
        """Obtain list of nodes that are DAG heads."""

    def branchheads(branch=None, start=None, closed=False):
        pass

    def branches(nodes):
        pass

    def between(pairs):
        pass

    def checkpush(pushop):
        pass

    prepushoutgoinghooks = interfaceutil.Attribute(
        """util.hooks instance.""")

    def pushkey(namespace, key, old, new):
        pass

    def listkeys(namespace):
        pass

    def debugwireargs(one, two, three=None, four=None, five=None):
        pass

    def savecommitmessage(text):
        pass

class completelocalrepository(ilocalrepositorymain,
                              ilocalrepositoryfilestorage):
    """Complete interface for a local repository."""
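Interfaces like ``completelocalrepository`` above are declarative contracts against which concrete repository classes are checked. Mercurial's ``interfaceutil`` wraps ``zope.interface``; the same enforce-the-contract idea can be sketched with only the standard library's ``abc`` module (hypothetical names, not Mercurial code):

```python
import abc

class ilocalrepository(abc.ABC):
    """Stand-in contract: two of the methods from the interface above."""

    @abc.abstractmethod
    def local(self):
        """Whether the repository is local."""

    @abc.abstractmethod
    def url(self):
        """Returns a string representing the location of this repo."""

class localrepository(ilocalrepository):
    """Complete implementation: provides every abstract method."""

    def local(self):
        return True

    def url(self):
        return b'file:/tmp/repo'

class partialrepository(ilocalrepository):
    """Incomplete implementation: url() is missing."""

    def local(self):
        return True

repo = localrepository()  # fine: the contract is fully satisfied
try:
    partialrepository()   # fails: abc refuses to instantiate it
except TypeError:
    pass
```

With ``zope.interface`` the check is structural (``verifyClass``/``verifyObject`` at test time) rather than enforced at instantiation, but the intent, making conformance machine-checkable, is the same.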