narrow: extract repo property for store narrowmatcher...
Martin von Zweigbergk
r41266:d2d716cc default
--- a/hgext/lfs/__init__.py
+++ b/hgext/lfs/__init__.py
@@ -1,380 +1,380 @@
 # lfs - hash-preserving large file support using Git-LFS protocol
 #
 # Copyright 2017 Facebook, Inc.
 #
 # This software may be used and distributed according to the terms of the
 # GNU General Public License version 2 or any later version.

 """lfs - large file support (EXPERIMENTAL)

 This extension allows large files to be tracked outside of the normal
 repository storage and stored on a centralized server, similar to the
 ``largefiles`` extension. The ``git-lfs`` protocol is used when
 communicating with the server, so existing git infrastructure can be
 harnessed. Even though the files are stored outside of the repository,
 they are still integrity checked in the same manner as normal files.

 The files stored outside of the repository are downloaded on demand,
 which reduces the time to clone, and possibly the local disk usage.
 This changes fundamental workflows in a DVCS, so careful thought
 should be given before deploying it. :hg:`convert` can be used to
 convert LFS repositories to normal repositories that no longer
 require this extension, and do so without changing the commit hashes.
 This allows the extension to be disabled if the centralized workflow
 becomes burdensome. However, the pre and post convert clones will
 not be able to communicate with each other unless the extension is
 enabled on both.

 To start a new repository, or to add LFS files to an existing one, just
 create an ``.hglfs`` file as described below in the root directory of
 the repository. Typically, this file should be put under version
 control, so that the settings will propagate to other repositories with
 push and pull. During any commit, Mercurial will consult this file to
 determine if an added or modified file should be stored externally. The
 type of storage depends on the characteristics of the file at each
 commit. A file that is near a size threshold may switch back and forth
 between LFS and normal storage, as needed.

 Alternately, both normal repositories and largefile controlled
 repositories can be converted to LFS by using :hg:`convert` and the
 ``lfs.track`` config option described below. The ``.hglfs`` file
 should then be created and added, to control subsequent LFS selection.
 The hashes are also unchanged in this case. The LFS and non-LFS
 repositories can be distinguished because the LFS repository will
 abort any command if this extension is disabled.

 Committed LFS files are held locally, until the repository is pushed.
 Prior to pushing the normal repository data, the LFS files that are
 tracked by the outgoing commits are automatically uploaded to the
 configured central server. No LFS files are transferred on
 :hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
 demand as they need to be read, if a cached copy cannot be found
 locally. Both committing and downloading an LFS file will link the
 file to a usercache, to speed up future access. See the `usercache`
 config setting described below.

 .hglfs::

     The extension reads its configuration from a versioned ``.hglfs``
     configuration file found in the root of the working directory. The
     ``.hglfs`` file uses the same syntax as all other Mercurial
     configuration files. It uses a single section, ``[track]``.

     The ``[track]`` section specifies which files are stored as LFS (or
     not). Each line is keyed by a file pattern, with a predicate value.
     The first file pattern match is used, so put more specific patterns
     first. The available predicates are ``all()``, ``none()``, and
     ``size()``. See "hg help filesets.size" for the latter.

     Example versioned ``.hglfs`` file::

       [track]
       # No Makefile or python file, anywhere, will be LFS
       **Makefile = none()
       **.py = none()

       **.zip = all()
       **.exe = size(">1MB")

       # Catchall for everything not matched above
       ** = size(">10MB")

 Configs::

     [lfs]
     # Remote endpoint. Multiple protocols are supported:
     # - http(s)://user:pass@example.com/path
     #   git-lfs endpoint
     # - file:///tmp/path
     #   local filesystem, usually for testing
     # if unset, lfs will assume the remote repository also handles blob storage
     # for http(s) URLs. Otherwise, lfs will prompt to set this when it must
     # use this value.
     # (default: unset)
     url = https://example.com/repo.git/info/lfs

     # Which files to track in LFS. Path tests are "**.extname" for file
     # extensions, and "path:under/some/directory" for path prefix. Both
     # are relative to the repository root.
     # File size can be tested with the "size()" fileset, and tests can be
     # joined with fileset operators. (See "hg help filesets.operators".)
     #
     # Some examples:
     # - all()                       # everything
     # - none()                      # nothing
     # - size(">20MB")               # larger than 20MB
     # - !**.txt                     # anything not a *.txt file
     # - **.zip | **.tar.gz | **.7z  # some types of compressed files
     # - path:bin                    # files under "bin" in the project root
     # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
     #     | (path:bin & !path:/bin/README) | size(">1GB")
     # (default: none())
     #
     # This is ignored if there is a tracked '.hglfs' file, and this setting
     # will eventually be deprecated and removed.
     track = size(">10M")

     # how many times to retry before giving up on transferring an object
     retry = 5

     # the local directory to store lfs files for sharing across local clones.
     # If not set, the cache is located in an OS specific cache location.
     usercache = /path/to/global/cache
 """

 from __future__ import absolute_import

 import sys

 from mercurial.i18n import _

 from mercurial import (
     config,
     error,
     exchange,
     extensions,
     exthelper,
     filelog,
     filesetlang,
     localrepo,
     minifileset,
     node,
     pycompat,
     repository,
     revlog,
     scmutil,
     templateutil,
     util,
 )

 from . import (
     blobstore,
     wireprotolfsserver,
     wrapper,
 )

 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
 testedwith = 'ships-with-hg-core'

 eh = exthelper.exthelper()
 eh.merge(wrapper.eh)
 eh.merge(wireprotolfsserver.eh)

 cmdtable = eh.cmdtable
 configtable = eh.configtable
 extsetup = eh.finalextsetup
 uisetup = eh.finaluisetup
 filesetpredicate = eh.filesetpredicate
 reposetup = eh.finalreposetup
 templatekeyword = eh.templatekeyword

 eh.configitem('experimental', 'lfs.serve',
               default=True,
 )
 eh.configitem('experimental', 'lfs.user-agent',
               default=None,
 )
 eh.configitem('experimental', 'lfs.disableusercache',
               default=False,
 )
 eh.configitem('experimental', 'lfs.worker-enable',
               default=False,
 )

 eh.configitem('lfs', 'url',
               default=None,
 )
 eh.configitem('lfs', 'usercache',
               default=None,
 )
 # Deprecated
 eh.configitem('lfs', 'threshold',
               default=None,
 )
 eh.configitem('lfs', 'track',
               default='none()',
 )
 eh.configitem('lfs', 'retry',
               default=5,
 )

 lfsprocessor = (
     wrapper.readfromstore,
     wrapper.writetostore,
     wrapper.bypasscheckhash,
 )

 def featuresetup(ui, supported):
     # don't die on seeing a repo with the lfs requirement
     supported |= {'lfs'}

 @eh.uisetup
 def _uisetup(ui):
     localrepo.featuresetupfuncs.add(featuresetup)

 @eh.reposetup
 def _reposetup(ui, repo):
     # Nothing to do with a remote repo
     if not repo.local():
         return

     repo.svfs.lfslocalblobstore = blobstore.local(repo)
     repo.svfs.lfsremoteblobstore = blobstore.remote(repo)

     class lfsrepo(repo.__class__):
         @localrepo.unfilteredmethod
         def commitctx(self, ctx, error=False):
             repo.svfs.options['lfstrack'] = _trackedmatcher(self)
             return super(lfsrepo, self).commitctx(ctx, error)

     repo.__class__ = lfsrepo

     if 'lfs' not in repo.requirements:
         def checkrequireslfs(ui, repo, **kwargs):
             if 'lfs' in repo.requirements:
                 return 0

             last = kwargs.get(r'node_last')
             _bin = node.bin
             if last:
                 s = repo.set('%n:%n', _bin(kwargs[r'node']), _bin(last))
             else:
                 s = repo.set('%n', _bin(kwargs[r'node']))
-            match = repo.narrowmatch()
+            match = repo._storenarrowmatch
             for ctx in s:
                 # TODO: is there a way to just walk the files in the commit?
                 if any(ctx[f].islfs() for f in ctx.files()
                        if f in ctx and match(f)):
                     repo.requirements.add('lfs')
                     repo.features.add(repository.REPO_FEATURE_LFS)
                     repo._writerequirements()
                     repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
                     break

         ui.setconfig('hooks', 'commit.lfs', checkrequireslfs, 'lfs')
         ui.setconfig('hooks', 'pretxnchangegroup.lfs', checkrequireslfs, 'lfs')
     else:
         repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)

 def _trackedmatcher(repo):
     """Return a function (path, size) -> bool indicating whether or not to
     track a given file with lfs."""
     if not repo.wvfs.exists('.hglfs'):
         # No '.hglfs' in wdir. Fallback to config for now.
         trackspec = repo.ui.config('lfs', 'track')

         # deprecated config: lfs.threshold
         threshold = repo.ui.configbytes('lfs', 'threshold')
         if threshold:
             filesetlang.parse(trackspec)  # make sure syntax errors are confined
             trackspec = "(%s) | size('>%d')" % (trackspec, threshold)

         return minifileset.compile(trackspec)

     data = repo.wvfs.tryread('.hglfs')
     if not data:
         return lambda p, s: False

     # Parse errors here will abort with a message that points to the .hglfs file
     # and line number.
     cfg = config.config()
     cfg.parse('.hglfs', data)

     try:
         rules = [(minifileset.compile(pattern), minifileset.compile(rule))
                  for pattern, rule in cfg.items('track')]
     except error.ParseError as e:
         # The original exception gives no indicator that the error is in the
         # .hglfs file, so add that.

         # TODO: See if the line number of the file can be made available.
         raise error.Abort(_('parse error in .hglfs: %s') % e)

     def _match(path, size):
         for pat, rule in rules:
             if pat(path, size):
                 return rule(path, size)

         return False

     return _match

 # Called by remotefilelog
 def wrapfilelog(filelog):
     wrapfunction = extensions.wrapfunction

     wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
     wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
     wrapfunction(filelog, 'size', wrapper.filelogsize)

 @eh.wrapfunction(localrepo, 'resolverevlogstorevfsoptions')
 def _resolverevlogstorevfsoptions(orig, ui, requirements, features):
     opts = orig(ui, requirements, features)
     for name, module in extensions.extensions(ui):
         if module is sys.modules[__name__]:
             if revlog.REVIDX_EXTSTORED in opts[b'flagprocessors']:
                 msg = (_(b"cannot register multiple processors on flag '%#x'.")
                        % revlog.REVIDX_EXTSTORED)
                 raise error.Abort(msg)

             opts[b'flagprocessors'][revlog.REVIDX_EXTSTORED] = lfsprocessor
             break

     return opts

 @eh.extsetup
 def _extsetup(ui):
     wrapfilelog(filelog.filelog)

     scmutil.fileprefetchhooks.add('lfs', wrapper._prefetchfiles)

     # Make bundle choose changegroup3 instead of changegroup2. This affects
     # "hg bundle" command. Note: it does not cover all bundle formats like
     # "packed1". Using "packed1" with lfs will likely cause trouble.
     exchange._bundlespeccontentopts["v2"]["cg.version"] = "03"

 @eh.filesetpredicate('lfs()')
 def lfsfileset(mctx, x):
     """File that uses LFS storage."""
     # i18n: "lfs" is a keyword
     filesetlang.getargs(x, 0, 0, _("lfs takes no arguments"))
     ctx = mctx.ctx
     def lfsfilep(f):
         return wrapper.pointerfromctx(ctx, f, removed=True) is not None
     return mctx.predicate(lfsfilep, predrepr='<lfs>')

 @eh.templatekeyword('lfs_files', requires={'ctx'})
 def lfsfiles(context, mapping):
     """List of strings. All files modified, added, or removed by this
     changeset."""
     ctx = context.resource(mapping, 'ctx')

     pointers = wrapper.pointersfromctx(ctx, removed=True)  # {path: pointer}
     files = sorted(pointers.keys())

     def pointer(v):
         # In the file spec, version is first and the other keys are sorted.
         sortkeyfunc = lambda x: (x[0] != 'version', x)
         items = sorted(pointers[v].iteritems(), key=sortkeyfunc)
         return util.sortdict(items)

     makemap = lambda v: {
         'file': v,
         'lfsoid': pointers[v].oid() if pointers[v] else None,
         'lfspointer': templateutil.hybriddict(pointer(v)),
     }

     # TODO: make the separator ', '?
     f = templateutil._showcompatlist(context, mapping, 'lfs_file', files)
     return templateutil.hybrid(f, files, makemap, pycompat.identity)

 @eh.command('debuglfsupload',
             [('r', 'rev', [], _('upload large files introduced by REV'))])
 def debuglfsupload(ui, repo, **opts):
     """upload lfs blobs added by the working copy parent or given revisions"""
     revs = opts.get(r'rev', [])
     pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
     wrapper.uploadblobs(repo, pointers)
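The only modified line in this file is in the `checkrequireslfs` hook: it now matches committed files against `repo._storenarrowmatch` instead of `repo.narrowmatch()`, i.e. against the narrow spec that governs what is in the store rather than the working copy. A minimal sketch of the decision the hook makes, with toy stand-ins for the real matcher and filelog objects (the names `commit_introduces_lfs`, `storematch`, and `islfs` are illustrative, not Mercurial API):

# Toy sketch of the hook's decision, with stand-ins for Mercurial objects.
# `storematch` plays the role of repo._storenarrowmatch; in a non-narrow
# repository it accepts every path, so behavior there is unchanged.
def commit_introduces_lfs(files, islfs, storematch=lambda path: True):
    """Return True if any committed file is LFS and inside the narrow spec."""
    return any(islfs(f) for f in files if storematch(f))

# A narrow clone whose store only includes files under "big/":
narrow = lambda path: path.startswith('big/')
print(commit_introduces_lfs(['big/data.bin', 'src/main.py'],
                            islfs=lambda f: f.endswith('.bin'),
                            storematch=narrow))  # True -> add 'lfs' requirement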
@@ -1,3089 +1,3096 b''
1 # localrepo.py - read/write repository class for mercurial
1 # localrepo.py - read/write repository class for mercurial
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import errno
10 import errno
11 import hashlib
11 import hashlib
12 import os
12 import os
13 import random
13 import random
14 import sys
14 import sys
15 import time
15 import time
16 import weakref
16 import weakref
17
17
18 from .i18n import _
18 from .i18n import _
19 from .node import (
19 from .node import (
20 bin,
20 bin,
21 hex,
21 hex,
22 nullid,
22 nullid,
23 nullrev,
23 nullrev,
24 short,
24 short,
25 )
25 )
26 from . import (
26 from . import (
27 bookmarks,
27 bookmarks,
28 branchmap,
28 branchmap,
29 bundle2,
29 bundle2,
30 changegroup,
30 changegroup,
31 changelog,
31 changelog,
32 color,
32 color,
33 context,
33 context,
34 dirstate,
34 dirstate,
35 dirstateguard,
35 dirstateguard,
36 discovery,
36 discovery,
37 encoding,
37 encoding,
38 error,
38 error,
39 exchange,
39 exchange,
40 extensions,
40 extensions,
41 filelog,
41 filelog,
42 hook,
42 hook,
43 lock as lockmod,
43 lock as lockmod,
44 manifest,
44 manifest,
45 match as matchmod,
45 match as matchmod,
46 merge as mergemod,
46 merge as mergemod,
47 mergeutil,
47 mergeutil,
48 namespaces,
48 namespaces,
49 narrowspec,
49 narrowspec,
50 obsolete,
50 obsolete,
51 pathutil,
51 pathutil,
52 phases,
52 phases,
53 pushkey,
53 pushkey,
54 pycompat,
54 pycompat,
55 repository,
55 repository,
56 repoview,
56 repoview,
57 revset,
57 revset,
58 revsetlang,
58 revsetlang,
59 scmutil,
59 scmutil,
60 sparse,
60 sparse,
61 store as storemod,
61 store as storemod,
62 subrepoutil,
62 subrepoutil,
63 tags as tagsmod,
63 tags as tagsmod,
64 transaction,
64 transaction,
65 txnutil,
65 txnutil,
66 util,
66 util,
67 vfs as vfsmod,
67 vfs as vfsmod,
68 )
68 )
69 from .utils import (
69 from .utils import (
70 interfaceutil,
70 interfaceutil,
71 procutil,
71 procutil,
72 stringutil,
72 stringutil,
73 )
73 )
74
74
75 from .revlogutils import (
75 from .revlogutils import (
76 constants as revlogconst,
76 constants as revlogconst,
77 )
77 )
78
78
79 release = lockmod.release
79 release = lockmod.release
80 urlerr = util.urlerr
80 urlerr = util.urlerr
81 urlreq = util.urlreq
81 urlreq = util.urlreq
82
82
83 # set of (path, vfs-location) tuples. vfs-location is:
83 # set of (path, vfs-location) tuples. vfs-location is:
84 # - 'plain for vfs relative paths
84 # - 'plain for vfs relative paths
85 # - '' for svfs relative paths
85 # - '' for svfs relative paths
86 _cachedfiles = set()
86 _cachedfiles = set()
87
87
88 class _basefilecache(scmutil.filecache):
88 class _basefilecache(scmutil.filecache):
89 """All filecache usage on repo are done for logic that should be unfiltered
89 """All filecache usage on repo are done for logic that should be unfiltered
90 """
90 """
91 def __get__(self, repo, type=None):
91 def __get__(self, repo, type=None):
92 if repo is None:
92 if repo is None:
93 return self
93 return self
94 # proxy to unfiltered __dict__ since filtered repo has no entry
94 # proxy to unfiltered __dict__ since filtered repo has no entry
95 unfi = repo.unfiltered()
95 unfi = repo.unfiltered()
96 try:
96 try:
97 return unfi.__dict__[self.sname]
97 return unfi.__dict__[self.sname]
98 except KeyError:
98 except KeyError:
99 pass
99 pass
100 return super(_basefilecache, self).__get__(unfi, type)
100 return super(_basefilecache, self).__get__(unfi, type)
101
101
102 def set(self, repo, value):
102 def set(self, repo, value):
103 return super(_basefilecache, self).set(repo.unfiltered(), value)
103 return super(_basefilecache, self).set(repo.unfiltered(), value)
104
104
105 class repofilecache(_basefilecache):
105 class repofilecache(_basefilecache):
106 """filecache for files in .hg but outside of .hg/store"""
106 """filecache for files in .hg but outside of .hg/store"""
107 def __init__(self, *paths):
107 def __init__(self, *paths):
108 super(repofilecache, self).__init__(*paths)
108 super(repofilecache, self).__init__(*paths)
109 for path in paths:
109 for path in paths:
110 _cachedfiles.add((path, 'plain'))
110 _cachedfiles.add((path, 'plain'))
111
111
112 def join(self, obj, fname):
112 def join(self, obj, fname):
113 return obj.vfs.join(fname)
113 return obj.vfs.join(fname)
114
114
115 class storecache(_basefilecache):
115 class storecache(_basefilecache):
116 """filecache for files in the store"""
116 """filecache for files in the store"""
117 def __init__(self, *paths):
117 def __init__(self, *paths):
118 super(storecache, self).__init__(*paths)
118 super(storecache, self).__init__(*paths)
119 for path in paths:
119 for path in paths:
120 _cachedfiles.add((path, ''))
120 _cachedfiles.add((path, ''))
121
121
122 def join(self, obj, fname):
122 def join(self, obj, fname):
123 return obj.sjoin(fname)
123 return obj.sjoin(fname)
124
124
125 def isfilecached(repo, name):
125 def isfilecached(repo, name):
126 """check if a repo has already cached "name" filecache-ed property
126 """check if a repo has already cached "name" filecache-ed property
127
127
128 This returns (cachedobj-or-None, iscached) tuple.
128 This returns (cachedobj-or-None, iscached) tuple.
129 """
129 """
130 cacheentry = repo.unfiltered()._filecache.get(name, None)
130 cacheentry = repo.unfiltered()._filecache.get(name, None)
131 if not cacheentry:
131 if not cacheentry:
132 return None, False
132 return None, False
133 return cacheentry.obj, True
133 return cacheentry.obj, True
134
134
135 class unfilteredpropertycache(util.propertycache):
135 class unfilteredpropertycache(util.propertycache):
136 """propertycache that apply to unfiltered repo only"""
136 """propertycache that apply to unfiltered repo only"""
137
137
138 def __get__(self, repo, type=None):
138 def __get__(self, repo, type=None):
139 unfi = repo.unfiltered()
139 unfi = repo.unfiltered()
140 if unfi is repo:
140 if unfi is repo:
141 return super(unfilteredpropertycache, self).__get__(unfi)
141 return super(unfilteredpropertycache, self).__get__(unfi)
142 return getattr(unfi, self.name)
142 return getattr(unfi, self.name)
143
143
144 class filteredpropertycache(util.propertycache):
144 class filteredpropertycache(util.propertycache):
145 """propertycache that must take filtering in account"""
145 """propertycache that must take filtering in account"""
146
146
147 def cachevalue(self, obj, value):
147 def cachevalue(self, obj, value):
148 object.__setattr__(obj, self.name, value)
148 object.__setattr__(obj, self.name, value)
149
149
150
150
151 def hasunfilteredcache(repo, name):
151 def hasunfilteredcache(repo, name):
152 """check if a repo has an unfilteredpropertycache value for <name>"""
152 """check if a repo has an unfilteredpropertycache value for <name>"""
153 return name in vars(repo.unfiltered())
153 return name in vars(repo.unfiltered())
154
154
155 def unfilteredmethod(orig):
155 def unfilteredmethod(orig):
156 """decorate method that always need to be run on unfiltered version"""
156 """decorate method that always need to be run on unfiltered version"""
157 def wrapper(repo, *args, **kwargs):
157 def wrapper(repo, *args, **kwargs):
158 return orig(repo.unfiltered(), *args, **kwargs)
158 return orig(repo.unfiltered(), *args, **kwargs)
159 return wrapper
159 return wrapper
160
160
161 moderncaps = {'lookup', 'branchmap', 'pushkey', 'known', 'getbundle',
161 moderncaps = {'lookup', 'branchmap', 'pushkey', 'known', 'getbundle',
162 'unbundle'}
162 'unbundle'}
163 legacycaps = moderncaps.union({'changegroupsubset'})
163 legacycaps = moderncaps.union({'changegroupsubset'})
164
164
165 @interfaceutil.implementer(repository.ipeercommandexecutor)
165 @interfaceutil.implementer(repository.ipeercommandexecutor)
166 class localcommandexecutor(object):
166 class localcommandexecutor(object):
167 def __init__(self, peer):
167 def __init__(self, peer):
168 self._peer = peer
168 self._peer = peer
169 self._sent = False
169 self._sent = False
170 self._closed = False
170 self._closed = False
171
171
172 def __enter__(self):
172 def __enter__(self):
173 return self
173 return self
174
174
175 def __exit__(self, exctype, excvalue, exctb):
175 def __exit__(self, exctype, excvalue, exctb):
176 self.close()
176 self.close()
177
177
178 def callcommand(self, command, args):
178 def callcommand(self, command, args):
179 if self._sent:
179 if self._sent:
180 raise error.ProgrammingError('callcommand() cannot be used after '
180 raise error.ProgrammingError('callcommand() cannot be used after '
181 'sendcommands()')
181 'sendcommands()')
182
182
183 if self._closed:
183 if self._closed:
184 raise error.ProgrammingError('callcommand() cannot be used after '
184 raise error.ProgrammingError('callcommand() cannot be used after '
185 'close()')
185 'close()')
186
186
187 # We don't need to support anything fancy. Just call the named
187 # We don't need to support anything fancy. Just call the named
188 # method on the peer and return a resolved future.
188 # method on the peer and return a resolved future.
189 fn = getattr(self._peer, pycompat.sysstr(command))
189 fn = getattr(self._peer, pycompat.sysstr(command))
190
190
191 f = pycompat.futures.Future()
191 f = pycompat.futures.Future()
192
192
193 try:
193 try:
194 result = fn(**pycompat.strkwargs(args))
194 result = fn(**pycompat.strkwargs(args))
195 except Exception:
195 except Exception:
196 pycompat.future_set_exception_info(f, sys.exc_info()[1:])
196 pycompat.future_set_exception_info(f, sys.exc_info()[1:])
197 else:
197 else:
198 f.set_result(result)
198 f.set_result(result)
199
199
200 return f
200 return f
201
201
202 def sendcommands(self):
202 def sendcommands(self):
203 self._sent = True
203 self._sent = True
204
204
205 def close(self):
205 def close(self):
206 self._closed = True
206 self._closed = True
207
207
208 @interfaceutil.implementer(repository.ipeercommands)
208 @interfaceutil.implementer(repository.ipeercommands)
209 class localpeer(repository.peer):
209 class localpeer(repository.peer):
210 '''peer for a local repo; reflects only the most recent API'''
210 '''peer for a local repo; reflects only the most recent API'''
211
211
212 def __init__(self, repo, caps=None):
212 def __init__(self, repo, caps=None):
213 super(localpeer, self).__init__()
213 super(localpeer, self).__init__()
214
214
215 if caps is None:
215 if caps is None:
216 caps = moderncaps.copy()
216 caps = moderncaps.copy()
217 self._repo = repo.filtered('served')
217 self._repo = repo.filtered('served')
218 self.ui = repo.ui
218 self.ui = repo.ui
219 self._caps = repo._restrictcapabilities(caps)
219 self._caps = repo._restrictcapabilities(caps)
220
220
221 # Begin of _basepeer interface.
221 # Begin of _basepeer interface.
222
222
223 def url(self):
223 def url(self):
224 return self._repo.url()
224 return self._repo.url()
225
225
226 def local(self):
226 def local(self):
227 return self._repo
227 return self._repo
228
228
229 def peer(self):
229 def peer(self):
230 return self
230 return self
231
231
232 def canpush(self):
232 def canpush(self):
233 return True
233 return True
234
234
235 def close(self):
235 def close(self):
236 self._repo.close()
236 self._repo.close()
237
237
238 # End of _basepeer interface.
238 # End of _basepeer interface.
239
239
240 # Begin of _basewirecommands interface.
240 # Begin of _basewirecommands interface.
241
241
242 def branchmap(self):
242 def branchmap(self):
243 return self._repo.branchmap()
243 return self._repo.branchmap()
244
244
245 def capabilities(self):
245 def capabilities(self):
246 return self._caps
246 return self._caps
247
247
248 def clonebundles(self):
248 def clonebundles(self):
249 return self._repo.tryread('clonebundles.manifest')
249 return self._repo.tryread('clonebundles.manifest')
250
250
251 def debugwireargs(self, one, two, three=None, four=None, five=None):
251 def debugwireargs(self, one, two, three=None, four=None, five=None):
252 """Used to test argument passing over the wire"""
252 """Used to test argument passing over the wire"""
253 return "%s %s %s %s %s" % (one, two, pycompat.bytestr(three),
253 return "%s %s %s %s %s" % (one, two, pycompat.bytestr(three),
254 pycompat.bytestr(four),
254 pycompat.bytestr(four),
255 pycompat.bytestr(five))
255 pycompat.bytestr(five))
256
256
257 def getbundle(self, source, heads=None, common=None, bundlecaps=None,
257 def getbundle(self, source, heads=None, common=None, bundlecaps=None,
258 **kwargs):
258 **kwargs):
259 chunks = exchange.getbundlechunks(self._repo, source, heads=heads,
259 chunks = exchange.getbundlechunks(self._repo, source, heads=heads,
260 common=common, bundlecaps=bundlecaps,
260 common=common, bundlecaps=bundlecaps,
261 **kwargs)[1]
261 **kwargs)[1]
262 cb = util.chunkbuffer(chunks)
262 cb = util.chunkbuffer(chunks)
263
263
264 if exchange.bundle2requested(bundlecaps):
264 if exchange.bundle2requested(bundlecaps):
265 # When requesting a bundle2, getbundle returns a stream to make the
265 # When requesting a bundle2, getbundle returns a stream to make the
266 # wire level function happier. We need to build a proper object
266 # wire level function happier. We need to build a proper object
267 # from it in local peer.
267 # from it in local peer.
268 return bundle2.getunbundler(self.ui, cb)
268 return bundle2.getunbundler(self.ui, cb)
269 else:
269 else:
270 return changegroup.getunbundler('01', cb, None)
270 return changegroup.getunbundler('01', cb, None)
271
271
272 def heads(self):
272 def heads(self):
273 return self._repo.heads()
273 return self._repo.heads()
274
274
275 def known(self, nodes):
275 def known(self, nodes):
276 return self._repo.known(nodes)
276 return self._repo.known(nodes)
277
277
278 def listkeys(self, namespace):
278 def listkeys(self, namespace):
279 return self._repo.listkeys(namespace)
279 return self._repo.listkeys(namespace)
280
280
281 def lookup(self, key):
281 def lookup(self, key):
282 return self._repo.lookup(key)
282 return self._repo.lookup(key)
283
283
284 def pushkey(self, namespace, key, old, new):
284 def pushkey(self, namespace, key, old, new):
285 return self._repo.pushkey(namespace, key, old, new)
285 return self._repo.pushkey(namespace, key, old, new)
286
286
287 def stream_out(self):
287 def stream_out(self):
288 raise error.Abort(_('cannot perform stream clone against local '
288 raise error.Abort(_('cannot perform stream clone against local '
289 'peer'))
289 'peer'))
290
290
291 def unbundle(self, bundle, heads, url):
291 def unbundle(self, bundle, heads, url):
292 """apply a bundle on a repo
292 """apply a bundle on a repo
293
293
294 This function handles the repo locking itself."""
294 This function handles the repo locking itself."""
295 try:
295 try:
296 try:
296 try:
297 bundle = exchange.readbundle(self.ui, bundle, None)
297 bundle = exchange.readbundle(self.ui, bundle, None)
298 ret = exchange.unbundle(self._repo, bundle, heads, 'push', url)
298 ret = exchange.unbundle(self._repo, bundle, heads, 'push', url)
299 if util.safehasattr(ret, 'getchunks'):
299 if util.safehasattr(ret, 'getchunks'):
300 # This is a bundle20 object, turn it into an unbundler.
300 # This is a bundle20 object, turn it into an unbundler.
301 # This little dance should be dropped eventually when the
301 # This little dance should be dropped eventually when the
302 # API is finally improved.
302 # API is finally improved.
303 stream = util.chunkbuffer(ret.getchunks())
303 stream = util.chunkbuffer(ret.getchunks())
304 ret = bundle2.getunbundler(self.ui, stream)
304 ret = bundle2.getunbundler(self.ui, stream)
305 return ret
305 return ret
306 except Exception as exc:
306 except Exception as exc:
307 # If the exception contains output salvaged from a bundle2
307 # If the exception contains output salvaged from a bundle2
308 # reply, we need to make sure it is printed before continuing
308 # reply, we need to make sure it is printed before continuing
309 # to fail. So we build a bundle2 with such output and consume
309 # to fail. So we build a bundle2 with such output and consume
310 # it directly.
310 # it directly.
311 #
311 #
312 # This is not very elegant but allows a "simple" solution for
312 # This is not very elegant but allows a "simple" solution for
313 # issue4594
313 # issue4594
314 output = getattr(exc, '_bundle2salvagedoutput', ())
314 output = getattr(exc, '_bundle2salvagedoutput', ())
315 if output:
315 if output:
316 bundler = bundle2.bundle20(self._repo.ui)
316 bundler = bundle2.bundle20(self._repo.ui)
317 for out in output:
317 for out in output:
318 bundler.addpart(out)
318 bundler.addpart(out)
319 stream = util.chunkbuffer(bundler.getchunks())
319 stream = util.chunkbuffer(bundler.getchunks())
320 b = bundle2.getunbundler(self.ui, stream)
320 b = bundle2.getunbundler(self.ui, stream)
321 bundle2.processbundle(self._repo, b)
321 bundle2.processbundle(self._repo, b)
322 raise
322 raise
323 except error.PushRaced as exc:
323 except error.PushRaced as exc:
324 raise error.ResponseError(_('push failed:'),
324 raise error.ResponseError(_('push failed:'),
325 stringutil.forcebytestr(exc))
325 stringutil.forcebytestr(exc))
326
326
327 # End of _basewirecommands interface.
327 # End of _basewirecommands interface.
328
328
329 # Begin of peer interface.
329 # Begin of peer interface.
330
330
331 def commandexecutor(self):
331 def commandexecutor(self):
332 return localcommandexecutor(self)
332 return localcommandexecutor(self)
333
333
334 # End of peer interface.
334 # End of peer interface.
335
335
336 @interfaceutil.implementer(repository.ipeerlegacycommands)
336 @interfaceutil.implementer(repository.ipeerlegacycommands)
337 class locallegacypeer(localpeer):
337 class locallegacypeer(localpeer):
338 '''peer extension which implements legacy methods too; used for tests with
338 '''peer extension which implements legacy methods too; used for tests with
339 restricted capabilities'''
339 restricted capabilities'''
340
340
341 def __init__(self, repo):
341 def __init__(self, repo):
342 super(locallegacypeer, self).__init__(repo, caps=legacycaps)
342 super(locallegacypeer, self).__init__(repo, caps=legacycaps)
343
343
344 # Begin of baselegacywirecommands interface.
344 # Begin of baselegacywirecommands interface.
345
345
346 def between(self, pairs):
346 def between(self, pairs):
347 return self._repo.between(pairs)
347 return self._repo.between(pairs)
348
348
349 def branches(self, nodes):
349 def branches(self, nodes):
350 return self._repo.branches(nodes)
350 return self._repo.branches(nodes)
351
351
352 def changegroup(self, nodes, source):
352 def changegroup(self, nodes, source):
353 outgoing = discovery.outgoing(self._repo, missingroots=nodes,
353 outgoing = discovery.outgoing(self._repo, missingroots=nodes,
354 missingheads=self._repo.heads())
354 missingheads=self._repo.heads())
355 return changegroup.makechangegroup(self._repo, outgoing, '01', source)
355 return changegroup.makechangegroup(self._repo, outgoing, '01', source)
356
356
357 def changegroupsubset(self, bases, heads, source):
357 def changegroupsubset(self, bases, heads, source):
358 outgoing = discovery.outgoing(self._repo, missingroots=bases,
358 outgoing = discovery.outgoing(self._repo, missingroots=bases,
359 missingheads=heads)
359 missingheads=heads)
360 return changegroup.makechangegroup(self._repo, outgoing, '01', source)
360 return changegroup.makechangegroup(self._repo, outgoing, '01', source)
361
361
362 # End of baselegacywirecommands interface.
362 # End of baselegacywirecommands interface.
363
363
364 # Increment the sub-version when the revlog v2 format changes to lock out old
364 # Increment the sub-version when the revlog v2 format changes to lock out old
365 # clients.
365 # clients.
366 REVLOGV2_REQUIREMENT = 'exp-revlogv2.1'
366 REVLOGV2_REQUIREMENT = 'exp-revlogv2.1'
367
367
368 # A repository with the sparserevlog feature will have delta chains that
368 # A repository with the sparserevlog feature will have delta chains that
369 # can spread over a larger span. Sparse reading cuts these large spans into
369 # can spread over a larger span. Sparse reading cuts these large spans into
370 # pieces, so that each piece isn't too big.
370 # pieces, so that each piece isn't too big.
371 # Without the sparserevlog capability, reading from the repository could use
371 # Without the sparserevlog capability, reading from the repository could use
372 # huge amounts of memory, because the whole span would be read at once,
372 # huge amounts of memory, because the whole span would be read at once,
373 # including all the intermediate revisions that aren't pertinent for the chain.
373 # including all the intermediate revisions that aren't pertinent for the chain.
374 # This is why once a repository has enabled sparse-read, it becomes required.
374 # This is why once a repository has enabled sparse-read, it becomes required.
375 SPARSEREVLOG_REQUIREMENT = 'sparserevlog'
375 SPARSEREVLOG_REQUIREMENT = 'sparserevlog'
376
376
377 # Functions receiving (ui, features) that extensions can register to impact
377 # Functions receiving (ui, features) that extensions can register to impact
378 # the ability to load repositories with custom requirements. Only
378 # the ability to load repositories with custom requirements. Only
379 # functions defined in loaded extensions are called.
379 # functions defined in loaded extensions are called.
380 #
380 #
381 # The function receives a set of requirement strings that the repository
381 # The function receives a set of requirement strings that the repository
382 # is capable of opening. Functions will typically add elements to the
382 # is capable of opening. Functions will typically add elements to the
383 # set to reflect that the extension knows how to handle that requirements.
383 # set to reflect that the extension knows how to handle that requirements.
384 featuresetupfuncs = set()
384 featuresetupfuncs = set()
385
385
386 def makelocalrepository(baseui, path, intents=None):
386 def makelocalrepository(baseui, path, intents=None):
387 """Create a local repository object.
387 """Create a local repository object.
388
388
389 Given arguments needed to construct a local repository, this function
389 Given arguments needed to construct a local repository, this function
390 performs various early repository loading functionality (such as
390 performs various early repository loading functionality (such as
391 reading the ``.hg/requires`` and ``.hg/hgrc`` files), validates that
391 reading the ``.hg/requires`` and ``.hg/hgrc`` files), validates that
392 the repository can be opened, derives a type suitable for representing
392 the repository can be opened, derives a type suitable for representing
393 that repository, and returns an instance of it.
393 that repository, and returns an instance of it.
394
394
395 The returned object conforms to the ``repository.completelocalrepository``
395 The returned object conforms to the ``repository.completelocalrepository``
396 interface.
396 interface.
397
397
398 The repository type is derived by calling a series of factory functions
398 The repository type is derived by calling a series of factory functions
399 for each aspect/interface of the final repository. These are defined by
399 for each aspect/interface of the final repository. These are defined by
400 ``REPO_INTERFACES``.
400 ``REPO_INTERFACES``.
401
401
402 Each factory function is called to produce a type implementing a specific
402 Each factory function is called to produce a type implementing a specific
403 interface. The cumulative list of returned types will be combined into a
403 interface. The cumulative list of returned types will be combined into a
404 new type and that type will be instantiated to represent the local
404 new type and that type will be instantiated to represent the local
405 repository.
405 repository.
406
406
407 The factory functions each receive various state that may be consulted
407 The factory functions each receive various state that may be consulted
408 as part of deriving a type.
408 as part of deriving a type.
409
409
410 Extensions should wrap these factory functions to customize repository type
410 Extensions should wrap these factory functions to customize repository type
411 creation. Note that an extension's wrapped function may be called even if
411 creation. Note that an extension's wrapped function may be called even if
412 that extension is not loaded for the repo being constructed. Extensions
412 that extension is not loaded for the repo being constructed. Extensions
413 should check if their ``__name__`` appears in the
413 should check if their ``__name__`` appears in the
414 ``extensionmodulenames`` set passed to the factory function and no-op if
414 ``extensionmodulenames`` set passed to the factory function and no-op if
415 not.
415 not.
416 """
416 """
417 ui = baseui.copy()
417 ui = baseui.copy()
418 # Prevent copying repo configuration.
418 # Prevent copying repo configuration.
419 ui.copy = baseui.copy
419 ui.copy = baseui.copy
420
420
421 # Working directory VFS rooted at repository root.
421 # Working directory VFS rooted at repository root.
422 wdirvfs = vfsmod.vfs(path, expandpath=True, realpath=True)
422 wdirvfs = vfsmod.vfs(path, expandpath=True, realpath=True)
423
423
424 # Main VFS for .hg/ directory.
424 # Main VFS for .hg/ directory.
425 hgpath = wdirvfs.join(b'.hg')
425 hgpath = wdirvfs.join(b'.hg')
426 hgvfs = vfsmod.vfs(hgpath, cacheaudited=True)
426 hgvfs = vfsmod.vfs(hgpath, cacheaudited=True)
427
427
428 # The .hg/ path should exist and should be a directory. All other
428 # The .hg/ path should exist and should be a directory. All other
429 # cases are errors.
429 # cases are errors.
430 if not hgvfs.isdir():
430 if not hgvfs.isdir():
431 try:
431 try:
432 hgvfs.stat()
432 hgvfs.stat()
433 except OSError as e:
433 except OSError as e:
434 if e.errno != errno.ENOENT:
434 if e.errno != errno.ENOENT:
435 raise
435 raise
436
436
437 raise error.RepoError(_(b'repository %s not found') % path)
437 raise error.RepoError(_(b'repository %s not found') % path)
438
438
439 # .hg/requires file contains a newline-delimited list of
439 # .hg/requires file contains a newline-delimited list of
440 # features/capabilities the opener (us) must have in order to use
440 # features/capabilities the opener (us) must have in order to use
441 # the repository. This file was introduced in Mercurial 0.9.2,
441 # the repository. This file was introduced in Mercurial 0.9.2,
442 # which means very old repositories may not have one. We assume
442 # which means very old repositories may not have one. We assume
443 # a missing file translates to no requirements.
443 # a missing file translates to no requirements.
444 try:
444 try:
445 requirements = set(hgvfs.read(b'requires').splitlines())
445 requirements = set(hgvfs.read(b'requires').splitlines())
446 except IOError as e:
446 except IOError as e:
447 if e.errno != errno.ENOENT:
447 if e.errno != errno.ENOENT:
448 raise
448 raise
449 requirements = set()
449 requirements = set()
450
450
    # The .hg/hgrc file may load extensions or contain config options
    # that influence repository construction. Attempt to load it and
    # process any new extensions that it may have pulled in.
    if loadhgrc(ui, wdirvfs, hgvfs, requirements):
        afterhgrcload(ui, wdirvfs, hgvfs, requirements)
        extensions.loadall(ui)
        extensions.populateui(ui)

    # Set of module names of extensions loaded for this repository.
    extensionmodulenames = {m.__name__ for n, m in extensions.extensions(ui)}

    supportedrequirements = gathersupportedrequirements(ui)

    # We first validate the requirements are known.
    ensurerequirementsrecognized(requirements, supportedrequirements)

    # Then we validate that the known set is reasonable to use together.
    ensurerequirementscompatible(ui, requirements)

    # TODO there are unhandled edge cases related to opening repositories with
    # shared storage. If storage is shared, we should also test for requirements
    # compatibility in the pointed-to repo. This entails loading the .hg/hgrc in
    # that repo, as that repo may load extensions needed to open it. This is a
    # bit complicated because we don't want the other hgrc to overwrite settings
    # in this hgrc.
    #
    # This bug is somewhat mitigated by the fact that we copy the .hg/requires
    # file when sharing repos. But if a requirement is added after the share is
    # performed, thereby introducing a new requirement for the opener, we will
    # not see that and could encounter a run-time error interacting with that
    # shared store since it has an unknown-to-us requirement.

    # At this point, we know we should be capable of opening the repository.
    # Now get on with doing that.

    features = set()

    # The "store" part of the repository holds versioned data. How it is
    # accessed is determined by various requirements. The ``shared`` or
    # ``relshared`` requirements indicate the store lives in the path contained
    # in the ``.hg/sharedpath`` file. This is an absolute path for
    # ``shared`` and relative to ``.hg/`` for ``relshared``.
    if b'shared' in requirements or b'relshared' in requirements:
        sharedpath = hgvfs.read(b'sharedpath').rstrip(b'\n')
        if b'relshared' in requirements:
            sharedpath = hgvfs.join(sharedpath)

        sharedvfs = vfsmod.vfs(sharedpath, realpath=True)

        if not sharedvfs.exists():
            raise error.RepoError(_(b'.hg/sharedpath points to nonexistent '
                                    b'directory %s') % sharedvfs.base)

        features.add(repository.REPO_FEATURE_SHARED_STORAGE)

        storebasepath = sharedvfs.base
        cachepath = sharedvfs.join(b'cache')
    else:
        storebasepath = hgvfs.base
        cachepath = hgvfs.join(b'cache')
    wcachepath = hgvfs.join(b'wcache')

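    # Example (illustrative paths): with ``shared``, .hg/sharedpath might
    # contain ``/home/alice/src/main/.hg`` and is used as-is; with
    # ``relshared``, it might contain ``../main/.hg``, which is joined to
    # this repo's .hg/ directory before use.
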
    # The store has changed over time and the exact layout is dictated by
    # requirements. The store interface abstracts differences across all
    # of them.
    store = makestore(requirements, storebasepath,
                      lambda base: vfsmod.vfs(base, cacheaudited=True))
    hgvfs.createmode = store.createmode

    storevfs = store.vfs
    storevfs.options = resolvestorevfsoptions(ui, requirements, features)

    # The cache vfs is used to manage cache files.
    cachevfs = vfsmod.vfs(cachepath, cacheaudited=True)
    cachevfs.createmode = store.createmode
    # The cache vfs is used to manage cache files related to the working copy
    wcachevfs = vfsmod.vfs(wcachepath, cacheaudited=True)
    wcachevfs.createmode = store.createmode

    # Now resolve the type for the repository object. We do this by repeatedly
    # calling a factory function to produce types for specific aspects of the
    # repo's operation. The aggregate returned types are used as base classes
    # for a dynamically-derived type, which will represent our new repository.

    bases = []
    extrastate = {}

    for iface, fn in REPO_INTERFACES:
        # We pass all potentially useful state to give extensions tons of
        # flexibility.
        typ = fn()(ui=ui,
                   intents=intents,
                   requirements=requirements,
                   features=features,
                   wdirvfs=wdirvfs,
                   hgvfs=hgvfs,
                   store=store,
                   storevfs=storevfs,
                   storeoptions=storevfs.options,
                   cachevfs=cachevfs,
                   wcachevfs=wcachevfs,
                   extensionmodulenames=extensionmodulenames,
                   extrastate=extrastate,
                   baseclasses=bases)

        if not isinstance(typ, type):
            raise error.ProgrammingError('unable to construct type for %s' %
                                         iface)

        bases.append(typ)

    # type() allows you to use characters in type names that wouldn't be
    # recognized as Python symbols in source code. We abuse that to add
    # rich information about our constructed repo.
    name = pycompat.sysstr(b'derivedrepo:%s<%s>' % (
        wdirvfs.base,
        b','.join(sorted(requirements))))

    cls = type(name, tuple(bases), {})
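    # For example (illustrative), a repo at /home/alice/src/main with the
    # usual requirements would yield a type named roughly:
    #   derivedrepo:/home/alice/src/main<dotencode,fncache,generaldelta,revlogv1,store>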

    return cls(
        baseui=baseui,
        ui=ui,
        origroot=path,
        wdirvfs=wdirvfs,
        hgvfs=hgvfs,
        requirements=requirements,
        supportedrequirements=supportedrequirements,
        sharedpath=storebasepath,
        store=store,
        cachevfs=cachevfs,
        wcachevfs=wcachevfs,
        features=features,
        intents=intents)

def loadhgrc(ui, wdirvfs, hgvfs, requirements):
    """Load hgrc files/content into a ui instance.

    This is called during repository opening to load any additional
    config files or settings relevant to the current repository.

    Returns a bool indicating whether any additional configs were loaded.

    Extensions should monkeypatch this function to modify how per-repo
    configs are loaded. For example, an extension may wish to pull in
    configs from alternate files or sources.
    """
    try:
        ui.readconfig(hgvfs.join(b'hgrc'), root=wdirvfs.base)
        return True
    except IOError:
        return False

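# Example (illustrative sketch): from an extension's setup code, loadhgrc()
# can be wrapped with ``extensions.wrapfunction`` to read an additional,
# hypothetical config file (``hgrc-extra`` is made up for this sketch):
#
#   def _loadhgrc(orig, ui, wdirvfs, hgvfs, requirements):
#       loaded = orig(ui, wdirvfs, hgvfs, requirements)
#       try:
#           # pull in a second, hypothetical per-repo config file
#           ui.readconfig(hgvfs.join(b'hgrc-extra'), root=wdirvfs.base)
#           return True
#       except IOError:
#           return loaded
#
#   extensions.wrapfunction(localrepo, 'loadhgrc', _loadhgrc)
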
def afterhgrcload(ui, wdirvfs, hgvfs, requirements):
    """Perform additional actions after .hg/hgrc is loaded.

    This function is called during repository loading immediately after
    the .hg/hgrc file is loaded and before per-repo extensions are loaded.

    The function can be used to validate configs, automatically add
    options (including extensions) based on requirements, etc.
    """

    # Map of requirements to list of extensions to load automatically when
    # requirement is present.
    autoextensions = {
        b'largefiles': [b'largefiles'],
        b'lfs': [b'lfs'],
    }

    for requirement, names in sorted(autoextensions.items()):
        if requirement not in requirements:
            continue

        for name in names:
            if not ui.hasconfig(b'extensions', name):
                ui.setconfig(b'extensions', name, b'', source='autoload')

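# The ``ui.setconfig`` call above is the in-process equivalent of a user
# writing ``[extensions] lfs =`` in an hgrc: repositories carrying the
# ``lfs`` or ``largefiles`` requirement get the matching extension enabled
# automatically, without editing any config file.
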
def gathersupportedrequirements(ui):
    """Determine the complete set of recognized requirements."""
    # Start with all requirements supported by this file.
    supported = set(localrepository._basesupported)

    # Execute ``featuresetupfuncs`` entries if they belong to an extension
    # relevant to this ui instance.
    modules = {m.__name__ for n, m in extensions.extensions(ui)}

    for fn in featuresetupfuncs:
        if fn.__module__ in modules:
            fn(ui, supported)

    # Add derived requirements from registered compression engines.
    for name in util.compengines:
        engine = util.compengines[name]
        if engine.revlogheader():
            supported.add(b'exp-compression-%s' % name)

    return supported

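# For example (illustrative): if the ``zstd`` compression engine is
# available and advertises a revlog header, the derived requirement
# ``exp-compression-zstd`` becomes supported, matching the
# ``exp-compression-*`` parsing in resolverevlogstorevfsoptions() below.
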
def ensurerequirementsrecognized(requirements, supported):
    """Validate that a set of local requirements is recognized.

    Receives a set of requirements. Raises an ``error.RepoError`` if there
    exists any requirement in that set that currently loaded code doesn't
    recognize.

    Returns nothing on success.
    """
    missing = set()

    for requirement in requirements:
        if requirement in supported:
            continue

        if not requirement or not requirement[0:1].isalnum():
            raise error.RequirementError(_(b'.hg/requires file is corrupt'))

        missing.add(requirement)

    if missing:
        raise error.RequirementError(
            _(b'repository requires features unknown to this Mercurial: %s') %
            b' '.join(sorted(missing)),
            hint=_(b'see https://mercurial-scm.org/wiki/MissingRequirement '
                   b'for more information'))

def ensurerequirementscompatible(ui, requirements):
    """Validates that a set of recognized requirements is mutually compatible.

    Some requirements may not be compatible with others or require
    config options that aren't enabled. This function is called during
    repository opening to ensure that the set of requirements needed
    to open a repository is sane and compatible with config options.

    Extensions can monkeypatch this function to perform additional
    checking.

    ``error.RepoError`` should be raised on failure.
    """
    if b'exp-sparse' in requirements and not sparse.enabled:
        raise error.RepoError(_(b'repository is using sparse feature but '
                                b'sparse is not enabled; enable the '
                                b'"sparse" extension to access'))

def makestore(requirements, path, vfstype):
    """Construct a storage object for a repository."""
    if b'store' in requirements:
        if b'fncache' in requirements:
            return storemod.fncachestore(path, vfstype,
                                         b'dotencode' in requirements)

        return storemod.encodedstore(path, vfstype)

    return storemod.basicstore(path, vfstype)

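# Requirement-to-store mapping implied above, for reference:
#
#   'store' + 'fncache'  -> fncachestore (modern layout; 'dotencode'
#                           additionally controls filename encoding)
#   'store' only         -> encodedstore (legacy encoded layout)
#   neither              -> basicstore (ancient flat layout)
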
def resolvestorevfsoptions(ui, requirements, features):
    """Resolve the options to pass to the store vfs opener.

    The returned dict is used to influence behavior of the storage layer.
    """
    options = {}

    if b'treemanifest' in requirements:
        options[b'treemanifest'] = True

    # experimental config: format.manifestcachesize
    manifestcachesize = ui.configint(b'format', b'manifestcachesize')
    if manifestcachesize is not None:
        options[b'manifestcachesize'] = manifestcachesize

    # In the absence of another requirement superseding a revlog-related
    # requirement, we have to assume the repo is using revlog version 0.
    # This revlog format is super old and we don't bother trying to parse
    # opener options for it because those options wouldn't do anything
    # meaningful on such old repos.
    if b'revlogv1' in requirements or REVLOGV2_REQUIREMENT in requirements:
        options.update(resolverevlogstorevfsoptions(ui, requirements, features))

    return options

def resolverevlogstorevfsoptions(ui, requirements, features):
    """Resolve opener options specific to revlogs."""

    options = {}
    options[b'flagprocessors'] = {}

    if b'revlogv1' in requirements:
        options[b'revlogv1'] = True
    if REVLOGV2_REQUIREMENT in requirements:
        options[b'revlogv2'] = True

    if b'generaldelta' in requirements:
        options[b'generaldelta'] = True

    # experimental config: format.chunkcachesize
    chunkcachesize = ui.configint(b'format', b'chunkcachesize')
    if chunkcachesize is not None:
        options[b'chunkcachesize'] = chunkcachesize

    deltabothparents = ui.configbool(b'storage',
                                     b'revlog.optimize-delta-parent-choice')
    options[b'deltabothparents'] = deltabothparents

    options[b'lazydeltabase'] = not scmutil.gddeltaconfig(ui)

    chainspan = ui.configbytes(b'experimental', b'maxdeltachainspan')
    if 0 <= chainspan:
        options[b'maxdeltachainspan'] = chainspan

    mmapindexthreshold = ui.configbytes(b'storage', b'mmap-threshold')
    if mmapindexthreshold is not None:
        options[b'mmapindexthreshold'] = mmapindexthreshold

    withsparseread = ui.configbool(b'experimental', b'sparse-read')
    srdensitythres = float(ui.config(b'experimental',
                                     b'sparse-read.density-threshold'))
    srmingapsize = ui.configbytes(b'experimental',
                                  b'sparse-read.min-gap-size')
    options[b'with-sparse-read'] = withsparseread
    options[b'sparse-read-density-threshold'] = srdensitythres
    options[b'sparse-read-min-gap-size'] = srmingapsize

    sparserevlog = SPARSEREVLOG_REQUIREMENT in requirements
    options[b'sparse-revlog'] = sparserevlog
    if sparserevlog:
        options[b'generaldelta'] = True

    maxchainlen = None
    if sparserevlog:
        maxchainlen = revlogconst.SPARSE_REVLOG_MAX_CHAIN_LENGTH
    # experimental config: format.maxchainlen
    maxchainlen = ui.configint(b'format', b'maxchainlen', maxchainlen)
    if maxchainlen is not None:
        options[b'maxchainlen'] = maxchainlen

    for r in requirements:
        if r.startswith(b'exp-compression-'):
            options[b'compengine'] = r[len(b'exp-compression-'):]

    if repository.NARROW_REQUIREMENT in requirements:
        options[b'enableellipsis'] = True

    return options

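# Illustration: a requirement of b'exp-compression-zstd' yields
# options[b'compengine'] = b'zstd', and SPARSEREVLOG_REQUIREMENT both sets
# b'sparse-revlog' and force-enables b'generaldelta', since the code above
# treats sparse revlogs as implying general delta.
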
def makemain(**kwargs):
    """Produce a type conforming to ``ilocalrepositorymain``."""
    return localrepository

@interfaceutil.implementer(repository.ilocalrepositoryfilestorage)
class revlogfilestorage(object):
    """File storage when using revlogs."""

    def file(self, path):
        if path[0] == b'/':
            path = path[1:]

        return filelog.filelog(self.svfs, path)

@interfaceutil.implementer(repository.ilocalrepositoryfilestorage)
class revlognarrowfilestorage(object):
    """File storage when using revlogs and narrow files."""

    def file(self, path):
        if path[0] == b'/':
            path = path[1:]

        return filelog.narrowfilelog(self.svfs, path, self._storenarrowmatch)

def makefilestorage(requirements, features, **kwargs):
    """Produce a type conforming to ``ilocalrepositoryfilestorage``."""
    features.add(repository.REPO_FEATURE_REVLOG_FILE_STORAGE)
    features.add(repository.REPO_FEATURE_STREAM_CLONE)

    if repository.NARROW_REQUIREMENT in requirements:
        return revlognarrowfilestorage
    else:
        return revlogfilestorage

# List of repository interfaces and factory functions for them. Each
# will be called in order during ``makelocalrepository()`` to iteratively
# derive the final type for a local repository instance. We capture the
# function as a lambda so we don't hold a reference and the module-level
# functions can be wrapped.
REPO_INTERFACES = [
    (repository.ilocalrepositorymain, lambda: makemain),
    (repository.ilocalrepositoryfilestorage, lambda: makefilestorage),
]

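# Example (illustrative sketch): because the factories are resolved lazily
# through lambdas, an extension can substitute its own repository base type
# by wrapping the module-level function:
#
#   def _makemain(orig, **kwargs):
#       class extendedrepo(orig(**kwargs)):
#           pass  # add or override repository methods here
#       return extendedrepo
#
#   extensions.wrapfunction(localrepo, 'makemain', _makemain)
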
@interfaceutil.implementer(repository.ilocalrepositorymain)
class localrepository(object):
    """Main class for representing local repositories.

    All local repositories are instances of this class.

    Constructed on its own, instances of this class are not usable as
    repository objects. To obtain a usable repository object, call
    ``hg.repository()``, ``localrepo.instance()``, or
    ``localrepo.makelocalrepository()``. The latter is the lowest-level.
    ``instance()`` adds support for creating new repositories.
    ``hg.repository()`` adds more extension integration, including calling
    ``reposetup()``. Generally speaking, ``hg.repository()`` should be
    used.
    """

    # obsolete experimental requirements:
    # - manifestv2: An experimental new manifest format that allowed
    #   for stem compression of long paths. Experiment ended up not
    #   being successful (repository sizes went up due to worse delta
    #   chains), and the code was deleted in 4.6.
    supportedformats = {
        'revlogv1',
        'generaldelta',
        'treemanifest',
        REVLOGV2_REQUIREMENT,
        SPARSEREVLOG_REQUIREMENT,
    }
    _basesupported = supportedformats | {
        'store',
        'fncache',
        'shared',
        'relshared',
        'dotencode',
        'exp-sparse',
        'internal-phase'
    }

    # list of prefixes for files which can be written without 'wlock'
    # Extensions should extend this list when needed
    _wlockfreeprefix = {
        # We might consider requiring 'wlock' for the next
        # two, but pretty much all the existing code assumes
        # wlock is not needed so we keep them excluded for
        # now.
        'hgrc',
        'requires',
        # XXX cache is a complicated business; someone
        # should investigate this in depth at some point
        'cache/',
        # XXX shouldn't dirstate be covered by the wlock?
        'dirstate',
        # XXX bisect was still a bit too messy at the time
        # this changeset was introduced. Someone should fix
        # the remaining bit and drop this line
        'bisect.state',
    }

    def __init__(self, baseui, ui, origroot, wdirvfs, hgvfs, requirements,
                 supportedrequirements, sharedpath, store, cachevfs, wcachevfs,
                 features, intents=None):
        """Create a new local repository instance.

        Most callers should use ``hg.repository()``, ``localrepo.instance()``,
        or ``localrepo.makelocalrepository()`` for obtaining a new repository
        object.

        Arguments:

        baseui
           ``ui.ui`` instance that ``ui`` argument was based off of.

        ui
           ``ui.ui`` instance for use by the repository.

        origroot
           ``bytes`` path to working directory root of this repository.

        wdirvfs
           ``vfs.vfs`` rooted at the working directory.

        hgvfs
           ``vfs.vfs`` rooted at .hg/

        requirements
           ``set`` of bytestrings representing repository opening requirements.

        supportedrequirements
           ``set`` of bytestrings representing repository requirements that we
           know how to open. May be a superset of ``requirements``.

        sharedpath
           ``bytes`` defining the path to the storage base directory. Points
           to a ``.hg/`` directory somewhere.

        store
           ``store.basicstore`` (or derived) instance providing access to
           versioned storage.

        cachevfs
           ``vfs.vfs`` used for cache files.

        wcachevfs
           ``vfs.vfs`` used for cache files related to the working copy.

        features
           ``set`` of bytestrings defining features/capabilities of this
           instance.

        intents
           ``set`` of system strings indicating what this repo will be used
           for.
        """
        self.baseui = baseui
        self.ui = ui
        self.origroot = origroot
        # vfs rooted at working directory.
        self.wvfs = wdirvfs
        self.root = wdirvfs.base
        # vfs rooted at .hg/. Used to access most non-store paths.
        self.vfs = hgvfs
        self.path = hgvfs.base
        self.requirements = requirements
        self.supported = supportedrequirements
        self.sharedpath = sharedpath
        self.store = store
        self.cachevfs = cachevfs
        self.wcachevfs = wcachevfs
        self.features = features

        self.filtername = None

        if (self.ui.configbool('devel', 'all-warnings') or
                self.ui.configbool('devel', 'check-locks')):
            self.vfs.audit = self._getvfsward(self.vfs.audit)
        # A list of callbacks to shape the phase if no data were found.
        # Callbacks are in the form: func(repo, roots) --> processed root.
        # This list is to be filled by extensions during repo setup.
        self._phasedefaults = []

        color.setup(self.ui)

        self.spath = self.store.path
        self.svfs = self.store.vfs
        self.sjoin = self.store.join
        if (self.ui.configbool('devel', 'all-warnings') or
                self.ui.configbool('devel', 'check-locks')):
            if util.safehasattr(self.svfs, 'vfs'): # this is filtervfs
                self.svfs.vfs.audit = self._getsvfsward(self.svfs.vfs.audit)
            else: # standard vfs
                self.svfs.audit = self._getsvfsward(self.svfs.audit)

        self._dirstatevalidatewarned = False

        self._branchcaches = {}
        self._revbranchcache = None
        self._filterpats = {}
        self._datafilters = {}
        self._transref = self._lockref = self._wlockref = None

        # A cache for various files under .hg/ that tracks file changes,
        # (used by the filecache decorator)
        #
        # Maps a property name to its util.filecacheentry
        self._filecache = {}

        # holds sets of revisions to be filtered
        # should be cleared when something might have changed the filter value:
        # - new changesets,
        # - phase change,
        # - new obsolescence marker,
        # - working directory parent change,
        # - bookmark changes
        self.filteredrevcache = {}

        # post-dirstate-status hooks
        self._postdsstatus = []

        # generic mapping between names and nodes
        self.names = namespaces.namespaces()

        # Key to signature value.
        self._sparsesignaturecache = {}
        # Signature to cached matcher instance.
        self._sparsematchercache = {}

    def _getvfsward(self, origfunc):
        """build a ward for self.vfs"""
        rref = weakref.ref(self)
        def checkvfs(path, mode=None):
            ret = origfunc(path, mode=mode)
            repo = rref()
            if (repo is None
                    or not util.safehasattr(repo, '_wlockref')
                    or not util.safehasattr(repo, '_lockref')):
                return
            if mode in (None, 'r', 'rb'):
                return
            if path.startswith(repo.path):
                # truncate name relative to the repository (.hg)
                path = path[len(repo.path) + 1:]
            if path.startswith('cache/'):
                msg = 'accessing cache with vfs instead of cachevfs: "%s"'
                repo.ui.develwarn(msg % path, stacklevel=3, config="cache-vfs")
            if path.startswith('journal.') or path.startswith('undo.'):
                # journal is covered by 'lock'
                if repo._currentlock(repo._lockref) is None:
                    repo.ui.develwarn('write with no lock: "%s"' % path,
                                      stacklevel=3, config='check-locks')
            elif repo._currentlock(repo._wlockref) is None:
                # rest of vfs files are covered by 'wlock'
                #
                # exclude special files
                for prefix in self._wlockfreeprefix:
                    if path.startswith(prefix):
                        return
                repo.ui.develwarn('write with no wlock: "%s"' % path,
                                  stacklevel=3, config='check-locks')
            return ret
        return checkvfs

    def _getsvfsward(self, origfunc):
        """build a ward for self.svfs"""
        rref = weakref.ref(self)
        def checksvfs(path, mode=None):
            ret = origfunc(path, mode=mode)
            repo = rref()
            if repo is None or not util.safehasattr(repo, '_lockref'):
                return
            if mode in (None, 'r', 'rb'):
                return
            if path.startswith(repo.sharedpath):
                # truncate name relative to the repository (.hg)
                path = path[len(repo.sharedpath) + 1:]
            if repo._currentlock(repo._lockref) is None:
                repo.ui.develwarn('write with no lock: "%s"' % path,
                                  stacklevel=4)
            return ret
        return checksvfs

    def close(self):
        self._writecaches()

    def _writecaches(self):
        if self._revbranchcache:
            self._revbranchcache.write()

    def _restrictcapabilities(self, caps):
        if self.ui.configbool('experimental', 'bundle2-advertise'):
            caps = set(caps)
            capsblob = bundle2.encodecaps(bundle2.getrepocaps(self,
                                                              role='client'))
            caps.add('bundle2=' + urlreq.quote(capsblob))
        return caps

    def _writerequirements(self):
        scmutil.writerequires(self.vfs, self.requirements)

    # Don't cache auditor/nofsauditor, or you'll end up with reference cycle:
    # self -> auditor -> self._checknested -> self

    @property
    def auditor(self):
        # This is only used by context.workingctx.match in order to
        # detect files in subrepos.
        return pathutil.pathauditor(self.root, callback=self._checknested)

    @property
    def nofsauditor(self):
        # This is only used by context.basectx.match in order to detect
        # files in subrepos.
        return pathutil.pathauditor(self.root, callback=self._checknested,
                                    realfs=False, cached=True)

    def _checknested(self, path):
        """Determine if path is a legal nested repository."""
        if not path.startswith(self.root):
            return False
        subpath = path[len(self.root) + 1:]
        normsubpath = util.pconvert(subpath)

        # XXX: Checking against the current working copy is wrong in
        # the sense that it can reject things like
        #
        #   $ hg cat -r 10 sub/x.txt
        #
        # if sub/ is no longer a subrepository in the working copy
        # parent revision.
        #
        # However, it can of course also allow things that would have
        # been rejected before, such as the above cat command if sub/
        # is a subrepository now, but was a normal directory before.
        # The old path auditor would have rejected by mistake since it
        # panics when it sees sub/.hg/.
        #
        # All in all, checking against the working copy seems sensible
        # since we want to prevent access to nested repositories on
        # the filesystem *now*.
        ctx = self[None]
        parts = util.splitpath(subpath)
        while parts:
            prefix = '/'.join(parts)
            if prefix in ctx.substate:
                if prefix == normsubpath:
                    return True
                else:
                    sub = ctx.sub(prefix)
                    return sub.checknested(subpath[len(prefix) + 1:])
            else:
                parts.pop()
        return False

    def peer(self):
        return localpeer(self) # not cached to avoid reference cycle

    def unfiltered(self):
        """Return unfiltered version of the repository

        Intended to be overwritten by filtered repo."""
        return self

    def filtered(self, name, visibilityexceptions=None):
        """Return a filtered version of a repository"""
        cls = repoview.newtype(self.unfiltered().__class__)
        return cls(self, name, visibilityexceptions)

    @repofilecache('bookmarks', 'bookmarks.current')
    def _bookmarks(self):
        return bookmarks.bmstore(self)

    @property
    def _activebookmark(self):
        return self._bookmarks.active

    # _phasesets depend on changelog. what we need is to call
    # _phasecache.invalidate() if '00changelog.i' was changed, but it
    # can't be easily expressed in filecache mechanism.
    @storecache('phaseroots', '00changelog.i')
    def _phasecache(self):
        return phases.phasecache(self, self._phasedefaults)

    @storecache('obsstore')
    def obsstore(self):
        return obsolete.makestore(self.ui, self)

    @storecache('00changelog.i')
    def changelog(self):
        return changelog.changelog(self.svfs,
                                   trypending=txnutil.mayhavepending(self.root))

    @storecache('00manifest.i')
    def manifestlog(self):
        rootstore = manifest.manifestrevlog(self.svfs)
        return manifest.manifestlog(self.svfs, self, rootstore,
                                    self._storenarrowmatch)

    @repofilecache('dirstate')
    def dirstate(self):
        return self._makedirstate()

    def _makedirstate(self):
        """Extension point for wrapping the dirstate per-repo."""
        sparsematchfn = lambda: sparse.matcher(self)

        return dirstate.dirstate(self.vfs, self.ui, self.root,
                                 self._dirstatevalidate, sparsematchfn)

    def _dirstatevalidate(self, node):
        try:
            self.changelog.rev(node)
            return node
        except error.LookupError:
            if not self._dirstatevalidatewarned:
                self._dirstatevalidatewarned = True
                self.ui.warn(_("warning: ignoring unknown"
                               " working parent %s!\n") % short(node))
            return nullid

    @storecache(narrowspec.FILENAME)
    def narrowpats(self):
        """matcher patterns for this repository's narrowspec

        A tuple of (includes, excludes).
        """
        return narrowspec.load(self)

    @storecache(narrowspec.FILENAME)
    def _storenarrowmatch(self):
        if repository.NARROW_REQUIREMENT not in self.requirements:
            return matchmod.always(self.root, '')
        include, exclude = self.narrowpats
        return narrowspec.match(self.root, include=include, exclude=exclude)

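    # Note: unlike ``_narrowmatch`` below, ``_storenarrowmatch`` skips
    # ``narrowspec.checkworkingcopynarrowspec()``, so store-level consumers
    # (``revlognarrowfilestorage.file()``, ``manifestlog`` above) can build
    # their matcher without touching the working copy.
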
    @storecache(narrowspec.FILENAME)
    def _narrowmatch(self):
        if repository.NARROW_REQUIREMENT not in self.requirements:
            return matchmod.always(self.root, '')
        narrowspec.checkworkingcopynarrowspec(self)
        include, exclude = self.narrowpats
        return narrowspec.match(self.root, include=include, exclude=exclude)

    def narrowmatch(self, match=None, includeexact=False):
        """matcher corresponding to the repo's narrowspec

        If `match` is given, then that will be intersected with the narrow
        matcher.

        If `includeexact` is True, then any exact matches from `match` will
        be included even if they're outside the narrowspec.
        """
        if match:
            if includeexact and not self._narrowmatch.always():
                # do not exclude explicitly-specified paths so that they can
                # be warned later on
                em = matchmod.exact(match._root, match._cwd, match.files())
                nm = matchmod.unionmatcher([self._narrowmatch, em])
                return matchmod.intersectmatchers(match, nm)
            return matchmod.intersectmatchers(match, self._narrowmatch)
        return self._narrowmatch

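    # Example (illustrative): a command would typically narrow a
    # user-supplied matcher like so:
    #
    #   m = scmutil.match(repo[None], pats, opts)
    #   m = repo.narrowmatch(m, includeexact=True)
    #
    # intersecting the user's patterns with the narrowspec while still
    # surfacing explicitly-named paths outside it for later warnings.
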
1253 def setnarrowpats(self, newincludes, newexcludes):
1260 def setnarrowpats(self, newincludes, newexcludes):
1254 narrowspec.save(self, newincludes, newexcludes)
1261 narrowspec.save(self, newincludes, newexcludes)
1255 narrowspec.copytoworkingcopy(self)
1262 narrowspec.copytoworkingcopy(self)
1256 self.invalidate(clearfilecache=True)
1263 self.invalidate(clearfilecache=True)
1257 # So the next access won't be considered a conflict
1264 # So the next access won't be considered a conflict
1258 # TODO: It seems like there should be a way of doing this that
1265 # TODO: It seems like there should be a way of doing this that
1259 # doesn't involve replacing these attributes.
1266 # doesn't involve replacing these attributes.
1260 self.narrowpats = newincludes, newexcludes
1267 self.narrowpats = newincludes, newexcludes
1261 self._narrowmatch = narrowspec.match(self.root, include=newincludes,
1268 self._narrowmatch = narrowspec.match(self.root, include=newincludes,
1262 exclude=newexcludes)
1269 exclude=newexcludes)
1263
1270
1264 def __getitem__(self, changeid):
1271 def __getitem__(self, changeid):
1265 if changeid is None:
1272 if changeid is None:
1266 return context.workingctx(self)
1273 return context.workingctx(self)
1267 if isinstance(changeid, context.basectx):
1274 if isinstance(changeid, context.basectx):
1268 return changeid
1275 return changeid
1269 if isinstance(changeid, slice):
1276 if isinstance(changeid, slice):
1270 # wdirrev isn't contiguous so the slice shouldn't include it
1277 # wdirrev isn't contiguous so the slice shouldn't include it
1271 return [self[i]
1278 return [self[i]
1272 for i in pycompat.xrange(*changeid.indices(len(self)))
1279 for i in pycompat.xrange(*changeid.indices(len(self)))
1273 if i not in self.changelog.filteredrevs]
1280 if i not in self.changelog.filteredrevs]
1274 try:
1281 try:
1275 if isinstance(changeid, int):
1282 if isinstance(changeid, int):
1276 node = self.changelog.node(changeid)
1283 node = self.changelog.node(changeid)
1277 rev = changeid
1284 rev = changeid
1278 elif changeid == 'null':
1285 elif changeid == 'null':
1279 node = nullid
1286 node = nullid
1280 rev = nullrev
1287 rev = nullrev
1281 elif changeid == 'tip':
1288 elif changeid == 'tip':
1282 node = self.changelog.tip()
1289 node = self.changelog.tip()
1283 rev = self.changelog.rev(node)
1290 rev = self.changelog.rev(node)
1284 elif changeid == '.':
1291 elif changeid == '.':
1285 # this is a hack to delay/avoid loading obsmarkers
1292 # this is a hack to delay/avoid loading obsmarkers
1286 # when we know that '.' won't be hidden
1293 # when we know that '.' won't be hidden
1287 node = self.dirstate.p1()
1294 node = self.dirstate.p1()
1288 rev = self.unfiltered().changelog.rev(node)
1295 rev = self.unfiltered().changelog.rev(node)
1289 elif len(changeid) == 20:
1296 elif len(changeid) == 20:
1290 try:
1297 try:
1291 node = changeid
1298 node = changeid
1292 rev = self.changelog.rev(changeid)
1299 rev = self.changelog.rev(changeid)
1293 except error.FilteredLookupError:
1300 except error.FilteredLookupError:
1294 changeid = hex(changeid) # for the error message
1301 changeid = hex(changeid) # for the error message
1295 raise
1302 raise
1296 except LookupError:
1303 except LookupError:
1297 # check if it might have come from damaged dirstate
1304 # check if it might have come from damaged dirstate
1298 #
1305 #
1299 # XXX we could avoid the unfiltered if we had a recognizable
1306 # XXX we could avoid the unfiltered if we had a recognizable
1300 # exception for filtered changeset access
1307 # exception for filtered changeset access
1301 if (self.local()
1308 if (self.local()
1302 and changeid in self.unfiltered().dirstate.parents()):
1309 and changeid in self.unfiltered().dirstate.parents()):
1303 msg = _("working directory has unknown parent '%s'!")
1310 msg = _("working directory has unknown parent '%s'!")
1304 raise error.Abort(msg % short(changeid))
1311 raise error.Abort(msg % short(changeid))
1305 changeid = hex(changeid) # for the error message
1312 changeid = hex(changeid) # for the error message
1306 raise
1313 raise
1307
1314
1308 elif len(changeid) == 40:
1315 elif len(changeid) == 40:
1309 node = bin(changeid)
1316 node = bin(changeid)
1310 rev = self.changelog.rev(node)
1317 rev = self.changelog.rev(node)
1311 else:
1318 else:
1312 raise error.ProgrammingError(
1319 raise error.ProgrammingError(
1313 "unsupported changeid '%s' of type %s" %
1320 "unsupported changeid '%s' of type %s" %
1314 (changeid, type(changeid)))
1321 (changeid, type(changeid)))
1315
1322
1316 return context.changectx(self, rev, node)
1323 return context.changectx(self, rev, node)
1317
1324
1318 except (error.FilteredIndexError, error.FilteredLookupError):
1325 except (error.FilteredIndexError, error.FilteredLookupError):
1319 raise error.FilteredRepoLookupError(_("filtered revision '%s'")
1326 raise error.FilteredRepoLookupError(_("filtered revision '%s'")
1320 % pycompat.bytestr(changeid))
1327 % pycompat.bytestr(changeid))
1321 except (IndexError, LookupError):
1328 except (IndexError, LookupError):
1322 raise error.RepoLookupError(
1329 raise error.RepoLookupError(
1323 _("unknown revision '%s'") % pycompat.bytestr(changeid))
1330 _("unknown revision '%s'") % pycompat.bytestr(changeid))
1324 except error.WdirUnsupported:
1331 except error.WdirUnsupported:
1325 return context.workingctx(self)
1332 return context.workingctx(self)
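
    # Example lookups through __getitem__ (an illustrative sketch; ``repo``
    # is assumed to be an existing repository object):
    #
    #   repo[None]      # workingctx for the working directory
    #   repo[0]         # changectx for revision 0
    #   repo['tip']     # changectx for the tip changeset
    #   repo['.']       # changectx for the working directory parent
    #   repo[node]      # changectx for a 20-byte binary node
    #   repo[hexnode]   # changectx for a 40-character hex node
    #   repo[0:3]       # list of changectxs, skipping filtered revisions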

    def __contains__(self, changeid):
        """True if the given changeid exists

        error.AmbiguousPrefixLookupError is raised if an ambiguous node is
        specified.
        """
        try:
            self[changeid]
            return True
        except error.RepoLookupError:
            return False

    def __nonzero__(self):
        return True

    __bool__ = __nonzero__

    def __len__(self):
        # no need to pay the cost of repoview.changelog
        unfi = self.unfiltered()
        return len(unfi.changelog)

    def __iter__(self):
        return iter(self.changelog)

    def revs(self, expr, *args):
        '''Find revisions matching a revset.

        The revset is specified as a string ``expr`` that may contain
        %-formatting to escape certain types. See ``revsetlang.formatspec``.

        Revset aliases from the configuration are not expanded. To expand
        user aliases, consider calling ``scmutil.revrange()`` or
        ``repo.anyrevs([expr], user=True)``.

        Returns a revset.abstractsmartset, which is a list-like interface
        that contains integer revisions.
        '''
        tree = revsetlang.spectree(expr, *args)
        return revset.makematcher(tree)(self)

    def set(self, expr, *args):
        '''Find revisions matching a revset and emit changectx instances.

        This is a convenience wrapper around ``revs()`` that iterates the
        result and is a generator of changectx instances.

        Revset aliases from the configuration are not expanded. To expand
        user aliases, consider calling ``scmutil.revrange()``.
        '''
        for r in self.revs(expr, *args):
            yield self[r]
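
    # Usage sketch for revs()/set() (illustrative; ``repo`` is assumed to be
    # an existing repository object and ``revs`` a list of integers):
    #
    #   for r in repo.revs('%d::%d', 0, 5):       # integer revisions
    #       ...
    #   for ctx in repo.set('heads(%ld)', revs):  # changectx instances
    #       ...
    #
    # The %-escapes are handled by revsetlang.formatspec, so caller-supplied
    # values are safely quoted into the revset expression.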

    def anyrevs(self, specs, user=False, localalias=None):
        '''Find revisions matching one of the given revsets.

        Revset aliases from the configuration are not expanded by default. To
        expand user aliases, specify ``user=True``. To provide some local
        definitions overriding user aliases, set ``localalias`` to
        ``{name: definitionstring}``.
        '''
        if user:
            m = revset.matchany(self.ui, specs,
                                lookup=revset.lookupfn(self),
                                localalias=localalias)
        else:
            m = revset.matchany(None, specs, localalias=localalias)
        return m(self)
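
    # Sketch of anyrevs() with a local alias override (illustrative only;
    # the alias name and definition are made up):
    #
    #   revs = repo.anyrevs(['mybase::tip'], user=True,
    #                       localalias={'mybase': 'ancestor(foo, bar)'})
    #
    # With user=True, aliases from the user's configuration are expanded as
    # well, with ``localalias`` entries taking precedence.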

    def url(self):
        return 'file:' + self.root

    def hook(self, name, throw=False, **args):
        """Call a hook, passing this repo instance.

        This is a convenience method to aid invoking hooks. Extensions likely
        won't call this unless they have registered a custom hook or are
        replacing code that is expected to call a hook.
        """
        return hook.hook(self.ui, self, name, throw, **args)

    @filteredpropertycache
    def _tagscache(self):
        '''Returns a tagscache object that contains various tags-related
        caches.'''

        # This simplifies its cache management by having one decorated
        # function (this one) and the rest simply fetch things from it.
        class tagscache(object):
            def __init__(self):
                # These two define the set of tags for this repository. tags
                # maps tag name to node; tagtypes maps tag name to 'global' or
                # 'local'. (Global tags are defined by .hgtags across all
                # heads, and local tags are defined in .hg/localtags.)
                # They constitute the in-memory cache of tags.
                self.tags = self.tagtypes = None

                self.nodetagscache = self.tagslist = None

        cache = tagscache()
        cache.tags, cache.tagtypes = self._findtags()

        return cache

    def tags(self):
        '''return a mapping of tag to node'''
        t = {}
        if self.changelog.filteredrevs:
            tags, tt = self._findtags()
        else:
            tags = self._tagscache.tags
        rev = self.changelog.rev
        for k, v in tags.iteritems():
            try:
                # ignore tags to unknown nodes
                rev(v)
                t[k] = v
            except (error.LookupError, ValueError):
                pass
        return t

    def _findtags(self):
        '''Do the hard work of finding tags. Return a pair of dicts
        (tags, tagtypes) where tags maps tag name to node, and tagtypes
        maps tag name to a string like \'global\' or \'local\'.
        Subclasses or extensions are free to add their own tags, but
        should be aware that the returned dicts will be retained for the
        duration of the localrepo object.'''

        # XXX what tagtype should subclasses/extensions use? Currently
        # mq and bookmarks add tags, but do not set the tagtype at all.
        # Should each extension invent its own tag type? Should there
        # be one tagtype for all such "virtual" tags? Or is the status
        # quo fine?

        # map tag name to (node, hist)
        alltags = tagsmod.findglobaltags(self.ui, self)
        # map tag name to tag type
        tagtypes = dict((tag, 'global') for tag in alltags)

        tagsmod.readlocaltags(self.ui, self, alltags, tagtypes)

        # Build the return dicts. Have to re-encode tag names because
        # the tags module always uses UTF-8 (in order not to lose info
        # writing to the cache), but the rest of Mercurial wants them in
        # local encoding.
        tags = {}
        for (name, (node, hist)) in alltags.iteritems():
            if node != nullid:
                tags[encoding.tolocal(name)] = node
        tags['tip'] = self.changelog.tip()
        tagtypes = dict([(encoding.tolocal(name), value)
                         for (name, value) in tagtypes.iteritems()])
        return (tags, tagtypes)

    def tagtype(self, tagname):
        '''
        return the type of the given tag. result can be:

        'local'  : a local tag
        'global' : a global tag
        None     : tag does not exist
        '''

        return self._tagscache.tagtypes.get(tagname)
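
    # Quick sketch of the tag query API above (illustrative; ``repo`` is
    # assumed to exist and to have a global tag '1.0'):
    #
    #   repo.tags()['1.0']       # -> binary node the tag points to
    #   repo.tagtype('1.0')      # -> 'global'
    #   repo.tagtype('scratch')  # -> 'local' if defined in .hg/localtags
    #   repo.tagtype('nope')     # -> None, the tag does not exist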

    def tagslist(self):
        '''return a list of tags ordered by revision'''
        if not self._tagscache.tagslist:
            l = []
            for t, n in self.tags().iteritems():
                l.append((self.changelog.rev(n), t, n))
            self._tagscache.tagslist = [(t, n) for r, t, n in sorted(l)]

        return self._tagscache.tagslist

    def nodetags(self, node):
        '''return the tags associated with a node'''
        if not self._tagscache.nodetagscache:
            nodetagscache = {}
            for t, n in self._tagscache.tags.iteritems():
                nodetagscache.setdefault(n, []).append(t)
            for tags in nodetagscache.itervalues():
                tags.sort()
            self._tagscache.nodetagscache = nodetagscache
        return self._tagscache.nodetagscache.get(node, [])

    def nodebookmarks(self, node):
        """return the list of bookmarks pointing to the specified node"""
        return self._bookmarks.names(node)

    def branchmap(self):
        '''returns a dictionary {branch: [branchheads]} with branchheads
        ordered by increasing revision number'''
        branchmap.updatecache(self)
        return self._branchcaches[self.filtername]

    @unfilteredmethod
    def revbranchcache(self):
        if not self._revbranchcache:
            self._revbranchcache = branchmap.revbranchcache(self.unfiltered())
        return self._revbranchcache

    def branchtip(self, branch, ignoremissing=False):
        '''return the tip node for a given branch

        If ignoremissing is True, then this method will not raise an error.
        This is helpful for callers that only expect None for a missing branch
        (e.g. namespace).

        '''
        try:
            return self.branchmap().branchtip(branch)
        except KeyError:
            if not ignoremissing:
                raise error.RepoLookupError(_("unknown branch '%s'") % branch)
            else:
                pass
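
    # branchtip() usage sketch (illustrative): with ignoremissing=True a
    # missing branch yields None instead of raising RepoLookupError.
    #
    #   node = repo.branchtip('default')
    #   maybe = repo.branchtip('no-such-branch', ignoremissing=True)
    #   assert maybe is None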

    def lookup(self, key):
        return scmutil.revsymbol(self, key).node()

    def lookupbranch(self, key):
        if key in self.branchmap():
            return key

        return scmutil.revsymbol(self, key).branch()

    def known(self, nodes):
        cl = self.changelog
        nm = cl.nodemap
        filtered = cl.filteredrevs
        result = []
        for n in nodes:
            r = nm.get(n)
            resp = not (r is None or r in filtered)
            result.append(resp)
        return result

    def local(self):
        return self

    def publishing(self):
        # it's safe (and desirable) to trust the publish flag unconditionally
        # so that we don't finalize changes shared between users via ssh or nfs
        return self.ui.configbool('phases', 'publish', untrusted=True)

    def cancopy(self):
        # so statichttprepo's override of local() works
        if not self.local():
            return False
        if not self.publishing():
            return True
        # if publishing we can't copy if there is filtered content
        return not self.filtered('visible').changelog.filteredrevs

    def shared(self):
        '''the type of shared repository (None if not shared)'''
        if self.sharedpath != self.path:
            return 'store'
        return None

    def wjoin(self, f, *insidef):
        return self.vfs.reljoin(self.root, f, *insidef)

    def setparents(self, p1, p2=nullid):
        with self.dirstate.parentchange():
            copies = self.dirstate.setparents(p1, p2)
            pctx = self[p1]
            if copies:
                # Adjust copy records; the dirstate cannot do it, as it
                # requires access to the parents' manifests. Preserve them
                # only for entries added to the first parent.
                for f in copies:
                    if f not in pctx and copies[f] in pctx:
                        self.dirstate.copy(copies[f], f)
            if p2 == nullid:
                for f, s in sorted(self.dirstate.copies().items()):
                    if f not in pctx and s not in pctx:
                        self.dirstate.copy(None, f)
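
    # Why setparents() adjusts copies (illustrative walk-through with made-up
    # file names): suppose 'b' is recorded as copied from 'a', and the new p1
    # contains 'a' but not 'b'. The copy source lives in p1, so the record is
    # kept via dirstate.copy('a', 'b'). If neither side is in p1 and p2 is
    # being dropped (p2 == nullid), the stale record is cleared with
    # dirstate.copy(None, 'b').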

    def filectx(self, path, changeid=None, fileid=None, changectx=None):
        """changeid must be a changeset revision, if specified.
        fileid can be a file revision or node."""
        return context.filectx(self, path, changeid, fileid,
                               changectx=changectx)

    def getcwd(self):
        return self.dirstate.getcwd()

    def pathto(self, f, cwd=None):
        return self.dirstate.pathto(f, cwd)

    def _loadfilter(self, filter):
        if filter not in self._filterpats:
            l = []
            for pat, cmd in self.ui.configitems(filter):
                if cmd == '!':
                    continue
                mf = matchmod.match(self.root, '', [pat])
                fn = None
                params = cmd
                for name, filterfn in self._datafilters.iteritems():
                    if cmd.startswith(name):
                        fn = filterfn
                        params = cmd[len(name):].lstrip()
                        break
                if not fn:
                    fn = lambda s, c, **kwargs: procutil.filter(s, c)
                # Wrap old filters not supporting keyword arguments
                if not pycompat.getargspec(fn)[2]:
                    oldfn = fn
                    fn = lambda s, c, **kwargs: oldfn(s, c)
                l.append((mf, fn, params))
            self._filterpats[filter] = l
        return self._filterpats[filter]
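
    # The [encode]/[decode] sections that feed _loadfilter() look like this
    # in hgrc (an illustrative sketch; the patterns and commands are made up):
    #
    #   [encode]
    #   *.txt = dos2unix      # external command run via procutil.filter
    #   [decode]
    #   *.txt = unix2dos
    #
    # A command prefixed by a name registered with adddatafilter() (e.g. a
    # hypothetical 'mycleaner:') is dispatched to that in-process filter
    # function instead, with the rest of the string passed as parameters.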

    def _filter(self, filterpats, filename, data):
        for mf, fn, cmd in filterpats:
            if mf(filename):
                self.ui.debug("filtering %s through %s\n" % (filename, cmd))
                data = fn(data, cmd, ui=self.ui, repo=self, filename=filename)
                break

        return data

    @unfilteredpropertycache
    def _encodefilterpats(self):
        return self._loadfilter('encode')

    @unfilteredpropertycache
    def _decodefilterpats(self):
        return self._loadfilter('decode')

    def adddatafilter(self, name, filter):
        self._datafilters[name] = filter

    def wread(self, filename):
        if self.wvfs.islink(filename):
            data = self.wvfs.readlink(filename)
        else:
            data = self.wvfs.read(filename)
        return self._filter(self._encodefilterpats, filename, data)

    def wwrite(self, filename, data, flags, backgroundclose=False, **kwargs):
        """write ``data`` into ``filename`` in the working directory

        This returns the length of the written (maybe decoded) data.
        """
        data = self._filter(self._decodefilterpats, filename, data)
        if 'l' in flags:
            self.wvfs.symlink(data, filename)
        else:
            self.wvfs.write(filename, data, backgroundclose=backgroundclose,
                            **kwargs)
            if 'x' in flags:
                self.wvfs.setflags(filename, False, True)
            else:
                self.wvfs.setflags(filename, False, False)
        return len(data)

    def wwritedata(self, filename, data):
        return self._filter(self._decodefilterpats, filename, data)

    def currenttransaction(self):
        """return the current transaction or None if none exists"""
        if self._transref:
            tr = self._transref()
        else:
            tr = None

        if tr and tr.running():
            return tr
        return None

    def transaction(self, desc, report=None):
        if (self.ui.configbool('devel', 'all-warnings')
                or self.ui.configbool('devel', 'check-locks')):
            if self._currentlock(self._lockref) is None:
                raise error.ProgrammingError('transaction requires locking')
        tr = self.currenttransaction()
        if tr is not None:
            return tr.nest(name=desc)

        # abort here if the journal already exists
        if self.svfs.exists("journal"):
            raise error.RepoError(
                _("abandoned transaction found"),
                hint=_("run 'hg recover' to clean up transaction"))

        idbase = "%.40f#%f" % (random.random(), time.time())
        ha = hex(hashlib.sha1(idbase).digest())
        txnid = 'TXN:' + ha
        self.hook('pretxnopen', throw=True, txnname=desc, txnid=txnid)

        self._writejournal(desc)
        renames = [(vfs, x, undoname(x)) for vfs, x in self._journalfiles()]
        if report:
            rp = report
        else:
            rp = self.ui.warn
        vfsmap = {'plain': self.vfs, 'store': self.svfs} # root of .hg/
        # we must avoid cyclic reference between repo and transaction.
        reporef = weakref.ref(self)
        # Code to track tag movement
        #
        # Since tags are all handled as file content, it is actually quite hard
        # to track these movements from a code perspective. So we fall back to
        # tracking at the repository level. One could envision tracking changes
        # to the '.hgtags' file through changegroup apply, but that fails to
        # cope with cases where a transaction exposes new heads without a
        # changegroup being involved (eg: phase movement).
        #
        # For now, we gate the feature behind a flag since it likely comes
        # with performance impacts. The current code runs more often than
        # needed and does not use caches as much as it could. The current focus
        # is on the behavior of the feature so we disable it by default. The
        # flag will be removed when we are happy with the performance impact.
        #
        # Once this feature is no longer experimental, move the following
        # documentation to the appropriate help section:
        #
        # The ``HG_TAG_MOVED`` variable will be set if the transaction touched
        # tags (new or changed or deleted tags). In addition, the details of
        # these changes are made available in a file at:
        # ``REPOROOT/.hg/changes/tags.changes``.
        # Make sure you check for HG_TAG_MOVED before reading that file as it
        # might exist from a previous transaction even if no tag was touched
        # in this one. Changes are recorded in a line-based format::
        #
        # <action> <hex-node> <tag-name>\n
        #
        # Actions are defined as follows:
        # "-R": tag is removed,
        # "+A": tag is added,
        # "-M": tag is moved (old value),
        # "+M": tag is moved (new value),
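        #
        # An illustrative consumer (made-up hook name, not shipped with
        # Mercurial) could be configured as:
        #
        # [hooks]
        # txnclose.tagreport = sh -c 'test -z "$HG_TAG_MOVED" || cat .hg/changes/tags.changes'
        #
        # relying on shell hooks being run from the repository root.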
        tracktags = lambda x: None
        # experimental config: experimental.hook-track-tags
        shouldtracktags = self.ui.configbool('experimental', 'hook-track-tags')
        if desc != 'strip' and shouldtracktags:
            oldheads = self.changelog.headrevs()
            def tracktags(tr2):
                repo = reporef()
                oldfnodes = tagsmod.fnoderevs(repo.ui, repo, oldheads)
                newheads = repo.changelog.headrevs()
                newfnodes = tagsmod.fnoderevs(repo.ui, repo, newheads)
                # notes: we compare lists here.
                # As we do it only once, building a set would not be cheaper.
                changes = tagsmod.difftags(repo.ui, repo, oldfnodes, newfnodes)
                if changes:
                    tr2.hookargs['tag_moved'] = '1'
                    with repo.vfs('changes/tags.changes', 'w',
                                  atomictemp=True) as changesfile:
                        # note: we do not register the file to the transaction
                        # because we need it to still exist when the
                        # transaction is closed (for txnclose hooks)
                        tagsmod.writediff(changesfile, changes)
        def validate(tr2):
            """will run pre-closing hooks"""
            # XXX the transaction API is a bit lacking here so we take a hacky
            # path for now
            #
            # We cannot add this as a "pending" hook since the 'tr.hookargs'
            # dict is copied before these run. In addition we need the data
            # available to in-memory hooks too.
            #
            # Moreover, we also need to make sure this runs before txnclose
            # hooks and there is no "pending" mechanism that would execute
            # logic only if hooks are about to run.
            #
            # Fixing this limitation of the transaction is also needed to track
            # other families of changes (bookmarks, phases, obsolescence).
            #
            # This will have to be fixed before we remove the experimental
            # gating.
            tracktags(tr2)
            repo = reporef()
            if repo.ui.configbool('experimental', 'single-head-per-branch'):
                scmutil.enforcesinglehead(repo, tr2, desc)
            if hook.hashook(repo.ui, 'pretxnclose-bookmark'):
                for name, (old, new) in sorted(tr.changes['bookmarks'].items()):
                    args = tr.hookargs.copy()
                    args.update(bookmarks.preparehookargs(name, old, new))
                    repo.hook('pretxnclose-bookmark', throw=True,
                              txnname=desc,
                              **pycompat.strkwargs(args))
            if hook.hashook(repo.ui, 'pretxnclose-phase'):
                cl = repo.unfiltered().changelog
                for rev, (old, new) in tr.changes['phases'].items():
                    args = tr.hookargs.copy()
                    node = hex(cl.node(rev))
                    args.update(phases.preparehookargs(node, old, new))
                    repo.hook('pretxnclose-phase', throw=True, txnname=desc,
                              **pycompat.strkwargs(args))

            repo.hook('pretxnclose', throw=True,
                      txnname=desc, **pycompat.strkwargs(tr.hookargs))
        def releasefn(tr, success):
            repo = reporef()
            if success:
                # this should be explicitly invoked here, because
                # in-memory changes aren't written out at closing
                # transaction, if tr.addfilegenerator (via
                # dirstate.write or so) isn't invoked while
                # transaction running
                repo.dirstate.write(None)
            else:
                # discard all changes (including ones already written
                # out) in this transaction
                narrowspec.restorebackup(self, 'journal.narrowspec')
                narrowspec.restorewcbackup(self, 'journal.narrowspec.dirstate')
                repo.dirstate.restorebackup(None, 'journal.dirstate')

                repo.invalidate(clearfilecache=True)

        tr = transaction.transaction(rp, self.svfs, vfsmap,
                                     "journal",
                                     "undo",
                                     aftertrans(renames),
                                     self.store.createmode,
                                     validator=validate,
                                     releasefn=releasefn,
                                     checkambigfiles=_cachedfiles,
                                     name=desc)
        tr.changes['origrepolen'] = len(self)
        tr.changes['obsmarkers'] = set()
        tr.changes['phases'] = {}
        tr.changes['bookmarks'] = {}

        tr.hookargs['txnid'] = txnid
        # note: writing the fncache only during finalize means that the file is
        # outdated when running hooks. As fncache is used for streaming clones,
        # this is not expected to break anything that happens during the hooks.
        tr.addfinalize('flush-fncache', self.store.write)
        def txnclosehook(tr2):
            """To be run if transaction is successful, will schedule a hook run
            """
            # Don't reference tr2 in hook() so we don't hold a reference.
            # This reduces memory consumption when there are multiple
            # transactions per lock. This can likely go away if issue5045
            # fixes the function accumulation.
            hookargs = tr2.hookargs

            def hookfunc():
                repo = reporef()
                if hook.hashook(repo.ui, 'txnclose-bookmark'):
                    bmchanges = sorted(tr.changes['bookmarks'].items())
                    for name, (old, new) in bmchanges:
                        args = tr.hookargs.copy()
                        args.update(bookmarks.preparehookargs(name, old, new))
                        repo.hook('txnclose-bookmark', throw=False,
                                  txnname=desc, **pycompat.strkwargs(args))

                if hook.hashook(repo.ui, 'txnclose-phase'):
                    cl = repo.unfiltered().changelog
                    phasemv = sorted(tr.changes['phases'].items())
                    for rev, (old, new) in phasemv:
                        args = tr.hookargs.copy()
                        node = hex(cl.node(rev))
                        args.update(phases.preparehookargs(node, old, new))
                        repo.hook('txnclose-phase', throw=False, txnname=desc,
                                  **pycompat.strkwargs(args))

                repo.hook('txnclose', throw=False, txnname=desc,
                          **pycompat.strkwargs(hookargs))
            reporef()._afterlock(hookfunc)
        tr.addfinalize('txnclose-hook', txnclosehook)
        # Include a leading "-" to make it happen before the transaction summary
        # reports registered via scmutil.registersummarycallback() whose names
        # are 00-txnreport etc. That way, the caches will be warm when the
        # callbacks run.
        tr.addpostclose('-warm-cache', self._buildcacheupdater(tr))
        def txnaborthook(tr2):
            """To be run if transaction is aborted
            """
            reporef().hook('txnabort', throw=False, txnname=desc,
                           **pycompat.strkwargs(tr2.hookargs))
        tr.addabort('txnabort-hook', txnaborthook)
        # avoid eager cache invalidation. in-memory data should be identical
        # to stored data if transaction has no error.
        tr.addpostclose('refresh-filecachestats', self._refreshfilecachestats)
        self._transref = weakref.ref(tr)
        scmutil.registersummarycallback(self, tr, desc)
        return tr
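
    # Canonical calling pattern for transaction() (a sketch; real callers
    # usually go through higher-level helpers):
    #
    #   with repo.wlock(), repo.lock():
    #       with repo.transaction('my-operation') as tr:
    #           ...  # mutate the store; tr.close() runs on normal exit
    #   # An exception aborts the transaction and fires the txnabort hook.
    #
    # Calling transaction() while one is already running returns a nested
    # transaction via tr.nest(), as seen at the top of the method.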

    def _journalfiles(self):
        return ((self.svfs, 'journal'),
                (self.svfs, 'journal.narrowspec'),
                (self.vfs, 'journal.narrowspec.dirstate'),
                (self.vfs, 'journal.dirstate'),
                (self.vfs, 'journal.branch'),
                (self.vfs, 'journal.desc'),
                (self.vfs, 'journal.bookmarks'),
                (self.svfs, 'journal.phaseroots'))

    def undofiles(self):
        return [(vfs, undoname(x)) for vfs, x in self._journalfiles()]

    @unfilteredmethod
    def _writejournal(self, desc):
        self.dirstate.savebackup(None, 'journal.dirstate')
        narrowspec.savewcbackup(self, 'journal.narrowspec.dirstate')
        narrowspec.savebackup(self, 'journal.narrowspec')
        self.vfs.write("journal.branch",
                       encoding.fromlocal(self.dirstate.branch()))
        self.vfs.write("journal.desc",
                       "%d\n%s\n" % (len(self), desc))
        self.vfs.write("journal.bookmarks",
                       self.vfs.tryread("bookmarks"))
        self.svfs.write("journal.phaseroots",
                        self.svfs.tryread("phaseroots"))

    def recover(self):
        with self.lock():
            if self.svfs.exists("journal"):
                self.ui.status(_("rolling back interrupted transaction\n"))
                vfsmap = {'': self.svfs,
                          'plain': self.vfs,}
                transaction.rollback(self.svfs, vfsmap, "journal",
                                     self.ui.warn,
                                     checkambigfiles=_cachedfiles)
                self.invalidate()
                return True
            else:
                self.ui.warn(_("no interrupted transaction available\n"))
                return False

    def rollback(self, dryrun=False, force=False):
        wlock = lock = dsguard = None
        try:
            wlock = self.wlock()
            lock = self.lock()
            if self.svfs.exists("undo"):
                dsguard = dirstateguard.dirstateguard(self, 'rollback')

                return self._rollback(dryrun, force, dsguard)
            else:
                self.ui.warn(_("no rollback information available\n"))
                return 1
        finally:
            release(dsguard, lock, wlock)

    @unfilteredmethod # Until we get smarter cache management
    def _rollback(self, dryrun, force, dsguard):
        ui = self.ui
        try:
            args = self.vfs.read('undo.desc').splitlines()
            (oldlen, desc, detail) = (int(args[0]), args[1], None)
            if len(args) >= 3:
                detail = args[2]
            oldtip = oldlen - 1

            if detail and ui.verbose:
                msg = (_('repository tip rolled back to revision %d'
                         ' (undo %s: %s)\n')
                       % (oldtip, desc, detail))
            else:
                msg = (_('repository tip rolled back to revision %d'
                         ' (undo %s)\n')
                       % (oldtip, desc))
        except IOError:
            msg = _('rolling back unknown transaction\n')
            desc = None

        if not force and self['.'] != self['tip'] and desc == 'commit':
            raise error.Abort(
                _('rollback of last commit while not checked out '
                  'may lose data'), hint=_('use -f to force'))

        ui.status(msg)
        if dryrun:
            return 0

        parents = self.dirstate.parents()
        self.destroying()
        vfsmap = {'plain': self.vfs, '': self.svfs}
        transaction.rollback(self.svfs, vfsmap, 'undo', ui.warn,
                             checkambigfiles=_cachedfiles)
        if self.vfs.exists('undo.bookmarks'):
            self.vfs.rename('undo.bookmarks', 'bookmarks', checkambig=True)
        if self.svfs.exists('undo.phaseroots'):
            self.svfs.rename('undo.phaseroots', 'phaseroots', checkambig=True)
        self.invalidate()

        parentgone = (parents[0] not in self.changelog.nodemap or
                      parents[1] not in self.changelog.nodemap)
        if parentgone:
            # prevent dirstateguard from overwriting already restored one
            dsguard.close()

            narrowspec.restorebackup(self, 'undo.narrowspec')
            narrowspec.restorewcbackup(self, 'undo.narrowspec.dirstate')
            self.dirstate.restorebackup(None, 'undo.dirstate')
            try:
                branch = self.vfs.read('undo.branch')
                self.dirstate.setbranch(encoding.tolocal(branch))
            except IOError:
                ui.warn(_('named branch could not be reset: '
                          'current branch is still \'%s\'\n')
                        % self.dirstate.branch())

            parents = tuple([p.rev() for p in self[None].parents()])
            if len(parents) > 1:
                ui.status(_('working directory now based on '
                            'revisions %d and %d\n') % parents)
            else:
                ui.status(_('working directory now based on '
                            'revision %d\n') % parents)
            mergemod.mergestate.clean(self, self['.'].node())

        # TODO: if we know which new heads may result from this rollback, pass
        # them to destroy(), which will prevent the branchhead cache from being
        # invalidated.
        self.destroyed()
        return 0

    def _buildcacheupdater(self, newtransaction):
        """called during transaction to build the callback updating cache

        Lives on the repository to help extensions that might want to augment
        this logic. For this purpose, the created transaction is passed to the
        method.
        """
        # we must avoid cyclic reference between repo and transaction.
        reporef = weakref.ref(self)
        def updater(tr):
            repo = reporef()
            repo.updatecaches(tr)
        return updater
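
    # An extension wanting extra cache warming could wrap this method (an
    # illustrative sketch; the wrapper itself is hypothetical and assumes the
    # standard extensions.wrapfunction helper):
    #
    #   def _updater(orig, self, newtransaction):
    #       updater = orig(self, newtransaction)
    #       def wrapped(tr):
    #           updater(tr)
    #           ...  # warm extension-specific caches here
    #       return wrapped
    #   extensions.wrapfunction(localrepo.localrepository,
    #                           '_buildcacheupdater', _updater)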
2058
2065
    @unfilteredmethod
    def updatecaches(self, tr=None, full=False):
        """warm appropriate caches

        If this function is called after a transaction closed, the transaction
        will be available in the 'tr' argument. This can be used to selectively
        update caches relevant to the changes in that transaction.

        If 'full' is set, make sure all caches the function knows about have
        up-to-date data. Even the ones usually loaded more lazily.
        """
        if tr is not None and tr.hookargs.get('source') == 'strip':
            # During strip, many caches are invalid but a
            # later call to `destroyed` will refresh them.
            return

        if tr is None or tr.changes['origrepolen'] < len(self):
            # updating the unfiltered branchmap should refresh all the others
            self.ui.debug('updating the branch cache\n')
            branchmap.updatecache(self.filtered('served'))

        if full:
            rbc = self.revbranchcache()
            for r in self.changelog:
                rbc.branchinfo(r)
            rbc.write()

            # ensure the working copy parents are in the manifestfulltextcache
            for ctx in self['.'].parents():
                ctx.manifest() # accessing the manifest is enough

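    # Hedged usage sketch (assuming `repo` is an existing localrepository):
    # cache warming is normally driven by the transaction machinery via
    # _buildcacheupdater, but `hg debugupdatecaches` and maintenance code
    # can request a full warm-up directly.
    #
    #     repo.updatecaches(full=True)   # branchmap, rev-branch cache, and
    #                                    # working copy parent manifests
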
    def invalidatecaches(self):

        if r'_tagscache' in vars(self):
            # can't use delattr on proxy
            del self.__dict__[r'_tagscache']

        self.unfiltered()._branchcaches.clear()
        self.invalidatevolatilesets()
        self._sparsesignaturecache.clear()

    def invalidatevolatilesets(self):
        self.filteredrevcache.clear()
        obsolete.clearobscaches(self)

    def invalidatedirstate(self):
        '''Invalidates the dirstate, causing the next call to dirstate
        to check if it was modified since the last time it was read,
        rereading it if it has.

        This differs from dirstate.invalidate() in that it doesn't always
        reread the dirstate. Use dirstate.invalidate() if you want to
        explicitly read the dirstate again (i.e. restoring it to a previous
        known good state).'''
        if hasunfilteredcache(self, r'dirstate'):
            for k in self.dirstate._filecache:
                try:
                    delattr(self.dirstate, k)
                except AttributeError:
                    pass
            delattr(self.unfiltered(), r'dirstate')

    def invalidate(self, clearfilecache=False):
        '''Invalidates both store and non-store parts other than dirstate

        If a transaction is running, invalidation of store is omitted,
        because discarding in-memory changes might cause inconsistency
        (e.g. an incomplete fncache causes unintentional failure, but a
        redundant one doesn't).
        '''
        unfiltered = self.unfiltered() # all file caches are stored unfiltered
        for k in list(self._filecache.keys()):
            # dirstate is invalidated separately in invalidatedirstate()
            if k == 'dirstate':
                continue
            if (k == 'changelog' and
                self.currenttransaction() and
                self.changelog._delayed):
                # The changelog object may store unwritten revisions. We don't
                # want to lose them.
                # TODO: Solve the problem instead of working around it.
                continue

            if clearfilecache:
                del self._filecache[k]
            try:
                delattr(unfiltered, k)
            except AttributeError:
                pass
        self.invalidatecaches()
        if not self.currenttransaction():
            # TODO: Changing contents of store outside transaction
            # causes inconsistency. We should make in-memory store
            # changes detectable, and abort if changed.
            self.store.invalidatecaches()

    def invalidateall(self):
        '''Fully invalidates both store and non-store parts, causing the
        subsequent operation to reread any outside changes.'''
        # extensions should hook this to invalidate their caches
        self.invalidate()
        self.invalidatedirstate()

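    # Hedged sketch of when an extension might force a full reread
    # (assuming `repo` is a localrepository and another process may have
    # changed both .hg/store and the non-store parts behind our back):
    #
    #     repo.invalidateall()    # drop store and non-store caches
    #     repo['tip']             # the next access rereads from disk
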
    @unfilteredmethod
    def _refreshfilecachestats(self, tr):
        """Reload stats of cached files so that they are flagged as valid"""
        for k, ce in self._filecache.items():
            k = pycompat.sysstr(k)
            if k == r'dirstate' or k not in self.__dict__:
                continue
            ce.refresh()

    def _lock(self, vfs, lockname, wait, releasefn, acquirefn, desc,
              inheritchecker=None, parentenvvar=None):
        parentlock = None
        # the contents of parentenvvar are used by the underlying lock to
        # determine whether it can be inherited
        if parentenvvar is not None:
            parentlock = encoding.environ.get(parentenvvar)

        timeout = 0
        warntimeout = 0
        if wait:
            timeout = self.ui.configint("ui", "timeout")
            warntimeout = self.ui.configint("ui", "timeout.warn")
        # internal config: ui.signal-safe-lock
        signalsafe = self.ui.configbool('ui', 'signal-safe-lock')

        l = lockmod.trylock(self.ui, vfs, lockname, timeout, warntimeout,
                            releasefn=releasefn,
                            acquirefn=acquirefn, desc=desc,
                            inheritchecker=inheritchecker,
                            parentlock=parentlock,
                            signalsafe=signalsafe)
        return l

    def _afterlock(self, callback):
        """add a callback to be run when the repository is fully unlocked

        The callback will be executed when the outermost lock is released
        (with wlock being higher level than 'lock')."""
        for ref in (self._wlockref, self._lockref):
            l = ref and ref()
            if l and l.held:
                l.postrelease.append(callback)
                break
        else: # no lock has been found.
            callback()

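    # Hedged sketch of registering a post-unlock callback (assuming `repo`
    # currently holds wlock or lock; with no lock held the callback fires
    # immediately, per the `else` branch above):
    #
    #     def notify():
    #         repo.ui.status('all repository locks released\n')
    #     repo._afterlock(notify)
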
    def lock(self, wait=True):
        '''Lock the repository store (.hg/store) and return a weak reference
        to the lock. Use this before modifying the store (e.g. committing or
        stripping). If you are opening a transaction, get a lock as well.

        If both 'lock' and 'wlock' must be acquired, ensure you always acquire
        'wlock' first to avoid a dead-lock hazard.'''
        l = self._currentlock(self._lockref)
        if l is not None:
            l.lock()
            return l

        l = self._lock(self.svfs, "lock", wait, None,
                       self.invalidate, _('repository %s') % self.origroot)
        self._lockref = weakref.ref(l)
        return l

    def _wlockchecktransaction(self):
        if self.currenttransaction() is not None:
            raise error.LockInheritanceContractViolation(
                'wlock cannot be inherited in the middle of a transaction')

    def wlock(self, wait=True):
        '''Lock the non-store parts of the repository (everything under
        .hg except .hg/store) and return a weak reference to the lock.

        Use this before modifying files in .hg.

        If both 'lock' and 'wlock' must be acquired, ensure you always acquire
        'wlock' first to avoid a dead-lock hazard.'''
        l = self._wlockref and self._wlockref()
        if l is not None and l.held:
            l.lock()
            return l

        # We do not need to check for non-waiting lock acquisition. Such
        # acquisitions would not cause a dead-lock as they would just fail.
        if wait and (self.ui.configbool('devel', 'all-warnings')
                     or self.ui.configbool('devel', 'check-locks')):
            if self._currentlock(self._lockref) is not None:
                self.ui.develwarn('"wlock" acquired after "lock"')

        def unlock():
            if self.dirstate.pendingparentchange():
                self.dirstate.invalidate()
            else:
                self.dirstate.write(None)

            self._filecache['dirstate'].refresh()

        l = self._lock(self.vfs, "wlock", wait, unlock,
                       self.invalidatedirstate, _('working directory of %s') %
                       self.origroot,
                       inheritchecker=self._wlockchecktransaction,
                       parentenvvar='HG_WLOCK_LOCKER')
        self._wlockref = weakref.ref(l)
        return l

    def _currentlock(self, lockref):
        """Returns the lock if it's held, or None if it's not."""
        if lockref is None:
            return None
        l = lockref()
        if l is None or not l.held:
            return None
        return l

    def currentwlock(self):
        """Returns the wlock if it's held, or None if it's not."""
        return self._currentlock(self._wlockref)

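    # Hedged sketch of the lock-ordering contract documented above
    # (assuming `repo` is a localrepository): take wlock before lock and
    # release in reverse order, mirroring what commit() below does.
    #
    #     wlock = lock = None
    #     try:
    #         wlock = repo.wlock()    # non-store lock first
    #         lock = repo.lock()      # then the store lock
    #         # ... mutate the store and the working copy ...
    #     finally:
    #         lockmod.release(lock, wlock)
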
    def _filecommit(self, fctx, manifest1, manifest2, linkrev, tr, changelist):
        """
        commit an individual file as part of a larger transaction
        """

        fname = fctx.path()
        fparent1 = manifest1.get(fname, nullid)
        fparent2 = manifest2.get(fname, nullid)
        if isinstance(fctx, context.filectx):
            node = fctx.filenode()
            if node in [fparent1, fparent2]:
                self.ui.debug('reusing %s filelog entry\n' % fname)
                if manifest1.flags(fname) != fctx.flags():
                    changelist.append(fname)
                return node

        flog = self.file(fname)
        meta = {}
        copy = fctx.renamed()
        if copy and copy[0] != fname:
            # Mark the new revision of this file as a copy of another
            # file. This copy data will effectively act as a parent
            # of this new revision. If this is a merge, the first
            # parent will be the nullid (meaning "look up the copy data")
            # and the second one will be the other parent. For example:
            #
            # 0 --- 1 --- 3   rev1 changes file foo
            #   \       /     rev2 renames foo to bar and changes it
            #    \- 2 -/      rev3 should have bar with all changes and
            #                 should record that bar descends from
            #                 bar in rev2 and foo in rev1
            #
            # this allows this merge to succeed:
            #
            # 0 --- 1 --- 3   rev4 reverts the content change from rev2
            #   \       /     merging rev3 and rev4 should use bar@rev2
            #    \- 2 --- 4   as the merge base
            #

            cfname = copy[0]
            crev = manifest1.get(cfname)
            newfparent = fparent2

            if manifest2: # branch merge
                if fparent2 == nullid or crev is None: # copied on remote side
                    if cfname in manifest2:
                        crev = manifest2[cfname]
                        newfparent = fparent1

            # Here, we used to search backwards through history to try to find
            # where the file copy came from if the source of a copy was not in
            # the parent directory. However, this doesn't actually make sense to
            # do (what does a copy from something not in your working copy even
            # mean?) and it causes bugs (eg, issue4476). Instead, we will warn
            # the user that copy information was dropped, so if they didn't
            # expect this outcome it can be fixed, but this is the correct
            # behavior in this circumstance.

            if crev:
                self.ui.debug(" %s: copy %s:%s\n" % (fname, cfname, hex(crev)))
                meta["copy"] = cfname
                meta["copyrev"] = hex(crev)
                fparent1, fparent2 = nullid, newfparent
            else:
                self.ui.warn(_("warning: can't find ancestor for '%s' "
                               "copied from '%s'!\n") % (fname, cfname))

        elif fparent1 == nullid:
            fparent1, fparent2 = fparent2, nullid
        elif fparent2 != nullid:
            # is one parent an ancestor of the other?
            fparentancestors = flog.commonancestorsheads(fparent1, fparent2)
            if fparent1 in fparentancestors:
                fparent1, fparent2 = fparent2, nullid
            elif fparent2 in fparentancestors:
                fparent2 = nullid

        # is the file changed?
        text = fctx.data()
        if fparent2 != nullid or flog.cmp(fparent1, text) or meta:
            changelist.append(fname)
            return flog.add(text, meta, tr, linkrev, fparent1, fparent2)
        # are just the flags changed during merge?
        elif fname in manifest1 and manifest1.flags(fname) != fctx.flags():
            changelist.append(fname)

        return fparent1

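    # For reference, a sketch of the copy metadata recorded above: the new
    # filelog revision carries a small key/value header (values below are
    # illustrative) and its first parent is set to nullid so readers know
    # to consult the copy data instead.
    #
    #     meta = {
    #         "copy": "foo",          # path the file was copied from
    #         "copyrev": hex(crev),   # filenode of the copy source
    #     }
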
    def checkcommitpatterns(self, wctx, vdirs, match, status, fail):
        """check for commit arguments that aren't committable"""
        if match.isexact() or match.prefix():
            matched = set(status.modified + status.added + status.removed)

            for f in match.files():
                f = self.dirstate.normalize(f)
                if f == '.' or f in matched or f in wctx.substate:
                    continue
                if f in status.deleted:
                    fail(f, _('file not found!'))
                if f in vdirs: # visited directory
                    d = f + '/'
                    for mf in matched:
                        if mf.startswith(d):
                            break
                    else:
                        fail(f, _("no match under directory!"))
                elif f not in self.dirstate:
                    fail(f, _("file not tracked!"))

    @unfilteredmethod
    def commit(self, text="", user=None, date=None, match=None, force=False,
               editor=False, extra=None):
        """Add a new revision to current repository.

        Revision information is gathered from the working directory,
        match can be used to filter the committed files. If editor is
        supplied, it is called to get a commit message.
        """
        if extra is None:
            extra = {}

        def fail(f, msg):
            raise error.Abort('%s: %s' % (f, msg))

        if not match:
            match = matchmod.always(self.root, '')

        if not force:
            vdirs = []
            match.explicitdir = vdirs.append
            match.bad = fail

        wlock = lock = tr = None
        try:
            wlock = self.wlock()
            lock = self.lock() # for recent changelog (see issue4368)

            wctx = self[None]
            merge = len(wctx.parents()) > 1

            if not force and merge and not match.always():
                raise error.Abort(_('cannot partially commit a merge '
                                    '(do not specify files or patterns)'))

            status = self.status(match=match, clean=force)
            if force:
                status.modified.extend(status.clean) # mq may commit clean files

            # check subrepos
            subs, commitsubs, newstate = subrepoutil.precommit(
                self.ui, wctx, status, match, force=force)

            # make sure all explicit patterns are matched
            if not force:
                self.checkcommitpatterns(wctx, vdirs, match, status, fail)

            cctx = context.workingcommitctx(self, status,
                                            text, user, date, extra)

            # internal config: ui.allowemptycommit
            allowemptycommit = (wctx.branch() != wctx.p1().branch()
                                or extra.get('close') or merge or cctx.files()
                                or self.ui.configbool('ui', 'allowemptycommit'))
            if not allowemptycommit:
                return None

            if merge and cctx.deleted():
                raise error.Abort(_("cannot commit merge with missing files"))

            ms = mergemod.mergestate.read(self)
            mergeutil.checkunresolved(ms)

            if editor:
                cctx._text = editor(self, cctx, subs)
            edited = (text != cctx._text)

            # Save commit message in case this transaction gets rolled back
            # (e.g. by a pretxncommit hook). Leave the content alone on
            # the assumption that the user will use the same editor again.
            msgfn = self.savecommitmessage(cctx._text)

            # commit subs and write new state
            if subs:
                for s in sorted(commitsubs):
                    sub = wctx.sub(s)
                    self.ui.status(_('committing subrepository %s\n') %
                                   subrepoutil.subrelpath(sub))
                    sr = sub.commit(cctx._text, user, date)
                    newstate[s] = (newstate[s][0], sr)
                subrepoutil.writestate(self, newstate)

            p1, p2 = self.dirstate.parents()
            hookp1, hookp2 = hex(p1), (p2 != nullid and hex(p2) or '')
            try:
                self.hook("precommit", throw=True, parent1=hookp1,
                          parent2=hookp2)
                tr = self.transaction('commit')
                ret = self.commitctx(cctx, True)
            except: # re-raises
                if edited:
                    self.ui.write(
                        _('note: commit message saved in %s\n') % msgfn)
                raise
            # update bookmarks, dirstate and mergestate
            bookmarks.update(self, [p1, p2], ret)
            cctx.markcommitted(ret)
            ms.reset()
            tr.close()

        finally:
            lockmod.release(tr, lock, wlock)

        def commithook(node=hex(ret), parent1=hookp1, parent2=hookp2):
            # hack for commands that use a temporary commit (eg: histedit):
            # the temporary commit may have been stripped by the time the
            # hook runs
            if self.changelog.hasnode(ret):
                self.hook("commit", node=node, parent1=parent1,
                          parent2=parent2)
        self._afterlock(commithook)
        return ret

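    # Hedged usage sketch (assuming `repo` is a localrepository with local
    # modifications in its working directory):
    #
    #     node = repo.commit(text='fix branch name encoding',
    #                        user='alice <alice@example.org>')
    #     if node is None:
    #         repo.ui.status('nothing changed\n')
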
    @unfilteredmethod
    def commitctx(self, ctx, error=False):
        """Add a new revision to current repository.
        Revision information is passed via the context argument.

        ctx.files() should list all files involved in this commit, i.e.
        modified/added/removed files. On merge, it may be wider than the
        ctx.files() to be committed, since any file nodes derived directly
        from p1 or p2 are excluded from the committed ctx.files().
        """

        tr = None
        p1, p2 = ctx.p1(), ctx.p2()
        user = ctx.user()

        lock = self.lock()
        try:
            tr = self.transaction("commit")
            trp = weakref.proxy(tr)

            if ctx.manifestnode():
                # reuse an existing manifest revision
                self.ui.debug('reusing known manifest\n')
                mn = ctx.manifestnode()
                files = ctx.files()
            elif ctx.files():
                m1ctx = p1.manifestctx()
                m2ctx = p2.manifestctx()
                mctx = m1ctx.copy()

                m = mctx.read()
                m1 = m1ctx.read()
                m2 = m2ctx.read()

                # check in files
                added = []
                changed = []
                removed = list(ctx.removed())
                linkrev = len(self)
                self.ui.note(_("committing files:\n"))
                for f in sorted(ctx.modified() + ctx.added()):
                    self.ui.note(f + "\n")
                    try:
                        fctx = ctx[f]
                        if fctx is None:
                            removed.append(f)
                        else:
                            added.append(f)
                            m[f] = self._filecommit(fctx, m1, m2, linkrev,
                                                    trp, changed)
                            m.setflag(f, fctx.flags())
                    except OSError as inst:
                        self.ui.warn(_("trouble committing %s!\n") % f)
                        raise
                    except IOError as inst:
                        errcode = getattr(inst, 'errno', errno.ENOENT)
                        if error or errcode and errcode != errno.ENOENT:
                            self.ui.warn(_("trouble committing %s!\n") % f)
                        raise

                # update manifest
                removed = [f for f in sorted(removed) if f in m1 or f in m2]
                drop = [f for f in removed if f in m]
                for f in drop:
                    del m[f]
                files = changed + removed
                md = None
                if not files:
                    # if no "files" actually changed in terms of the changelog,
                    # try hard to detect unmodified manifest entry so that the
                    # exact same commit can be reproduced later on convert.
                    md = m1.diff(m, scmutil.matchfiles(self, ctx.files()))
                    if not files and md:
                        self.ui.debug('not reusing manifest (no file change in '
                                      'changelog, but manifest differs)\n')
                if files or md:
                    self.ui.note(_("committing manifest\n"))
                    # we're using narrowmatch here since it's already applied at
                    # other stages (such as dirstate.walk), so we're already
                    # ignoring things outside of narrowspec in most cases. The
                    # one case where we might have files outside the narrowspec
                    # at this point is merges, and we already error out in the
                    # case where the merge has files outside of the narrowspec,
                    # so this is safe.
                    mn = mctx.write(trp, linkrev,
                                    p1.manifestnode(), p2.manifestnode(),
                                    added, drop, match=self.narrowmatch())
                else:
                    self.ui.debug('reusing manifest from p1 (listed files '
                                  'actually unchanged)\n')
                    mn = p1.manifestnode()
            else:
                self.ui.debug('reusing manifest from p1 (no file change)\n')
                mn = p1.manifestnode()
                files = []

            # update changelog
            self.ui.note(_("committing changelog\n"))
            self.changelog.delayupdate(tr)
            n = self.changelog.add(mn, files, ctx.description(),
                                   trp, p1.node(), p2.node(),
                                   user, ctx.date(), ctx.extra().copy())
            xp1, xp2 = p1.hex(), p2 and p2.hex() or ''
            self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
                      parent2=xp2)
            # set the new commit in its proper phase
            targetphase = subrepoutil.newcommitphase(self.ui, ctx)
            if targetphase:
                # retracting the boundary does not alter parent changesets:
                # if a parent has a higher phase, the resulting phase will
                # be compliant anyway
                #
                # if minimal phase was 0 we don't need to retract anything
                phases.registernew(self, tr, targetphase, [n])
            tr.close()
            return n
        finally:
            if tr:
                tr.release()
            lock.release()

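    # The hook sequence around the two methods above is: 'precommit'
    # (before the transaction opens), 'pretxncommit' (inside the
    # transaction, can still abort it), then 'commit' (after the outermost
    # lock is released, via _afterlock). A hedged hgrc sketch, with a
    # hypothetical module and function name:
    #
    #     [hooks]
    #     pretxncommit.checkmsg = python:myhooks.checkmsg
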
    @unfilteredmethod
    def destroying(self):
        '''Inform the repository that nodes are about to be destroyed.
        Intended for use by strip and rollback, so there's a common
        place for anything that has to be done before destroying history.

        This is mostly useful for saving state that is in memory and waiting
        to be flushed when the current lock is released. Because a call to
        destroyed is imminent, the repo will be invalidated causing those
        changes to stay in memory (waiting for the next unlock), or vanish
        completely.
        '''
        # When using the same lock to commit and strip, the phasecache is left
        # dirty after committing. Then when we strip, the repo is invalidated,
        # causing those changes to disappear.
        if '_phasecache' in vars(self):
            self._phasecache.write()

    @unfilteredmethod
    def destroyed(self):
        '''Inform the repository that nodes have been destroyed.
        Intended for use by strip and rollback, so there's a common
        place for anything that has to be done after destroying history.
        '''
        # When one tries to:
        # 1) destroy nodes thus calling this method (e.g. strip)
        # 2) use phasecache somewhere (e.g. commit)
        #
        # then 2) will fail because the phasecache contains nodes that were
        # removed. We can either remove phasecache from the filecache,
        # causing it to reload next time it is accessed, or simply filter
        # the removed nodes now and write the updated cache.
        self._phasecache.filterunknown(self)
        self._phasecache.write()

        # refresh all repository caches
        self.updatecaches()

        # Ensure the persistent tag cache is updated. Doing it now
        # means that the tag cache only has to worry about destroyed
        # heads immediately after a strip/rollback. That in turn
        # guarantees that "cachetip == currenttip" (comparing both rev
        # and node) always means no nodes have been added or destroyed.

        # XXX this is suboptimal when qrefresh'ing: we strip the current
        # head, refresh the tag cache, then immediately add a new head.
        # But I think doing it this way is necessary for the "instant
        # tag cache retrieval" case to work.
        self.invalidate()

    def status(self, node1='.', node2=None, match=None,
               ignored=False, clean=False, unknown=False,
               listsubrepos=False):
        '''a convenience method that calls node1.status(node2)'''
        return self[node1].status(node2, match, ignored, clean, unknown,
                                  listsubrepos)

    def addpostdsstatus(self, ps):
        """Add a callback to run within the wlock, at the point at which status
        fixups happen.

        On status completion, callback(wctx, status) will be called with the
        wlock held, unless the dirstate has changed from underneath or the wlock
        couldn't be grabbed.

        Callbacks should not capture and use a cached copy of the dirstate --
        it might change in the meanwhile. Instead, they should access the
        dirstate via wctx.repo().dirstate.

        This list is emptied out after each status run -- extensions should
        make sure they add to this list each time dirstate.status is called.
        Extensions should also make sure they don't call this for statuses
        that don't involve the dirstate.
        """

        # The list is located here for uniqueness reasons -- it is actually
        # managed by the workingctx, but that isn't unique per-repo.
        self._postdsstatus.append(ps)

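    # Hedged sketch of a post-dirstate-status callback (fsmonitor is the
    # main in-tree user of this mechanism; the callback below is purely
    # illustrative):
    #
    #     def poststatus(wctx, status):
    #         # reach the dirstate through the context, never a cached copy
    #         ds = wctx.repo().dirstate
    #     repo.addpostdsstatus(poststatus)
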
    def postdsstatus(self):
        """Used by workingctx to get the list of post-dirstate-status hooks."""
        return self._postdsstatus

    def clearpostdsstatus(self):
        """Used by workingctx to clear post-dirstate-status hooks."""
        del self._postdsstatus[:]

    def heads(self, start=None):
        if start is None:
            cl = self.changelog
            headrevs = reversed(cl.headrevs())
            return [cl.node(rev) for rev in headrevs]

        heads = self.changelog.heads(start)
        # sort the output in rev descending order
        return sorted(heads, key=self.changelog.rev, reverse=True)

    def branchheads(self, branch=None, start=None, closed=False):
        '''return a (possibly filtered) list of heads for the given branch

        Heads are returned in topological order, from newest to oldest.
        If branch is None, use the dirstate branch.
        If start is not None, return only heads reachable from start.
        If closed is True, return heads that are marked as closed as well.
        '''
        if branch is None:
            branch = self[None].branch()
        branches = self.branchmap()
        if branch not in branches:
            return []
        # the cache returns heads ordered lowest to highest
        bheads = list(reversed(branches.branchheads(branch, closed=closed)))
        if start is not None:
            # filter out the heads that cannot be reached from startrev
            fbheads = set(self.changelog.nodesbetween([start], bheads)[2])
            bheads = [h for h in bheads if h in fbheads]
        return bheads

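    # Hedged usage sketch (assuming `repo` has a 'default' branch): heads
    # come back newest-first, matching the reversal above.
    #
    #     for node in repo.branchheads('default', closed=True):
    #         repo.ui.write('%d:%s\n' % (repo[node].rev(), short(node)))
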
    def branches(self, nodes):
        if not nodes:
            nodes = [self.changelog.tip()]
        b = []
        for n in nodes:
            t = n
            while True:
                p = self.changelog.parents(n)
                if p[1] != nullid or p[0] == nullid:
                    b.append((t, n, p[0], p[1]))
                    break
                n = p[0]
        return b

    def between(self, pairs):
        r = []

        for top, bottom in pairs:
            n, l, i = top, [], 0
            f = 1

            while n != bottom and n != nullid:
                p = self.changelog.parents(n)[0]
                if i == f:
                    l.append(n)
                    f = f * 2
                n = p
                i += 1

            r.append(l)

        return r

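    # between() walks first parents from each 'top' toward 'bottom' and
    # keeps only nodes at exponentially spaced steps, since `f` doubles
    # every time a node is recorded. Sketch for a linear history, with `i`
    # counting steps from top:
    #
    #     i    : 0 1 2 3 4 5 6 7 8 9
    #     kept : . x x . x . . . x .    (kept where i == f: 1, 2, 4, 8)
    #
    # so the result stays logarithmic in the distance between each pair.
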
    def checkpush(self, pushop):
        """Extensions can override this function if additional checks have
        to be performed before pushing, or call it if they override the push
        command.
        """

    @unfilteredpropertycache
    def prepushoutgoinghooks(self):
        """Return util.hooks consisting of a pushop with repo, remote, and
        outgoing methods, which are called before pushing changesets.
        """
        return util.hooks()

    def pushkey(self, namespace, key, old, new):
        try:
            tr = self.currenttransaction()
            hookargs = {}
            if tr is not None:
                hookargs.update(tr.hookargs)
            hookargs = pycompat.strkwargs(hookargs)
            hookargs[r'namespace'] = namespace
            hookargs[r'key'] = key
            hookargs[r'old'] = old
            hookargs[r'new'] = new
            self.hook('prepushkey', throw=True, **hookargs)
        except error.HookAbort as exc:
            self.ui.write_err(_("pushkey-abort: %s\n") % exc)
            if exc.hint:
                self.ui.write_err(_("(%s)\n") % exc.hint)
            return False
        self.ui.debug('pushing key for "%s:%s"\n' % (namespace, key))
        ret = pushkey.push(self, namespace, key, old, new)
        def runhook():
            self.hook('pushkey', namespace=namespace, key=key, old=old, new=new,
                      ret=ret)
        self._afterlock(runhook)
        return ret

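    # Hedged usage sketch (assuming `repo` is a localrepository and
    # `oldhex`/`newhex` are hex nodes): 'bookmarks' and 'phases' are the
    # common pushkey namespaces.
    #
    #     if not repo.pushkey('bookmarks', 'mybook', oldhex, newhex):
    #         repo.ui.warn('bookmark update was rejected\n')
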
    def listkeys(self, namespace):
        self.hook('prelistkeys', throw=True, namespace=namespace)
        self.ui.debug('listing keys for "%s"\n' % namespace)
        values = pushkey.list(self, namespace)
        self.hook('listkeys', namespace=namespace, values=values)
        return values

    def debugwireargs(self, one, two, three=None, four=None, five=None):
        '''used to test argument passing over the wire'''
        return "%s %s %s %s %s" % (one, two, pycompat.bytestr(three),
                                   pycompat.bytestr(four),
                                   pycompat.bytestr(five))

    def savecommitmessage(self, text):
        fp = self.vfs('last-message.txt', 'wb')
        try:
            fp.write(text)
        finally:
            fp.close()
        return self.pathto(fp.name[len(self.root) + 1:])

# used to avoid circular references so destructors work
def aftertrans(files):
    renamefiles = [tuple(t) for t in files]
    def a():
        for vfs, src, dest in renamefiles:
            # if src and dest refer to the same file, vfs.rename is a no-op,
            # leaving both src and dest on disk. delete dest to make sure
            # the rename couldn't be such a no-op.
            vfs.tryunlink(dest)
            try:
                vfs.rename(src, dest)
            except OSError: # journal file does not yet exist
                pass
    return a

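# Hedged sketch of how aftertrans is typically wired up (argument shapes
# are illustrative; the transaction machinery supplies (vfs, src, dest)
# rename triples for journal -> undo files):
#
#     renames = [(repo.svfs, 'journal', 'undo'),
#                (repo.vfs, 'journal.dirstate', 'undo.dirstate')]
#     tr = transaction.transaction(..., after=aftertrans(renames), ...)
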
2846 def undoname(fn):
2853 def undoname(fn):
2847 base, name = os.path.split(fn)
2854 base, name = os.path.split(fn)
2848 assert name.startswith('journal')
2855 assert name.startswith('journal')
2849 return os.path.join(base, name.replace('journal', 'undo', 1))
2856 return os.path.join(base, name.replace('journal', 'undo', 1))
2850
2857
2851 def instance(ui, path, create, intents=None, createopts=None):
2858 def instance(ui, path, create, intents=None, createopts=None):
2852 localpath = util.urllocalpath(path)
2859 localpath = util.urllocalpath(path)
2853 if create:
2860 if create:
2854 createrepository(ui, localpath, createopts=createopts)
2861 createrepository(ui, localpath, createopts=createopts)
2855
2862
2856 return makelocalrepository(ui, localpath, intents=intents)
2863 return makelocalrepository(ui, localpath, intents=intents)
2857
2864
2858 def islocal(path):
2865 def islocal(path):
2859 return True
2866 return True
2860
2867
2861 def defaultcreateopts(ui, createopts=None):
2868 def defaultcreateopts(ui, createopts=None):
2862 """Populate the default creation options for a repository.
2869 """Populate the default creation options for a repository.
2863
2870
2864 A dictionary of explicitly requested creation options can be passed
2871 A dictionary of explicitly requested creation options can be passed
2865 in. Missing keys will be populated.
2872 in. Missing keys will be populated.
2866 """
2873 """
2867 createopts = dict(createopts or {})
2874 createopts = dict(createopts or {})
2868
2875
2869 if 'backend' not in createopts:
2876 if 'backend' not in createopts:
2870 # experimental config: storage.new-repo-backend
2877 # experimental config: storage.new-repo-backend
2871 createopts['backend'] = ui.config('storage', 'new-repo-backend')
2878 createopts['backend'] = ui.config('storage', 'new-repo-backend')
2872
2879
2873 return createopts
2880 return createopts
2874
2881
def newreporequirements(ui, createopts):
    """Determine the set of requirements for a new local repository.

    Extensions can wrap this function to specify custom requirements for
    new repositories.
    """
    # If the repo is being created from a shared repository, we copy
    # its requirements.
    if 'sharedrepo' in createopts:
        requirements = set(createopts['sharedrepo'].requirements)
        if createopts.get('sharedrelative'):
            requirements.add('relshared')
        else:
            requirements.add('shared')

        return requirements

    if 'backend' not in createopts:
        raise error.ProgrammingError('backend key not present in createopts; '
                                     'was defaultcreateopts() called?')

    if createopts['backend'] != 'revlogv1':
        raise error.Abort(_('unable to determine repository requirements for '
                            'storage backend: %s') % createopts['backend'])

    requirements = {'revlogv1'}
    if ui.configbool('format', 'usestore'):
        requirements.add('store')
        if ui.configbool('format', 'usefncache'):
            requirements.add('fncache')
            if ui.configbool('format', 'dotencode'):
                requirements.add('dotencode')

    compengine = ui.config('experimental', 'format.compression')
    if compengine not in util.compengines:
        raise error.Abort(_('compression engine %s defined by '
                            'experimental.format.compression not available') %
                          compengine,
                          hint=_('run "hg debuginstall" to list available '
                                 'compression engines'))

    # zlib is the historical default and doesn't need an explicit requirement.
    if compengine != 'zlib':
        requirements.add('exp-compression-%s' % compengine)

    if scmutil.gdinitconfig(ui):
        requirements.add('generaldelta')
        # experimental config: format.sparse-revlog
        if ui.configbool('format', 'sparse-revlog'):
            requirements.add(SPARSEREVLOG_REQUIREMENT)
    if ui.configbool('experimental', 'treemanifest'):
        requirements.add('treemanifest')

    revlogv2 = ui.config('experimental', 'revlogv2')
    if revlogv2 == 'enable-unstable-format-and-corrupt-my-data':
        requirements.remove('revlogv1')
        # generaldelta is implied by revlogv2.
        requirements.discard('generaldelta')
        requirements.add(REVLOGV2_REQUIREMENT)
    # experimental config: format.internal-phase
    if ui.configbool('format', 'internal-phase'):
        requirements.add('internal-phase')

    if createopts.get('narrowfiles'):
        requirements.add(repository.NARROW_REQUIREMENT)

    if createopts.get('lfs'):
        requirements.add('lfs')

    return requirements

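# With a stock ui (usestore, usefncache, dotencode and generaldelta all
# enabled by default, zlib compression), the result is approximately:
#
#   newreporequirements(ui, defaultcreateopts(ui))
#   # -> set including 'revlogv1', 'store', 'fncache', 'dotencode',
#   #    'generaldelta'
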
def filterknowncreateopts(ui, createopts):
    """Filters a dict of repo creation options against options that are known.

    Receives a dict of repo creation options and returns a dict of those
    options that we don't know how to handle.

    This function is called as part of repository creation. If the
    returned dict contains any items, repository creation will not
    be allowed, as it means there was a request to create a repository
    with options not recognized by loaded code.

    Extensions can wrap this function to filter out creation options
    they know how to handle.
    """
    known = {
        'backend',
        'lfs',
        'narrowfiles',
        'sharedrepo',
        'sharedrelative',
        'shareditems',
        'shallowfilestore',
    }

    return {k: v for k, v in createopts.items() if k not in known}

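# Sketch of an extension claiming a hypothetical 'myopt' creation option:
# it wraps this function and removes the keys it knows how to handle from
# the returned dict, so creation is allowed to proceed.
#
#   def _filterknown(orig, ui, createopts):
#       unknown = orig(ui, createopts)
#       unknown.pop('myopt', None)
#       return unknown
#   extensions.wrapfunction(localrepo, 'filterknowncreateopts', _filterknown)
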
def createrepository(ui, path, createopts=None):
    """Create a new repository in a vfs.

    ``path`` path to the new repo's working directory.
    ``createopts`` options for the new repository.

    The following keys for ``createopts`` are recognized:

    backend
       The storage backend to use.
    lfs
       Repository will be created with ``lfs`` requirement. The lfs extension
       will automatically be loaded when the repository is accessed.
    narrowfiles
       Set up repository to support narrow file storage.
    sharedrepo
       Repository object from which storage should be shared.
    sharedrelative
       Boolean indicating if the path to the shared repo should be
       stored as relative. By default, the pointer to the "parent" repo
       is stored as an absolute path.
    shareditems
       Set of items to share to the new repository (in addition to storage).
    shallowfilestore
       Indicates that storage for files should be shallow (not all ancestor
       revisions are known).
    """
    createopts = defaultcreateopts(ui, createopts=createopts)

    unknownopts = filterknowncreateopts(ui, createopts)

    if not isinstance(unknownopts, dict):
        raise error.ProgrammingError('filterknowncreateopts() did not return '
                                     'a dict')

    if unknownopts:
        raise error.Abort(_('unable to create repository because of unknown '
                            'creation option: %s') %
                          ', '.join(sorted(unknownopts)),
                          hint=_('is a required extension not loaded?'))

    requirements = newreporequirements(ui, createopts=createopts)

    wdirvfs = vfsmod.vfs(path, expandpath=True, realpath=True)

    hgvfs = vfsmod.vfs(wdirvfs.join(b'.hg'))
    if hgvfs.exists():
        raise error.RepoError(_('repository %s already exists') % path)

    if 'sharedrepo' in createopts:
        sharedpath = createopts['sharedrepo'].sharedpath

        if createopts.get('sharedrelative'):
            try:
                sharedpath = os.path.relpath(sharedpath, hgvfs.base)
            except (IOError, ValueError) as e:
                # ValueError is raised on Windows if the drive letters differ
                # on each path.
                raise error.Abort(_('cannot calculate relative path'),
                                  hint=stringutil.forcebytestr(e))

    if not wdirvfs.exists():
        wdirvfs.makedirs()

    hgvfs.makedir(notindexed=True)
    if 'sharedrepo' not in createopts:
        hgvfs.mkdir(b'cache')
        hgvfs.mkdir(b'wcache')

    if b'store' in requirements and 'sharedrepo' not in createopts:
        hgvfs.mkdir(b'store')

        # We create an invalid changelog outside the store so very old
        # Mercurial versions (which didn't know about the requirements
        # file) encounter an error on reading the changelog. This
        # effectively locks out old clients and prevents them from
        # mucking with a repo in an unknown format.
        #
        # The revlog header has version 2, which won't be recognized by
        # such old clients.
        hgvfs.append(b'00changelog.i',
                     b'\0\0\0\2 dummy changelog to prevent using the old repo '
                     b'layout')

    scmutil.writerequires(hgvfs, requirements)

    # Write out file telling readers where to find the shared store.
    if 'sharedrepo' in createopts:
        hgvfs.write(b'sharedpath', sharedpath)

    if createopts.get('shareditems'):
        shared = b'\n'.join(sorted(createopts['shareditems'])) + b'\n'
        hgvfs.write(b'shared', shared)

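# Putting it together (path hypothetical): create a narrow repository on
# disk, then open it through the normal factory.
#
#   createrepository(ui, '/path/to/newrepo',
#                    createopts={'narrowfiles': True})
#   repo = makelocalrepository(ui, '/path/to/newrepo')
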
def poisonrepository(repo):
    """Poison a repository instance so it can no longer be used."""
    # Perform any cleanup on the instance.
    repo.close()

    # Our strategy is to replace the type of the object with one where
    # all attribute lookups result in an error.
    #
    # But we have to allow the close() method because some constructors
    # of repos call close() on repo references.
    class poisonedrepository(object):
        def __getattribute__(self, item):
            if item == r'close':
                return object.__getattribute__(self, item)

            raise error.ProgrammingError('repo instances should not be used '
                                         'after unshare')

        def close(self):
            pass

    # We may have a repoview, which intercepts __setattr__. So be sure
    # we operate at the lowest level possible.
    object.__setattr__(repo, r'__class__', poisonedrepository)
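
# Resulting behavior (illustrative):
#
#   poisonrepository(repo)
#   repo.close()    # still permitted; close() is whitelisted above
#   repo.status()   # raises error.ProgrammingError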