narrow: extract repo property for store narrowmatcher...
Martin von Zweigbergk
r41266:d2d716cc default
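This commit replaces three direct calls to ``repo.narrowmatch()`` with a new cached repository property, ``repo._storenarrowmatch``, so the matcher applied to store data (the lfs requirement check, narrow filelogs, and the manifest log) is computed in one place and can be overridden by extensions. The hunk that defines the property is outside this excerpt; the following is only a hedged sketch of what such a property could look like, assuming it builds the matcher from the repo's narrow patterns::

    # Hypothetical sketch of the extracted property on localrepository;
    # not the verbatim change, which is not shown in this excerpt.
    @util.propertycache
    def _storenarrowmatch(self):
        if repository.NARROW_REQUIREMENT not in self.requirements:
            return matchmod.always(self.root, '')
        include, exclude = self.narrowpats
        return narrowspec.match(self.root, include=include, exclude=exclude)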
@@ -1,380 +1,380 @@ hgext/lfs/__init__.py
1 1 # lfs - hash-preserving large file support using Git-LFS protocol
2 2 #
3 3 # Copyright 2017 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 """lfs - large file support (EXPERIMENTAL)
9 9
10 10 This extension allows large files to be tracked outside of the normal
11 11 repository storage and stored on a centralized server, similar to the
12 12 ``largefiles`` extension. The ``git-lfs`` protocol is used when
13 13 communicating with the server, so existing git infrastructure can be
14 14 harnessed. Even though the files are stored outside of the repository,
15 15 they are still integrity checked in the same manner as normal files.
16 16
17 17 The files stored outside of the repository are downloaded on demand,
18 18 which reduces the time to clone, and possibly the local disk usage.
19 19 This changes fundamental workflows in a DVCS, so careful thought
20 20 should be given before deploying it. :hg:`convert` can be used to
21 21 convert LFS repositories to normal repositories that no longer
22 22 require this extension, and do so without changing the commit hashes.
23 23 This allows the extension to be disabled if the centralized workflow
24 24 becomes burdensome. However, the pre- and post-convert clones will
25 25 not be able to communicate with each other unless the extension is
26 26 enabled on both.
27 27
28 28 To start a new repository, or to add LFS files to an existing one, just
29 29 create an ``.hglfs`` file as described below in the root directory of
30 30 the repository. Typically, this file should be put under version
31 31 control, so that the settings will propagate to other repositories with
32 32 push and pull. During any commit, Mercurial will consult this file to
33 33 determine if an added or modified file should be stored externally. The
34 34 type of storage depends on the characteristics of the file at each
35 35 commit. A file that is near a size threshold may switch back and forth
36 36 between LFS and normal storage, as needed.
37 37
38 38 Alternatively, both normal repositories and largefiles-controlled
39 39 repositories can be converted to LFS by using :hg:`convert` and the
40 40 ``lfs.track`` config option described below. The ``.hglfs`` file
41 41 should then be created and added, to control subsequent LFS selection.
42 42 The hashes are also unchanged in this case. The LFS and non-LFS
43 43 repositories can be distinguished because the LFS repository will
44 44 abort any command if this extension is disabled.
45 45
46 46 Committed LFS files are held locally, until the repository is pushed.
47 47 Prior to pushing the normal repository data, the LFS files that are
48 48 tracked by the outgoing commits are automatically uploaded to the
49 49 configured central server. No LFS files are transferred on
50 50 :hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
51 51 demand as they need to be read, if a cached copy cannot be found
52 52 locally. Both committing and downloading an LFS file will link the
53 53 file to a usercache, to speed up future access. See the `usercache`
54 54 config setting described below.
55 55
56 56 .hglfs::
57 57
58 58 The extension reads its configuration from a versioned ``.hglfs``
59 59 configuration file found in the root of the working directory. The
60 60 ``.hglfs`` file uses the same syntax as all other Mercurial
61 61 configuration files. It uses a single section, ``[track]``.
62 62
63 63 The ``[track]`` section specifies which files are stored as LFS (or
64 64 not). Each line is keyed by a file pattern, with a predicate value.
65 65 The first file pattern match is used, so put more specific patterns
66 66 first. The available predicates are ``all()``, ``none()``, and
67 67 ``size()``. See "hg help filesets.size" for the latter.
68 68
69 69 Example versioned ``.hglfs`` file::
70 70
71 71 [track]
72 72 # No Makefile or python file, anywhere, will be LFS
73 73 **Makefile = none()
74 74 **.py = none()
75 75
76 76 **.zip = all()
77 77 **.exe = size(">1MB")
78 78
79 79 # Catchall for everything not matched above
80 80 ** = size(">10MB")
81 81
82 82 Configs::
83 83
84 84 [lfs]
85 85 # Remote endpoint. Multiple protocols are supported:
86 86 # - http(s)://user:pass@example.com/path
87 87 # git-lfs endpoint
88 88 # - file:///tmp/path
89 89 # local filesystem, usually for testing
90 90 # if unset, lfs will assume the remote repository also handles blob storage
91 91 # for http(s) URLs. Otherwise, lfs will prompt to set this when it must
92 92 # use this value.
93 93 # (default: unset)
94 94 url = https://example.com/repo.git/info/lfs
95 95
96 96 # Which files to track in LFS. Path tests are "**.extname" for file
97 97 # extensions, and "path:under/some/directory" for path prefix. Both
98 98 # are relative to the repository root.
99 99 # File size can be tested with the "size()" fileset, and tests can be
100 100 # joined with fileset operators. (See "hg help filesets.operators".)
101 101 #
102 102 # Some examples:
103 103 # - all() # everything
104 104 # - none() # nothing
105 105 # - size(">20MB") # larger than 20MB
106 106 # - !**.txt # anything not a *.txt file
107 107 # - **.zip | **.tar.gz | **.7z # some types of compressed files
108 108 # - path:bin # files under "bin" in the project root
109 109 # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
110 110 # | (path:bin & !path:/bin/README) | size(">1GB")
111 111 # (default: none())
112 112 #
113 113 # This is ignored if there is a tracked '.hglfs' file, and this setting
114 114 # will eventually be deprecated and removed.
115 115 track = size(">10M")
116 116
117 117 # how many times to retry before giving up on transferring an object
118 118 retry = 5
119 119
120 120 # the local directory to store lfs files for sharing across local clones.
121 121 # If not set, the cache is located in an OS specific cache location.
122 122 usercache = /path/to/global/cache
123 123 """
124 124
125 125 from __future__ import absolute_import
126 126
127 127 import sys
128 128
129 129 from mercurial.i18n import _
130 130
131 131 from mercurial import (
132 132 config,
133 133 error,
134 134 exchange,
135 135 extensions,
136 136 exthelper,
137 137 filelog,
138 138 filesetlang,
139 139 localrepo,
140 140 minifileset,
141 141 node,
142 142 pycompat,
143 143 repository,
144 144 revlog,
145 145 scmutil,
146 146 templateutil,
147 147 util,
148 148 )
149 149
150 150 from . import (
151 151 blobstore,
152 152 wireprotolfsserver,
153 153 wrapper,
154 154 )
155 155
156 156 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
157 157 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
158 158 # be specifying the version(s) of Mercurial they are tested with, or
159 159 # leave the attribute unspecified.
160 160 testedwith = 'ships-with-hg-core'
161 161
162 162 eh = exthelper.exthelper()
163 163 eh.merge(wrapper.eh)
164 164 eh.merge(wireprotolfsserver.eh)
165 165
166 166 cmdtable = eh.cmdtable
167 167 configtable = eh.configtable
168 168 extsetup = eh.finalextsetup
169 169 uisetup = eh.finaluisetup
170 170 filesetpredicate = eh.filesetpredicate
171 171 reposetup = eh.finalreposetup
172 172 templatekeyword = eh.templatekeyword
173 173
174 174 eh.configitem('experimental', 'lfs.serve',
175 175 default=True,
176 176 )
177 177 eh.configitem('experimental', 'lfs.user-agent',
178 178 default=None,
179 179 )
180 180 eh.configitem('experimental', 'lfs.disableusercache',
181 181 default=False,
182 182 )
183 183 eh.configitem('experimental', 'lfs.worker-enable',
184 184 default=False,
185 185 )
186 186
187 187 eh.configitem('lfs', 'url',
188 188 default=None,
189 189 )
190 190 eh.configitem('lfs', 'usercache',
191 191 default=None,
192 192 )
193 193 # Deprecated
194 194 eh.configitem('lfs', 'threshold',
195 195 default=None,
196 196 )
197 197 eh.configitem('lfs', 'track',
198 198 default='none()',
199 199 )
200 200 eh.configitem('lfs', 'retry',
201 201 default=5,
202 202 )
203 203
204 204 lfsprocessor = (
205 205 wrapper.readfromstore,
206 206 wrapper.writetostore,
207 207 wrapper.bypasscheckhash,
208 208 )
209 209
210 210 def featuresetup(ui, supported):
211 211 # don't die on seeing a repo with the lfs requirement
212 212 supported |= {'lfs'}
213 213
214 214 @eh.uisetup
215 215 def _uisetup(ui):
216 216 localrepo.featuresetupfuncs.add(featuresetup)
217 217
218 218 @eh.reposetup
219 219 def _reposetup(ui, repo):
220 220 # Nothing to do with a remote repo
221 221 if not repo.local():
222 222 return
223 223
224 224 repo.svfs.lfslocalblobstore = blobstore.local(repo)
225 225 repo.svfs.lfsremoteblobstore = blobstore.remote(repo)
226 226
227 227 class lfsrepo(repo.__class__):
228 228 @localrepo.unfilteredmethod
229 229 def commitctx(self, ctx, error=False):
230 230 repo.svfs.options['lfstrack'] = _trackedmatcher(self)
231 231 return super(lfsrepo, self).commitctx(ctx, error)
232 232
233 233 repo.__class__ = lfsrepo
234 234
235 235 if 'lfs' not in repo.requirements:
236 236 def checkrequireslfs(ui, repo, **kwargs):
237 237 if 'lfs' in repo.requirements:
238 238 return 0
239 239
240 240 last = kwargs.get(r'node_last')
241 241 _bin = node.bin
242 242 if last:
243 243 s = repo.set('%n:%n', _bin(kwargs[r'node']), _bin(last))
244 244 else:
245 245 s = repo.set('%n', _bin(kwargs[r'node']))
246 match = repo.narrowmatch()
246 match = repo._storenarrowmatch
247 247 for ctx in s:
248 248 # TODO: is there a way to just walk the files in the commit?
249 249 if any(ctx[f].islfs() for f in ctx.files()
250 250 if f in ctx and match(f)):
251 251 repo.requirements.add('lfs')
252 252 repo.features.add(repository.REPO_FEATURE_LFS)
253 253 repo._writerequirements()
254 254 repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
255 255 break
256 256
257 257 ui.setconfig('hooks', 'commit.lfs', checkrequireslfs, 'lfs')
258 258 ui.setconfig('hooks', 'pretxnchangegroup.lfs', checkrequireslfs, 'lfs')
259 259 else:
260 260 repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
261 261
262 262 def _trackedmatcher(repo):
263 263 """Return a function (path, size) -> bool indicating whether or not to
264 264 track a given file with lfs."""
265 265 if not repo.wvfs.exists('.hglfs'):
266 266 # No '.hglfs' in wdir. Fall back to config for now.
267 267 trackspec = repo.ui.config('lfs', 'track')
268 268
269 269 # deprecated config: lfs.threshold
270 270 threshold = repo.ui.configbytes('lfs', 'threshold')
271 271 if threshold:
272 272 filesetlang.parse(trackspec) # make sure syntax errors are confined
273 273 trackspec = "(%s) | size('>%d')" % (trackspec, threshold)
274 274
275 275 return minifileset.compile(trackspec)
276 276
277 277 data = repo.wvfs.tryread('.hglfs')
278 278 if not data:
279 279 return lambda p, s: False
280 280
281 281 # Parse errors here will abort with a message that points to the .hglfs file
282 282 # and line number.
283 283 cfg = config.config()
284 284 cfg.parse('.hglfs', data)
285 285
286 286 try:
287 287 rules = [(minifileset.compile(pattern), minifileset.compile(rule))
288 288 for pattern, rule in cfg.items('track')]
289 289 except error.ParseError as e:
290 290 # The original exception gives no indication that the error is in the
291 291 # .hglfs file, so add that.
292 292
293 293 # TODO: See if the line number of the file can be made available.
294 294 raise error.Abort(_('parse error in .hglfs: %s') % e)
295 295
296 296 def _match(path, size):
297 297 for pat, rule in rules:
298 298 if pat(path, size):
299 299 return rule(path, size)
300 300
301 301 return False
302 302
303 303 return _match
304 304
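To make the first-match semantics concrete, here is a hedged usage sketch of the callable returned by ``_trackedmatcher()``; the repo and the sizes are made up, and the expected results assume the example ``.hglfs`` from the module docstring above::

    # The returned matcher has the signature (path, size) -> bool.
    matcher = _trackedmatcher(repo)
    matcher('src/app.py', 50 * 1024 * 1024)   # False: '**.py = none()' matches first
    matcher('bin/tool.exe', 5 * 1024 * 1024)  # True: '**.exe = size(">1MB")'
    matcher('docs/a.txt', 1024)               # False: catchall '** = size(">10MB")'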
305 305 # Called by remotefilelog
306 306 def wrapfilelog(filelog):
307 307 wrapfunction = extensions.wrapfunction
308 308
309 309 wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
310 310 wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
311 311 wrapfunction(filelog, 'size', wrapper.filelogsize)
312 312
313 313 @eh.wrapfunction(localrepo, 'resolverevlogstorevfsoptions')
314 314 def _resolverevlogstorevfsoptions(orig, ui, requirements, features):
315 315 opts = orig(ui, requirements, features)
316 316 for name, module in extensions.extensions(ui):
317 317 if module is sys.modules[__name__]:
318 318 if revlog.REVIDX_EXTSTORED in opts[b'flagprocessors']:
319 319 msg = (_(b"cannot register multiple processors on flag '%#x'.")
320 320 % revlog.REVIDX_EXTSTORED)
321 321 raise error.Abort(msg)
322 322
323 323 opts[b'flagprocessors'][revlog.REVIDX_EXTSTORED] = lfsprocessor
324 324 break
325 325
326 326 return opts
327 327
328 328 @eh.extsetup
329 329 def _extsetup(ui):
330 330 wrapfilelog(filelog.filelog)
331 331
332 332 scmutil.fileprefetchhooks.add('lfs', wrapper._prefetchfiles)
333 333
334 334 # Make bundle choose changegroup3 instead of changegroup2. This affects
335 335 # "hg bundle" command. Note: it does not cover all bundle formats like
336 336 # "packed1". Using "packed1" with lfs will likely cause trouble.
337 337 exchange._bundlespeccontentopts["v2"]["cg.version"] = "03"
338 338
339 339 @eh.filesetpredicate('lfs()')
340 340 def lfsfileset(mctx, x):
341 341 """File that uses LFS storage."""
342 342 # i18n: "lfs" is a keyword
343 343 filesetlang.getargs(x, 0, 0, _("lfs takes no arguments"))
344 344 ctx = mctx.ctx
345 345 def lfsfilep(f):
346 346 return wrapper.pointerfromctx(ctx, f, removed=True) is not None
347 347 return mctx.predicate(lfsfilep, predrepr='<lfs>')
348 348
349 349 @eh.templatekeyword('lfs_files', requires={'ctx'})
350 350 def lfsfiles(context, mapping):
351 351 """List of strings. All files modified, added, or removed by this
352 352 changeset."""
353 353 ctx = context.resource(mapping, 'ctx')
354 354
355 355 pointers = wrapper.pointersfromctx(ctx, removed=True) # {path: pointer}
356 356 files = sorted(pointers.keys())
357 357
358 358 def pointer(v):
359 359 # In the file spec, version is first and the other keys are sorted.
360 360 sortkeyfunc = lambda x: (x[0] != 'version', x)
361 361 items = sorted(pointers[v].iteritems(), key=sortkeyfunc)
362 362 return util.sortdict(items)
363 363
364 364 makemap = lambda v: {
365 365 'file': v,
366 366 'lfsoid': pointers[v].oid() if pointers[v] else None,
367 367 'lfspointer': templateutil.hybriddict(pointer(v)),
368 368 }
369 369
370 370 # TODO: make the separator ', '?
371 371 f = templateutil._showcompatlist(context, mapping, 'lfs_file', files)
372 372 return templateutil.hybrid(f, files, makemap, pycompat.identity)
373 373
374 374 @eh.command('debuglfsupload',
375 375 [('r', 'rev', [], _('upload large files introduced by REV'))])
376 376 def debuglfsupload(ui, repo, **opts):
377 377 """upload lfs blobs added by the working copy parent or given revisions"""
378 378 revs = opts.get(r'rev', [])
379 379 pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
380 380 wrapper.uploadblobs(repo, pointers)
@@ -1,3089 +1,3096 @@ mercurial/localrepo.py
1 1 # localrepo.py - read/write repository class for mercurial
2 2 #
3 3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import errno
11 11 import hashlib
12 12 import os
13 13 import random
14 14 import sys
15 15 import time
16 16 import weakref
17 17
18 18 from .i18n import _
19 19 from .node import (
20 20 bin,
21 21 hex,
22 22 nullid,
23 23 nullrev,
24 24 short,
25 25 )
26 26 from . import (
27 27 bookmarks,
28 28 branchmap,
29 29 bundle2,
30 30 changegroup,
31 31 changelog,
32 32 color,
33 33 context,
34 34 dirstate,
35 35 dirstateguard,
36 36 discovery,
37 37 encoding,
38 38 error,
39 39 exchange,
40 40 extensions,
41 41 filelog,
42 42 hook,
43 43 lock as lockmod,
44 44 manifest,
45 45 match as matchmod,
46 46 merge as mergemod,
47 47 mergeutil,
48 48 namespaces,
49 49 narrowspec,
50 50 obsolete,
51 51 pathutil,
52 52 phases,
53 53 pushkey,
54 54 pycompat,
55 55 repository,
56 56 repoview,
57 57 revset,
58 58 revsetlang,
59 59 scmutil,
60 60 sparse,
61 61 store as storemod,
62 62 subrepoutil,
63 63 tags as tagsmod,
64 64 transaction,
65 65 txnutil,
66 66 util,
67 67 vfs as vfsmod,
68 68 )
69 69 from .utils import (
70 70 interfaceutil,
71 71 procutil,
72 72 stringutil,
73 73 )
74 74
75 75 from .revlogutils import (
76 76 constants as revlogconst,
77 77 )
78 78
79 79 release = lockmod.release
80 80 urlerr = util.urlerr
81 81 urlreq = util.urlreq
82 82
83 83 # set of (path, vfs-location) tuples. vfs-location is:
84 84 # - 'plain' for vfs relative paths
85 85 # - '' for svfs relative paths
86 86 _cachedfiles = set()
87 87
88 88 class _basefilecache(scmutil.filecache):
89 89 """All filecache usage on repo are done for logic that should be unfiltered
90 90 """
91 91 def __get__(self, repo, type=None):
92 92 if repo is None:
93 93 return self
94 94 # proxy to unfiltered __dict__ since filtered repo has no entry
95 95 unfi = repo.unfiltered()
96 96 try:
97 97 return unfi.__dict__[self.sname]
98 98 except KeyError:
99 99 pass
100 100 return super(_basefilecache, self).__get__(unfi, type)
101 101
102 102 def set(self, repo, value):
103 103 return super(_basefilecache, self).set(repo.unfiltered(), value)
104 104
105 105 class repofilecache(_basefilecache):
106 106 """filecache for files in .hg but outside of .hg/store"""
107 107 def __init__(self, *paths):
108 108 super(repofilecache, self).__init__(*paths)
109 109 for path in paths:
110 110 _cachedfiles.add((path, 'plain'))
111 111
112 112 def join(self, obj, fname):
113 113 return obj.vfs.join(fname)
114 114
115 115 class storecache(_basefilecache):
116 116 """filecache for files in the store"""
117 117 def __init__(self, *paths):
118 118 super(storecache, self).__init__(*paths)
119 119 for path in paths:
120 120 _cachedfiles.add((path, ''))
121 121
122 122 def join(self, obj, fname):
123 123 return obj.sjoin(fname)
124 124
125 125 def isfilecached(repo, name):
126 126 """check if a repo has already cached "name" filecache-ed property
127 127
128 128 This returns (cachedobj-or-None, iscached) tuple.
129 129 """
130 130 cacheentry = repo.unfiltered()._filecache.get(name, None)
131 131 if not cacheentry:
132 132 return None, False
133 133 return cacheentry.obj, True
134 134
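A brief usage sketch of ``isfilecached()``; the caller is illustrative, and 'dirstate' is one of the filecache-ed properties defined further down::

    # Peek at a filecache slot without forcing the property to be computed.
    ds, cached = isfilecached(repo, 'dirstate')
    if cached:
        repo.ui.debug('dirstate already loaded\n')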
135 135 class unfilteredpropertycache(util.propertycache):
136 136 """propertycache that apply to unfiltered repo only"""
137 137
138 138 def __get__(self, repo, type=None):
139 139 unfi = repo.unfiltered()
140 140 if unfi is repo:
141 141 return super(unfilteredpropertycache, self).__get__(unfi)
142 142 return getattr(unfi, self.name)
143 143
144 144 class filteredpropertycache(util.propertycache):
145 145 """propertycache that must take filtering in account"""
146 146
147 147 def cachevalue(self, obj, value):
148 148 object.__setattr__(obj, self.name, value)
149 149
150 150
151 151 def hasunfilteredcache(repo, name):
152 152 """check if a repo has an unfilteredpropertycache value for <name>"""
153 153 return name in vars(repo.unfiltered())
154 154
155 155 def unfilteredmethod(orig):
156 156 """decorate method that always need to be run on unfiltered version"""
157 157 def wrapper(repo, *args, **kwargs):
158 158 return orig(repo.unfiltered(), *args, **kwargs)
159 159 return wrapper
160 160
161 161 moderncaps = {'lookup', 'branchmap', 'pushkey', 'known', 'getbundle',
162 162 'unbundle'}
163 163 legacycaps = moderncaps.union({'changegroupsubset'})
164 164
165 165 @interfaceutil.implementer(repository.ipeercommandexecutor)
166 166 class localcommandexecutor(object):
167 167 def __init__(self, peer):
168 168 self._peer = peer
169 169 self._sent = False
170 170 self._closed = False
171 171
172 172 def __enter__(self):
173 173 return self
174 174
175 175 def __exit__(self, exctype, excvalue, exctb):
176 176 self.close()
177 177
178 178 def callcommand(self, command, args):
179 179 if self._sent:
180 180 raise error.ProgrammingError('callcommand() cannot be used after '
181 181 'sendcommands()')
182 182
183 183 if self._closed:
184 184 raise error.ProgrammingError('callcommand() cannot be used after '
185 185 'close()')
186 186
187 187 # We don't need to support anything fancy. Just call the named
188 188 # method on the peer and return a resolved future.
189 189 fn = getattr(self._peer, pycompat.sysstr(command))
190 190
191 191 f = pycompat.futures.Future()
192 192
193 193 try:
194 194 result = fn(**pycompat.strkwargs(args))
195 195 except Exception:
196 196 pycompat.future_set_exception_info(f, sys.exc_info()[1:])
197 197 else:
198 198 f.set_result(result)
199 199
200 200 return f
201 201
202 202 def sendcommands(self):
203 203 self._sent = True
204 204
205 205 def close(self):
206 206 self._closed = True
207 207
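The executor is normally driven through the generic peer API. A hedged sketch of the calling pattern ('heads' is a standard wire command; for the local executor the future is already resolved when ``callcommand()`` returns)::

    # Run a command against a local peer via the executor interface.
    peer = repo.peer()
    with peer.commandexecutor() as e:
        heads = e.callcommand('heads', {}).result()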
208 208 @interfaceutil.implementer(repository.ipeercommands)
209 209 class localpeer(repository.peer):
210 210 '''peer for a local repo; reflects only the most recent API'''
211 211
212 212 def __init__(self, repo, caps=None):
213 213 super(localpeer, self).__init__()
214 214
215 215 if caps is None:
216 216 caps = moderncaps.copy()
217 217 self._repo = repo.filtered('served')
218 218 self.ui = repo.ui
219 219 self._caps = repo._restrictcapabilities(caps)
220 220
221 221 # Begin of _basepeer interface.
222 222
223 223 def url(self):
224 224 return self._repo.url()
225 225
226 226 def local(self):
227 227 return self._repo
228 228
229 229 def peer(self):
230 230 return self
231 231
232 232 def canpush(self):
233 233 return True
234 234
235 235 def close(self):
236 236 self._repo.close()
237 237
238 238 # End of _basepeer interface.
239 239
240 240 # Begin of _basewirecommands interface.
241 241
242 242 def branchmap(self):
243 243 return self._repo.branchmap()
244 244
245 245 def capabilities(self):
246 246 return self._caps
247 247
248 248 def clonebundles(self):
249 249 return self._repo.tryread('clonebundles.manifest')
250 250
251 251 def debugwireargs(self, one, two, three=None, four=None, five=None):
252 252 """Used to test argument passing over the wire"""
253 253 return "%s %s %s %s %s" % (one, two, pycompat.bytestr(three),
254 254 pycompat.bytestr(four),
255 255 pycompat.bytestr(five))
256 256
257 257 def getbundle(self, source, heads=None, common=None, bundlecaps=None,
258 258 **kwargs):
259 259 chunks = exchange.getbundlechunks(self._repo, source, heads=heads,
260 260 common=common, bundlecaps=bundlecaps,
261 261 **kwargs)[1]
262 262 cb = util.chunkbuffer(chunks)
263 263
264 264 if exchange.bundle2requested(bundlecaps):
265 265 # When requesting a bundle2, getbundle returns a stream to make the
266 266 # wire level function happier. We need to build a proper object
267 267 # from it in local peer.
268 268 return bundle2.getunbundler(self.ui, cb)
269 269 else:
270 270 return changegroup.getunbundler('01', cb, None)
271 271
272 272 def heads(self):
273 273 return self._repo.heads()
274 274
275 275 def known(self, nodes):
276 276 return self._repo.known(nodes)
277 277
278 278 def listkeys(self, namespace):
279 279 return self._repo.listkeys(namespace)
280 280
281 281 def lookup(self, key):
282 282 return self._repo.lookup(key)
283 283
284 284 def pushkey(self, namespace, key, old, new):
285 285 return self._repo.pushkey(namespace, key, old, new)
286 286
287 287 def stream_out(self):
288 288 raise error.Abort(_('cannot perform stream clone against local '
289 289 'peer'))
290 290
291 291 def unbundle(self, bundle, heads, url):
292 292 """apply a bundle on a repo
293 293
294 294 This function handles the repo locking itself."""
295 295 try:
296 296 try:
297 297 bundle = exchange.readbundle(self.ui, bundle, None)
298 298 ret = exchange.unbundle(self._repo, bundle, heads, 'push', url)
299 299 if util.safehasattr(ret, 'getchunks'):
300 300 # This is a bundle20 object, turn it into an unbundler.
301 301 # This little dance should be dropped eventually when the
302 302 # API is finally improved.
303 303 stream = util.chunkbuffer(ret.getchunks())
304 304 ret = bundle2.getunbundler(self.ui, stream)
305 305 return ret
306 306 except Exception as exc:
307 307 # If the exception contains output salvaged from a bundle2
308 308 # reply, we need to make sure it is printed before continuing
309 309 # to fail. So we build a bundle2 with such output and consume
310 310 # it directly.
311 311 #
312 312 # This is not very elegant but allows a "simple" solution for
313 313 # issue4594
314 314 output = getattr(exc, '_bundle2salvagedoutput', ())
315 315 if output:
316 316 bundler = bundle2.bundle20(self._repo.ui)
317 317 for out in output:
318 318 bundler.addpart(out)
319 319 stream = util.chunkbuffer(bundler.getchunks())
320 320 b = bundle2.getunbundler(self.ui, stream)
321 321 bundle2.processbundle(self._repo, b)
322 322 raise
323 323 except error.PushRaced as exc:
324 324 raise error.ResponseError(_('push failed:'),
325 325 stringutil.forcebytestr(exc))
326 326
327 327 # End of _basewirecommands interface.
328 328
329 329 # Begin of peer interface.
330 330
331 331 def commandexecutor(self):
332 332 return localcommandexecutor(self)
333 333
334 334 # End of peer interface.
335 335
336 336 @interfaceutil.implementer(repository.ipeerlegacycommands)
337 337 class locallegacypeer(localpeer):
338 338 '''peer extension which implements legacy methods too; used for tests with
339 339 restricted capabilities'''
340 340
341 341 def __init__(self, repo):
342 342 super(locallegacypeer, self).__init__(repo, caps=legacycaps)
343 343
344 344 # Begin of baselegacywirecommands interface.
345 345
346 346 def between(self, pairs):
347 347 return self._repo.between(pairs)
348 348
349 349 def branches(self, nodes):
350 350 return self._repo.branches(nodes)
351 351
352 352 def changegroup(self, nodes, source):
353 353 outgoing = discovery.outgoing(self._repo, missingroots=nodes,
354 354 missingheads=self._repo.heads())
355 355 return changegroup.makechangegroup(self._repo, outgoing, '01', source)
356 356
357 357 def changegroupsubset(self, bases, heads, source):
358 358 outgoing = discovery.outgoing(self._repo, missingroots=bases,
359 359 missingheads=heads)
360 360 return changegroup.makechangegroup(self._repo, outgoing, '01', source)
361 361
362 362 # End of baselegacywirecommands interface.
363 363
364 364 # Increment the sub-version when the revlog v2 format changes to lock out old
365 365 # clients.
366 366 REVLOGV2_REQUIREMENT = 'exp-revlogv2.1'
367 367
368 368 # A repository with the sparserevlog feature will have delta chains that
369 369 # can spread over a larger span. Sparse reading cuts these large spans into
370 370 # pieces, so that each piece isn't too big.
371 371 # Without the sparserevlog capability, reading from the repository could use
372 372 # huge amounts of memory, because the whole span would be read at once,
373 373 # including all the intermediate revisions that aren't pertinent for the chain.
374 374 # This is why once a repository has enabled sparse-read, it becomes required.
375 375 SPARSEREVLOG_REQUIREMENT = 'sparserevlog'
376 376
377 377 # Functions receiving (ui, features) that extensions can register to impact
378 378 # the ability to load repositories with custom requirements. Only
379 379 # functions defined in loaded extensions are called.
380 380 #
381 381 # The function receives a set of requirement strings that the repository
382 382 # is capable of opening. Functions will typically add elements to the
383 383 # set to reflect that the extension knows how to handle those requirements.
384 384 featuresetupfuncs = set()
385 385
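The lfs extension in the first half of this diff registers exactly such a function; a minimal sketch of the pattern, with 'myfeature' standing in for a made-up requirement::

    # Let repos carrying the 'myfeature' requirement open while this
    # extension is loaded.
    def featuresetup(ui, supported):
        supported |= {'myfeature'}

    def uisetup(ui):
        localrepo.featuresetupfuncs.add(featuresetup)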
386 386 def makelocalrepository(baseui, path, intents=None):
387 387 """Create a local repository object.
388 388
389 389 Given arguments needed to construct a local repository, this function
390 390 performs various early repository loading functionality (such as
391 391 reading the ``.hg/requires`` and ``.hg/hgrc`` files), validates that
392 392 the repository can be opened, derives a type suitable for representing
393 393 that repository, and returns an instance of it.
394 394
395 395 The returned object conforms to the ``repository.completelocalrepository``
396 396 interface.
397 397
398 398 The repository type is derived by calling a series of factory functions
399 399 for each aspect/interface of the final repository. These are defined by
400 400 ``REPO_INTERFACES``.
401 401
402 402 Each factory function is called to produce a type implementing a specific
403 403 interface. The cumulative list of returned types will be combined into a
404 404 new type and that type will be instantiated to represent the local
405 405 repository.
406 406
407 407 The factory functions each receive various state that may be consulted
408 408 as part of deriving a type.
409 409
410 410 Extensions should wrap these factory functions to customize repository type
411 411 creation. Note that an extension's wrapped function may be called even if
412 412 that extension is not loaded for the repo being constructed. Extensions
413 413 should check if their ``__name__`` appears in the
414 414 ``extensionmodulenames`` set passed to the factory function and no-op if
415 415 not.
416 416 """
417 417 ui = baseui.copy()
418 418 # Prevent copying repo configuration.
419 419 ui.copy = baseui.copy
420 420
421 421 # Working directory VFS rooted at repository root.
422 422 wdirvfs = vfsmod.vfs(path, expandpath=True, realpath=True)
423 423
424 424 # Main VFS for .hg/ directory.
425 425 hgpath = wdirvfs.join(b'.hg')
426 426 hgvfs = vfsmod.vfs(hgpath, cacheaudited=True)
427 427
428 428 # The .hg/ path should exist and should be a directory. All other
429 429 # cases are errors.
430 430 if not hgvfs.isdir():
431 431 try:
432 432 hgvfs.stat()
433 433 except OSError as e:
434 434 if e.errno != errno.ENOENT:
435 435 raise
436 436
437 437 raise error.RepoError(_(b'repository %s not found') % path)
438 438
439 439 # .hg/requires file contains a newline-delimited list of
440 440 # features/capabilities the opener (us) must have in order to use
441 441 # the repository. This file was introduced in Mercurial 0.9.2,
442 442 # which means very old repositories may not have one. We assume
443 443 # a missing file translates to no requirements.
444 444 try:
445 445 requirements = set(hgvfs.read(b'requires').splitlines())
446 446 except IOError as e:
447 447 if e.errno != errno.ENOENT:
448 448 raise
449 449 requirements = set()
450 450
451 451 # The .hg/hgrc file may load extensions or contain config options
452 452 # that influence repository construction. Attempt to load it and
453 453 # process any new extensions that it may have pulled in.
454 454 if loadhgrc(ui, wdirvfs, hgvfs, requirements):
455 455 afterhgrcload(ui, wdirvfs, hgvfs, requirements)
456 456 extensions.loadall(ui)
457 457 extensions.populateui(ui)
458 458
459 459 # Set of module names of extensions loaded for this repository.
460 460 extensionmodulenames = {m.__name__ for n, m in extensions.extensions(ui)}
461 461
462 462 supportedrequirements = gathersupportedrequirements(ui)
463 463
464 464 # We first validate the requirements are known.
465 465 ensurerequirementsrecognized(requirements, supportedrequirements)
466 466
467 467 # Then we validate that the known set is reasonable to use together.
468 468 ensurerequirementscompatible(ui, requirements)
469 469
470 470 # TODO there are unhandled edge cases related to opening repositories with
471 471 # shared storage. If storage is shared, we should also test for requirements
472 472 # compatibility in the pointed-to repo. This entails loading the .hg/hgrc in
473 473 # that repo, as that repo may load extensions needed to open it. This is a
474 474 # bit complicated because we don't want the other hgrc to overwrite settings
475 475 # in this hgrc.
476 476 #
477 477 # This bug is somewhat mitigated by the fact that we copy the .hg/requires
478 478 # file when sharing repos. But if a requirement is added after the share is
479 479 # performed, thereby introducing a new requirement for the opener, we may
480 480 # will not see that and could encounter a run-time error interacting with
481 481 # that shared store since it has an unknown-to-us requirement.
482 482
483 483 # At this point, we know we should be capable of opening the repository.
484 484 # Now get on with doing that.
485 485
486 486 features = set()
487 487
488 488 # The "store" part of the repository holds versioned data. How it is
489 489 # accessed is determined by various requirements. The ``shared`` or
490 490 # ``relshared`` requirements indicate the store lives in the path contained
491 491 # in the ``.hg/sharedpath`` file. This is an absolute path for
492 492 # ``shared`` and relative to ``.hg/`` for ``relshared``.
493 493 if b'shared' in requirements or b'relshared' in requirements:
494 494 sharedpath = hgvfs.read(b'sharedpath').rstrip(b'\n')
495 495 if b'relshared' in requirements:
496 496 sharedpath = hgvfs.join(sharedpath)
497 497
498 498 sharedvfs = vfsmod.vfs(sharedpath, realpath=True)
499 499
500 500 if not sharedvfs.exists():
501 501 raise error.RepoError(_(b'.hg/sharedpath points to nonexistent '
502 502 b'directory %s') % sharedvfs.base)
503 503
504 504 features.add(repository.REPO_FEATURE_SHARED_STORAGE)
505 505
506 506 storebasepath = sharedvfs.base
507 507 cachepath = sharedvfs.join(b'cache')
508 508 else:
509 509 storebasepath = hgvfs.base
510 510 cachepath = hgvfs.join(b'cache')
511 511 wcachepath = hgvfs.join(b'wcache')
512 512
513 513
514 514 # The store has changed over time and the exact layout is dictated by
515 515 # requirements. The store interface abstracts differences across all
516 516 # of them.
517 517 store = makestore(requirements, storebasepath,
518 518 lambda base: vfsmod.vfs(base, cacheaudited=True))
519 519 hgvfs.createmode = store.createmode
520 520
521 521 storevfs = store.vfs
522 522 storevfs.options = resolvestorevfsoptions(ui, requirements, features)
523 523
524 524 # The cache vfs is used to manage cache files.
525 525 cachevfs = vfsmod.vfs(cachepath, cacheaudited=True)
526 526 cachevfs.createmode = store.createmode
527 527 # The cache vfs is used to manage cache files related to the working copy
528 528 wcachevfs = vfsmod.vfs(wcachepath, cacheaudited=True)
529 529 wcachevfs.createmode = store.createmode
530 530
531 531 # Now resolve the type for the repository object. We do this by repeatedly
532 532 # calling a factory function to produce types for specific aspects of the
533 533 # repo's operation. The aggregate returned types are used as base classes
534 534 # for a dynamically-derived type, which will represent our new repository.
535 535
536 536 bases = []
537 537 extrastate = {}
538 538
539 539 for iface, fn in REPO_INTERFACES:
540 540 # We pass all potentially useful state to give extensions tons of
541 541 # flexibility.
542 542 typ = fn()(ui=ui,
543 543 intents=intents,
544 544 requirements=requirements,
545 545 features=features,
546 546 wdirvfs=wdirvfs,
547 547 hgvfs=hgvfs,
548 548 store=store,
549 549 storevfs=storevfs,
550 550 storeoptions=storevfs.options,
551 551 cachevfs=cachevfs,
552 552 wcachevfs=wcachevfs,
553 553 extensionmodulenames=extensionmodulenames,
554 554 extrastate=extrastate,
555 555 baseclasses=bases)
556 556
557 557 if not isinstance(typ, type):
558 558 raise error.ProgrammingError('unable to construct type for %s' %
559 559 iface)
560 560
561 561 bases.append(typ)
562 562
563 563 # type() allows you to use characters in type names that wouldn't be
564 564 # recognized as Python symbols in source code. We abuse that to add
565 565 # rich information about our constructed repo.
566 566 name = pycompat.sysstr(b'derivedrepo:%s<%s>' % (
567 567 wdirvfs.base,
568 568 b','.join(sorted(requirements))))
569 569
570 570 cls = type(name, tuple(bases), {})
571 571
572 572 return cls(
573 573 baseui=baseui,
574 574 ui=ui,
575 575 origroot=path,
576 576 wdirvfs=wdirvfs,
577 577 hgvfs=hgvfs,
578 578 requirements=requirements,
579 579 supportedrequirements=supportedrequirements,
580 580 sharedpath=storebasepath,
581 581 store=store,
582 582 cachevfs=cachevfs,
583 583 wcachevfs=wcachevfs,
584 584 features=features,
585 585 intents=intents)
586 586
587 587 def loadhgrc(ui, wdirvfs, hgvfs, requirements):
588 588 """Load hgrc files/content into a ui instance.
589 589
590 590 This is called during repository opening to load any additional
591 591 config files or settings relevant to the current repository.
592 592
593 593 Returns a bool indicating whether any additional configs were loaded.
594 594
595 595 Extensions should monkeypatch this function to modify how per-repo
596 596 configs are loaded. For example, an extension may wish to pull in
597 597 configs from alternate files or sources.
598 598 """
599 599 try:
600 600 ui.readconfig(hgvfs.join(b'hgrc'), root=wdirvfs.base)
601 601 return True
602 602 except IOError:
603 603 return False
604 604
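A hedged sketch of the kind of wrapper this docstring invites; the extra config file name 'hgrc.local' is an assumption chosen for illustration::

    # Wrap loadhgrc to pull in an additional, repo-local config file.
    def _loadhgrc(orig, ui, wdirvfs, hgvfs, requirements):
        loaded = orig(ui, wdirvfs, hgvfs, requirements)
        try:
            ui.readconfig(hgvfs.join(b'hgrc.local'), root=wdirvfs.base)
            return True
        except IOError:
            return loaded

    extensions.wrapfunction(localrepo, 'loadhgrc', _loadhgrc)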
605 605 def afterhgrcload(ui, wdirvfs, hgvfs, requirements):
606 606 """Perform additional actions after .hg/hgrc is loaded.
607 607
608 608 This function is called during repository loading immediately after
609 609 the .hg/hgrc file is loaded and before per-repo extensions are loaded.
610 610
611 611 The function can be used to validate configs, automatically add
612 612 options (including extensions) based on requirements, etc.
613 613 """
614 614
615 615 # Map of requirements to lists of extensions to load automatically when
616 616 # the requirement is present.
617 617 autoextensions = {
618 618 b'largefiles': [b'largefiles'],
619 619 b'lfs': [b'lfs'],
620 620 }
621 621
622 622 for requirement, names in sorted(autoextensions.items()):
623 623 if requirement not in requirements:
624 624 continue
625 625
626 626 for name in names:
627 627 if not ui.hasconfig(b'extensions', name):
628 628 ui.setconfig(b'extensions', name, b'', source='autoload')
629 629
630 630 def gathersupportedrequirements(ui):
631 631 """Determine the complete set of recognized requirements."""
632 632 # Start with all requirements supported by this file.
633 633 supported = set(localrepository._basesupported)
634 634
635 635 # Execute ``featuresetupfuncs`` entries if they belong to an extension
636 636 # relevant to this ui instance.
637 637 modules = {m.__name__ for n, m in extensions.extensions(ui)}
638 638
639 639 for fn in featuresetupfuncs:
640 640 if fn.__module__ in modules:
641 641 fn(ui, supported)
642 642
643 643 # Add derived requirements from registered compression engines.
644 644 for name in util.compengines:
645 645 engine = util.compengines[name]
646 646 if engine.revlogheader():
647 647 supported.add(b'exp-compression-%s' % name)
648 648
649 649 return supported
650 650
651 651 def ensurerequirementsrecognized(requirements, supported):
652 652 """Validate that a set of local requirements is recognized.
653 653
654 654 Receives a set of requirements. Raises an ``error.RepoError`` if there
655 655 exists any requirement in that set that currently loaded code doesn't
656 656 recognize.
657 657
658 658 Returns nothing if all requirements are recognized.
659 659 """
660 660 missing = set()
661 661
662 662 for requirement in requirements:
663 663 if requirement in supported:
664 664 continue
665 665
666 666 if not requirement or not requirement[0:1].isalnum():
667 667 raise error.RequirementError(_(b'.hg/requires file is corrupt'))
668 668
669 669 missing.add(requirement)
670 670
671 671 if missing:
672 672 raise error.RequirementError(
673 673 _(b'repository requires features unknown to this Mercurial: %s') %
674 674 b' '.join(sorted(missing)),
675 675 hint=_(b'see https://mercurial-scm.org/wiki/MissingRequirement '
676 676 b'for more information'))
677 677
678 678 def ensurerequirementscompatible(ui, requirements):
679 679 """Validates that a set of recognized requirements is mutually compatible.
680 680
681 681 Some requirements may not be compatible with others or require
682 682 config options that aren't enabled. This function is called during
683 683 repository opening to ensure that the set of requirements needed
684 684 to open a repository is sane and compatible with config options.
685 685
686 686 Extensions can monkeypatch this function to perform additional
687 687 checking.
688 688
689 689 ``error.RepoError`` should be raised on failure.
690 690 """
691 691 if b'exp-sparse' in requirements and not sparse.enabled:
692 692 raise error.RepoError(_(b'repository is using sparse feature but '
693 693 b'sparse is not enabled; enable the '
694 694 b'"sparse" extensions to access'))
695 695
696 696 def makestore(requirements, path, vfstype):
697 697 """Construct a storage object for a repository."""
698 698 if b'store' in requirements:
699 699 if b'fncache' in requirements:
700 700 return storemod.fncachestore(path, vfstype,
701 701 b'dotencode' in requirements)
702 702
703 703 return storemod.encodedstore(path, vfstype)
704 704
705 705 return storemod.basicstore(path, vfstype)
706 706
707 707 def resolvestorevfsoptions(ui, requirements, features):
708 708 """Resolve the options to pass to the store vfs opener.
709 709
710 710 The returned dict is used to influence behavior of the storage layer.
711 711 """
712 712 options = {}
713 713
714 714 if b'treemanifest' in requirements:
715 715 options[b'treemanifest'] = True
716 716
717 717 # experimental config: format.manifestcachesize
718 718 manifestcachesize = ui.configint(b'format', b'manifestcachesize')
719 719 if manifestcachesize is not None:
720 720 options[b'manifestcachesize'] = manifestcachesize
721 721
722 722 # In the absence of another requirement superseding a revlog-related
723 723 # requirement, we have to assume the repo is using revlog version 0.
724 724 # This revlog format is super old and we don't bother trying to parse
725 725 # opener options for it because those options wouldn't do anything
726 726 # meaningful on such old repos.
727 727 if b'revlogv1' in requirements or REVLOGV2_REQUIREMENT in requirements:
728 728 options.update(resolverevlogstorevfsoptions(ui, requirements, features))
729 729
730 730 return options
731 731
732 732 def resolverevlogstorevfsoptions(ui, requirements, features):
733 733 """Resolve opener options specific to revlogs."""
734 734
735 735 options = {}
736 736 options[b'flagprocessors'] = {}
737 737
738 738 if b'revlogv1' in requirements:
739 739 options[b'revlogv1'] = True
740 740 if REVLOGV2_REQUIREMENT in requirements:
741 741 options[b'revlogv2'] = True
742 742
743 743 if b'generaldelta' in requirements:
744 744 options[b'generaldelta'] = True
745 745
746 746 # experimental config: format.chunkcachesize
747 747 chunkcachesize = ui.configint(b'format', b'chunkcachesize')
748 748 if chunkcachesize is not None:
749 749 options[b'chunkcachesize'] = chunkcachesize
750 750
751 751 deltabothparents = ui.configbool(b'storage',
752 752 b'revlog.optimize-delta-parent-choice')
753 753 options[b'deltabothparents'] = deltabothparents
754 754
755 755 options[b'lazydeltabase'] = not scmutil.gddeltaconfig(ui)
756 756
757 757 chainspan = ui.configbytes(b'experimental', b'maxdeltachainspan')
758 758 if 0 <= chainspan:
759 759 options[b'maxdeltachainspan'] = chainspan
760 760
761 761 mmapindexthreshold = ui.configbytes(b'storage', b'mmap-threshold')
762 762 if mmapindexthreshold is not None:
763 763 options[b'mmapindexthreshold'] = mmapindexthreshold
764 764
765 765 withsparseread = ui.configbool(b'experimental', b'sparse-read')
766 766 srdensitythres = float(ui.config(b'experimental',
767 767 b'sparse-read.density-threshold'))
768 768 srmingapsize = ui.configbytes(b'experimental',
769 769 b'sparse-read.min-gap-size')
770 770 options[b'with-sparse-read'] = withsparseread
771 771 options[b'sparse-read-density-threshold'] = srdensitythres
772 772 options[b'sparse-read-min-gap-size'] = srmingapsize
773 773
774 774 sparserevlog = SPARSEREVLOG_REQUIREMENT in requirements
775 775 options[b'sparse-revlog'] = sparserevlog
776 776 if sparserevlog:
777 777 options[b'generaldelta'] = True
778 778
779 779 maxchainlen = None
780 780 if sparserevlog:
781 781 maxchainlen = revlogconst.SPARSE_REVLOG_MAX_CHAIN_LENGTH
782 782 # experimental config: format.maxchainlen
783 783 maxchainlen = ui.configint(b'format', b'maxchainlen', maxchainlen)
784 784 if maxchainlen is not None:
785 785 options[b'maxchainlen'] = maxchainlen
786 786
787 787 for r in requirements:
788 788 if r.startswith(b'exp-compression-'):
789 789 options[b'compengine'] = r[len(b'exp-compression-'):]
790 790
791 791 if repository.NARROW_REQUIREMENT in requirements:
792 792 options[b'enableellipsis'] = True
793 793
794 794 return options
795 795
796 796 def makemain(**kwargs):
797 797 """Produce a type conforming to ``ilocalrepositorymain``."""
798 798 return localrepository
799 799
800 800 @interfaceutil.implementer(repository.ilocalrepositoryfilestorage)
801 801 class revlogfilestorage(object):
802 802 """File storage when using revlogs."""
803 803
804 804 def file(self, path):
805 805 if path[0] == b'/':
806 806 path = path[1:]
807 807
808 808 return filelog.filelog(self.svfs, path)
809 809
810 810 @interfaceutil.implementer(repository.ilocalrepositoryfilestorage)
811 811 class revlognarrowfilestorage(object):
812 812 """File storage when using revlogs and narrow files."""
813 813
814 814 def file(self, path):
815 815 if path[0] == b'/':
816 816 path = path[1:]
817 817
818 return filelog.narrowfilelog(self.svfs, path, self.narrowmatch())
818 return filelog.narrowfilelog(self.svfs, path, self._storenarrowmatch)
819 819
820 820 def makefilestorage(requirements, features, **kwargs):
821 821 """Produce a type conforming to ``ilocalrepositoryfilestorage``."""
822 822 features.add(repository.REPO_FEATURE_REVLOG_FILE_STORAGE)
823 823 features.add(repository.REPO_FEATURE_STREAM_CLONE)
824 824
825 825 if repository.NARROW_REQUIREMENT in requirements:
826 826 return revlognarrowfilestorage
827 827 else:
828 828 return revlogfilestorage
829 829
830 830 # List of repository interfaces and factory functions for them. Each
831 831 # will be called in order during ``makelocalrepository()`` to iteratively
832 832 # derive the final type for a local repository instance. We capture the
833 833 # function as a lambda so we don't hold a reference and the module-level
834 834 # functions can be wrapped.
835 835 REPO_INTERFACES = [
836 836 (repository.ilocalrepositorymain, lambda: makemain),
837 837 (repository.ilocalrepositoryfilestorage, lambda: makefilestorage),
838 838 ]
839 839
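Since these factories may be wrapped, an extension can shape the derived repo type. A hedged sketch of wrapping ``makefilestorage``; the requirement and class names are invented, and the guard on ``extensionmodulenames`` follows the advice in the ``makelocalrepository`` docstring::

    # Substitute a custom file storage type for repos with a given requirement.
    def wrapfilestorage(orig, requirements, features, **kwargs):
        cls = orig(requirements=requirements, features=features, **kwargs)
        if __name__ not in kwargs['extensionmodulenames']:
            return cls  # this extension is not loaded for that repo
        if 'exp-myrequirement' in requirements:
            class mystorage(cls):
                def file(self, path):
                    # customization point (placeholder)
                    return super(mystorage, self).file(path)
            return mystorage
        return cls

    extensions.wrapfunction(localrepo, 'makefilestorage', wrapfilestorage)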
840 840 @interfaceutil.implementer(repository.ilocalrepositorymain)
841 841 class localrepository(object):
842 842 """Main class for representing local repositories.
843 843
844 844 All local repositories are instances of this class.
845 845
846 846 Constructed on its own, instances of this class are not usable as
847 847 repository objects. To obtain a usable repository object, call
848 848 ``hg.repository()``, ``localrepo.instance()``, or
849 849 ``localrepo.makelocalrepository()``. The latter is the lowest-level.
850 850 ``instance()`` adds support for creating new repositories.
851 851 ``hg.repository()`` adds more extension integration, including calling
852 852 ``reposetup()``. Generally speaking, ``hg.repository()`` should be
853 853 used.
854 854 """
855 855
856 856 # obsolete experimental requirements:
857 857 # - manifestv2: An experimental new manifest format that allowed
858 858 # for stem compression of long paths. Experiment ended up not
859 859 # being successful (repository sizes went up due to worse delta
860 860 # chains), and the code was deleted in 4.6.
861 861 supportedformats = {
862 862 'revlogv1',
863 863 'generaldelta',
864 864 'treemanifest',
865 865 REVLOGV2_REQUIREMENT,
866 866 SPARSEREVLOG_REQUIREMENT,
867 867 }
868 868 _basesupported = supportedformats | {
869 869 'store',
870 870 'fncache',
871 871 'shared',
872 872 'relshared',
873 873 'dotencode',
874 874 'exp-sparse',
875 875 'internal-phase'
876 876 }
877 877
878 878 # list of prefixes for files which can be written without 'wlock'
879 879 # Extensions should extend this list when needed
880 880 _wlockfreeprefix = {
881 881 # We might consider requiring 'wlock' for the next
882 882 # two, but pretty much all the existing code assume
883 883 # wlock is not needed so we keep them excluded for
884 884 # now.
885 885 'hgrc',
886 886 'requires',
887 887 # XXX cache is a complicated business; someone
888 888 # should investigate this in depth at some point
889 889 'cache/',
890 890 # XXX shouldn't be dirstate covered by the wlock?
891 891 'dirstate',
892 892 # XXX bisect was still a bit too messy at the time
893 893 # this changeset was introduced. Someone should fix
894 894 # the remaining bit and drop this line
895 895 'bisect.state',
896 896 }
897 897
898 898 def __init__(self, baseui, ui, origroot, wdirvfs, hgvfs, requirements,
899 899 supportedrequirements, sharedpath, store, cachevfs, wcachevfs,
900 900 features, intents=None):
901 901 """Create a new local repository instance.
902 902
903 903 Most callers should use ``hg.repository()``, ``localrepo.instance()``,
904 904 or ``localrepo.makelocalrepository()`` for obtaining a new repository
905 905 object.
906 906
907 907 Arguments:
908 908
909 909 baseui
910 910 ``ui.ui`` instance that ``ui`` argument was based off of.
911 911
912 912 ui
913 913 ``ui.ui`` instance for use by the repository.
914 914
915 915 origroot
916 916 ``bytes`` path to working directory root of this repository.
917 917
918 918 wdirvfs
919 919 ``vfs.vfs`` rooted at the working directory.
920 920
921 921 hgvfs
922 922 ``vfs.vfs`` rooted at .hg/
923 923
924 924 requirements
925 925 ``set`` of bytestrings representing repository opening requirements.
926 926
927 927 supportedrequirements
928 928 ``set`` of bytestrings representing repository requirements that we
929 929 know how to open. May be a superset of ``requirements``.
930 930
931 931 sharedpath
932 932 ``bytes`` Defining path to storage base directory. Points to a
933 933 ``.hg/`` directory somewhere.
934 934
935 935 store
936 936 ``store.basicstore`` (or derived) instance providing access to
937 937 versioned storage.
938 938
939 939 cachevfs
940 940 ``vfs.vfs`` used for cache files.
941 941
942 942 wcachevfs
943 943 ``vfs.vfs`` used for cache files related to the working copy.
944 944
945 945 features
946 946 ``set`` of bytestrings defining features/capabilities of this
947 947 instance.
948 948
949 949 intents
950 950 ``set`` of system strings indicating what this repo will be used
951 951 for.
952 952 """
953 953 self.baseui = baseui
954 954 self.ui = ui
955 955 self.origroot = origroot
956 956 # vfs rooted at working directory.
957 957 self.wvfs = wdirvfs
958 958 self.root = wdirvfs.base
959 959 # vfs rooted at .hg/. Used to access most non-store paths.
960 960 self.vfs = hgvfs
961 961 self.path = hgvfs.base
962 962 self.requirements = requirements
963 963 self.supported = supportedrequirements
964 964 self.sharedpath = sharedpath
965 965 self.store = store
966 966 self.cachevfs = cachevfs
967 967 self.wcachevfs = wcachevfs
968 968 self.features = features
969 969
970 970 self.filtername = None
971 971
972 972 if (self.ui.configbool('devel', 'all-warnings') or
973 973 self.ui.configbool('devel', 'check-locks')):
974 974 self.vfs.audit = self._getvfsward(self.vfs.audit)
975 975 # A list of callbacks to shape the phase if no data were found.
976 976 # Callbacks are in the form: func(repo, roots) --> processed root.
977 977 # This list is to be filled by extensions during repo setup
978 978 self._phasedefaults = []
979 979
980 980 color.setup(self.ui)
981 981
982 982 self.spath = self.store.path
983 983 self.svfs = self.store.vfs
984 984 self.sjoin = self.store.join
985 985 if (self.ui.configbool('devel', 'all-warnings') or
986 986 self.ui.configbool('devel', 'check-locks')):
987 987 if util.safehasattr(self.svfs, 'vfs'): # this is filtervfs
988 988 self.svfs.vfs.audit = self._getsvfsward(self.svfs.vfs.audit)
989 989 else: # standard vfs
990 990 self.svfs.audit = self._getsvfsward(self.svfs.audit)
991 991
992 992 self._dirstatevalidatewarned = False
993 993
994 994 self._branchcaches = {}
995 995 self._revbranchcache = None
996 996 self._filterpats = {}
997 997 self._datafilters = {}
998 998 self._transref = self._lockref = self._wlockref = None
999 999
1000 1000 # A cache for various files under .hg/ that tracks file changes,
1001 1001 # (used by the filecache decorator)
1002 1002 #
1003 1003 # Maps a property name to its util.filecacheentry
1004 1004 self._filecache = {}
1005 1005
1006 1006 # hold sets of revisions to be filtered
1007 1007 # should be cleared when something might have changed the filter value:
1008 1008 # - new changesets,
1009 1009 # - phase change,
1010 1010 # - new obsolescence marker,
1011 1011 # - working directory parent change,
1012 1012 # - bookmark changes
1013 1013 self.filteredrevcache = {}
1014 1014
1015 1015 # post-dirstate-status hooks
1016 1016 self._postdsstatus = []
1017 1017
1018 1018 # generic mapping between names and nodes
1019 1019 self.names = namespaces.namespaces()
1020 1020
1021 1021 # Key to signature value.
1022 1022 self._sparsesignaturecache = {}
1023 1023 # Signature to cached matcher instance.
1024 1024 self._sparsematchercache = {}
1025 1025
1026 1026 def _getvfsward(self, origfunc):
1027 1027 """build a ward for self.vfs"""
1028 1028 rref = weakref.ref(self)
1029 1029 def checkvfs(path, mode=None):
1030 1030 ret = origfunc(path, mode=mode)
1031 1031 repo = rref()
1032 1032 if (repo is None
1033 1033 or not util.safehasattr(repo, '_wlockref')
1034 1034 or not util.safehasattr(repo, '_lockref')):
1035 1035 return
1036 1036 if mode in (None, 'r', 'rb'):
1037 1037 return
1038 1038 if path.startswith(repo.path):
1039 1039 # truncate name relative to the repository (.hg)
1040 1040 path = path[len(repo.path) + 1:]
1041 1041 if path.startswith('cache/'):
1042 1042 msg = 'accessing cache with vfs instead of cachevfs: "%s"'
1043 1043 repo.ui.develwarn(msg % path, stacklevel=3, config="cache-vfs")
1044 1044 if path.startswith('journal.') or path.startswith('undo.'):
1045 1045 # journal is covered by 'lock'
1046 1046 if repo._currentlock(repo._lockref) is None:
1047 1047 repo.ui.develwarn('write with no lock: "%s"' % path,
1048 1048 stacklevel=3, config='check-locks')
1049 1049 elif repo._currentlock(repo._wlockref) is None:
1050 1050 # rest of vfs files are covered by 'wlock'
1051 1051 #
1052 1052 # exclude special files
1053 1053 for prefix in self._wlockfreeprefix:
1054 1054 if path.startswith(prefix):
1055 1055 return
1056 1056 repo.ui.develwarn('write with no wlock: "%s"' % path,
1057 1057 stacklevel=3, config='check-locks')
1058 1058 return ret
1059 1059 return checkvfs
1060 1060
1061 1061 def _getsvfsward(self, origfunc):
1062 1062 """build a ward for self.svfs"""
1063 1063 rref = weakref.ref(self)
1064 1064 def checksvfs(path, mode=None):
1065 1065 ret = origfunc(path, mode=mode)
1066 1066 repo = rref()
1067 1067 if repo is None or not util.safehasattr(repo, '_lockref'):
1068 1068 return
1069 1069 if mode in (None, 'r', 'rb'):
1070 1070 return
1071 1071 if path.startswith(repo.sharedpath):
1072 1072 # truncate name relative to the repository (.hg)
1073 1073 path = path[len(repo.sharedpath) + 1:]
1074 1074 if repo._currentlock(repo._lockref) is None:
1075 1075 repo.ui.develwarn('write with no lock: "%s"' % path,
1076 1076 stacklevel=4)
1077 1077 return ret
1078 1078 return checksvfs
1079 1079
1080 1080 def close(self):
1081 1081 self._writecaches()
1082 1082
1083 1083 def _writecaches(self):
1084 1084 if self._revbranchcache:
1085 1085 self._revbranchcache.write()
1086 1086
1087 1087 def _restrictcapabilities(self, caps):
1088 1088 if self.ui.configbool('experimental', 'bundle2-advertise'):
1089 1089 caps = set(caps)
1090 1090 capsblob = bundle2.encodecaps(bundle2.getrepocaps(self,
1091 1091 role='client'))
1092 1092 caps.add('bundle2=' + urlreq.quote(capsblob))
1093 1093 return caps
1094 1094
1095 1095 def _writerequirements(self):
1096 1096 scmutil.writerequires(self.vfs, self.requirements)
1097 1097
1098 1098 # Don't cache auditor/nofsauditor, or you'll end up with reference cycle:
1099 1099 # self -> auditor -> self._checknested -> self
1100 1100
1101 1101 @property
1102 1102 def auditor(self):
1103 1103 # This is only used by context.workingctx.match in order to
1104 1104 # detect files in subrepos.
1105 1105 return pathutil.pathauditor(self.root, callback=self._checknested)
1106 1106
1107 1107 @property
1108 1108 def nofsauditor(self):
1109 1109 # This is only used by context.basectx.match in order to detect
1110 1110 # files in subrepos.
1111 1111 return pathutil.pathauditor(self.root, callback=self._checknested,
1112 1112 realfs=False, cached=True)
1113 1113
1114 1114 def _checknested(self, path):
1115 1115 """Determine if path is a legal nested repository."""
1116 1116 if not path.startswith(self.root):
1117 1117 return False
1118 1118 subpath = path[len(self.root) + 1:]
1119 1119 normsubpath = util.pconvert(subpath)
1120 1120
1121 1121 # XXX: Checking against the current working copy is wrong in
1122 1122 # the sense that it can reject things like
1123 1123 #
1124 1124 # $ hg cat -r 10 sub/x.txt
1125 1125 #
1126 1126 # if sub/ is no longer a subrepository in the working copy
1127 1127 # parent revision.
1128 1128 #
1129 1129 # However, it can of course also allow things that would have
1130 1130 # been rejected before, such as the above cat command if sub/
1131 1131 # is a subrepository now, but was a normal directory before.
1132 1132 # The old path auditor would have rejected by mistake since it
1133 1133 # panics when it sees sub/.hg/.
1134 1134 #
1135 1135 # All in all, checking against the working copy seems sensible
1136 1136 # since we want to prevent access to nested repositories on
1137 1137 # the filesystem *now*.
1138 1138 ctx = self[None]
1139 1139 parts = util.splitpath(subpath)
1140 1140 while parts:
1141 1141 prefix = '/'.join(parts)
1142 1142 if prefix in ctx.substate:
1143 1143 if prefix == normsubpath:
1144 1144 return True
1145 1145 else:
1146 1146 sub = ctx.sub(prefix)
1147 1147 return sub.checknested(subpath[len(prefix) + 1:])
1148 1148 else:
1149 1149 parts.pop()
1150 1150 return False
1151 1151
1152 1152 def peer(self):
1153 1153 return localpeer(self) # not cached to avoid reference cycle
1154 1154
1155 1155 def unfiltered(self):
1156 1156 """Return unfiltered version of the repository
1157 1157
1158 1158 Intended to be overwritten by filtered repo."""
1159 1159 return self
1160 1160
1161 1161 def filtered(self, name, visibilityexceptions=None):
1162 1162 """Return a filtered version of a repository"""
1163 1163 cls = repoview.newtype(self.unfiltered().__class__)
1164 1164 return cls(self, name, visibilityexceptions)
1165 1165
1166 1166 @repofilecache('bookmarks', 'bookmarks.current')
1167 1167 def _bookmarks(self):
1168 1168 return bookmarks.bmstore(self)
1169 1169
1170 1170 @property
1171 1171 def _activebookmark(self):
1172 1172 return self._bookmarks.active
1173 1173
1174 1174 # _phasesets depend on changelog. What we need is to call
1175 1175 # _phasecache.invalidate() if '00changelog.i' was changed, but it
1176 1176 # can't be easily expressed in the filecache mechanism.
1177 1177 @storecache('phaseroots', '00changelog.i')
1178 1178 def _phasecache(self):
1179 1179 return phases.phasecache(self, self._phasedefaults)
1180 1180
1181 1181 @storecache('obsstore')
1182 1182 def obsstore(self):
1183 1183 return obsolete.makestore(self.ui, self)
1184 1184
1185 1185 @storecache('00changelog.i')
1186 1186 def changelog(self):
1187 1187 return changelog.changelog(self.svfs,
1188 1188 trypending=txnutil.mayhavepending(self.root))
1189 1189
1190 1190 @storecache('00manifest.i')
1191 1191 def manifestlog(self):
1192 1192 rootstore = manifest.manifestrevlog(self.svfs)
1193 1193 return manifest.manifestlog(self.svfs, self, rootstore,
1194 self.narrowmatch())
1194 self._storenarrowmatch)
1195 1195
1196 1196 @repofilecache('dirstate')
1197 1197 def dirstate(self):
1198 1198 return self._makedirstate()
1199 1199
1200 1200 def _makedirstate(self):
1201 1201 """Extension point for wrapping the dirstate per-repo."""
1202 1202 sparsematchfn = lambda: sparse.matcher(self)
1203 1203
1204 1204 return dirstate.dirstate(self.vfs, self.ui, self.root,
1205 1205 self._dirstatevalidate, sparsematchfn)
1206 1206
1207 1207 def _dirstatevalidate(self, node):
1208 1208 try:
1209 1209 self.changelog.rev(node)
1210 1210 return node
1211 1211 except error.LookupError:
1212 1212 if not self._dirstatevalidatewarned:
1213 1213 self._dirstatevalidatewarned = True
1214 1214 self.ui.warn(_("warning: ignoring unknown"
1215 1215 " working parent %s!\n") % short(node))
1216 1216 return nullid
1217 1217
1218 1218 @storecache(narrowspec.FILENAME)
1219 1219 def narrowpats(self):
1220 1220 """matcher patterns for this repository's narrowspec
1221 1221
1222 1222 A tuple of (includes, excludes).
1223 1223 """
1224 1224 return narrowspec.load(self)
1225 1225
1226 1226 @storecache(narrowspec.FILENAME)
1227 def _storenarrowmatch(self):
1228 if repository.NARROW_REQUIREMENT not in self.requirements:
1229 return matchmod.always(self.root, '')
1230 include, exclude = self.narrowpats
1231 return narrowspec.match(self.root, include=include, exclude=exclude)
1232
1233 @storecache(narrowspec.FILENAME)
1227 1234 def _narrowmatch(self):
1228 1235 if repository.NARROW_REQUIREMENT not in self.requirements:
1229 1236 return matchmod.always(self.root, '')
1230 1237 narrowspec.checkworkingcopynarrowspec(self)
1231 1238 include, exclude = self.narrowpats
1232 1239 return narrowspec.match(self.root, include=include, exclude=exclude)
1233 1240
1234 1241 def narrowmatch(self, match=None, includeexact=False):
1235 1242 """matcher corresponding the the repo's narrowspec
1236 1243
1237 1244 If `match` is given, then that will be intersected with the narrow
1238 1245 matcher.
1239 1246
1240 1247 If `includeexact` is True, then any exact matches from `match` will
1241 1248 be included even if they're outside the narrowspec.
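For illustration, a minimal sketch (the path is hypothetical)::

    m = repo.narrowmatch()
    if not m('docs/outside.txt'):
        pass  # the path falls outside the narrowspec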
1242 1249 """
1243 1250 if match:
1244 1251 if includeexact and not self._narrowmatch.always():
1245 1252 # do not exclude explicitly-specified paths so that they can
1246 1253 # be warned later on
1247 1254 em = matchmod.exact(match._root, match._cwd, match.files())
1248 1255 nm = matchmod.unionmatcher([self._narrowmatch, em])
1249 1256 return matchmod.intersectmatchers(match, nm)
1250 1257 return matchmod.intersectmatchers(match, self._narrowmatch)
1251 1258 return self._narrowmatch
1252 1259
1253 1260 def setnarrowpats(self, newincludes, newexcludes):
1254 1261 narrowspec.save(self, newincludes, newexcludes)
1255 1262 narrowspec.copytoworkingcopy(self)
1256 1263 self.invalidate(clearfilecache=True)
1257 1264 # So the next access won't be considered a conflict
1258 1265 # TODO: It seems like there should be a way of doing this that
1259 1266 # doesn't involve replacing these attributes.
1260 1267 self.narrowpats = newincludes, newexcludes
1261 1268 self._narrowmatch = narrowspec.match(self.root, include=newincludes,
1262 1269 exclude=newexcludes)
1263 1270
1264 1271 def __getitem__(self, changeid):
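"""Return the changectx for ``changeid``.

``changeid`` may be an integer revision, a 20-byte binary node, a
40-character hex node, 'null', 'tip', '.', or None (for the working
directory); a slice of revisions yields a list of changectx.
"""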
1265 1272 if changeid is None:
1266 1273 return context.workingctx(self)
1267 1274 if isinstance(changeid, context.basectx):
1268 1275 return changeid
1269 1276 if isinstance(changeid, slice):
1270 1277 # wdirrev isn't contiguous so the slice shouldn't include it
1271 1278 return [self[i]
1272 1279 for i in pycompat.xrange(*changeid.indices(len(self)))
1273 1280 if i not in self.changelog.filteredrevs]
1274 1281 try:
1275 1282 if isinstance(changeid, int):
1276 1283 node = self.changelog.node(changeid)
1277 1284 rev = changeid
1278 1285 elif changeid == 'null':
1279 1286 node = nullid
1280 1287 rev = nullrev
1281 1288 elif changeid == 'tip':
1282 1289 node = self.changelog.tip()
1283 1290 rev = self.changelog.rev(node)
1284 1291 elif changeid == '.':
1285 1292 # this is a hack to delay/avoid loading obsmarkers
1286 1293 # when we know that '.' won't be hidden
1287 1294 node = self.dirstate.p1()
1288 1295 rev = self.unfiltered().changelog.rev(node)
1289 1296 elif len(changeid) == 20:
1290 1297 try:
1291 1298 node = changeid
1292 1299 rev = self.changelog.rev(changeid)
1293 1300 except error.FilteredLookupError:
1294 1301 changeid = hex(changeid) # for the error message
1295 1302 raise
1296 1303 except LookupError:
1297 1304 # check if it might have come from damaged dirstate
1298 1305 #
1299 1306 # XXX we could avoid the unfiltered if we had a recognizable
1300 1307 # exception for filtered changeset access
1301 1308 if (self.local()
1302 1309 and changeid in self.unfiltered().dirstate.parents()):
1303 1310 msg = _("working directory has unknown parent '%s'!")
1304 1311 raise error.Abort(msg % short(changeid))
1305 1312 changeid = hex(changeid) # for the error message
1306 1313 raise
1307 1314
1308 1315 elif len(changeid) == 40:
1309 1316 node = bin(changeid)
1310 1317 rev = self.changelog.rev(node)
1311 1318 else:
1312 1319 raise error.ProgrammingError(
1313 1320 "unsupported changeid '%s' of type %s" %
1314 1321 (changeid, type(changeid)))
1315 1322
1316 1323 return context.changectx(self, rev, node)
1317 1324
1318 1325 except (error.FilteredIndexError, error.FilteredLookupError):
1319 1326 raise error.FilteredRepoLookupError(_("filtered revision '%s'")
1320 1327 % pycompat.bytestr(changeid))
1321 1328 except (IndexError, LookupError):
1322 1329 raise error.RepoLookupError(
1323 1330 _("unknown revision '%s'") % pycompat.bytestr(changeid))
1324 1331 except error.WdirUnsupported:
1325 1332 return context.workingctx(self)
1326 1333
1327 1334 def __contains__(self, changeid):
1328 1335 """True if the given changeid exists
1329 1336
1330 1337 error.AmbiguousPrefixLookupError is raised if an ambiguous node
1331 1338 specified.
1332 1339 """
1333 1340 try:
1334 1341 self[changeid]
1335 1342 return True
1336 1343 except error.RepoLookupError:
1337 1344 return False
1338 1345
1339 1346 def __nonzero__(self):
1340 1347 return True
1341 1348
1342 1349 __bool__ = __nonzero__
1343 1350
1344 1351 def __len__(self):
1345 1352 # no need to pay the cost of repoview.changelog
1346 1353 unfi = self.unfiltered()
1347 1354 return len(unfi.changelog)
1348 1355
1349 1356 def __iter__(self):
1350 1357 return iter(self.changelog)
1351 1358
1352 1359 def revs(self, expr, *args):
1353 1360 '''Find revisions matching a revset.
1354 1361
1355 1362 The revset is specified as a string ``expr`` that may contain
1356 1363 %-formatting to escape certain types. See ``revsetlang.formatspec``.
1357 1364
1358 1365 Revset aliases from the configuration are not expanded. To expand
1359 1366 user aliases, consider calling ``scmutil.revrange()`` or
1360 1367 ``repo.anyrevs([expr], user=True)``.
1361 1368
1362 1369 Returns a revset.abstractsmartset, which is a list-like interface
1363 1370 that contains integer revisions.
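A usage sketch (revision numbers are illustrative)::

    for r in repo.revs('%d::%d', 0, 5):
        pass  # r is an integer revision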
1364 1371 '''
1365 1372 tree = revsetlang.spectree(expr, *args)
1366 1373 return revset.makematcher(tree)(self)
1367 1374
1368 1375 def set(self, expr, *args):
1369 1376 '''Find revisions matching a revset and emit changectx instances.
1370 1377
1371 1378 This is a convenience wrapper around ``revs()`` that iterates the
1372 1379 result and is a generator of changectx instances.
1373 1380
1374 1381 Revset aliases from the configuration are not expanded. To expand
1375 1382 user aliases, consider calling ``scmutil.revrange()``.
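A usage sketch (the branch name is illustrative)::

    for ctx in repo.set('heads(branch(%s))', 'default'):
        pass  # ctx is a changectx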
1376 1383 '''
1377 1384 for r in self.revs(expr, *args):
1378 1385 yield self[r]
1379 1386
1380 1387 def anyrevs(self, specs, user=False, localalias=None):
1381 1388 '''Find revisions matching one of the given revsets.
1382 1389
1383 1390 Revset aliases from the configuration are not expanded by default. To
1384 1391 expand user aliases, specify ``user=True``. To provide some local
1385 1392 definitions overriding user aliases, set ``localalias`` to
1386 1393 ``{name: definitionstring}``.
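A usage sketch (the alias name and definition are illustrative)::

    revs = repo.anyrevs(['myalias'], user=True,
                        localalias={'myalias': 'tip'})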
1387 1394 '''
1388 1395 if user:
1389 1396 m = revset.matchany(self.ui, specs,
1390 1397 lookup=revset.lookupfn(self),
1391 1398 localalias=localalias)
1392 1399 else:
1393 1400 m = revset.matchany(None, specs, localalias=localalias)
1394 1401 return m(self)
1395 1402
1396 1403 def url(self):
1397 1404 return 'file:' + self.root
1398 1405
1399 1406 def hook(self, name, throw=False, **args):
1400 1407 """Call a hook, passing this repo instance.
1401 1408
1402 1409 This a convenience method to aid invoking hooks. Extensions likely
1403 1410 won't call this unless they have registered a custom hook or are
1404 1411 replacing code that is expected to call a hook.
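A usage sketch (the hook name and arguments are illustrative)::

    repo.hook('pretxncommit', throw=True, node=hex(node))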
1405 1412 """
1406 1413 return hook.hook(self.ui, self, name, throw, **args)
1407 1414
1408 1415 @filteredpropertycache
1409 1416 def _tagscache(self):
1410 1417 '''Returns a tagscache object that contains various tags related
1411 1418 caches.'''
1412 1419
1413 1420 # This simplifies its cache management by having one decorated
1414 1421 # function (this one) and the rest simply fetch things from it.
1415 1422 class tagscache(object):
1416 1423 def __init__(self):
1417 1424 # These two define the set of tags for this repository. tags
1418 1425 # maps tag name to node; tagtypes maps tag name to 'global' or
1419 1426 # 'local'. (Global tags are defined by .hgtags across all
1420 1427 # heads, and local tags are defined in .hg/localtags.)
1421 1428 # They constitute the in-memory cache of tags.
1422 1429 self.tags = self.tagtypes = None
1423 1430
1424 1431 self.nodetagscache = self.tagslist = None
1425 1432
1426 1433 cache = tagscache()
1427 1434 cache.tags, cache.tagtypes = self._findtags()
1428 1435
1429 1436 return cache
1430 1437
1431 1438 def tags(self):
1432 1439 '''return a mapping of tag to node'''
1433 1440 t = {}
1434 1441 if self.changelog.filteredrevs:
1435 1442 tags, tt = self._findtags()
1436 1443 else:
1437 1444 tags = self._tagscache.tags
1438 1445 rev = self.changelog.rev
1439 1446 for k, v in tags.iteritems():
1440 1447 try:
1441 1448 # ignore tags to unknown nodes
1442 1449 rev(v)
1443 1450 t[k] = v
1444 1451 except (error.LookupError, ValueError):
1445 1452 pass
1446 1453 return t
1447 1454
1448 1455 def _findtags(self):
1449 1456 '''Do the hard work of finding tags. Return a pair of dicts
1450 1457 (tags, tagtypes) where tags maps tag name to node, and tagtypes
1451 1458 maps tag name to a string like \'global\' or \'local\'.
1452 1459 Subclasses or extensions are free to add their own tags, but
1453 1460 should be aware that the returned dicts will be retained for the
1454 1461 duration of the localrepo object.'''
1455 1462
1456 1463 # XXX what tagtype should subclasses/extensions use? Currently
1457 1464 # mq and bookmarks add tags, but do not set the tagtype at all.
1458 1465 # Should each extension invent its own tag type? Should there
1459 1466 # be one tagtype for all such "virtual" tags? Or is the status
1460 1467 # quo fine?
1461 1468
1462 1469
1463 1470 # map tag name to (node, hist)
1464 1471 alltags = tagsmod.findglobaltags(self.ui, self)
1465 1472 # map tag name to tag type
1466 1473 tagtypes = dict((tag, 'global') for tag in alltags)
1467 1474
1468 1475 tagsmod.readlocaltags(self.ui, self, alltags, tagtypes)
1469 1476
1470 1477 # Build the return dicts. Have to re-encode tag names because
1471 1478 # the tags module always uses UTF-8 (in order not to lose info
1472 1479 # writing to the cache), but the rest of Mercurial wants them in
1473 1480 # local encoding.
1474 1481 tags = {}
1475 1482 for (name, (node, hist)) in alltags.iteritems():
1476 1483 if node != nullid:
1477 1484 tags[encoding.tolocal(name)] = node
1478 1485 tags['tip'] = self.changelog.tip()
1479 1486 tagtypes = dict([(encoding.tolocal(name), value)
1480 1487 for (name, value) in tagtypes.iteritems()])
1481 1488 return (tags, tagtypes)
1482 1489
1483 1490 def tagtype(self, tagname):
1484 1491 '''
1485 1492 return the type of the given tag. result can be:
1486 1493
1487 1494 'local' : a local tag
1488 1495 'global' : a global tag
1489 1496 None : tag does not exist
1490 1497 '''
1491 1498
1492 1499 return self._tagscache.tagtypes.get(tagname)
1493 1500
1494 1501 def tagslist(self):
1495 1502 '''return a list of tags ordered by revision'''
1496 1503 if not self._tagscache.tagslist:
1497 1504 l = []
1498 1505 for t, n in self.tags().iteritems():
1499 1506 l.append((self.changelog.rev(n), t, n))
1500 1507 self._tagscache.tagslist = [(t, n) for r, t, n in sorted(l)]
1501 1508
1502 1509 return self._tagscache.tagslist
1503 1510
1504 1511 def nodetags(self, node):
1505 1512 '''return the tags associated with a node'''
1506 1513 if not self._tagscache.nodetagscache:
1507 1514 nodetagscache = {}
1508 1515 for t, n in self._tagscache.tags.iteritems():
1509 1516 nodetagscache.setdefault(n, []).append(t)
1510 1517 for tags in nodetagscache.itervalues():
1511 1518 tags.sort()
1512 1519 self._tagscache.nodetagscache = nodetagscache
1513 1520 return self._tagscache.nodetagscache.get(node, [])
1514 1521
1515 1522 def nodebookmarks(self, node):
1516 1523 """return the list of bookmarks pointing to the specified node"""
1517 1524 return self._bookmarks.names(node)
1518 1525
1519 1526 def branchmap(self):
1520 1527 '''returns a dictionary {branch: [branchheads]} with branchheads
1521 1528 ordered by increasing revision number'''
1522 1529 branchmap.updatecache(self)
1523 1530 return self._branchcaches[self.filtername]
1524 1531
1525 1532 @unfilteredmethod
1526 1533 def revbranchcache(self):
1527 1534 if not self._revbranchcache:
1528 1535 self._revbranchcache = branchmap.revbranchcache(self.unfiltered())
1529 1536 return self._revbranchcache
1530 1537
1531 1538 def branchtip(self, branch, ignoremissing=False):
1532 1539 '''return the tip node for a given branch
1533 1540
1534 1541 If ignoremissing is True, then this method will not raise an error.
1535 1542 This is helpful for callers that only expect None for a missing branch
1536 1543 (e.g. namespace).
1537 1544
1538 1545 '''
1539 1546 try:
1540 1547 return self.branchmap().branchtip(branch)
1541 1548 except KeyError:
1542 1549 if not ignoremissing:
1543 1550 raise error.RepoLookupError(_("unknown branch '%s'") % branch)
1544 1551 else:
1545 1552 pass
1546 1553
1547 1554 def lookup(self, key):
1548 1555 return scmutil.revsymbol(self, key).node()
1549 1556
1550 1557 def lookupbranch(self, key):
1551 1558 if key in self.branchmap():
1552 1559 return key
1553 1560
1554 1561 return scmutil.revsymbol(self, key).branch()
1555 1562
1556 1563 def known(self, nodes):
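"""return a list of booleans, one per node, indicating whether the
corresponding node is known locally and not filtered"""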
1557 1564 cl = self.changelog
1558 1565 nm = cl.nodemap
1559 1566 filtered = cl.filteredrevs
1560 1567 result = []
1561 1568 for n in nodes:
1562 1569 r = nm.get(n)
1563 1570 resp = not (r is None or r in filtered)
1564 1571 result.append(resp)
1565 1572 return result
1566 1573
1567 1574 def local(self):
1568 1575 return self
1569 1576
1570 1577 def publishing(self):
1571 1578 # it's safe (and desirable) to trust the publish flag unconditionally
1572 1579 # so that we don't finalize changes shared between users via ssh or nfs
1573 1580 return self.ui.configbool('phases', 'publish', untrusted=True)
1574 1581
1575 1582 def cancopy(self):
1576 1583 # so statichttprepo's override of local() works
1577 1584 if not self.local():
1578 1585 return False
1579 1586 if not self.publishing():
1580 1587 return True
1581 1588 # if publishing we can't copy if there is filtered content
1582 1589 return not self.filtered('visible').changelog.filteredrevs
1583 1590
1584 1591 def shared(self):
1585 1592 '''the type of shared repository (None if not shared)'''
1586 1593 if self.sharedpath != self.path:
1587 1594 return 'store'
1588 1595 return None
1589 1596
1590 1597 def wjoin(self, f, *insidef):
1591 1598 return self.vfs.reljoin(self.root, f, *insidef)
1592 1599
1593 1600 def setparents(self, p1, p2=nullid):
1594 1601 with self.dirstate.parentchange():
1595 1602 copies = self.dirstate.setparents(p1, p2)
1596 1603 pctx = self[p1]
1597 1604 if copies:
1598 1605 # Adjust copy records, the dirstate cannot do it, it
1599 1606 # requires access to parents manifests. Preserve them
1600 1607 # only for entries added to first parent.
1601 1608 for f in copies:
1602 1609 if f not in pctx and copies[f] in pctx:
1603 1610 self.dirstate.copy(copies[f], f)
1604 1611 if p2 == nullid:
1605 1612 for f, s in sorted(self.dirstate.copies().items()):
1606 1613 if f not in pctx and s not in pctx:
1607 1614 self.dirstate.copy(None, f)
1608 1615
1609 1616 def filectx(self, path, changeid=None, fileid=None, changectx=None):
1610 1617 """changeid must be a changeset revision, if specified.
1611 1618 fileid can be a file revision or node."""
1612 1619 return context.filectx(self, path, changeid, fileid,
1613 1620 changectx=changectx)
1614 1621
1615 1622 def getcwd(self):
1616 1623 return self.dirstate.getcwd()
1617 1624
1618 1625 def pathto(self, f, cwd=None):
1619 1626 return self.dirstate.pathto(f, cwd)
1620 1627
1621 1628 def _loadfilter(self, filter):
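# ``filter`` names a config section, 'encode' or 'decode'. A typical
# entry (an illustrative example)::
#
#   [encode]
#   *.gz = pipe: gunzip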
1622 1629 if filter not in self._filterpats:
1623 1630 l = []
1624 1631 for pat, cmd in self.ui.configitems(filter):
1625 1632 if cmd == '!':
1626 1633 continue
1627 1634 mf = matchmod.match(self.root, '', [pat])
1628 1635 fn = None
1629 1636 params = cmd
1630 1637 for name, filterfn in self._datafilters.iteritems():
1631 1638 if cmd.startswith(name):
1632 1639 fn = filterfn
1633 1640 params = cmd[len(name):].lstrip()
1634 1641 break
1635 1642 if not fn:
1636 1643 fn = lambda s, c, **kwargs: procutil.filter(s, c)
1637 1644 # Wrap old filters not supporting keyword arguments
1638 1645 if not pycompat.getargspec(fn)[2]:
1639 1646 oldfn = fn
1640 1647 fn = lambda s, c, **kwargs: oldfn(s, c)
1641 1648 l.append((mf, fn, params))
1642 1649 self._filterpats[filter] = l
1643 1650 return self._filterpats[filter]
1644 1651
1645 1652 def _filter(self, filterpats, filename, data):
1646 1653 for mf, fn, cmd in filterpats:
1647 1654 if mf(filename):
1648 1655 self.ui.debug("filtering %s through %s\n" % (filename, cmd))
1649 1656 data = fn(data, cmd, ui=self.ui, repo=self, filename=filename)
1650 1657 break
1651 1658
1652 1659 return data
1653 1660
1654 1661 @unfilteredpropertycache
1655 1662 def _encodefilterpats(self):
1656 1663 return self._loadfilter('encode')
1657 1664
1658 1665 @unfilteredpropertycache
1659 1666 def _decodefilterpats(self):
1660 1667 return self._loadfilter('decode')
1661 1668
1662 1669 def adddatafilter(self, name, filter):
1663 1670 self._datafilters[name] = filter
1664 1671
1665 1672 def wread(self, filename):
1666 1673 if self.wvfs.islink(filename):
1667 1674 data = self.wvfs.readlink(filename)
1668 1675 else:
1669 1676 data = self.wvfs.read(filename)
1670 1677 return self._filter(self._encodefilterpats, filename, data)
1671 1678
1672 1679 def wwrite(self, filename, data, flags, backgroundclose=False, **kwargs):
1673 1680 """write ``data`` into ``filename`` in the working directory
1674 1681
1675 1682 This returns the length of the written (maybe decoded) data.
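``flags`` may contain 'l' (write ``data`` as a symlink target) and/or
'x' (set the executable bit), matching the handling below.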
1676 1683 """
1677 1684 data = self._filter(self._decodefilterpats, filename, data)
1678 1685 if 'l' in flags:
1679 1686 self.wvfs.symlink(data, filename)
1680 1687 else:
1681 1688 self.wvfs.write(filename, data, backgroundclose=backgroundclose,
1682 1689 **kwargs)
1683 1690 if 'x' in flags:
1684 1691 self.wvfs.setflags(filename, False, True)
1685 1692 else:
1686 1693 self.wvfs.setflags(filename, False, False)
1687 1694 return len(data)
1688 1695
1689 1696 def wwritedata(self, filename, data):
1690 1697 return self._filter(self._decodefilterpats, filename, data)
1691 1698
1692 1699 def currenttransaction(self):
1693 1700 """return the current transaction or None if non exists"""
1694 1701 if self._transref:
1695 1702 tr = self._transref()
1696 1703 else:
1697 1704 tr = None
1698 1705
1699 1706 if tr and tr.running():
1700 1707 return tr
1701 1708 return None
1702 1709
1703 1710 def transaction(self, desc, report=None):
1704 1711 if (self.ui.configbool('devel', 'all-warnings')
1705 1712 or self.ui.configbool('devel', 'check-locks')):
1706 1713 if self._currentlock(self._lockref) is None:
1707 1714 raise error.ProgrammingError('transaction requires locking')
1708 1715 tr = self.currenttransaction()
1709 1716 if tr is not None:
1710 1717 return tr.nest(name=desc)
1711 1718
1712 1719 # abort here if the journal already exists
1713 1720 if self.svfs.exists("journal"):
1714 1721 raise error.RepoError(
1715 1722 _("abandoned transaction found"),
1716 1723 hint=_("run 'hg recover' to clean up transaction"))
1717 1724
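# derive a unique transaction id from randomness and the current time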
1718 1725 idbase = "%.40f#%f" % (random.random(), time.time())
1719 1726 ha = hex(hashlib.sha1(idbase).digest())
1720 1727 txnid = 'TXN:' + ha
1721 1728 self.hook('pretxnopen', throw=True, txnname=desc, txnid=txnid)
1722 1729
1723 1730 self._writejournal(desc)
1724 1731 renames = [(vfs, x, undoname(x)) for vfs, x in self._journalfiles()]
1725 1732 if report:
1726 1733 rp = report
1727 1734 else:
1728 1735 rp = self.ui.warn
1729 1736 vfsmap = {'plain': self.vfs, 'store': self.svfs} # root of .hg/
1730 1737 # we must avoid cyclic reference between repo and transaction.
1731 1738 reporef = weakref.ref(self)
1732 1739 # Code to track tag movement
1733 1740 #
1734 1741 # Since tags are all handled as file content, it is actually quite hard
1735 1742 # to track these movements from a code perspective. So we fall back to
1736 1743 # tracking at the repository level. One could envision tracking changes
1737 1744 # to the '.hgtags' file through changegroup application, but that fails
1738 1745 # to cope with cases where a transaction exposes new heads without a
1739 1746 # changegroup being involved (eg: phase movement).
1740 1747 #
1741 1748 # For now, we gate the feature behind a flag since it likely comes
1742 1749 # with a performance impact. The current code runs more often than
1743 1750 # needed and does not use caches as much as it could. The current focus
1744 1751 # is on the behavior of the feature, so we disable it by default. The
1745 1752 # flag will be removed when we are happy with the performance impact.
1746 1753 #
1747 1754 # Once this feature is no longer experimental move the following
1748 1755 # documentation to the appropriate help section:
1749 1756 #
1750 1757 # The ``HG_TAG_MOVED`` variable will be set if the transaction touched
1751 1758 # tags (new or changed or deleted tags). In addition the details of
1752 1759 # these changes are made available in a file at:
1753 1760 # ``REPOROOT/.hg/changes/tags.changes``.
1754 1761 # Make sure you check for HG_TAG_MOVED before reading that file, as it
1755 1762 # might exist from a previous transaction even if no tags were touched
1756 1763 # in this one. Changes are recorded in a line-based format::
1757 1764 #
1758 1765 # <action> <hex-node> <tag-name>\n
1759 1766 #
1760 1767 # Actions are defined as follows:
1761 1768 # "-R": tag is removed,
1762 1769 # "+A": tag is added,
1763 1770 # "-M": tag is moved (old value),
1764 1771 # "+M": tag is moved (new value),
1765 1772 tracktags = lambda x: None
1766 1773 # experimental config: experimental.hook-track-tags
1767 1774 shouldtracktags = self.ui.configbool('experimental', 'hook-track-tags')
1768 1775 if desc != 'strip' and shouldtracktags:
1769 1776 oldheads = self.changelog.headrevs()
1770 1777 def tracktags(tr2):
1771 1778 repo = reporef()
1772 1779 oldfnodes = tagsmod.fnoderevs(repo.ui, repo, oldheads)
1773 1780 newheads = repo.changelog.headrevs()
1774 1781 newfnodes = tagsmod.fnoderevs(repo.ui, repo, newheads)
1775 1782 # note: we compare lists here.
1776 1783 # As we do it only once, building a set would not be cheaper.
1777 1784 changes = tagsmod.difftags(repo.ui, repo, oldfnodes, newfnodes)
1778 1785 if changes:
1779 1786 tr2.hookargs['tag_moved'] = '1'
1780 1787 with repo.vfs('changes/tags.changes', 'w',
1781 1788 atomictemp=True) as changesfile:
1782 1789 # note: we do not register the file with the transaction
1783 1790 # because we need it to still exist when the transaction
1784 1791 # is closed (for txnclose hooks)
1785 1792 tagsmod.writediff(changesfile, changes)
1786 1793 def validate(tr2):
1787 1794 """will run pre-closing hooks"""
1788 1795 # XXX the transaction API is a bit lacking here so we take a hacky
1789 1796 # path for now
1790 1797 #
1791 1798 # We cannot add this as a "pending" hook since the 'tr.hookargs'
1792 1799 # dict is copied before these run. In addition, we need the data
1793 1800 # available to in-memory hooks too.
1794 1801 #
1795 1802 # Moreover, we also need to make sure this runs before txnclose
1796 1803 # hooks and there is no "pending" mechanism that would execute
1797 1804 # logic only if hooks are about to run.
1798 1805 #
1799 1806 # Fixing this limitation of the transaction is also needed to track
1800 1807 # other families of changes (bookmarks, phases, obsolescence).
1801 1808 #
1802 1809 # This will have to be fixed before we remove the experimental
1803 1810 # gating.
1804 1811 tracktags(tr2)
1805 1812 repo = reporef()
1806 1813 if repo.ui.configbool('experimental', 'single-head-per-branch'):
1807 1814 scmutil.enforcesinglehead(repo, tr2, desc)
1808 1815 if hook.hashook(repo.ui, 'pretxnclose-bookmark'):
1809 1816 for name, (old, new) in sorted(tr.changes['bookmarks'].items()):
1810 1817 args = tr.hookargs.copy()
1811 1818 args.update(bookmarks.preparehookargs(name, old, new))
1812 1819 repo.hook('pretxnclose-bookmark', throw=True,
1813 1820 txnname=desc,
1814 1821 **pycompat.strkwargs(args))
1815 1822 if hook.hashook(repo.ui, 'pretxnclose-phase'):
1816 1823 cl = repo.unfiltered().changelog
1817 1824 for rev, (old, new) in tr.changes['phases'].items():
1818 1825 args = tr.hookargs.copy()
1819 1826 node = hex(cl.node(rev))
1820 1827 args.update(phases.preparehookargs(node, old, new))
1821 1828 repo.hook('pretxnclose-phase', throw=True, txnname=desc,
1822 1829 **pycompat.strkwargs(args))
1823 1830
1824 1831 repo.hook('pretxnclose', throw=True,
1825 1832 txnname=desc, **pycompat.strkwargs(tr.hookargs))
1826 1833 def releasefn(tr, success):
1827 1834 repo = reporef()
1828 1835 if success:
1829 1836 # this should be explicitly invoked here, because
1830 1837 # in-memory changes aren't written out when closing
1831 1838 # the transaction if tr.addfilegenerator (via
1832 1839 # dirstate.write or so) wasn't invoked while the
1833 1840 # transaction was running
1834 1841 repo.dirstate.write(None)
1835 1842 else:
1836 1843 # discard all changes (including ones already written
1837 1844 # out) in this transaction
1838 1845 narrowspec.restorebackup(self, 'journal.narrowspec')
1839 1846 narrowspec.restorewcbackup(self, 'journal.narrowspec.dirstate')
1840 1847 repo.dirstate.restorebackup(None, 'journal.dirstate')
1841 1848
1842 1849 repo.invalidate(clearfilecache=True)
1843 1850
1844 1851 tr = transaction.transaction(rp, self.svfs, vfsmap,
1845 1852 "journal",
1846 1853 "undo",
1847 1854 aftertrans(renames),
1848 1855 self.store.createmode,
1849 1856 validator=validate,
1850 1857 releasefn=releasefn,
1851 1858 checkambigfiles=_cachedfiles,
1852 1859 name=desc)
1853 1860 tr.changes['origrepolen'] = len(self)
1854 1861 tr.changes['obsmarkers'] = set()
1855 1862 tr.changes['phases'] = {}
1856 1863 tr.changes['bookmarks'] = {}
1857 1864
1858 1865 tr.hookargs['txnid'] = txnid
1859 1866 # note: writing the fncache only during finalize means that the file is
1860 1867 # outdated when running hooks. As fncache is used for streaming clones,
1861 1868 # this is not expected to break anything that happens during the hooks.
1862 1869 tr.addfinalize('flush-fncache', self.store.write)
1863 1870 def txnclosehook(tr2):
1864 1871 """To be run if transaction is successful, will schedule a hook run
1865 1872 """
1866 1873 # Don't reference tr2 in hook() so we don't hold a reference.
1867 1874 # This reduces memory consumption when there are multiple
1868 1875 # transactions per lock. This can likely go away if issue5045
1869 1876 # fixes the function accumulation.
1870 1877 hookargs = tr2.hookargs
1871 1878
1872 1879 def hookfunc():
1873 1880 repo = reporef()
1874 1881 if hook.hashook(repo.ui, 'txnclose-bookmark'):
1875 1882 bmchanges = sorted(tr.changes['bookmarks'].items())
1876 1883 for name, (old, new) in bmchanges:
1877 1884 args = tr.hookargs.copy()
1878 1885 args.update(bookmarks.preparehookargs(name, old, new))
1879 1886 repo.hook('txnclose-bookmark', throw=False,
1880 1887 txnname=desc, **pycompat.strkwargs(args))
1881 1888
1882 1889 if hook.hashook(repo.ui, 'txnclose-phase'):
1883 1890 cl = repo.unfiltered().changelog
1884 1891 phasemv = sorted(tr.changes['phases'].items())
1885 1892 for rev, (old, new) in phasemv:
1886 1893 args = tr.hookargs.copy()
1887 1894 node = hex(cl.node(rev))
1888 1895 args.update(phases.preparehookargs(node, old, new))
1889 1896 repo.hook('txnclose-phase', throw=False, txnname=desc,
1890 1897 **pycompat.strkwargs(args))
1891 1898
1892 1899 repo.hook('txnclose', throw=False, txnname=desc,
1893 1900 **pycompat.strkwargs(hookargs))
1894 1901 reporef()._afterlock(hookfunc)
1895 1902 tr.addfinalize('txnclose-hook', txnclosehook)
1896 1903 # Include a leading "-" to make it happen before the transaction summary
1897 1904 # reports registered via scmutil.registersummarycallback() whose names
1898 1905 # are 00-txnreport etc. That way, the caches will be warm when the
1899 1906 # callbacks run.
1900 1907 tr.addpostclose('-warm-cache', self._buildcacheupdater(tr))
1901 1908 def txnaborthook(tr2):
1902 1909 """To be run if transaction is aborted
1903 1910 """
1904 1911 reporef().hook('txnabort', throw=False, txnname=desc,
1905 1912 **pycompat.strkwargs(tr2.hookargs))
1906 1913 tr.addabort('txnabort-hook', txnaborthook)
1907 1914 # avoid eager cache invalidation. in-memory data should be identical
1908 1915 # to stored data if transaction has no error.
1909 1916 tr.addpostclose('refresh-filecachestats', self._refreshfilecachestats)
1910 1917 self._transref = weakref.ref(tr)
1911 1918 scmutil.registersummarycallback(self, tr, desc)
1912 1919 return tr
1913 1920
1914 1921 def _journalfiles(self):
1915 1922 return ((self.svfs, 'journal'),
1916 1923 (self.svfs, 'journal.narrowspec'),
1917 1924 (self.vfs, 'journal.narrowspec.dirstate'),
1918 1925 (self.vfs, 'journal.dirstate'),
1919 1926 (self.vfs, 'journal.branch'),
1920 1927 (self.vfs, 'journal.desc'),
1921 1928 (self.vfs, 'journal.bookmarks'),
1922 1929 (self.svfs, 'journal.phaseroots'))
1923 1930
1924 1931 def undofiles(self):
1925 1932 return [(vfs, undoname(x)) for vfs, x in self._journalfiles()]
1926 1933
1927 1934 @unfilteredmethod
1928 1935 def _writejournal(self, desc):
1929 1936 self.dirstate.savebackup(None, 'journal.dirstate')
1930 1937 narrowspec.savewcbackup(self, 'journal.narrowspec.dirstate')
1931 1938 narrowspec.savebackup(self, 'journal.narrowspec')
1932 1939 self.vfs.write("journal.branch",
1933 1940 encoding.fromlocal(self.dirstate.branch()))
1934 1941 self.vfs.write("journal.desc",
1935 1942 "%d\n%s\n" % (len(self), desc))
1936 1943 self.vfs.write("journal.bookmarks",
1937 1944 self.vfs.tryread("bookmarks"))
1938 1945 self.svfs.write("journal.phaseroots",
1939 1946 self.svfs.tryread("phaseroots"))
1940 1947
1941 1948 def recover(self):
1942 1949 with self.lock():
1943 1950 if self.svfs.exists("journal"):
1944 1951 self.ui.status(_("rolling back interrupted transaction\n"))
1945 1952 vfsmap = {'': self.svfs,
1946 1953 'plain': self.vfs,}
1947 1954 transaction.rollback(self.svfs, vfsmap, "journal",
1948 1955 self.ui.warn,
1949 1956 checkambigfiles=_cachedfiles)
1950 1957 self.invalidate()
1951 1958 return True
1952 1959 else:
1953 1960 self.ui.warn(_("no interrupted transaction available\n"))
1954 1961 return False
1955 1962
1956 1963 def rollback(self, dryrun=False, force=False):
1957 1964 wlock = lock = dsguard = None
1958 1965 try:
1959 1966 wlock = self.wlock()
1960 1967 lock = self.lock()
1961 1968 if self.svfs.exists("undo"):
1962 1969 dsguard = dirstateguard.dirstateguard(self, 'rollback')
1963 1970
1964 1971 return self._rollback(dryrun, force, dsguard)
1965 1972 else:
1966 1973 self.ui.warn(_("no rollback information available\n"))
1967 1974 return 1
1968 1975 finally:
1969 1976 release(dsguard, lock, wlock)
1970 1977
1971 1978 @unfilteredmethod # Until we get smarter cache management
1972 1979 def _rollback(self, dryrun, force, dsguard):
1973 1980 ui = self.ui
1974 1981 try:
1975 1982 args = self.vfs.read('undo.desc').splitlines()
1976 1983 (oldlen, desc, detail) = (int(args[0]), args[1], None)
1977 1984 if len(args) >= 3:
1978 1985 detail = args[2]
1979 1986 oldtip = oldlen - 1
1980 1987
1981 1988 if detail and ui.verbose:
1982 1989 msg = (_('repository tip rolled back to revision %d'
1983 1990 ' (undo %s: %s)\n')
1984 1991 % (oldtip, desc, detail))
1985 1992 else:
1986 1993 msg = (_('repository tip rolled back to revision %d'
1987 1994 ' (undo %s)\n')
1988 1995 % (oldtip, desc))
1989 1996 except IOError:
1990 1997 msg = _('rolling back unknown transaction\n')
1991 1998 desc = None
1992 1999
1993 2000 if not force and self['.'] != self['tip'] and desc == 'commit':
1994 2001 raise error.Abort(
1995 2002 _('rollback of last commit while not checked out '
1996 2003 'may lose data'), hint=_('use -f to force'))
1997 2004
1998 2005 ui.status(msg)
1999 2006 if dryrun:
2000 2007 return 0
2001 2008
2002 2009 parents = self.dirstate.parents()
2003 2010 self.destroying()
2004 2011 vfsmap = {'plain': self.vfs, '': self.svfs}
2005 2012 transaction.rollback(self.svfs, vfsmap, 'undo', ui.warn,
2006 2013 checkambigfiles=_cachedfiles)
2007 2014 if self.vfs.exists('undo.bookmarks'):
2008 2015 self.vfs.rename('undo.bookmarks', 'bookmarks', checkambig=True)
2009 2016 if self.svfs.exists('undo.phaseroots'):
2010 2017 self.svfs.rename('undo.phaseroots', 'phaseroots', checkambig=True)
2011 2018 self.invalidate()
2012 2019
2013 2020 parentgone = (parents[0] not in self.changelog.nodemap or
2014 2021 parents[1] not in self.changelog.nodemap)
2015 2022 if parentgone:
2016 2023 # prevent dirstateguard from overwriting already restored one
2017 2024 dsguard.close()
2018 2025
2019 2026 narrowspec.restorebackup(self, 'undo.narrowspec')
2020 2027 narrowspec.restorewcbackup(self, 'undo.narrowspec.dirstate')
2021 2028 self.dirstate.restorebackup(None, 'undo.dirstate')
2022 2029 try:
2023 2030 branch = self.vfs.read('undo.branch')
2024 2031 self.dirstate.setbranch(encoding.tolocal(branch))
2025 2032 except IOError:
2026 2033 ui.warn(_('named branch could not be reset: '
2027 2034 'current branch is still \'%s\'\n')
2028 2035 % self.dirstate.branch())
2029 2036
2030 2037 parents = tuple([p.rev() for p in self[None].parents()])
2031 2038 if len(parents) > 1:
2032 2039 ui.status(_('working directory now based on '
2033 2040 'revisions %d and %d\n') % parents)
2034 2041 else:
2035 2042 ui.status(_('working directory now based on '
2036 2043 'revision %d\n') % parents)
2037 2044 mergemod.mergestate.clean(self, self['.'].node())
2038 2045
2039 2046 # TODO: if we know which new heads may result from this rollback, pass
2040 2047 # them to destroy(), which will prevent the branchhead cache from being
2041 2048 # invalidated.
2042 2049 self.destroyed()
2043 2050 return 0
2044 2051
2045 2052 def _buildcacheupdater(self, newtransaction):
2046 2053 """called during transaction to build the callback updating cache
2047 2054
2048 2055 Lives on the repository to help extensions that might want to augment
2049 2056 this logic. For this purpose, the created transaction is passed to the
2050 2057 method.
2051 2058 """
2052 2059 # we must avoid cyclic reference between repo and transaction.
2053 2060 reporef = weakref.ref(self)
2054 2061 def updater(tr):
2055 2062 repo = reporef()
2056 2063 repo.updatecaches(tr)
2057 2064 return updater
2058 2065
2059 2066 @unfilteredmethod
2060 2067 def updatecaches(self, tr=None, full=False):
2061 2068 """warm appropriate caches
2062 2069
2063 2070 If this function is called after a transaction closed, the transaction
2064 2071 will be available in the 'tr' argument. This can be used to selectively
2065 2072 update caches relevant to the changes in that transaction.
2066 2073
2067 2074 If 'full' is set, make sure all caches the function knows about have
2068 2075 up-to-date data, even the ones usually loaded more lazily.
2069 2076 """
2070 2077 if tr is not None and tr.hookargs.get('source') == 'strip':
2071 2078 # During strip, many caches are invalid but
2072 2079 # a later call to `destroyed` will refresh them.
2073 2080 return
2074 2081
2075 2082 if tr is None or tr.changes['origrepolen'] < len(self):
2076 2083 # updating the unfiltered branchmap should refresh all the others,
2077 2084 self.ui.debug('updating the branch cache\n')
2078 2085 branchmap.updatecache(self.filtered('served'))
2079 2086
2080 2087 if full:
2081 2088 rbc = self.revbranchcache()
2082 2089 for r in self.changelog:
2083 2090 rbc.branchinfo(r)
2084 2091 rbc.write()
2085 2092
2086 2093 # ensure the working copy parents are in the manifestfulltextcache
2087 2094 for ctx in self['.'].parents():
2088 2095 ctx.manifest() # accessing the manifest is enough
2089 2096
2090 2097 def invalidatecaches(self):
2091 2098
2092 2099 if r'_tagscache' in vars(self):
2093 2100 # can't use delattr on proxy
2094 2101 del self.__dict__[r'_tagscache']
2095 2102
2096 2103 self.unfiltered()._branchcaches.clear()
2097 2104 self.invalidatevolatilesets()
2098 2105 self._sparsesignaturecache.clear()
2099 2106
2100 2107 def invalidatevolatilesets(self):
2101 2108 self.filteredrevcache.clear()
2102 2109 obsolete.clearobscaches(self)
2103 2110
2104 2111 def invalidatedirstate(self):
2105 2112 '''Invalidates the dirstate, causing the next call to dirstate
2106 2113 to check if it was modified since the last time it was read,
2107 2114 rereading it if it has.
2108 2115
2109 2116 This is different from dirstate.invalidate() in that it doesn't
2110 2117 always reread the dirstate. Use dirstate.invalidate() if you want to
2111 2118 explicitly read the dirstate again (i.e. restoring it to a previous
2112 2119 known good state).'''
2113 2120 if hasunfilteredcache(self, r'dirstate'):
2114 2121 for k in self.dirstate._filecache:
2115 2122 try:
2116 2123 delattr(self.dirstate, k)
2117 2124 except AttributeError:
2118 2125 pass
2119 2126 delattr(self.unfiltered(), r'dirstate')
2120 2127
2121 2128 def invalidate(self, clearfilecache=False):
2122 2129 '''Invalidates both store and non-store parts other than dirstate
2123 2130
2124 2131 If a transaction is running, invalidation of store is omitted,
2125 2132 because discarding in-memory changes might cause inconsistency
2126 2133 (e.g. an incomplete fncache causes unintentional failure, but
2127 2134 a redundant one doesn't).
2128 2135 '''
2129 2136 unfiltered = self.unfiltered() # all file caches are stored unfiltered
2130 2137 for k in list(self._filecache.keys()):
2131 2138 # dirstate is invalidated separately in invalidatedirstate()
2132 2139 if k == 'dirstate':
2133 2140 continue
2134 2141 if (k == 'changelog' and
2135 2142 self.currenttransaction() and
2136 2143 self.changelog._delayed):
2137 2144 # The changelog object may store unwritten revisions. We don't
2138 2145 # want to lose them.
2139 2146 # TODO: Solve the problem instead of working around it.
2140 2147 continue
2141 2148
2142 2149 if clearfilecache:
2143 2150 del self._filecache[k]
2144 2151 try:
2145 2152 delattr(unfiltered, k)
2146 2153 except AttributeError:
2147 2154 pass
2148 2155 self.invalidatecaches()
2149 2156 if not self.currenttransaction():
2150 2157 # TODO: Changing contents of store outside transaction
2151 2158 # causes inconsistency. We should make in-memory store
2152 2159 # changes detectable, and abort if changed.
2153 2160 self.store.invalidatecaches()
2154 2161
2155 2162 def invalidateall(self):
2156 2163 '''Fully invalidates both store and non-store parts, causing the
2157 2164 subsequent operation to reread any outside changes.'''
2158 2165 # extension should hook this to invalidate its caches
2159 2166 self.invalidate()
2160 2167 self.invalidatedirstate()
2161 2168
2162 2169 @unfilteredmethod
2163 2170 def _refreshfilecachestats(self, tr):
2164 2171 """Reload stats of cached files so that they are flagged as valid"""
2165 2172 for k, ce in self._filecache.items():
2166 2173 k = pycompat.sysstr(k)
2167 2174 if k == r'dirstate' or k not in self.__dict__:
2168 2175 continue
2169 2176 ce.refresh()
2170 2177
2171 2178 def _lock(self, vfs, lockname, wait, releasefn, acquirefn, desc,
2172 2179 inheritchecker=None, parentenvvar=None):
2173 2180 parentlock = None
2174 2181 # the contents of parentenvvar are used by the underlying lock to
2175 2182 # determine whether it can be inherited
2176 2183 if parentenvvar is not None:
2177 2184 parentlock = encoding.environ.get(parentenvvar)
2178 2185
2179 2186 timeout = 0
2180 2187 warntimeout = 0
2181 2188 if wait:
2182 2189 timeout = self.ui.configint("ui", "timeout")
2183 2190 warntimeout = self.ui.configint("ui", "timeout.warn")
2184 2191 # internal config: ui.signal-safe-lock
2185 2192 signalsafe = self.ui.configbool('ui', 'signal-safe-lock')
2186 2193
2187 2194 l = lockmod.trylock(self.ui, vfs, lockname, timeout, warntimeout,
2188 2195 releasefn=releasefn,
2189 2196 acquirefn=acquirefn, desc=desc,
2190 2197 inheritchecker=inheritchecker,
2191 2198 parentlock=parentlock,
2192 2199 signalsafe=signalsafe)
2193 2200 return l
2194 2201
2195 2202 def _afterlock(self, callback):
2196 2203 """add a callback to be run when the repository is fully unlocked
2197 2204
2198 2205 The callback will be executed when the outermost lock is released
2199 2206 (with wlock being higher level than 'lock')."""
2200 2207 for ref in (self._wlockref, self._lockref):
2201 2208 l = ref and ref()
2202 2209 if l and l.held:
2203 2210 l.postrelease.append(callback)
2204 2211 break
2205 2212 else: # no lock has been found.
2206 2213 callback()
2207 2214
2208 2215 def lock(self, wait=True):
2209 2216 '''Lock the repository store (.hg/store) and return a weak reference
2210 2217 to the lock. Use this before modifying the store (e.g. committing or
2211 2218 stripping). If you are opening a transaction, get a lock as well.
2212 2219 
2213 2220 If both 'lock' and 'wlock' must be acquired, ensure you always acquire
2214 2221 'wlock' first to avoid a dead-lock hazard.'''
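# A usage sketch: the returned lock also works as a context manager,
# as ``recover()`` above does with ``with self.lock():``.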
2215 2222 l = self._currentlock(self._lockref)
2216 2223 if l is not None:
2217 2224 l.lock()
2218 2225 return l
2219 2226
2220 2227 l = self._lock(self.svfs, "lock", wait, None,
2221 2228 self.invalidate, _('repository %s') % self.origroot)
2222 2229 self._lockref = weakref.ref(l)
2223 2230 return l
2224 2231
2225 2232 def _wlockchecktransaction(self):
2226 2233 if self.currenttransaction() is not None:
2227 2234 raise error.LockInheritanceContractViolation(
2228 2235 'wlock cannot be inherited in the middle of a transaction')
2229 2236
2230 2237 def wlock(self, wait=True):
2231 2238 '''Lock the non-store parts of the repository (everything under
2232 2239 .hg except .hg/store) and return a weak reference to the lock.
2233 2240
2234 2241 Use this before modifying files in .hg.
2235 2242
2236 2243 If both 'lock' and 'wlock' must be acquired, ensure you always acquire
2237 2244 'wlock' first to avoid a dead-lock hazard.'''
2238 2245 l = self._wlockref and self._wlockref()
2239 2246 if l is not None and l.held:
2240 2247 l.lock()
2241 2248 return l
2242 2249
2243 2250 # We do not need to check for non-waiting lock acquisition. Such an
2244 2251 # acquisition would not cause a dead-lock, as it would just fail.
2245 2252 if wait and (self.ui.configbool('devel', 'all-warnings')
2246 2253 or self.ui.configbool('devel', 'check-locks')):
2247 2254 if self._currentlock(self._lockref) is not None:
2248 2255 self.ui.develwarn('"wlock" acquired after "lock"')
2249 2256
2250 2257 def unlock():
2251 2258 if self.dirstate.pendingparentchange():
2252 2259 self.dirstate.invalidate()
2253 2260 else:
2254 2261 self.dirstate.write(None)
2255 2262
2256 2263 self._filecache['dirstate'].refresh()
2257 2264
2258 2265 l = self._lock(self.vfs, "wlock", wait, unlock,
2259 2266 self.invalidatedirstate, _('working directory of %s') %
2260 2267 self.origroot,
2261 2268 inheritchecker=self._wlockchecktransaction,
2262 2269 parentenvvar='HG_WLOCK_LOCKER')
2263 2270 self._wlockref = weakref.ref(l)
2264 2271 return l
2265 2272
2266 2273 def _currentlock(self, lockref):
2267 2274 """Returns the lock if it's held, or None if it's not."""
2268 2275 if lockref is None:
2269 2276 return None
2270 2277 l = lockref()
2271 2278 if l is None or not l.held:
2272 2279 return None
2273 2280 return l
2274 2281
2275 2282 def currentwlock(self):
2276 2283 """Returns the wlock if it's held, or None if it's not."""
2277 2284 return self._currentlock(self._wlockref)
2278 2285
2279 2286 def _filecommit(self, fctx, manifest1, manifest2, linkrev, tr, changelist):
2280 2287 """
2281 2288 commit an individual file as part of a larger transaction
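Returns the filenode to be recorded in the manifest: a newly created
filelog node if the file changed, or a reused parent filenode otherwise.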
2282 2289 """
2283 2290
2284 2291 fname = fctx.path()
2285 2292 fparent1 = manifest1.get(fname, nullid)
2286 2293 fparent2 = manifest2.get(fname, nullid)
2287 2294 if isinstance(fctx, context.filectx):
2288 2295 node = fctx.filenode()
2289 2296 if node in [fparent1, fparent2]:
2290 2297 self.ui.debug('reusing %s filelog entry\n' % fname)
2291 2298 if manifest1.flags(fname) != fctx.flags():
2292 2299 changelist.append(fname)
2293 2300 return node
2294 2301
2295 2302 flog = self.file(fname)
2296 2303 meta = {}
2297 2304 copy = fctx.renamed()
2298 2305 if copy and copy[0] != fname:
2299 2306 # Mark the new revision of this file as a copy of another
2300 2307 # file. This copy data will effectively act as a parent
2301 2308 # of this new revision. If this is a merge, the first
2302 2309 # parent will be the nullid (meaning "look up the copy data")
2303 2310 # and the second one will be the other parent. For example:
2304 2311 #
2305 2312 # 0 --- 1 --- 3 rev1 changes file foo
2306 2313 # \ / rev2 renames foo to bar and changes it
2307 2314 # \- 2 -/ rev3 should have bar with all changes and
2308 2315 # should record that bar descends from
2309 2316 # bar in rev2 and foo in rev1
2310 2317 #
2311 2318 # this allows this merge to succeed:
2312 2319 #
2313 2320 # 0 --- 1 --- 3 rev4 reverts the content change from rev2
2314 2321 # \ / merging rev3 and rev4 should use bar@rev2
2315 2322 # \- 2 --- 4 as the merge base
2316 2323 #
2317 2324
2318 2325 cfname = copy[0]
2319 2326 crev = manifest1.get(cfname)
2320 2327 newfparent = fparent2
2321 2328
2322 2329 if manifest2: # branch merge
2323 2330 if fparent2 == nullid or crev is None: # copied on remote side
2324 2331 if cfname in manifest2:
2325 2332 crev = manifest2[cfname]
2326 2333 newfparent = fparent1
2327 2334
2328 2335 # Here, we used to search backwards through history to try to find
2329 2336 # where the file copy came from if the source of a copy was not in
2330 2337 # the parent directory. However, this doesn't actually make sense to
2331 2338 # do (what does a copy from something not in your working copy even
2332 2339 # mean?) and it causes bugs (eg, issue4476). Instead, we will warn
2333 2340 # the user that copy information was dropped, so if they didn't
2334 2341 # expect this outcome it can be fixed, but this is the correct
2335 2342 # behavior in this circumstance.
2336 2343
2337 2344 if crev:
2338 2345 self.ui.debug(" %s: copy %s:%s\n" % (fname, cfname, hex(crev)))
2339 2346 meta["copy"] = cfname
2340 2347 meta["copyrev"] = hex(crev)
2341 2348 fparent1, fparent2 = nullid, newfparent
2342 2349 else:
2343 2350 self.ui.warn(_("warning: can't find ancestor for '%s' "
2344 2351 "copied from '%s'!\n") % (fname, cfname))
2345 2352
2346 2353 elif fparent1 == nullid:
2347 2354 fparent1, fparent2 = fparent2, nullid
2348 2355 elif fparent2 != nullid:
2349 2356 # is one parent an ancestor of the other?
2350 2357 fparentancestors = flog.commonancestorsheads(fparent1, fparent2)
2351 2358 if fparent1 in fparentancestors:
2352 2359 fparent1, fparent2 = fparent2, nullid
2353 2360 elif fparent2 in fparentancestors:
2354 2361 fparent2 = nullid
2355 2362
2356 2363 # is the file changed?
2357 2364 text = fctx.data()
2358 2365 if fparent2 != nullid or flog.cmp(fparent1, text) or meta:
2359 2366 changelist.append(fname)
2360 2367 return flog.add(text, meta, tr, linkrev, fparent1, fparent2)
2361 2368 # are just the flags changed during merge?
2362 2369 elif fname in manifest1 and manifest1.flags(fname) != fctx.flags():
2363 2370 changelist.append(fname)
2364 2371
2365 2372 return fparent1
2366 2373
2367 2374 def checkcommitpatterns(self, wctx, vdirs, match, status, fail):
2368 2375 """check for commit arguments that aren't committable"""
2369 2376 if match.isexact() or match.prefix():
2370 2377 matched = set(status.modified + status.added + status.removed)
2371 2378
2372 2379 for f in match.files():
2373 2380 f = self.dirstate.normalize(f)
2374 2381 if f == '.' or f in matched or f in wctx.substate:
2375 2382 continue
2376 2383 if f in status.deleted:
2377 2384 fail(f, _('file not found!'))
2378 2385 if f in vdirs: # visited directory
2379 2386 d = f + '/'
2380 2387 for mf in matched:
2381 2388 if mf.startswith(d):
2382 2389 break
2383 2390 else:
2384 2391 fail(f, _("no match under directory!"))
2385 2392 elif f not in self.dirstate:
2386 2393 fail(f, _("file not tracked!"))
2387 2394
2388 2395 @unfilteredmethod
2389 2396 def commit(self, text="", user=None, date=None, match=None, force=False,
2390 2397 editor=False, extra=None):
2391 2398 """Add a new revision to current repository.
2392 2399
2393 2400 Revision information is gathered from the working directory,
2394 2401 match can be used to filter the committed files. If editor is
2395 2402 supplied, it is called to get a commit message.
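A usage sketch (message and user are illustrative)::

    node = repo.commit(text='fix a bug', user='alice')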
2396 2403 """
2397 2404 if extra is None:
2398 2405 extra = {}
2399 2406
2400 2407 def fail(f, msg):
2401 2408 raise error.Abort('%s: %s' % (f, msg))
2402 2409
2403 2410 if not match:
2404 2411 match = matchmod.always(self.root, '')
2405 2412
2406 2413 if not force:
2407 2414 vdirs = []
2408 2415 match.explicitdir = vdirs.append
2409 2416 match.bad = fail
2410 2417
2411 2418 wlock = lock = tr = None
2412 2419 try:
2413 2420 wlock = self.wlock()
2414 2421 lock = self.lock() # for recent changelog (see issue4368)
2415 2422
2416 2423 wctx = self[None]
2417 2424 merge = len(wctx.parents()) > 1
2418 2425
2419 2426 if not force and merge and not match.always():
2420 2427 raise error.Abort(_('cannot partially commit a merge '
2421 2428 '(do not specify files or patterns)'))
2422 2429
2423 2430 status = self.status(match=match, clean=force)
2424 2431 if force:
2425 2432 status.modified.extend(status.clean) # mq may commit clean files
2426 2433
2427 2434 # check subrepos
2428 2435 subs, commitsubs, newstate = subrepoutil.precommit(
2429 2436 self.ui, wctx, status, match, force=force)
2430 2437
2431 2438 # make sure all explicit patterns are matched
2432 2439 if not force:
2433 2440 self.checkcommitpatterns(wctx, vdirs, match, status, fail)
2434 2441
2435 2442 cctx = context.workingcommitctx(self, status,
2436 2443 text, user, date, extra)
2437 2444
2438 2445 # internal config: ui.allowemptycommit
2439 2446 allowemptycommit = (wctx.branch() != wctx.p1().branch()
2440 2447 or extra.get('close') or merge or cctx.files()
2441 2448 or self.ui.configbool('ui', 'allowemptycommit'))
2442 2449 if not allowemptycommit:
2443 2450 return None
2444 2451
2445 2452 if merge and cctx.deleted():
2446 2453 raise error.Abort(_("cannot commit merge with missing files"))
2447 2454
2448 2455 ms = mergemod.mergestate.read(self)
2449 2456 mergeutil.checkunresolved(ms)
2450 2457
2451 2458 if editor:
2452 2459 cctx._text = editor(self, cctx, subs)
2453 2460 edited = (text != cctx._text)
2454 2461
2455 2462 # Save commit message in case this transaction gets rolled back
2456 2463 # (e.g. by a pretxncommit hook). Leave the content alone on
2457 2464 # the assumption that the user will use the same editor again.
2458 2465 msgfn = self.savecommitmessage(cctx._text)
2459 2466
2460 2467 # commit subs and write new state
2461 2468 if subs:
2462 2469 for s in sorted(commitsubs):
2463 2470 sub = wctx.sub(s)
2464 2471 self.ui.status(_('committing subrepository %s\n') %
2465 2472 subrepoutil.subrelpath(sub))
2466 2473 sr = sub.commit(cctx._text, user, date)
2467 2474 newstate[s] = (newstate[s][0], sr)
2468 2475 subrepoutil.writestate(self, newstate)
2469 2476
2470 2477 p1, p2 = self.dirstate.parents()
2471 2478 hookp1, hookp2 = hex(p1), (p2 != nullid and hex(p2) or '')
2472 2479 try:
2473 2480 self.hook("precommit", throw=True, parent1=hookp1,
2474 2481 parent2=hookp2)
2475 2482 tr = self.transaction('commit')
2476 2483 ret = self.commitctx(cctx, True)
2477 2484 except: # re-raises
2478 2485 if edited:
2479 2486 self.ui.write(
2480 2487 _('note: commit message saved in %s\n') % msgfn)
2481 2488 raise
2482 2489 # update bookmarks, dirstate and mergestate
2483 2490 bookmarks.update(self, [p1, p2], ret)
2484 2491 cctx.markcommitted(ret)
2485 2492 ms.reset()
2486 2493 tr.close()
2487 2494
2488 2495 finally:
2489 2496 lockmod.release(tr, lock, wlock)
2490 2497
2491 2498 def commithook(node=hex(ret), parent1=hookp1, parent2=hookp2):
2492 2499 # hack for commands that use a temporary commit (eg: histedit):
2493 2500 # the temporary commit may have been stripped before the hook runs
2494 2501 if self.changelog.hasnode(ret):
2495 2502 self.hook("commit", node=node, parent1=parent1,
2496 2503 parent2=parent2)
2497 2504 self._afterlock(commithook)
2498 2505 return ret

    @unfilteredmethod
    def commitctx(self, ctx, error=False):
        """Add a new revision to current repository.
        Revision information is passed via the context argument.

        ctx.files() should list all files involved in this commit, i.e.
        modified/added/removed files. On merge, it may be wider than the
        ctx.files() to be committed, since any file nodes derived directly
        from p1 or p2 are excluded from the committed ctx.files().
        """

        tr = None
        p1, p2 = ctx.p1(), ctx.p2()
        user = ctx.user()

        lock = self.lock()
        try:
            tr = self.transaction("commit")
            trp = weakref.proxy(tr)

            if ctx.manifestnode():
                # reuse an existing manifest revision
                self.ui.debug('reusing known manifest\n')
                mn = ctx.manifestnode()
                files = ctx.files()
            elif ctx.files():
                m1ctx = p1.manifestctx()
                m2ctx = p2.manifestctx()
                mctx = m1ctx.copy()

                m = mctx.read()
                m1 = m1ctx.read()
                m2 = m2ctx.read()

                # check in files
                added = []
                changed = []
                removed = list(ctx.removed())
                linkrev = len(self)
                self.ui.note(_("committing files:\n"))
                for f in sorted(ctx.modified() + ctx.added()):
                    self.ui.note(f + "\n")
                    try:
                        fctx = ctx[f]
                        if fctx is None:
                            removed.append(f)
                        else:
                            added.append(f)
                            m[f] = self._filecommit(fctx, m1, m2, linkrev,
                                                    trp, changed)
                            m.setflag(f, fctx.flags())
                    except OSError as inst:
                        self.ui.warn(_("trouble committing %s!\n") % f)
                        raise
                    except IOError as inst:
                        errcode = getattr(inst, 'errno', errno.ENOENT)
                        if error or errcode and errcode != errno.ENOENT:
                            self.ui.warn(_("trouble committing %s!\n") % f)
                        raise

                # update manifest
                removed = [f for f in sorted(removed) if f in m1 or f in m2]
                drop = [f for f in removed if f in m]
                for f in drop:
                    del m[f]
                files = changed + removed
                md = None
                if not files:
                    # if no "files" actually changed in terms of the changelog,
                    # try hard to detect unmodified manifest entry so that the
                    # exact same commit can be reproduced later on convert.
                    md = m1.diff(m, scmutil.matchfiles(self, ctx.files()))
                if not files and md:
                    self.ui.debug('not reusing manifest (no file change in '
                                  'changelog, but manifest differs)\n')
                if files or md:
                    self.ui.note(_("committing manifest\n"))
                    # we're using narrowmatch here since it's already applied at
                    # other stages (such as dirstate.walk), so we're already
                    # ignoring things outside of narrowspec in most cases. The
                    # one case where we might have files outside the narrowspec
                    # at this point is merges, and we already error out in the
                    # case where the merge has files outside of the narrowspec,
                    # so this is safe.
                    mn = mctx.write(trp, linkrev,
                                    p1.manifestnode(), p2.manifestnode(),
                                    added, drop, match=self.narrowmatch())
                else:
                    self.ui.debug('reusing manifest from p1 (listed files '
                                  'actually unchanged)\n')
                    mn = p1.manifestnode()
            else:
                self.ui.debug('reusing manifest from p1 (no file change)\n')
                mn = p1.manifestnode()
                files = []

            # update changelog
            self.ui.note(_("committing changelog\n"))
            self.changelog.delayupdate(tr)
            n = self.changelog.add(mn, files, ctx.description(),
                                   trp, p1.node(), p2.node(),
                                   user, ctx.date(), ctx.extra().copy())
            xp1, xp2 = p1.hex(), p2 and p2.hex() or ''
            self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
                      parent2=xp2)
            # set the new commit in its proper phase
            targetphase = subrepoutil.newcommitphase(self.ui, ctx)
            if targetphase:
                # retracting the boundary does not alter parent changesets.
                # if a parent has a higher phase, the resulting phase will
                # be compliant anyway
                #
                # if minimal phase was 0 we don't need to retract anything
                phases.registernew(self, tr, targetphase, [n])
            tr.close()
            return n
        finally:
            if tr:
                tr.release()
            lock.release()

    @unfilteredmethod
    def destroying(self):
        '''Inform the repository that nodes are about to be destroyed.
        Intended for use by strip and rollback, so there's a common
        place for anything that has to be done before destroying history.

        This is mostly useful for saving state that is in memory and waiting
        to be flushed when the current lock is released. Because a call to
        destroyed is imminent, the repo will be invalidated, causing those
        changes to either stay in memory (waiting for the next unlock) or
        vanish completely.
        '''
        # When using the same lock to commit and strip, the phasecache is left
        # dirty after committing. Then when we strip, the repo is invalidated,
        # causing those changes to disappear.
        if '_phasecache' in vars(self):
            self._phasecache.write()

    @unfilteredmethod
    def destroyed(self):
        '''Inform the repository that nodes have been destroyed.
        Intended for use by strip and rollback, so there's a common
        place for anything that has to be done after destroying history.
        '''
        # When one tries to:
        # 1) destroy nodes thus calling this method (e.g. strip)
        # 2) use phasecache somewhere (e.g. commit)
        #
        # then 2) will fail because the phasecache contains nodes that were
        # removed. We can either remove phasecache from the filecache,
        # causing it to reload next time it is accessed, or simply filter
        # the removed nodes now and write the updated cache.
        self._phasecache.filterunknown(self)
        self._phasecache.write()

        # refresh all repository caches
        self.updatecaches()

        # Ensure the persistent tag cache is updated. Doing it now
        # means that the tag cache only has to worry about destroyed
        # heads immediately after a strip/rollback. That in turn
        # guarantees that "cachetip == currenttip" (comparing both rev
        # and node) always means no nodes have been added or destroyed.

        # XXX this is suboptimal when qrefresh'ing: we strip the current
        # head, refresh the tag cache, then immediately add a new head.
        # But I think doing it this way is necessary for the "instant
        # tag cache retrieval" case to work.
        self.invalidate()

    def status(self, node1='.', node2=None, match=None,
               ignored=False, clean=False, unknown=False,
               listsubrepos=False):
        '''a convenience method that calls node1.status(node2)'''
        return self[node1].status(node2, match, ignored, clean, unknown,
                                  listsubrepos)

    def addpostdsstatus(self, ps):
        """Add a callback to run within the wlock, at the point at which status
        fixups happen.

        On status completion, callback(wctx, status) will be called with the
        wlock held, unless the dirstate has changed from underneath or the wlock
        couldn't be grabbed.

        Callbacks should not capture and use a cached copy of the dirstate --
        it might change in the meanwhile. Instead, they should access the
        dirstate via wctx.repo().dirstate.

        This list is emptied out after each status run -- extensions should
        make sure they add to this list each time dirstate.status is called.
        Extensions should also make sure they don't call this for statuses
        that don't involve the dirstate.
        """

        # The list is located here for uniqueness reasons -- it is actually
        # managed by the workingctx, but that isn't unique per-repo.
        self._postdsstatus.append(ps)

    def postdsstatus(self):
        """Used by workingctx to get the list of post-dirstate-status hooks."""
        return self._postdsstatus

    def clearpostdsstatus(self):
        """Used by workingctx to clear post-dirstate-status hooks."""
        del self._postdsstatus[:]

    def heads(self, start=None):
        if start is None:
            cl = self.changelog
            headrevs = reversed(cl.headrevs())
            return [cl.node(rev) for rev in headrevs]

        heads = self.changelog.heads(start)
        # sort the output in rev descending order
        return sorted(heads, key=self.changelog.rev, reverse=True)
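
    # Both paths above return heads sorted newest-first by revision number;
    # e.g. in a repository with anonymous heads at revisions 5 and 3,
    # repo.heads() would return the node of rev 5 followed by the node of
    # rev 3 (an illustrative sketch, not output of a real session).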

    def branchheads(self, branch=None, start=None, closed=False):
        '''return a (possibly filtered) list of heads for the given branch

        Heads are returned in topological order, from newest to oldest.
        If branch is None, use the dirstate branch.
        If start is not None, return only heads reachable from start.
        If closed is True, return heads that are marked as closed as well.
        '''
        if branch is None:
            branch = self[None].branch()
        branches = self.branchmap()
        if branch not in branches:
            return []
        # the cache returns heads ordered lowest to highest
        bheads = list(reversed(branches.branchheads(branch, closed=closed)))
        if start is not None:
            # filter out the heads that cannot be reached from startrev
            fbheads = set(self.changelog.nodesbetween([start], bheads)[2])
            bheads = [h for h in bheads if h in fbheads]
        return bheads
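
    # Illustrative use (a sketch assuming a repo with a named branch
    # 'stable'; the branch name is not part of this module):
    #
    #   heads = repo.branchheads('stable', closed=True)
    #   # newest head first; open and closed heads are both included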

    def branches(self, nodes):
        if not nodes:
            nodes = [self.changelog.tip()]
        b = []
        for n in nodes:
            t = n
            while True:
                p = self.changelog.parents(n)
                if p[1] != nullid or p[0] == nullid:
                    b.append((t, n, p[0], p[1]))
                    break
                n = p[0]
        return b

    def between(self, pairs):
        r = []

        for top, bottom in pairs:
            n, l, i = top, [], 0
            f = 1

            while n != bottom and n != nullid:
                p = self.changelog.parents(n)[0]
                if i == f:
                    l.append(n)
                    f = f * 2
                n = p
                i += 1

            r.append(l)

        return r
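
    # between() samples nodes at exponentially growing distances from each
    # `top` along the first-parent chain: the appended nodes sit at distance
    # 1, 2, 4, 8, ... from top, stopping at `bottom` (or nullid). For
    # example, on a linear chain of revisions 10 -> 9 -> ... -> 0, a single
    # (node-of-10, node-of-0) pair yields the nodes at revisions 9, 8, 6
    # and 2. This keeps answers small for the legacy wire protocol command
    # of the same name.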

    def checkpush(self, pushop):
        """Extensions can override this function if additional checks have
        to be performed before pushing, or call it if they override the
        push command.
        """

    @unfilteredpropertycache
    def prepushoutgoinghooks(self):
2780 2787 """Return util.hooks consists of a pushop with repo, remote, outgoing
2781 2788 methods, which are called before pushing changesets.
2782 2789 """
        return util.hooks()

    def pushkey(self, namespace, key, old, new):
        try:
            tr = self.currenttransaction()
            hookargs = {}
            if tr is not None:
                hookargs.update(tr.hookargs)
            hookargs = pycompat.strkwargs(hookargs)
            hookargs[r'namespace'] = namespace
            hookargs[r'key'] = key
            hookargs[r'old'] = old
            hookargs[r'new'] = new
            self.hook('prepushkey', throw=True, **hookargs)
        except error.HookAbort as exc:
            self.ui.write_err(_("pushkey-abort: %s\n") % exc)
            if exc.hint:
                self.ui.write_err(_("(%s)\n") % exc.hint)
            return False
        self.ui.debug('pushing key for "%s:%s"\n' % (namespace, key))
        ret = pushkey.push(self, namespace, key, old, new)
        def runhook():
            self.hook('pushkey', namespace=namespace, key=key, old=old, new=new,
                      ret=ret)
        self._afterlock(runhook)
        return ret
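
    # pushkey is the transport behind e.g. bookmark and phase updates. A
    # sketch (the namespace, key and node are illustrative):
    #
    #   ok = repo.pushkey('bookmarks', 'mybook', '', hex(newnode))
    #
    # The 'prepushkey' hook can veto the update by raising; the 'pushkey'
    # hook runs afterwards, deferred via _afterlock() like the commit hook.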

    def listkeys(self, namespace):
        self.hook('prelistkeys', throw=True, namespace=namespace)
        self.ui.debug('listing keys for "%s"\n' % namespace)
        values = pushkey.list(self, namespace)
        self.hook('listkeys', namespace=namespace, values=values)
        return values

    def debugwireargs(self, one, two, three=None, four=None, five=None):
        '''used to test argument passing over the wire'''
        return "%s %s %s %s %s" % (one, two, pycompat.bytestr(three),
                                   pycompat.bytestr(four),
                                   pycompat.bytestr(five))

    def savecommitmessage(self, text):
        fp = self.vfs('last-message.txt', 'wb')
        try:
            fp.write(text)
        finally:
            fp.close()
        return self.pathto(fp.name[len(self.root) + 1:])
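
    # The saved message lands in .hg/last-message.txt; the return value is
    # that path made relative via pathto(), so an aborted commit can tell
    # the user where their message survived (see the except branch in
    # commit() above).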

# used to avoid circular references so destructors work
def aftertrans(files):
    renamefiles = [tuple(t) for t in files]
    def a():
        for vfs, src, dest in renamefiles:
            # if src and dest refer to the same file, vfs.rename is a no-op,
            # leaving both src and dest on disk. delete dest to make sure
            # the rename couldn't be such a no-op.
            vfs.tryunlink(dest)
            try:
                vfs.rename(src, dest)
            except OSError: # journal file does not yet exist
                pass
    return a
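
# A sketch of how aftertrans() is wired up: the returned callback is handed
# to the transaction as its post-close hook, so that once the transaction
# commits, each journal file is renamed to its undo counterpart for later
# rollback (the file names below are illustrative):
#
#   after = aftertrans([(vfs, 'journal.bookmarks', 'undo.bookmarks')])
#   after()   # renames journal.bookmarks to undo.bookmarks, if present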

def undoname(fn):
    base, name = os.path.split(fn)
    assert name.startswith('journal')
    return os.path.join(base, name.replace('journal', 'undo', 1))
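
# For example, undoname('journal.dirstate') returns 'undo.dirstate', and
# undoname('store/journal') returns 'store/undo'; only the first 'journal'
# in the basename is replaced.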

def instance(ui, path, create, intents=None, createopts=None):
    localpath = util.urllocalpath(path)
    if create:
        createrepository(ui, localpath, createopts=createopts)

    return makelocalrepository(ui, localpath, intents=intents)

def islocal(path):
    return True

def defaultcreateopts(ui, createopts=None):
    """Populate the default creation options for a repository.

    A dictionary of explicitly requested creation options can be passed
    in. Missing keys will be populated.
    """
    createopts = dict(createopts or {})

    if 'backend' not in createopts:
        # experimental config: storage.new-repo-backend
        createopts['backend'] = ui.config('storage', 'new-repo-backend')

    return createopts
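
# Sketch: with no backend requested, the experimental storage config (or
# its default) fills one in, e.g. (values illustrative):
#
#   opts = defaultcreateopts(ui, {'lfs': True})
#   # opts == {'lfs': True, 'backend': 'revlogv1'} under stock config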

def newreporequirements(ui, createopts):
    """Determine the set of requirements for a new local repository.

    Extensions can wrap this function to specify custom requirements for
    new repositories.
    """
    # If the repo is being created from a shared repository, we copy
    # its requirements.
    if 'sharedrepo' in createopts:
        requirements = set(createopts['sharedrepo'].requirements)
        if createopts.get('sharedrelative'):
            requirements.add('relshared')
        else:
            requirements.add('shared')

        return requirements

    if 'backend' not in createopts:
        raise error.ProgrammingError('backend key not present in createopts; '
                                     'was defaultcreateopts() called?')

    if createopts['backend'] != 'revlogv1':
        raise error.Abort(_('unable to determine repository requirements for '
                            'storage backend: %s') % createopts['backend'])

    requirements = {'revlogv1'}
    if ui.configbool('format', 'usestore'):
        requirements.add('store')
        if ui.configbool('format', 'usefncache'):
            requirements.add('fncache')
            if ui.configbool('format', 'dotencode'):
                requirements.add('dotencode')

    compengine = ui.config('experimental', 'format.compression')
    if compengine not in util.compengines:
        raise error.Abort(_('compression engine %s defined by '
                            'experimental.format.compression not available') %
                          compengine,
                          hint=_('run "hg debuginstall" to list available '
                                 'compression engines'))

    # zlib is the historical default and doesn't need an explicit requirement.
    if compengine != 'zlib':
        requirements.add('exp-compression-%s' % compengine)

    if scmutil.gdinitconfig(ui):
        requirements.add('generaldelta')
        # experimental config: format.sparse-revlog
        if ui.configbool('format', 'sparse-revlog'):
            requirements.add(SPARSEREVLOG_REQUIREMENT)
    if ui.configbool('experimental', 'treemanifest'):
        requirements.add('treemanifest')

    revlogv2 = ui.config('experimental', 'revlogv2')
    if revlogv2 == 'enable-unstable-format-and-corrupt-my-data':
        requirements.remove('revlogv1')
        # generaldelta is implied by revlogv2.
        requirements.discard('generaldelta')
        requirements.add(REVLOGV2_REQUIREMENT)
    # experimental config: format.internal-phase
    if ui.configbool('format', 'internal-phase'):
        requirements.add('internal-phase')

    if createopts.get('narrowfiles'):
        requirements.add(repository.NARROW_REQUIREMENT)

    if createopts.get('lfs'):
        requirements.add('lfs')

    return requirements
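
# Under stock configuration this typically yields a set along the lines of
# {'revlogv1', 'store', 'fncache', 'dotencode', 'generaldelta'} (an
# illustrative sketch; the exact contents depend on config and on which
# createopts were requested).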

def filterknowncreateopts(ui, createopts):
    """Filters a dict of repo creation options against options that are known.

    Receives a dict of repo creation options and returns a dict of those
    options that we don't know how to handle.

    This function is called as part of repository creation. If the
    returned dict contains any items, repository creation will not
    be allowed, as it means there was a request to create a repository
    with options not recognized by loaded code.

    Extensions can wrap this function to filter out creation options
    they know how to handle.
    """
    known = {
        'backend',
        'lfs',
        'narrowfiles',
        'sharedrepo',
        'sharedrelative',
        'shareditems',
        'shallowfilestore',
    }

    return {k: v for k, v in createopts.items() if k not in known}
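
# Example (sketch): filterknowncreateopts(ui, {'lfs': True, 'frobnicate': 1})
# returns {'frobnicate': 1}, which makes createrepository() below abort with
# a hint that a required extension may not be loaded.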

def createrepository(ui, path, createopts=None):
    """Create a new repository in a vfs.

    ``path`` path to the new repo's working directory.
    ``createopts`` options for the new repository.

    The following keys for ``createopts`` are recognized:

    backend
       The storage backend to use.
    lfs
       Repository will be created with ``lfs`` requirement. The lfs extension
       will automatically be loaded when the repository is accessed.
    narrowfiles
       Set up repository to support narrow file storage.
    sharedrepo
       Repository object from which storage should be shared.
    sharedrelative
       Boolean indicating if the path to the shared repo should be
       stored as relative. By default, the pointer to the "parent" repo
       is stored as an absolute path.
    shareditems
       Set of items to share to the new repository (in addition to storage).
    shallowfilestore
       Indicates that storage for files should be shallow (not all ancestor
       revisions are known).
    """
    createopts = defaultcreateopts(ui, createopts=createopts)

    unknownopts = filterknowncreateopts(ui, createopts)

    if not isinstance(unknownopts, dict):
        raise error.ProgrammingError('filterknowncreateopts() did not return '
                                     'a dict')

    if unknownopts:
        raise error.Abort(_('unable to create repository because of unknown '
                            'creation option: %s') %
                          ', '.join(sorted(unknownopts)),
                          hint=_('is a required extension not loaded?'))

    requirements = newreporequirements(ui, createopts=createopts)

    wdirvfs = vfsmod.vfs(path, expandpath=True, realpath=True)

    hgvfs = vfsmod.vfs(wdirvfs.join(b'.hg'))
    if hgvfs.exists():
        raise error.RepoError(_('repository %s already exists') % path)

    if 'sharedrepo' in createopts:
        sharedpath = createopts['sharedrepo'].sharedpath

        if createopts.get('sharedrelative'):
            try:
                sharedpath = os.path.relpath(sharedpath, hgvfs.base)
            except (IOError, ValueError) as e:
                # ValueError is raised on Windows if the drive letters differ
                # on each path.
                raise error.Abort(_('cannot calculate relative path'),
                                  hint=stringutil.forcebytestr(e))

    if not wdirvfs.exists():
        wdirvfs.makedirs()

    hgvfs.makedir(notindexed=True)
    if 'sharedrepo' not in createopts:
        hgvfs.mkdir(b'cache')
    hgvfs.mkdir(b'wcache')

    if b'store' in requirements and 'sharedrepo' not in createopts:
        hgvfs.mkdir(b'store')

        # We create an invalid changelog outside the store so very old
        # Mercurial versions (which didn't know about the requirements
        # file) encounter an error on reading the changelog. This
        # effectively locks out old clients and prevents them from
        # mucking with a repo in an unknown format.
        #
        # The revlog header has version 2, which won't be recognized by
        # such old clients.
        hgvfs.append(b'00changelog.i',
                     b'\0\0\0\2 dummy changelog to prevent using the old repo '
                     b'layout')

    scmutil.writerequires(hgvfs, requirements)

    # Write out file telling readers where to find the shared store.
    if 'sharedrepo' in createopts:
        hgvfs.write(b'sharedpath', sharedpath)

    if createopts.get('shareditems'):
        shared = b'\n'.join(sorted(createopts['shareditems'])) + b'\n'
        hgvfs.write(b'shared', shared)
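
# A minimal creation sketch (assuming a ui object; the path is illustrative
# and must not already contain a repository):
#
#   createrepository(ui, '/tmp/newrepo', createopts={'lfs': True})
#   repo = makelocalrepository(ui, '/tmp/newrepo')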

def poisonrepository(repo):
    """Poison a repository instance so it can no longer be used."""
    # Perform any cleanup on the instance.
    repo.close()

    # Our strategy is to replace the type of the object with one that
    # has all attribute lookups result in error.
    #
    # But we have to allow the close() method because some constructors
    # of repos call close() on repo references.
    class poisonedrepository(object):
        def __getattribute__(self, item):
            if item == r'close':
                return object.__getattribute__(self, item)

            raise error.ProgrammingError('repo instances should not be used '
                                         'after unshare')

        def close(self):
            pass

    # We may have a repoview, which intercepts __setattr__. So be sure
    # we operate at the lowest level possible.
    object.__setattr__(repo, r'__class__', poisonedrepository)
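
# After unsharing, callers keep their reference to the old repo object, so
# poisoning it ensures that any lingering use (other than close()) fails
# loudly with a ProgrammingError instead of silently operating on a stale
# repository view.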