match: delete unused root and cwd arguments to constructors (API)

Martin von Zweigbergk
r41824:ddbebce9 default
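
The matcher constructors carried ``root`` and ``cwd`` arguments that no matcher actually used; this change deletes them and updates the two call sites shown in the hunks below. A condensed before/after sketch (context trimmed; see the hunks for the real surroundings)::

    # before: root/cwd threaded through even though the matchers ignored them
    em = matchmod.exact(match._root, match._cwd, match.files())
    return matchmod.predicatematcher(repo.root, repo.getcwd(), predfn,
                                     predrepr=predrepr, badfn=self._badfn)

    # after: predicatematcher() loses the two leading arguments; the exact()
    # helper still accepts the slots here, so its caller passes None for both
    em = matchmod.exact(None, None, match.files())
    return matchmod.predicatematcher(predfn, predrepr=predrepr,
                                     badfn=self._badfn)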
hgext/sparse.py
@@ -1,347 +1,347 @@
# sparse.py - allow sparse checkouts of the working directory
#
# Copyright 2014 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

"""allow sparse checkouts of the working directory (EXPERIMENTAL)

(This extension is not yet protected by backwards compatibility
guarantees. Any aspect may break in future releases until this
notice is removed.)

This extension allows the working directory to only consist of a
subset of files for the revision. This allows specific files or
directories to be explicitly included or excluded. Many repository
operations have performance proportional to the number of files in
the working directory. So only realizing a subset of files in the
working directory can improve performance.

Sparse Config Files
-------------------

The set of files that are part of a sparse checkout are defined by
a sparse config file. The file defines 3 things: includes (files to
include in the sparse checkout), excludes (files to exclude from the
sparse checkout), and profiles (links to other config files).

The file format is newline delimited. Empty lines and lines beginning
with ``#`` are ignored.

Lines beginning with ``%include `` denote another sparse config file
to include. e.g. ``%include tests.sparse``. The filename is relative
to the repository root.

The special lines ``[include]`` and ``[exclude]`` denote the section
for includes and excludes that follow, respectively. It is illegal to
have ``[include]`` after ``[exclude]``.

Non-special lines resemble file patterns to be added to either includes
or excludes. The syntax of these lines is documented by :hg:`help patterns`.
Patterns are interpreted as ``glob:`` by default and match against the
root of the repository.

Exclusion patterns take precedence over inclusion patterns. So even
if a file is explicitly included, an ``[exclude]`` entry can remove it.

For example, say you have a repository with 3 directories, ``frontend/``,
``backend/``, and ``tools/``. ``frontend/`` and ``backend/`` correspond
to different projects and it is uncommon for someone working on one
to need the files for the other. But ``tools/`` contains files shared
between both projects. Your sparse config files may resemble::

  # frontend.sparse
  frontend/**
  tools/**

  # backend.sparse
  backend/**
  tools/**

Say the backend grows in size. Or there's a directory with thousands
of files you wish to exclude. You can modify the profile to exclude
certain files::

  [include]
  backend/**
  tools/**

  [exclude]
  tools/tests/**
"""

from __future__ import absolute_import

from mercurial.i18n import _
from mercurial import (
    commands,
    dirstate,
    error,
    extensions,
    hg,
    logcmdutil,
    match as matchmod,
    pycompat,
    registrar,
    sparse,
    util,
)

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

cmdtable = {}
command = registrar.command(cmdtable)

def extsetup(ui):
    sparse.enabled = True

    _setupclone(ui)
    _setuplog(ui)
    _setupadd(ui)
    _setupdirstate(ui)

def replacefilecache(cls, propname, replacement):
    """Replace a filecache property with a new class. This allows changing the
    cache invalidation condition."""
    origcls = cls
    assert callable(replacement)
    while cls is not object:
        if propname in cls.__dict__:
            orig = cls.__dict__[propname]
            setattr(cls, propname, replacement(orig))
            break
        cls = cls.__bases__[0]

    if cls is object:
        raise AttributeError(_("type '%s' has no property '%s'") % (origcls,
                                                                    propname))

def _setuplog(ui):
    entry = commands.table['log|history']
    entry[1].append(('', 'sparse', None,
        "limit to changesets affecting the sparse checkout"))

    def _initialrevs(orig, repo, opts):
        revs = orig(repo, opts)
        if opts.get('sparse'):
            sparsematch = sparse.matcher(repo)
            def ctxmatch(rev):
                ctx = repo[rev]
                return any(f for f in ctx.files() if sparsematch(f))
            revs = revs.filter(ctxmatch)
        return revs
    extensions.wrapfunction(logcmdutil, '_initialrevs', _initialrevs)

def _clonesparsecmd(orig, ui, repo, *args, **opts):
    include_pat = opts.get(r'include')
    exclude_pat = opts.get(r'exclude')
    enableprofile_pat = opts.get(r'enable_profile')
    narrow_pat = opts.get(r'narrow')
    include = exclude = enableprofile = False
    if include_pat:
        pat = include_pat
        include = True
    if exclude_pat:
        pat = exclude_pat
        exclude = True
    if enableprofile_pat:
        pat = enableprofile_pat
        enableprofile = True
    if sum([include, exclude, enableprofile]) > 1:
        raise error.Abort(_("too many flags specified."))
    # if --narrow is passed, it means they are includes and excludes for narrow
    # clone
    if not narrow_pat and (include or exclude or enableprofile):
        def clonesparse(orig, self, node, overwrite, *args, **kwargs):
            sparse.updateconfig(self.unfiltered(), pat, {}, include=include,
                                exclude=exclude, enableprofile=enableprofile,
                                usereporootpaths=True)
            return orig(self, node, overwrite, *args, **kwargs)
        extensions.wrapfunction(hg, 'updaterepo', clonesparse)
    return orig(ui, repo, *args, **opts)

def _setupclone(ui):
    entry = commands.table['clone']
    entry[1].append(('', 'enable-profile', [],
                    'enable a sparse profile'))
    entry[1].append(('', 'include', [],
                    'include sparse pattern'))
    entry[1].append(('', 'exclude', [],
                    'exclude sparse pattern'))
    extensions.wrapcommand(commands.table, 'clone', _clonesparsecmd)

def _setupadd(ui):
    entry = commands.table['add']
    entry[1].append(('s', 'sparse', None,
                    'also include directories of added files in sparse config'))

    def _add(orig, ui, repo, *pats, **opts):
        if opts.get(r'sparse'):
            dirs = set()
            for pat in pats:
                dirname, basename = util.split(pat)
                dirs.add(dirname)
            sparse.updateconfig(repo, list(dirs), opts, include=True)
        return orig(ui, repo, *pats, **opts)

    extensions.wrapcommand(commands.table, 'add', _add)

def _setupdirstate(ui):
    """Modify the dirstate to prevent stat'ing excluded files,
    and to prevent modifications to files outside the checkout.
    """

    def walk(orig, self, match, subrepos, unknown, ignored, full=True):
        # hack to not exclude explicitly-specified paths so that they can
        # be warned later on e.g. dirstate.add()
-        em = matchmod.exact(match._root, match._cwd, match.files())
+        em = matchmod.exact(None, None, match.files())
        sm = matchmod.unionmatcher([self._sparsematcher, em])
        match = matchmod.intersectmatchers(match, sm)
        return orig(self, match, subrepos, unknown, ignored, full)

    extensions.wrapfunction(dirstate.dirstate, 'walk', walk)

    # dirstate.rebuild should not add non-matching files
    def _rebuild(orig, self, parent, allfiles, changedfiles=None):
        matcher = self._sparsematcher
        if not matcher.always():
            allfiles = [f for f in allfiles if matcher(f)]
            if changedfiles:
                changedfiles = [f for f in changedfiles if matcher(f)]

            if changedfiles is not None:
                # In _rebuild, these files will be deleted from the dirstate
                # when they are not found to be in allfiles
                dirstatefilestoremove = set(f for f in self if not matcher(f))
                changedfiles = dirstatefilestoremove.union(changedfiles)

        return orig(self, parent, allfiles, changedfiles)
    extensions.wrapfunction(dirstate.dirstate, 'rebuild', _rebuild)

    # Prevent adding files that are outside the sparse checkout
    editfuncs = ['normal', 'add', 'normallookup', 'copy', 'remove', 'merge']
    hint = _('include file with `hg debugsparse --include <pattern>` or use ' +
             '`hg add -s <file>` to include file directory while adding')
    for func in editfuncs:
        def _wrapper(orig, self, *args):
            sparsematch = self._sparsematcher
            if not sparsematch.always():
                for f in args:
                    if (f is not None and not sparsematch(f) and
                        f not in self):
                        raise error.Abort(_("cannot add '%s' - it is outside "
                                            "the sparse checkout") % f,
                                          hint=hint)
            return orig(self, *args)
        extensions.wrapfunction(dirstate.dirstate, func, _wrapper)

@command('debugsparse', [
    ('I', 'include', False, _('include files in the sparse checkout')),
    ('X', 'exclude', False, _('exclude files in the sparse checkout')),
    ('d', 'delete', False, _('delete an include/exclude rule')),
    ('f', 'force', False, _('allow changing rules even with pending changes')),
    ('', 'enable-profile', False, _('enables the specified profile')),
    ('', 'disable-profile', False, _('disables the specified profile')),
    ('', 'import-rules', False, _('imports rules from a file')),
    ('', 'clear-rules', False, _('clears local include/exclude rules')),
    ('', 'refresh', False, _('updates the working after sparseness changes')),
    ('', 'reset', False, _('makes the repo full again')),
    ] + commands.templateopts,
    _('[--OPTION] PATTERN...'),
    helpbasic=True)
def debugsparse(ui, repo, *pats, **opts):
    """make the current checkout sparse, or edit the existing checkout

    The sparse command is used to make the current checkout sparse.
    This means files that don't meet the sparse condition will not be
    written to disk, or show up in any working copy operations. It does
    not affect files in history in any way.

    Passing no arguments prints the currently applied sparse rules.

    --include and --exclude are used to add and remove files from the sparse
    checkout. The effects of adding an include or exclude rule are applied
    immediately. If applying the new rule would cause a file with pending
    changes to be added or removed, the command will fail. Pass --force to
    force a rule change even with pending changes (the changes on disk will
    be preserved).

    --delete removes an existing include/exclude rule. The effects are
    immediate.

    --refresh refreshes the files on disk based on the sparse rules. This is
    only necessary if .hg/sparse was changed by hand.

    --enable-profile and --disable-profile accept a path to a .hgsparse file.
    This allows defining sparse checkouts and tracking them inside the
    repository. This is useful for defining commonly used sparse checkouts for
    many people to use. As the profile definition changes over time, the sparse
    checkout will automatically be updated appropriately, depending on which
    changeset is checked out. Changes to .hgsparse are not applied until they
    have been committed.

    --import-rules accepts a path to a file containing rules in the .hgsparse
    format, allowing you to add --include, --exclude and --enable-profile rules
    in bulk. Like the --include, --exclude and --enable-profile switches, the
    changes are applied immediately.

    --clear-rules removes all local include and exclude rules, while leaving
    any enabled profiles in place.

    Returns 0 if editing the sparse checkout succeeds.
    """
    opts = pycompat.byteskwargs(opts)
    include = opts.get('include')
    exclude = opts.get('exclude')
    force = opts.get('force')
    enableprofile = opts.get('enable_profile')
    disableprofile = opts.get('disable_profile')
    importrules = opts.get('import_rules')
    clearrules = opts.get('clear_rules')
    delete = opts.get('delete')
    refresh = opts.get('refresh')
    reset = opts.get('reset')
    count = sum([include, exclude, enableprofile, disableprofile, delete,
                 importrules, refresh, clearrules, reset])
    if count > 1:
        raise error.Abort(_("too many flags specified"))

    if count == 0:
        if repo.vfs.exists('sparse'):
            ui.status(repo.vfs.read("sparse") + "\n")
            temporaryincludes = sparse.readtemporaryincludes(repo)
            if temporaryincludes:
                ui.status(_("Temporarily Included Files (for merge/rebase):\n"))
                ui.status(("\n".join(temporaryincludes) + "\n"))
        else:
            ui.status(_('repo is not sparse\n'))
        return

    if include or exclude or delete or reset or enableprofile or disableprofile:
        sparse.updateconfig(repo, pats, opts, include=include, exclude=exclude,
                            reset=reset, delete=delete,
                            enableprofile=enableprofile,
                            disableprofile=disableprofile, force=force)

    if importrules:
        sparse.importfromfiles(repo, opts, pats, force=force)

    if clearrules:
        sparse.clearrules(repo, force=force)

    if refresh:
        try:
            wlock = repo.wlock()
            fcounts = map(
                len,
                sparse.refreshwdir(repo, repo.status(), sparse.matcher(repo),
                                   force=force))
            sparse.printchanges(ui, opts, added=fcounts[0], dropped=fcounts[1],
                                conflicting=fcounts[2])
        finally:
            wlock.release()
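
For context on the flags wired up in this extension, typical usage looks something like the following (the profile name reuses ``frontend.sparse`` from the docstring above; paths and patterns are illustrative, not part of this changeset)::

    hg clone --enable-profile frontend.sparse https://example.com/repo repo
    hg debugsparse                       # print the active sparse rules
    hg debugsparse --include 'docs/**'   # widen the checkout immediately
    hg log --sparse                      # changesets touching the sparse set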
mercurial/fileset.py
@@ -1,561 +1,560 @@
1 # fileset.py - file set queries for mercurial
1 # fileset.py - file set queries for mercurial
2 #
2 #
3 # Copyright 2010 Matt Mackall <mpm@selenic.com>
3 # Copyright 2010 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import errno
10 import errno
11 import re
11 import re
12
12
13 from .i18n import _
13 from .i18n import _
14 from . import (
14 from . import (
15 error,
15 error,
16 filesetlang,
16 filesetlang,
17 match as matchmod,
17 match as matchmod,
18 merge,
18 merge,
19 pycompat,
19 pycompat,
20 registrar,
20 registrar,
21 scmutil,
21 scmutil,
22 util,
22 util,
23 )
23 )
24 from .utils import (
24 from .utils import (
25 stringutil,
25 stringutil,
26 )
26 )
27
27
28 # common weight constants
28 # common weight constants
29 _WEIGHT_CHECK_FILENAME = filesetlang.WEIGHT_CHECK_FILENAME
29 _WEIGHT_CHECK_FILENAME = filesetlang.WEIGHT_CHECK_FILENAME
30 _WEIGHT_READ_CONTENTS = filesetlang.WEIGHT_READ_CONTENTS
30 _WEIGHT_READ_CONTENTS = filesetlang.WEIGHT_READ_CONTENTS
31 _WEIGHT_STATUS = filesetlang.WEIGHT_STATUS
31 _WEIGHT_STATUS = filesetlang.WEIGHT_STATUS
32 _WEIGHT_STATUS_THOROUGH = filesetlang.WEIGHT_STATUS_THOROUGH
32 _WEIGHT_STATUS_THOROUGH = filesetlang.WEIGHT_STATUS_THOROUGH
33
33
34 # helpers for processing parsed tree
34 # helpers for processing parsed tree
35 getsymbol = filesetlang.getsymbol
35 getsymbol = filesetlang.getsymbol
36 getstring = filesetlang.getstring
36 getstring = filesetlang.getstring
37 _getkindpat = filesetlang.getkindpat
37 _getkindpat = filesetlang.getkindpat
38 getpattern = filesetlang.getpattern
38 getpattern = filesetlang.getpattern
39 getargs = filesetlang.getargs
39 getargs = filesetlang.getargs
40
40
41 def getmatch(mctx, x):
41 def getmatch(mctx, x):
42 if not x:
42 if not x:
43 raise error.ParseError(_("missing argument"))
43 raise error.ParseError(_("missing argument"))
44 return methods[x[0]](mctx, *x[1:])
44 return methods[x[0]](mctx, *x[1:])
45
45
46 def getmatchwithstatus(mctx, x, hint):
46 def getmatchwithstatus(mctx, x, hint):
47 keys = set(getstring(hint, 'status hint must be a string').split())
47 keys = set(getstring(hint, 'status hint must be a string').split())
48 return getmatch(mctx.withstatus(keys), x)
48 return getmatch(mctx.withstatus(keys), x)
49
49
50 def stringmatch(mctx, x):
50 def stringmatch(mctx, x):
51 return mctx.matcher([x])
51 return mctx.matcher([x])
52
52
53 def kindpatmatch(mctx, x, y):
53 def kindpatmatch(mctx, x, y):
54 return stringmatch(mctx, _getkindpat(x, y, matchmod.allpatternkinds,
54 return stringmatch(mctx, _getkindpat(x, y, matchmod.allpatternkinds,
55 _("pattern must be a string")))
55 _("pattern must be a string")))
56
56
57 def patternsmatch(mctx, *xs):
57 def patternsmatch(mctx, *xs):
58 allkinds = matchmod.allpatternkinds
58 allkinds = matchmod.allpatternkinds
59 patterns = [getpattern(x, allkinds, _("pattern must be a string"))
59 patterns = [getpattern(x, allkinds, _("pattern must be a string"))
60 for x in xs]
60 for x in xs]
61 return mctx.matcher(patterns)
61 return mctx.matcher(patterns)
62
62
63 def andmatch(mctx, x, y):
63 def andmatch(mctx, x, y):
64 xm = getmatch(mctx, x)
64 xm = getmatch(mctx, x)
65 ym = getmatch(mctx.narrowed(xm), y)
65 ym = getmatch(mctx.narrowed(xm), y)
66 return matchmod.intersectmatchers(xm, ym)
66 return matchmod.intersectmatchers(xm, ym)
67
67
68 def ormatch(mctx, *xs):
68 def ormatch(mctx, *xs):
69 ms = [getmatch(mctx, x) for x in xs]
69 ms = [getmatch(mctx, x) for x in xs]
70 return matchmod.unionmatcher(ms)
70 return matchmod.unionmatcher(ms)
71
71
72 def notmatch(mctx, x):
72 def notmatch(mctx, x):
73 m = getmatch(mctx, x)
73 m = getmatch(mctx, x)
74 return mctx.predicate(lambda f: not m(f), predrepr=('<not %r>', m))
74 return mctx.predicate(lambda f: not m(f), predrepr=('<not %r>', m))
75
75
76 def minusmatch(mctx, x, y):
76 def minusmatch(mctx, x, y):
77 xm = getmatch(mctx, x)
77 xm = getmatch(mctx, x)
78 ym = getmatch(mctx.narrowed(xm), y)
78 ym = getmatch(mctx.narrowed(xm), y)
79 return matchmod.differencematcher(xm, ym)
79 return matchmod.differencematcher(xm, ym)
80
80
81 def listmatch(mctx, *xs):
81 def listmatch(mctx, *xs):
82 raise error.ParseError(_("can't use a list in this context"),
82 raise error.ParseError(_("can't use a list in this context"),
83 hint=_('see \'hg help "filesets.x or y"\''))
83 hint=_('see \'hg help "filesets.x or y"\''))
84
84
85 def func(mctx, a, b):
85 def func(mctx, a, b):
86 funcname = getsymbol(a)
86 funcname = getsymbol(a)
87 if funcname in symbols:
87 if funcname in symbols:
88 return symbols[funcname](mctx, b)
88 return symbols[funcname](mctx, b)
89
89
90 keep = lambda fn: getattr(fn, '__doc__', None) is not None
90 keep = lambda fn: getattr(fn, '__doc__', None) is not None
91
91
92 syms = [s for (s, fn) in symbols.items() if keep(fn)]
92 syms = [s for (s, fn) in symbols.items() if keep(fn)]
93 raise error.UnknownIdentifier(funcname, syms)
93 raise error.UnknownIdentifier(funcname, syms)
94
94
95 # symbols are callable like:
95 # symbols are callable like:
96 # fun(mctx, x)
96 # fun(mctx, x)
97 # with:
97 # with:
98 # mctx - current matchctx instance
98 # mctx - current matchctx instance
99 # x - argument in tree form
99 # x - argument in tree form
100 symbols = filesetlang.symbols
100 symbols = filesetlang.symbols
101
101
102 predicate = registrar.filesetpredicate(symbols)
102 predicate = registrar.filesetpredicate(symbols)
103
103
104 @predicate('modified()', callstatus=True, weight=_WEIGHT_STATUS)
104 @predicate('modified()', callstatus=True, weight=_WEIGHT_STATUS)
105 def modified(mctx, x):
105 def modified(mctx, x):
106 """File that is modified according to :hg:`status`.
106 """File that is modified according to :hg:`status`.
107 """
107 """
108 # i18n: "modified" is a keyword
108 # i18n: "modified" is a keyword
109 getargs(x, 0, 0, _("modified takes no arguments"))
109 getargs(x, 0, 0, _("modified takes no arguments"))
110 s = set(mctx.status().modified)
110 s = set(mctx.status().modified)
111 return mctx.predicate(s.__contains__, predrepr='modified')
111 return mctx.predicate(s.__contains__, predrepr='modified')
112
112
113 @predicate('added()', callstatus=True, weight=_WEIGHT_STATUS)
113 @predicate('added()', callstatus=True, weight=_WEIGHT_STATUS)
114 def added(mctx, x):
114 def added(mctx, x):
115 """File that is added according to :hg:`status`.
115 """File that is added according to :hg:`status`.
116 """
116 """
117 # i18n: "added" is a keyword
117 # i18n: "added" is a keyword
118 getargs(x, 0, 0, _("added takes no arguments"))
118 getargs(x, 0, 0, _("added takes no arguments"))
119 s = set(mctx.status().added)
119 s = set(mctx.status().added)
120 return mctx.predicate(s.__contains__, predrepr='added')
120 return mctx.predicate(s.__contains__, predrepr='added')
121
121
122 @predicate('removed()', callstatus=True, weight=_WEIGHT_STATUS)
122 @predicate('removed()', callstatus=True, weight=_WEIGHT_STATUS)
123 def removed(mctx, x):
123 def removed(mctx, x):
124 """File that is removed according to :hg:`status`.
124 """File that is removed according to :hg:`status`.
125 """
125 """
126 # i18n: "removed" is a keyword
126 # i18n: "removed" is a keyword
127 getargs(x, 0, 0, _("removed takes no arguments"))
127 getargs(x, 0, 0, _("removed takes no arguments"))
128 s = set(mctx.status().removed)
128 s = set(mctx.status().removed)
129 return mctx.predicate(s.__contains__, predrepr='removed')
129 return mctx.predicate(s.__contains__, predrepr='removed')
130
130
131 @predicate('deleted()', callstatus=True, weight=_WEIGHT_STATUS)
131 @predicate('deleted()', callstatus=True, weight=_WEIGHT_STATUS)
132 def deleted(mctx, x):
132 def deleted(mctx, x):
133 """Alias for ``missing()``.
133 """Alias for ``missing()``.
134 """
134 """
135 # i18n: "deleted" is a keyword
135 # i18n: "deleted" is a keyword
136 getargs(x, 0, 0, _("deleted takes no arguments"))
136 getargs(x, 0, 0, _("deleted takes no arguments"))
137 s = set(mctx.status().deleted)
137 s = set(mctx.status().deleted)
138 return mctx.predicate(s.__contains__, predrepr='deleted')
138 return mctx.predicate(s.__contains__, predrepr='deleted')
139
139
140 @predicate('missing()', callstatus=True, weight=_WEIGHT_STATUS)
140 @predicate('missing()', callstatus=True, weight=_WEIGHT_STATUS)
141 def missing(mctx, x):
141 def missing(mctx, x):
142 """File that is missing according to :hg:`status`.
142 """File that is missing according to :hg:`status`.
143 """
143 """
144 # i18n: "missing" is a keyword
144 # i18n: "missing" is a keyword
145 getargs(x, 0, 0, _("missing takes no arguments"))
145 getargs(x, 0, 0, _("missing takes no arguments"))
146 s = set(mctx.status().deleted)
146 s = set(mctx.status().deleted)
147 return mctx.predicate(s.__contains__, predrepr='deleted')
147 return mctx.predicate(s.__contains__, predrepr='deleted')
148
148
149 @predicate('unknown()', callstatus=True, weight=_WEIGHT_STATUS_THOROUGH)
149 @predicate('unknown()', callstatus=True, weight=_WEIGHT_STATUS_THOROUGH)
150 def unknown(mctx, x):
150 def unknown(mctx, x):
151 """File that is unknown according to :hg:`status`."""
151 """File that is unknown according to :hg:`status`."""
152 # i18n: "unknown" is a keyword
152 # i18n: "unknown" is a keyword
153 getargs(x, 0, 0, _("unknown takes no arguments"))
153 getargs(x, 0, 0, _("unknown takes no arguments"))
154 s = set(mctx.status().unknown)
154 s = set(mctx.status().unknown)
155 return mctx.predicate(s.__contains__, predrepr='unknown')
155 return mctx.predicate(s.__contains__, predrepr='unknown')
156
156
157 @predicate('ignored()', callstatus=True, weight=_WEIGHT_STATUS_THOROUGH)
157 @predicate('ignored()', callstatus=True, weight=_WEIGHT_STATUS_THOROUGH)
158 def ignored(mctx, x):
158 def ignored(mctx, x):
159 """File that is ignored according to :hg:`status`."""
159 """File that is ignored according to :hg:`status`."""
160 # i18n: "ignored" is a keyword
160 # i18n: "ignored" is a keyword
161 getargs(x, 0, 0, _("ignored takes no arguments"))
161 getargs(x, 0, 0, _("ignored takes no arguments"))
162 s = set(mctx.status().ignored)
162 s = set(mctx.status().ignored)
163 return mctx.predicate(s.__contains__, predrepr='ignored')
163 return mctx.predicate(s.__contains__, predrepr='ignored')
164
164
165 @predicate('clean()', callstatus=True, weight=_WEIGHT_STATUS)
165 @predicate('clean()', callstatus=True, weight=_WEIGHT_STATUS)
166 def clean(mctx, x):
166 def clean(mctx, x):
167 """File that is clean according to :hg:`status`.
167 """File that is clean according to :hg:`status`.
168 """
168 """
169 # i18n: "clean" is a keyword
169 # i18n: "clean" is a keyword
170 getargs(x, 0, 0, _("clean takes no arguments"))
170 getargs(x, 0, 0, _("clean takes no arguments"))
171 s = set(mctx.status().clean)
171 s = set(mctx.status().clean)
172 return mctx.predicate(s.__contains__, predrepr='clean')
172 return mctx.predicate(s.__contains__, predrepr='clean')
173
173
174 @predicate('tracked()')
174 @predicate('tracked()')
175 def tracked(mctx, x):
175 def tracked(mctx, x):
176 """File that is under Mercurial control."""
176 """File that is under Mercurial control."""
177 # i18n: "tracked" is a keyword
177 # i18n: "tracked" is a keyword
178 getargs(x, 0, 0, _("tracked takes no arguments"))
178 getargs(x, 0, 0, _("tracked takes no arguments"))
179 return mctx.predicate(mctx.ctx.__contains__, predrepr='tracked')
179 return mctx.predicate(mctx.ctx.__contains__, predrepr='tracked')
180
180
181 @predicate('binary()', weight=_WEIGHT_READ_CONTENTS)
181 @predicate('binary()', weight=_WEIGHT_READ_CONTENTS)
182 def binary(mctx, x):
182 def binary(mctx, x):
183 """File that appears to be binary (contains NUL bytes).
183 """File that appears to be binary (contains NUL bytes).
184 """
184 """
185 # i18n: "binary" is a keyword
185 # i18n: "binary" is a keyword
186 getargs(x, 0, 0, _("binary takes no arguments"))
186 getargs(x, 0, 0, _("binary takes no arguments"))
187 return mctx.fpredicate(lambda fctx: fctx.isbinary(),
187 return mctx.fpredicate(lambda fctx: fctx.isbinary(),
188 predrepr='binary', cache=True)
188 predrepr='binary', cache=True)
189
189
190 @predicate('exec()')
190 @predicate('exec()')
191 def exec_(mctx, x):
191 def exec_(mctx, x):
192 """File that is marked as executable.
192 """File that is marked as executable.
193 """
193 """
194 # i18n: "exec" is a keyword
194 # i18n: "exec" is a keyword
195 getargs(x, 0, 0, _("exec takes no arguments"))
195 getargs(x, 0, 0, _("exec takes no arguments"))
196 ctx = mctx.ctx
196 ctx = mctx.ctx
197 return mctx.predicate(lambda f: ctx.flags(f) == 'x', predrepr='exec')
197 return mctx.predicate(lambda f: ctx.flags(f) == 'x', predrepr='exec')
198
198
199 @predicate('symlink()')
199 @predicate('symlink()')
200 def symlink(mctx, x):
200 def symlink(mctx, x):
201 """File that is marked as a symlink.
201 """File that is marked as a symlink.
202 """
202 """
203 # i18n: "symlink" is a keyword
203 # i18n: "symlink" is a keyword
204 getargs(x, 0, 0, _("symlink takes no arguments"))
204 getargs(x, 0, 0, _("symlink takes no arguments"))
205 ctx = mctx.ctx
205 ctx = mctx.ctx
206 return mctx.predicate(lambda f: ctx.flags(f) == 'l', predrepr='symlink')
206 return mctx.predicate(lambda f: ctx.flags(f) == 'l', predrepr='symlink')
207
207
208 @predicate('resolved()', weight=_WEIGHT_STATUS)
208 @predicate('resolved()', weight=_WEIGHT_STATUS)
209 def resolved(mctx, x):
209 def resolved(mctx, x):
210 """File that is marked resolved according to :hg:`resolve -l`.
210 """File that is marked resolved according to :hg:`resolve -l`.
211 """
211 """
212 # i18n: "resolved" is a keyword
212 # i18n: "resolved" is a keyword
213 getargs(x, 0, 0, _("resolved takes no arguments"))
213 getargs(x, 0, 0, _("resolved takes no arguments"))
214 if mctx.ctx.rev() is not None:
214 if mctx.ctx.rev() is not None:
215 return mctx.never()
215 return mctx.never()
216 ms = merge.mergestate.read(mctx.ctx.repo())
216 ms = merge.mergestate.read(mctx.ctx.repo())
217 return mctx.predicate(lambda f: f in ms and ms[f] == 'r',
217 return mctx.predicate(lambda f: f in ms and ms[f] == 'r',
218 predrepr='resolved')
218 predrepr='resolved')
219
219
220 @predicate('unresolved()', weight=_WEIGHT_STATUS)
220 @predicate('unresolved()', weight=_WEIGHT_STATUS)
221 def unresolved(mctx, x):
221 def unresolved(mctx, x):
222 """File that is marked unresolved according to :hg:`resolve -l`.
222 """File that is marked unresolved according to :hg:`resolve -l`.
223 """
223 """
224 # i18n: "unresolved" is a keyword
224 # i18n: "unresolved" is a keyword
225 getargs(x, 0, 0, _("unresolved takes no arguments"))
225 getargs(x, 0, 0, _("unresolved takes no arguments"))
226 if mctx.ctx.rev() is not None:
226 if mctx.ctx.rev() is not None:
227 return mctx.never()
227 return mctx.never()
228 ms = merge.mergestate.read(mctx.ctx.repo())
228 ms = merge.mergestate.read(mctx.ctx.repo())
229 return mctx.predicate(lambda f: f in ms and ms[f] == 'u',
229 return mctx.predicate(lambda f: f in ms and ms[f] == 'u',
230 predrepr='unresolved')
230 predrepr='unresolved')
231
231
232 @predicate('hgignore()', weight=_WEIGHT_STATUS)
232 @predicate('hgignore()', weight=_WEIGHT_STATUS)
233 def hgignore(mctx, x):
233 def hgignore(mctx, x):
234 """File that matches the active .hgignore pattern.
234 """File that matches the active .hgignore pattern.
235 """
235 """
236 # i18n: "hgignore" is a keyword
236 # i18n: "hgignore" is a keyword
237 getargs(x, 0, 0, _("hgignore takes no arguments"))
237 getargs(x, 0, 0, _("hgignore takes no arguments"))
238 return mctx.ctx.repo().dirstate._ignore
238 return mctx.ctx.repo().dirstate._ignore
239
239
240 @predicate('portable()', weight=_WEIGHT_CHECK_FILENAME)
240 @predicate('portable()', weight=_WEIGHT_CHECK_FILENAME)
241 def portable(mctx, x):
241 def portable(mctx, x):
242 """File that has a portable name. (This doesn't include filenames with case
242 """File that has a portable name. (This doesn't include filenames with case
243 collisions.)
243 collisions.)
244 """
244 """
245 # i18n: "portable" is a keyword
245 # i18n: "portable" is a keyword
246 getargs(x, 0, 0, _("portable takes no arguments"))
246 getargs(x, 0, 0, _("portable takes no arguments"))
247 return mctx.predicate(lambda f: util.checkwinfilename(f) is None,
247 return mctx.predicate(lambda f: util.checkwinfilename(f) is None,
248 predrepr='portable')
248 predrepr='portable')
249
249
250 @predicate('grep(regex)', weight=_WEIGHT_READ_CONTENTS)
250 @predicate('grep(regex)', weight=_WEIGHT_READ_CONTENTS)
251 def grep(mctx, x):
251 def grep(mctx, x):
252 """File contains the given regular expression.
252 """File contains the given regular expression.
253 """
253 """
254 try:
254 try:
255 # i18n: "grep" is a keyword
255 # i18n: "grep" is a keyword
256 r = re.compile(getstring(x, _("grep requires a pattern")))
256 r = re.compile(getstring(x, _("grep requires a pattern")))
257 except re.error as e:
257 except re.error as e:
258 raise error.ParseError(_('invalid match pattern: %s') %
258 raise error.ParseError(_('invalid match pattern: %s') %
259 stringutil.forcebytestr(e))
259 stringutil.forcebytestr(e))
260 return mctx.fpredicate(lambda fctx: r.search(fctx.data()),
260 return mctx.fpredicate(lambda fctx: r.search(fctx.data()),
261 predrepr=('grep(%r)', r.pattern), cache=True)
261 predrepr=('grep(%r)', r.pattern), cache=True)
262
262
263 def _sizetomax(s):
263 def _sizetomax(s):
264 try:
264 try:
265 s = s.strip().lower()
265 s = s.strip().lower()
266 for k, v in util._sizeunits:
266 for k, v in util._sizeunits:
267 if s.endswith(k):
267 if s.endswith(k):
268 # max(4k) = 5k - 1, max(4.5k) = 4.6k - 1
268 # max(4k) = 5k - 1, max(4.5k) = 4.6k - 1
269 n = s[:-len(k)]
269 n = s[:-len(k)]
270 inc = 1.0
270 inc = 1.0
271 if "." in n:
271 if "." in n:
272 inc /= 10 ** len(n.split(".")[1])
272 inc /= 10 ** len(n.split(".")[1])
273 return int((float(n) + inc) * v) - 1
273 return int((float(n) + inc) * v) - 1
274 # no extension, this is a precise value
274 # no extension, this is a precise value
275 return int(s)
275 return int(s)
276 except ValueError:
276 except ValueError:
277 raise error.ParseError(_("couldn't parse size: %s") % s)
277 raise error.ParseError(_("couldn't parse size: %s") % s)
278
278
279 def sizematcher(expr):
279 def sizematcher(expr):
280 """Return a function(size) -> bool from the ``size()`` expression"""
280 """Return a function(size) -> bool from the ``size()`` expression"""
281 expr = expr.strip()
281 expr = expr.strip()
282 if '-' in expr: # do we have a range?
282 if '-' in expr: # do we have a range?
283 a, b = expr.split('-', 1)
283 a, b = expr.split('-', 1)
284 a = util.sizetoint(a)
284 a = util.sizetoint(a)
285 b = util.sizetoint(b)
285 b = util.sizetoint(b)
286 return lambda x: x >= a and x <= b
286 return lambda x: x >= a and x <= b
287 elif expr.startswith("<="):
287 elif expr.startswith("<="):
288 a = util.sizetoint(expr[2:])
288 a = util.sizetoint(expr[2:])
289 return lambda x: x <= a
289 return lambda x: x <= a
290 elif expr.startswith("<"):
290 elif expr.startswith("<"):
291 a = util.sizetoint(expr[1:])
291 a = util.sizetoint(expr[1:])
292 return lambda x: x < a
292 return lambda x: x < a
293 elif expr.startswith(">="):
293 elif expr.startswith(">="):
294 a = util.sizetoint(expr[2:])
294 a = util.sizetoint(expr[2:])
295 return lambda x: x >= a
295 return lambda x: x >= a
296 elif expr.startswith(">"):
296 elif expr.startswith(">"):
297 a = util.sizetoint(expr[1:])
297 a = util.sizetoint(expr[1:])
298 return lambda x: x > a
298 return lambda x: x > a
299 else:
299 else:
300 a = util.sizetoint(expr)
300 a = util.sizetoint(expr)
301 b = _sizetomax(expr)
301 b = _sizetomax(expr)
302 return lambda x: x >= a and x <= b
302 return lambda x: x >= a and x <= b
303
303
304 @predicate('size(expression)', weight=_WEIGHT_STATUS)
304 @predicate('size(expression)', weight=_WEIGHT_STATUS)
305 def size(mctx, x):
305 def size(mctx, x):
306 """File size matches the given expression. Examples:
306 """File size matches the given expression. Examples:
307
307
308 - size('1k') - files from 1024 to 2047 bytes
308 - size('1k') - files from 1024 to 2047 bytes
309 - size('< 20k') - files less than 20480 bytes
309 - size('< 20k') - files less than 20480 bytes
310 - size('>= .5MB') - files at least 524288 bytes
310 - size('>= .5MB') - files at least 524288 bytes
311 - size('4k - 1MB') - files from 4096 bytes to 1048576 bytes
311 - size('4k - 1MB') - files from 4096 bytes to 1048576 bytes
312 """
312 """
313 # i18n: "size" is a keyword
313 # i18n: "size" is a keyword
314 expr = getstring(x, _("size requires an expression"))
314 expr = getstring(x, _("size requires an expression"))
315 m = sizematcher(expr)
315 m = sizematcher(expr)
316 return mctx.fpredicate(lambda fctx: m(fctx.size()),
316 return mctx.fpredicate(lambda fctx: m(fctx.size()),
317 predrepr=('size(%r)', expr), cache=True)
317 predrepr=('size(%r)', expr), cache=True)
318
318
319 @predicate('encoding(name)', weight=_WEIGHT_READ_CONTENTS)
319 @predicate('encoding(name)', weight=_WEIGHT_READ_CONTENTS)
320 def encoding(mctx, x):
320 def encoding(mctx, x):
321 """File can be successfully decoded with the given character
321 """File can be successfully decoded with the given character
322 encoding. May not be useful for encodings other than ASCII and
322 encoding. May not be useful for encodings other than ASCII and
323 UTF-8.
323 UTF-8.
324 """
324 """
325
325
326 # i18n: "encoding" is a keyword
326 # i18n: "encoding" is a keyword
327 enc = getstring(x, _("encoding requires an encoding name"))
327 enc = getstring(x, _("encoding requires an encoding name"))
328
328
329 def encp(fctx):
329 def encp(fctx):
330 d = fctx.data()
330 d = fctx.data()
331 try:
331 try:
332 d.decode(pycompat.sysstr(enc))
332 d.decode(pycompat.sysstr(enc))
333 return True
333 return True
334 except LookupError:
334 except LookupError:
335 raise error.Abort(_("unknown encoding '%s'") % enc)
335 raise error.Abort(_("unknown encoding '%s'") % enc)
336 except UnicodeDecodeError:
336 except UnicodeDecodeError:
337 return False
337 return False
338
338
339 return mctx.fpredicate(encp, predrepr=('encoding(%r)', enc), cache=True)
339 return mctx.fpredicate(encp, predrepr=('encoding(%r)', enc), cache=True)
340
340
341 @predicate('eol(style)', weight=_WEIGHT_READ_CONTENTS)
341 @predicate('eol(style)', weight=_WEIGHT_READ_CONTENTS)
342 def eol(mctx, x):
342 def eol(mctx, x):
343 """File contains newlines of the given style (dos, unix, mac). Binary
343 """File contains newlines of the given style (dos, unix, mac). Binary
344 files are excluded, files with mixed line endings match multiple
344 files are excluded, files with mixed line endings match multiple
345 styles.
345 styles.
346 """
346 """
347
347
348 # i18n: "eol" is a keyword
348 # i18n: "eol" is a keyword
349 enc = getstring(x, _("eol requires a style name"))
349 enc = getstring(x, _("eol requires a style name"))
350
350
351 def eolp(fctx):
351 def eolp(fctx):
352 if fctx.isbinary():
352 if fctx.isbinary():
353 return False
353 return False
354 d = fctx.data()
354 d = fctx.data()
355 if (enc == 'dos' or enc == 'win') and '\r\n' in d:
355 if (enc == 'dos' or enc == 'win') and '\r\n' in d:
356 return True
356 return True
357 elif enc == 'unix' and re.search('(?<!\r)\n', d):
357 elif enc == 'unix' and re.search('(?<!\r)\n', d):
358 return True
358 return True
359 elif enc == 'mac' and re.search('\r(?!\n)', d):
359 elif enc == 'mac' and re.search('\r(?!\n)', d):
360 return True
360 return True
361 return False
361 return False
362 return mctx.fpredicate(eolp, predrepr=('eol(%r)', enc), cache=True)
362 return mctx.fpredicate(eolp, predrepr=('eol(%r)', enc), cache=True)
363
363
364 @predicate('copied()')
364 @predicate('copied()')
365 def copied(mctx, x):
365 def copied(mctx, x):
366 """File that is recorded as being copied.
366 """File that is recorded as being copied.
367 """
367 """
368 # i18n: "copied" is a keyword
368 # i18n: "copied" is a keyword
369 getargs(x, 0, 0, _("copied takes no arguments"))
369 getargs(x, 0, 0, _("copied takes no arguments"))
370 def copiedp(fctx):
370 def copiedp(fctx):
371 p = fctx.parents()
371 p = fctx.parents()
372 return p and p[0].path() != fctx.path()
372 return p and p[0].path() != fctx.path()
373 return mctx.fpredicate(copiedp, predrepr='copied', cache=True)
373 return mctx.fpredicate(copiedp, predrepr='copied', cache=True)
374
374
375 @predicate('revs(revs, pattern)', weight=_WEIGHT_STATUS)
375 @predicate('revs(revs, pattern)', weight=_WEIGHT_STATUS)
376 def revs(mctx, x):
376 def revs(mctx, x):
377 """Evaluate set in the specified revisions. If the revset match multiple
377 """Evaluate set in the specified revisions. If the revset match multiple
378 revs, this will return file matching pattern in any of the revision.
378 revs, this will return file matching pattern in any of the revision.
379 """
379 """
380 # i18n: "revs" is a keyword
380 # i18n: "revs" is a keyword
381 r, x = getargs(x, 2, 2, _("revs takes two arguments"))
381 r, x = getargs(x, 2, 2, _("revs takes two arguments"))
382 # i18n: "revs" is a keyword
382 # i18n: "revs" is a keyword
383 revspec = getstring(r, _("first argument to revs must be a revision"))
383 revspec = getstring(r, _("first argument to revs must be a revision"))
384 repo = mctx.ctx.repo()
384 repo = mctx.ctx.repo()
385 revs = scmutil.revrange(repo, [revspec])
385 revs = scmutil.revrange(repo, [revspec])
386
386
387 matchers = []
387 matchers = []
388 for r in revs:
388 for r in revs:
389 ctx = repo[r]
389 ctx = repo[r]
390 mc = mctx.switch(ctx.p1(), ctx)
390 mc = mctx.switch(ctx.p1(), ctx)
391 matchers.append(getmatch(mc, x))
391 matchers.append(getmatch(mc, x))
392 if not matchers:
392 if not matchers:
393 return mctx.never()
393 return mctx.never()
394 if len(matchers) == 1:
394 if len(matchers) == 1:
395 return matchers[0]
395 return matchers[0]
396 return matchmod.unionmatcher(matchers)
396 return matchmod.unionmatcher(matchers)
397
397
398 @predicate('status(base, rev, pattern)', weight=_WEIGHT_STATUS)
398 @predicate('status(base, rev, pattern)', weight=_WEIGHT_STATUS)
399 def status(mctx, x):
399 def status(mctx, x):
400 """Evaluate predicate using status change between ``base`` and
400 """Evaluate predicate using status change between ``base`` and
401 ``rev``. Examples:
401 ``rev``. Examples:
402
402
403 - ``status(3, 7, added())`` - matches files added from "3" to "7"
403 - ``status(3, 7, added())`` - matches files added from "3" to "7"
404 """
404 """
405 repo = mctx.ctx.repo()
405 repo = mctx.ctx.repo()
406 # i18n: "status" is a keyword
406 # i18n: "status" is a keyword
407 b, r, x = getargs(x, 3, 3, _("status takes three arguments"))
407 b, r, x = getargs(x, 3, 3, _("status takes three arguments"))
408 # i18n: "status" is a keyword
408 # i18n: "status" is a keyword
409 baseerr = _("first argument to status must be a revision")
409 baseerr = _("first argument to status must be a revision")
410 baserevspec = getstring(b, baseerr)
410 baserevspec = getstring(b, baseerr)
411 if not baserevspec:
411 if not baserevspec:
412 raise error.ParseError(baseerr)
412 raise error.ParseError(baseerr)
413 reverr = _("second argument to status must be a revision")
413 reverr = _("second argument to status must be a revision")
414 revspec = getstring(r, reverr)
414 revspec = getstring(r, reverr)
415 if not revspec:
415 if not revspec:
416 raise error.ParseError(reverr)
416 raise error.ParseError(reverr)
417 basectx, ctx = scmutil.revpair(repo, [baserevspec, revspec])
417 basectx, ctx = scmutil.revpair(repo, [baserevspec, revspec])
418 mc = mctx.switch(basectx, ctx)
418 mc = mctx.switch(basectx, ctx)
419 return getmatch(mc, x)
419 return getmatch(mc, x)
420
420
421 @predicate('subrepo([pattern])')
421 @predicate('subrepo([pattern])')
422 def subrepo(mctx, x):
422 def subrepo(mctx, x):
423 """Subrepositories whose paths match the given pattern.
423 """Subrepositories whose paths match the given pattern.
424 """
424 """
425 # i18n: "subrepo" is a keyword
425 # i18n: "subrepo" is a keyword
426 getargs(x, 0, 1, _("subrepo takes at most one argument"))
426 getargs(x, 0, 1, _("subrepo takes at most one argument"))
427 ctx = mctx.ctx
427 ctx = mctx.ctx
428 sstate = ctx.substate
428 sstate = ctx.substate
429 if x:
429 if x:
430 pat = getpattern(x, matchmod.allpatternkinds,
430 pat = getpattern(x, matchmod.allpatternkinds,
431 # i18n: "subrepo" is a keyword
431 # i18n: "subrepo" is a keyword
432 _("subrepo requires a pattern or no arguments"))
432 _("subrepo requires a pattern or no arguments"))
433 fast = not matchmod.patkind(pat)
433 fast = not matchmod.patkind(pat)
434 if fast:
434 if fast:
435 def m(s):
435 def m(s):
436 return (s == pat)
436 return (s == pat)
437 else:
437 else:
438 m = matchmod.match(ctx.repo().root, '', [pat], ctx=ctx)
438 m = matchmod.match(ctx.repo().root, '', [pat], ctx=ctx)
439 return mctx.predicate(lambda f: f in sstate and m(f),
439 return mctx.predicate(lambda f: f in sstate and m(f),
440 predrepr=('subrepo(%r)', pat))
440 predrepr=('subrepo(%r)', pat))
441 else:
441 else:
442 return mctx.predicate(sstate.__contains__, predrepr='subrepo')
442 return mctx.predicate(sstate.__contains__, predrepr='subrepo')
443
443
444 methods = {
444 methods = {
445 'withstatus': getmatchwithstatus,
445 'withstatus': getmatchwithstatus,
446 'string': stringmatch,
446 'string': stringmatch,
447 'symbol': stringmatch,
447 'symbol': stringmatch,
448 'kindpat': kindpatmatch,
448 'kindpat': kindpatmatch,
449 'patterns': patternsmatch,
449 'patterns': patternsmatch,
450 'and': andmatch,
450 'and': andmatch,
451 'or': ormatch,
451 'or': ormatch,
452 'minus': minusmatch,
452 'minus': minusmatch,
453 'list': listmatch,
453 'list': listmatch,
454 'not': notmatch,
454 'not': notmatch,
455 'func': func,
455 'func': func,
456 }
456 }
457
457
458 class matchctx(object):
458 class matchctx(object):
459 def __init__(self, basectx, ctx, badfn=None):
459 def __init__(self, basectx, ctx, badfn=None):
460 self._basectx = basectx
460 self._basectx = basectx
461 self.ctx = ctx
461 self.ctx = ctx
462 self._badfn = badfn
462 self._badfn = badfn
463 self._match = None
463 self._match = None
464 self._status = None
464 self._status = None
465
465
466 def narrowed(self, match):
466 def narrowed(self, match):
467 """Create matchctx for a sub-tree narrowed by the given matcher"""
467 """Create matchctx for a sub-tree narrowed by the given matcher"""
468 mctx = matchctx(self._basectx, self.ctx, self._badfn)
468 mctx = matchctx(self._basectx, self.ctx, self._badfn)
469 mctx._match = match
469 mctx._match = match
470 # leave the wider status, which we don't need to care about
470 # leave the wider status, which we don't need to care about
471 mctx._status = self._status
471 mctx._status = self._status
472 return mctx
472 return mctx
473
473
474 def switch(self, basectx, ctx):
474 def switch(self, basectx, ctx):
475 mctx = matchctx(basectx, ctx, self._badfn)
475 mctx = matchctx(basectx, ctx, self._badfn)
476 mctx._match = self._match
476 mctx._match = self._match
477 return mctx
477 return mctx
478
478
479 def withstatus(self, keys):
479 def withstatus(self, keys):
480 """Create matchctx which has precomputed status specified by the keys"""
480 """Create matchctx which has precomputed status specified by the keys"""
481 mctx = matchctx(self._basectx, self.ctx, self._badfn)
481 mctx = matchctx(self._basectx, self.ctx, self._badfn)
482 mctx._match = self._match
482 mctx._match = self._match
483 mctx._buildstatus(keys)
483 mctx._buildstatus(keys)
484 return mctx
484 return mctx
485
485
486 def _buildstatus(self, keys):
486 def _buildstatus(self, keys):
487 self._status = self._basectx.status(self.ctx, self._match,
487 self._status = self._basectx.status(self.ctx, self._match,
488 listignored='ignored' in keys,
488 listignored='ignored' in keys,
489 listclean='clean' in keys,
489 listclean='clean' in keys,
490 listunknown='unknown' in keys)
490 listunknown='unknown' in keys)
491
491
492 def status(self):
492 def status(self):
493 return self._status
493 return self._status
494
494
495 def matcher(self, patterns):
495 def matcher(self, patterns):
496 return self.ctx.match(patterns, badfn=self._badfn)
496 return self.ctx.match(patterns, badfn=self._badfn)
497
497
498 def predicate(self, predfn, predrepr=None, cache=False):
498 def predicate(self, predfn, predrepr=None, cache=False):
499 """Create a matcher to select files by predfn(filename)"""
499 """Create a matcher to select files by predfn(filename)"""
500 if cache:
500 if cache:
501 predfn = util.cachefunc(predfn)
501 predfn = util.cachefunc(predfn)
502 repo = self.ctx.repo()
502 return matchmod.predicatematcher(predfn, predrepr=predrepr,
503 return matchmod.predicatematcher(repo.root, repo.getcwd(), predfn,
503 badfn=self._badfn)
504 predrepr=predrepr, badfn=self._badfn)
505
504
506 def fpredicate(self, predfn, predrepr=None, cache=False):
505 def fpredicate(self, predfn, predrepr=None, cache=False):
507 """Create a matcher to select files by predfn(fctx) at the current
506 """Create a matcher to select files by predfn(fctx) at the current
508 revision
507 revision
509
508
510 Missing files are ignored.
509 Missing files are ignored.
511 """
510 """
512 ctx = self.ctx
511 ctx = self.ctx
513 if ctx.rev() is None:
512 if ctx.rev() is None:
514 def fctxpredfn(f):
513 def fctxpredfn(f):
515 try:
514 try:
516 fctx = ctx[f]
515 fctx = ctx[f]
517 except error.LookupError:
516 except error.LookupError:
518 return False
517 return False
519 try:
518 try:
520 fctx.audit()
519 fctx.audit()
521 except error.Abort:
520 except error.Abort:
522 return False
521 return False
523 try:
522 try:
524 return predfn(fctx)
523 return predfn(fctx)
525 except (IOError, OSError) as e:
524 except (IOError, OSError) as e:
526 # open()-ing a directory fails with EACCES on Windows
525 # open()-ing a directory fails with EACCES on Windows
527 if e.errno in (errno.ENOENT, errno.EACCES, errno.ENOTDIR,
526 if e.errno in (errno.ENOENT, errno.EACCES, errno.ENOTDIR,
528 errno.EISDIR):
527 errno.EISDIR):
529 return False
528 return False
530 raise
529 raise
531 else:
530 else:
532 def fctxpredfn(f):
531 def fctxpredfn(f):
533 try:
532 try:
534 fctx = ctx[f]
533 fctx = ctx[f]
535 except error.LookupError:
534 except error.LookupError:
536 return False
535 return False
537 return predfn(fctx)
536 return predfn(fctx)
538 return self.predicate(fctxpredfn, predrepr=predrepr, cache=cache)
537 return self.predicate(fctxpredfn, predrepr=predrepr, cache=cache)
539
538
540 def never(self):
539 def never(self):
541 """Create a matcher to select nothing"""
540 """Create a matcher to select nothing"""
542 repo = self.ctx.repo()
541 repo = self.ctx.repo()
543 return matchmod.never(repo.root, repo.getcwd(), badfn=self._badfn)
542 return matchmod.never(repo.root, repo.getcwd(), badfn=self._badfn)
544
543
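A minimal sketch of constructing one of these predicate matchers directly
under the new signature shown in the hunk above (the predicate here is
illustrative):

    m = matchmod.predicatematcher(lambda f: f.endswith(b'.py'),
                                  predrepr='python files')
    m(b'setup.py')    # -> True
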
545 def match(ctx, expr, badfn=None):
544 def match(ctx, expr, badfn=None):
546 """Create a matcher for a single fileset expression"""
545 """Create a matcher for a single fileset expression"""
547 tree = filesetlang.parse(expr)
546 tree = filesetlang.parse(expr)
548 tree = filesetlang.analyze(tree)
547 tree = filesetlang.analyze(tree)
549 tree = filesetlang.optimize(tree)
548 tree = filesetlang.optimize(tree)
550 mctx = matchctx(ctx.p1(), ctx, badfn=badfn)
549 mctx = matchctx(ctx.p1(), ctx, badfn=badfn)
551 return getmatch(mctx, tree)
550 return getmatch(mctx, tree)
552
551
553
552
554 def loadpredicate(ui, extname, registrarobj):
553 def loadpredicate(ui, extname, registrarobj):
555 """Load fileset predicates from specified registrarobj
554 """Load fileset predicates from specified registrarobj
556 """
555 """
557 for name, func in registrarobj._table.iteritems():
556 for name, func in registrarobj._table.iteritems():
558 symbols[name] = func
557 symbols[name] = func
559
558
560 # tell hggettext to extract docstrings from these functions:
559 # tell hggettext to extract docstrings from these functions:
561 i18nfunctions = symbols.values()
560 i18nfunctions = symbols.values()
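
A minimal usage sketch of the module-level ``match()`` defined above,
assuming ``repo`` is an open local repository (the expression is
illustrative):

    from mercurial import fileset

    ctx = repo[b'.']
    m = fileset.match(ctx, b'status(".^", ".", added())')
    files = list(ctx.matches(m))    # filenames selected by the expression
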
@@ -1,3072 +1,3072 b''
1 # localrepo.py - read/write repository class for mercurial
1 # localrepo.py - read/write repository class for mercurial
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import errno
10 import errno
11 import hashlib
11 import hashlib
12 import os
12 import os
13 import random
13 import random
14 import sys
14 import sys
15 import time
15 import time
16 import weakref
16 import weakref
17
17
18 from .i18n import _
18 from .i18n import _
19 from .node import (
19 from .node import (
20 bin,
20 bin,
21 hex,
21 hex,
22 nullid,
22 nullid,
23 nullrev,
23 nullrev,
24 short,
24 short,
25 )
25 )
26 from . import (
26 from . import (
27 bookmarks,
27 bookmarks,
28 branchmap,
28 branchmap,
29 bundle2,
29 bundle2,
30 changegroup,
30 changegroup,
31 changelog,
31 changelog,
32 color,
32 color,
33 context,
33 context,
34 dirstate,
34 dirstate,
35 dirstateguard,
35 dirstateguard,
36 discovery,
36 discovery,
37 encoding,
37 encoding,
38 error,
38 error,
39 exchange,
39 exchange,
40 extensions,
40 extensions,
41 filelog,
41 filelog,
42 hook,
42 hook,
43 lock as lockmod,
43 lock as lockmod,
44 manifest,
44 manifest,
45 match as matchmod,
45 match as matchmod,
46 merge as mergemod,
46 merge as mergemod,
47 mergeutil,
47 mergeutil,
48 namespaces,
48 namespaces,
49 narrowspec,
49 narrowspec,
50 obsolete,
50 obsolete,
51 pathutil,
51 pathutil,
52 phases,
52 phases,
53 pushkey,
53 pushkey,
54 pycompat,
54 pycompat,
55 repository,
55 repository,
56 repoview,
56 repoview,
57 revset,
57 revset,
58 revsetlang,
58 revsetlang,
59 scmutil,
59 scmutil,
60 sparse,
60 sparse,
61 store as storemod,
61 store as storemod,
62 subrepoutil,
62 subrepoutil,
63 tags as tagsmod,
63 tags as tagsmod,
64 transaction,
64 transaction,
65 txnutil,
65 txnutil,
66 util,
66 util,
67 vfs as vfsmod,
67 vfs as vfsmod,
68 )
68 )
69 from .utils import (
69 from .utils import (
70 interfaceutil,
70 interfaceutil,
71 procutil,
71 procutil,
72 stringutil,
72 stringutil,
73 )
73 )
74
74
75 from .revlogutils import (
75 from .revlogutils import (
76 constants as revlogconst,
76 constants as revlogconst,
77 )
77 )
78
78
79 release = lockmod.release
79 release = lockmod.release
80 urlerr = util.urlerr
80 urlerr = util.urlerr
81 urlreq = util.urlreq
81 urlreq = util.urlreq
82
82
83 # set of (path, vfs-location) tuples. vfs-location is:
83 # set of (path, vfs-location) tuples. vfs-location is:
84 # - 'plain' for vfs relative paths
84 # - 'plain' for vfs relative paths
85 # - '' for svfs relative paths
85 # - '' for svfs relative paths
86 _cachedfiles = set()
86 _cachedfiles = set()
87
87
88 class _basefilecache(scmutil.filecache):
88 class _basefilecache(scmutil.filecache):
89 """All filecache usage on repo are done for logic that should be unfiltered
89 """All filecache usage on repo are done for logic that should be unfiltered
90 """
90 """
91 def __get__(self, repo, type=None):
91 def __get__(self, repo, type=None):
92 if repo is None:
92 if repo is None:
93 return self
93 return self
94 # proxy to unfiltered __dict__ since filtered repo has no entry
94 # proxy to unfiltered __dict__ since filtered repo has no entry
95 unfi = repo.unfiltered()
95 unfi = repo.unfiltered()
96 try:
96 try:
97 return unfi.__dict__[self.sname]
97 return unfi.__dict__[self.sname]
98 except KeyError:
98 except KeyError:
99 pass
99 pass
100 return super(_basefilecache, self).__get__(unfi, type)
100 return super(_basefilecache, self).__get__(unfi, type)
101
101
102 def set(self, repo, value):
102 def set(self, repo, value):
103 return super(_basefilecache, self).set(repo.unfiltered(), value)
103 return super(_basefilecache, self).set(repo.unfiltered(), value)
104
104
105 class repofilecache(_basefilecache):
105 class repofilecache(_basefilecache):
106 """filecache for files in .hg but outside of .hg/store"""
106 """filecache for files in .hg but outside of .hg/store"""
107 def __init__(self, *paths):
107 def __init__(self, *paths):
108 super(repofilecache, self).__init__(*paths)
108 super(repofilecache, self).__init__(*paths)
109 for path in paths:
109 for path in paths:
110 _cachedfiles.add((path, 'plain'))
110 _cachedfiles.add((path, 'plain'))
111
111
112 def join(self, obj, fname):
112 def join(self, obj, fname):
113 return obj.vfs.join(fname)
113 return obj.vfs.join(fname)
114
114
115 class storecache(_basefilecache):
115 class storecache(_basefilecache):
116 """filecache for files in the store"""
116 """filecache for files in the store"""
117 def __init__(self, *paths):
117 def __init__(self, *paths):
118 super(storecache, self).__init__(*paths)
118 super(storecache, self).__init__(*paths)
119 for path in paths:
119 for path in paths:
120 _cachedfiles.add((path, ''))
120 _cachedfiles.add((path, ''))
121
121
122 def join(self, obj, fname):
122 def join(self, obj, fname):
123 return obj.sjoin(fname)
123 return obj.sjoin(fname)
124
124
125 def isfilecached(repo, name):
125 def isfilecached(repo, name):
126 """check if a repo has already cached "name" filecache-ed property
126 """check if a repo has already cached "name" filecache-ed property
127
127
128 This returns a (cachedobj-or-None, iscached) tuple.
128 This returns a (cachedobj-or-None, iscached) tuple.
129 """
129 """
130 cacheentry = repo.unfiltered()._filecache.get(name, None)
130 cacheentry = repo.unfiltered()._filecache.get(name, None)
131 if not cacheentry:
131 if not cacheentry:
132 return None, False
132 return None, False
133 return cacheentry.obj, True
133 return cacheentry.obj, True
134
134
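A sketch of how a caller consults this helper; ``'changelog'`` names one of
the filecache-ed properties defined on the repository class, and ``repo`` is
assumed to be an open repository:

    cl, iscached = isfilecached(repo, 'changelog')
    if iscached:
        pass    # reuse the already-loaded changelog object
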
135 class unfilteredpropertycache(util.propertycache):
135 class unfilteredpropertycache(util.propertycache):
136 """propertycache that apply to unfiltered repo only"""
136 """propertycache that apply to unfiltered repo only"""
137
137
138 def __get__(self, repo, type=None):
138 def __get__(self, repo, type=None):
139 unfi = repo.unfiltered()
139 unfi = repo.unfiltered()
140 if unfi is repo:
140 if unfi is repo:
141 return super(unfilteredpropertycache, self).__get__(unfi)
141 return super(unfilteredpropertycache, self).__get__(unfi)
142 return getattr(unfi, self.name)
142 return getattr(unfi, self.name)
143
143
144 class filteredpropertycache(util.propertycache):
144 class filteredpropertycache(util.propertycache):
145 """propertycache that must take filtering in account"""
145 """propertycache that must take filtering in account"""
146
146
147 def cachevalue(self, obj, value):
147 def cachevalue(self, obj, value):
148 object.__setattr__(obj, self.name, value)
148 object.__setattr__(obj, self.name, value)
149
149
150
150
151 def hasunfilteredcache(repo, name):
151 def hasunfilteredcache(repo, name):
152 """check if a repo has an unfilteredpropertycache value for <name>"""
152 """check if a repo has an unfilteredpropertycache value for <name>"""
153 return name in vars(repo.unfiltered())
153 return name in vars(repo.unfiltered())
154
154
155 def unfilteredmethod(orig):
155 def unfilteredmethod(orig):
156 """decorate method that always need to be run on unfiltered version"""
156 """decorate method that always need to be run on unfiltered version"""
157 def wrapper(repo, *args, **kwargs):
157 def wrapper(repo, *args, **kwargs):
158 return orig(repo.unfiltered(), *args, **kwargs)
158 return orig(repo.unfiltered(), *args, **kwargs)
159 return wrapper
159 return wrapper
160
160
161 moderncaps = {'lookup', 'branchmap', 'pushkey', 'known', 'getbundle',
161 moderncaps = {'lookup', 'branchmap', 'pushkey', 'known', 'getbundle',
162 'unbundle'}
162 'unbundle'}
163 legacycaps = moderncaps.union({'changegroupsubset'})
163 legacycaps = moderncaps.union({'changegroupsubset'})
164
164
165 @interfaceutil.implementer(repository.ipeercommandexecutor)
165 @interfaceutil.implementer(repository.ipeercommandexecutor)
166 class localcommandexecutor(object):
166 class localcommandexecutor(object):
167 def __init__(self, peer):
167 def __init__(self, peer):
168 self._peer = peer
168 self._peer = peer
169 self._sent = False
169 self._sent = False
170 self._closed = False
170 self._closed = False
171
171
172 def __enter__(self):
172 def __enter__(self):
173 return self
173 return self
174
174
175 def __exit__(self, exctype, excvalue, exctb):
175 def __exit__(self, exctype, excvalue, exctb):
176 self.close()
176 self.close()
177
177
178 def callcommand(self, command, args):
178 def callcommand(self, command, args):
179 if self._sent:
179 if self._sent:
180 raise error.ProgrammingError('callcommand() cannot be used after '
180 raise error.ProgrammingError('callcommand() cannot be used after '
181 'sendcommands()')
181 'sendcommands()')
182
182
183 if self._closed:
183 if self._closed:
184 raise error.ProgrammingError('callcommand() cannot be used after '
184 raise error.ProgrammingError('callcommand() cannot be used after '
185 'close()')
185 'close()')
186
186
187 # We don't need to support anything fancy. Just call the named
187 # We don't need to support anything fancy. Just call the named
188 # method on the peer and return a resolved future.
188 # method on the peer and return a resolved future.
189 fn = getattr(self._peer, pycompat.sysstr(command))
189 fn = getattr(self._peer, pycompat.sysstr(command))
190
190
191 f = pycompat.futures.Future()
191 f = pycompat.futures.Future()
192
192
193 try:
193 try:
194 result = fn(**pycompat.strkwargs(args))
194 result = fn(**pycompat.strkwargs(args))
195 except Exception:
195 except Exception:
196 pycompat.future_set_exception_info(f, sys.exc_info()[1:])
196 pycompat.future_set_exception_info(f, sys.exc_info()[1:])
197 else:
197 else:
198 f.set_result(result)
198 f.set_result(result)
199
199
200 return f
200 return f
201
201
202 def sendcommands(self):
202 def sendcommands(self):
203 self._sent = True
203 self._sent = True
204
204
205 def close(self):
205 def close(self):
206 self._closed = True
206 self._closed = True
207
207
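The calling convention for this executor, sketched with a hypothetical
``peer`` whose ``commandexecutor()`` returns one:

    with peer.commandexecutor() as e:
        f = e.callcommand(b'heads', {})
    heads = f.result()

In the local case the future is already resolved when ``callcommand()``
returns, so ``result()`` never blocks.
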
208 @interfaceutil.implementer(repository.ipeercommands)
208 @interfaceutil.implementer(repository.ipeercommands)
209 class localpeer(repository.peer):
209 class localpeer(repository.peer):
210 '''peer for a local repo; reflects only the most recent API'''
210 '''peer for a local repo; reflects only the most recent API'''
211
211
212 def __init__(self, repo, caps=None):
212 def __init__(self, repo, caps=None):
213 super(localpeer, self).__init__()
213 super(localpeer, self).__init__()
214
214
215 if caps is None:
215 if caps is None:
216 caps = moderncaps.copy()
216 caps = moderncaps.copy()
217 self._repo = repo.filtered('served')
217 self._repo = repo.filtered('served')
218 self.ui = repo.ui
218 self.ui = repo.ui
219 self._caps = repo._restrictcapabilities(caps)
219 self._caps = repo._restrictcapabilities(caps)
220
220
221 # Begin of _basepeer interface.
221 # Begin of _basepeer interface.
222
222
223 def url(self):
223 def url(self):
224 return self._repo.url()
224 return self._repo.url()
225
225
226 def local(self):
226 def local(self):
227 return self._repo
227 return self._repo
228
228
229 def peer(self):
229 def peer(self):
230 return self
230 return self
231
231
232 def canpush(self):
232 def canpush(self):
233 return True
233 return True
234
234
235 def close(self):
235 def close(self):
236 self._repo.close()
236 self._repo.close()
237
237
238 # End of _basepeer interface.
238 # End of _basepeer interface.
239
239
240 # Begin of _basewirecommands interface.
240 # Begin of _basewirecommands interface.
241
241
242 def branchmap(self):
242 def branchmap(self):
243 return self._repo.branchmap()
243 return self._repo.branchmap()
244
244
245 def capabilities(self):
245 def capabilities(self):
246 return self._caps
246 return self._caps
247
247
248 def clonebundles(self):
248 def clonebundles(self):
249 return self._repo.tryread('clonebundles.manifest')
249 return self._repo.tryread('clonebundles.manifest')
250
250
251 def debugwireargs(self, one, two, three=None, four=None, five=None):
251 def debugwireargs(self, one, two, three=None, four=None, five=None):
252 """Used to test argument passing over the wire"""
252 """Used to test argument passing over the wire"""
253 return "%s %s %s %s %s" % (one, two, pycompat.bytestr(three),
253 return "%s %s %s %s %s" % (one, two, pycompat.bytestr(three),
254 pycompat.bytestr(four),
254 pycompat.bytestr(four),
255 pycompat.bytestr(five))
255 pycompat.bytestr(five))
256
256
257 def getbundle(self, source, heads=None, common=None, bundlecaps=None,
257 def getbundle(self, source, heads=None, common=None, bundlecaps=None,
258 **kwargs):
258 **kwargs):
259 chunks = exchange.getbundlechunks(self._repo, source, heads=heads,
259 chunks = exchange.getbundlechunks(self._repo, source, heads=heads,
260 common=common, bundlecaps=bundlecaps,
260 common=common, bundlecaps=bundlecaps,
261 **kwargs)[1]
261 **kwargs)[1]
262 cb = util.chunkbuffer(chunks)
262 cb = util.chunkbuffer(chunks)
263
263
264 if exchange.bundle2requested(bundlecaps):
264 if exchange.bundle2requested(bundlecaps):
265 # When requesting a bundle2, getbundle returns a stream to make the
265 # When requesting a bundle2, getbundle returns a stream to make the
266 # wire level function happier. We need to build a proper object
266 # wire level function happier. We need to build a proper object
267 # from it in local peer.
267 # from it in local peer.
268 return bundle2.getunbundler(self.ui, cb)
268 return bundle2.getunbundler(self.ui, cb)
269 else:
269 else:
270 return changegroup.getunbundler('01', cb, None)
270 return changegroup.getunbundler('01', cb, None)
271
271
272 def heads(self):
272 def heads(self):
273 return self._repo.heads()
273 return self._repo.heads()
274
274
275 def known(self, nodes):
275 def known(self, nodes):
276 return self._repo.known(nodes)
276 return self._repo.known(nodes)
277
277
278 def listkeys(self, namespace):
278 def listkeys(self, namespace):
279 return self._repo.listkeys(namespace)
279 return self._repo.listkeys(namespace)
280
280
281 def lookup(self, key):
281 def lookup(self, key):
282 return self._repo.lookup(key)
282 return self._repo.lookup(key)
283
283
284 def pushkey(self, namespace, key, old, new):
284 def pushkey(self, namespace, key, old, new):
285 return self._repo.pushkey(namespace, key, old, new)
285 return self._repo.pushkey(namespace, key, old, new)
286
286
287 def stream_out(self):
287 def stream_out(self):
288 raise error.Abort(_('cannot perform stream clone against local '
288 raise error.Abort(_('cannot perform stream clone against local '
289 'peer'))
289 'peer'))
290
290
291 def unbundle(self, bundle, heads, url):
291 def unbundle(self, bundle, heads, url):
292 """apply a bundle on a repo
292 """apply a bundle on a repo
293
293
294 This function handles the repo locking itself."""
294 This function handles the repo locking itself."""
295 try:
295 try:
296 try:
296 try:
297 bundle = exchange.readbundle(self.ui, bundle, None)
297 bundle = exchange.readbundle(self.ui, bundle, None)
298 ret = exchange.unbundle(self._repo, bundle, heads, 'push', url)
298 ret = exchange.unbundle(self._repo, bundle, heads, 'push', url)
299 if util.safehasattr(ret, 'getchunks'):
299 if util.safehasattr(ret, 'getchunks'):
300 # This is a bundle20 object, turn it into an unbundler.
300 # This is a bundle20 object, turn it into an unbundler.
301 # This little dance should be dropped eventually when the
301 # This little dance should be dropped eventually when the
302 # API is finally improved.
302 # API is finally improved.
303 stream = util.chunkbuffer(ret.getchunks())
303 stream = util.chunkbuffer(ret.getchunks())
304 ret = bundle2.getunbundler(self.ui, stream)
304 ret = bundle2.getunbundler(self.ui, stream)
305 return ret
305 return ret
306 except Exception as exc:
306 except Exception as exc:
307 # If the exception contains output salvaged from a bundle2
307 # If the exception contains output salvaged from a bundle2
308 # reply, we need to make sure it is printed before continuing
308 # reply, we need to make sure it is printed before continuing
309 # to fail. So we build a bundle2 with such output and consume
309 # to fail. So we build a bundle2 with such output and consume
310 # it directly.
310 # it directly.
311 #
311 #
312 # This is not very elegant but allows a "simple" solution for
312 # This is not very elegant but allows a "simple" solution for
313 # issue4594
313 # issue4594
314 output = getattr(exc, '_bundle2salvagedoutput', ())
314 output = getattr(exc, '_bundle2salvagedoutput', ())
315 if output:
315 if output:
316 bundler = bundle2.bundle20(self._repo.ui)
316 bundler = bundle2.bundle20(self._repo.ui)
317 for out in output:
317 for out in output:
318 bundler.addpart(out)
318 bundler.addpart(out)
319 stream = util.chunkbuffer(bundler.getchunks())
319 stream = util.chunkbuffer(bundler.getchunks())
320 b = bundle2.getunbundler(self.ui, stream)
320 b = bundle2.getunbundler(self.ui, stream)
321 bundle2.processbundle(self._repo, b)
321 bundle2.processbundle(self._repo, b)
322 raise
322 raise
323 except error.PushRaced as exc:
323 except error.PushRaced as exc:
324 raise error.ResponseError(_('push failed:'),
324 raise error.ResponseError(_('push failed:'),
325 stringutil.forcebytestr(exc))
325 stringutil.forcebytestr(exc))
326
326
327 # End of _basewirecommands interface.
327 # End of _basewirecommands interface.
328
328
329 # Begin of peer interface.
329 # Begin of peer interface.
330
330
331 def commandexecutor(self):
331 def commandexecutor(self):
332 return localcommandexecutor(self)
332 return localcommandexecutor(self)
333
333
334 # End of peer interface.
334 # End of peer interface.
335
335
336 @interfaceutil.implementer(repository.ipeerlegacycommands)
336 @interfaceutil.implementer(repository.ipeerlegacycommands)
337 class locallegacypeer(localpeer):
337 class locallegacypeer(localpeer):
338 '''peer extension which implements legacy methods too; used for tests with
338 '''peer extension which implements legacy methods too; used for tests with
339 restricted capabilities'''
339 restricted capabilities'''
340
340
341 def __init__(self, repo):
341 def __init__(self, repo):
342 super(locallegacypeer, self).__init__(repo, caps=legacycaps)
342 super(locallegacypeer, self).__init__(repo, caps=legacycaps)
343
343
344 # Begin of baselegacywirecommands interface.
344 # Begin of baselegacywirecommands interface.
345
345
346 def between(self, pairs):
346 def between(self, pairs):
347 return self._repo.between(pairs)
347 return self._repo.between(pairs)
348
348
349 def branches(self, nodes):
349 def branches(self, nodes):
350 return self._repo.branches(nodes)
350 return self._repo.branches(nodes)
351
351
352 def changegroup(self, nodes, source):
352 def changegroup(self, nodes, source):
353 outgoing = discovery.outgoing(self._repo, missingroots=nodes,
353 outgoing = discovery.outgoing(self._repo, missingroots=nodes,
354 missingheads=self._repo.heads())
354 missingheads=self._repo.heads())
355 return changegroup.makechangegroup(self._repo, outgoing, '01', source)
355 return changegroup.makechangegroup(self._repo, outgoing, '01', source)
356
356
357 def changegroupsubset(self, bases, heads, source):
357 def changegroupsubset(self, bases, heads, source):
358 outgoing = discovery.outgoing(self._repo, missingroots=bases,
358 outgoing = discovery.outgoing(self._repo, missingroots=bases,
359 missingheads=heads)
359 missingheads=heads)
360 return changegroup.makechangegroup(self._repo, outgoing, '01', source)
360 return changegroup.makechangegroup(self._repo, outgoing, '01', source)
361
361
362 # End of baselegacywirecommands interface.
362 # End of baselegacywirecommands interface.
363
363
364 # Increment the sub-version when the revlog v2 format changes to lock out old
364 # Increment the sub-version when the revlog v2 format changes to lock out old
365 # clients.
365 # clients.
366 REVLOGV2_REQUIREMENT = 'exp-revlogv2.1'
366 REVLOGV2_REQUIREMENT = 'exp-revlogv2.1'
367
367
368 # A repository with the sparserevlog feature will have delta chains that
368 # A repository with the sparserevlog feature will have delta chains that
369 # can spread over a larger span. Sparse reading cuts these large spans into
369 # can spread over a larger span. Sparse reading cuts these large spans into
370 # pieces, so that each piece isn't too big.
370 # pieces, so that each piece isn't too big.
371 # Without the sparserevlog capability, reading from the repository could use
371 # Without the sparserevlog capability, reading from the repository could use
372 # huge amounts of memory, because the whole span would be read at once,
372 # huge amounts of memory, because the whole span would be read at once,
373 # including all the intermediate revisions that aren't pertinent for the chain.
373 # including all the intermediate revisions that aren't pertinent for the chain.
374 # This is why once a repository has enabled sparse-read, it becomes required.
374 # This is why once a repository has enabled sparse-read, it becomes required.
375 SPARSEREVLOG_REQUIREMENT = 'sparserevlog'
375 SPARSEREVLOG_REQUIREMENT = 'sparserevlog'
376
376
377 # Functions receiving (ui, features) that extensions can register to impact
377 # Functions receiving (ui, features) that extensions can register to impact
378 # the ability to load repositories with custom requirements. Only
378 # the ability to load repositories with custom requirements. Only
379 # functions defined in loaded extensions are called.
379 # functions defined in loaded extensions are called.
380 #
380 #
381 # The function receives a set of requirement strings that the repository
381 # The function receives a set of requirement strings that the repository
382 # is capable of opening. Functions will typically add elements to the
382 # is capable of opening. Functions will typically add elements to the
383 # set to reflect that the extension knows how to handle those requirements.
383 # set to reflect that the extension knows how to handle those requirements.
384 featuresetupfuncs = set()
384 featuresetupfuncs = set()
385
385
386 def makelocalrepository(baseui, path, intents=None):
386 def makelocalrepository(baseui, path, intents=None):
387 """Create a local repository object.
387 """Create a local repository object.
388
388
389 Given arguments needed to construct a local repository, this function
389 Given arguments needed to construct a local repository, this function
390 performs various early repository loading functionality (such as
390 performs various early repository loading functionality (such as
391 reading the ``.hg/requires`` and ``.hg/hgrc`` files), validates that
391 reading the ``.hg/requires`` and ``.hg/hgrc`` files), validates that
392 the repository can be opened, derives a type suitable for representing
392 the repository can be opened, derives a type suitable for representing
393 that repository, and returns an instance of it.
393 that repository, and returns an instance of it.
394
394
395 The returned object conforms to the ``repository.completelocalrepository``
395 The returned object conforms to the ``repository.completelocalrepository``
396 interface.
396 interface.
397
397
398 The repository type is derived by calling a series of factory functions
398 The repository type is derived by calling a series of factory functions
399 for each aspect/interface of the final repository. These are defined by
399 for each aspect/interface of the final repository. These are defined by
400 ``REPO_INTERFACES``.
400 ``REPO_INTERFACES``.
401
401
402 Each factory function is called to produce a type implementing a specific
402 Each factory function is called to produce a type implementing a specific
403 interface. The cumulative list of returned types will be combined into a
403 interface. The cumulative list of returned types will be combined into a
404 new type and that type will be instantiated to represent the local
404 new type and that type will be instantiated to represent the local
405 repository.
405 repository.
406
406
407 The factory functions each receive various state that may be consulted
407 The factory functions each receive various state that may be consulted
408 as part of deriving a type.
408 as part of deriving a type.
409
409
410 Extensions should wrap these factory functions to customize repository type
410 Extensions should wrap these factory functions to customize repository type
411 creation. Note that an extension's wrapped function may be called even if
411 creation. Note that an extension's wrapped function may be called even if
412 that extension is not loaded for the repo being constructed. Extensions
412 that extension is not loaded for the repo being constructed. Extensions
413 should check if their ``__name__`` appears in the
413 should check if their ``__name__`` appears in the
414 ``extensionmodulenames`` set passed to the factory function and no-op if
414 ``extensionmodulenames`` set passed to the factory function and no-op if
415 not.
415 not.
416 """
416 """
417 ui = baseui.copy()
417 ui = baseui.copy()
418 # Prevent copying repo configuration.
418 # Prevent copying repo configuration.
419 ui.copy = baseui.copy
419 ui.copy = baseui.copy
420
420
421 # Working directory VFS rooted at repository root.
421 # Working directory VFS rooted at repository root.
422 wdirvfs = vfsmod.vfs(path, expandpath=True, realpath=True)
422 wdirvfs = vfsmod.vfs(path, expandpath=True, realpath=True)
423
423
424 # Main VFS for .hg/ directory.
424 # Main VFS for .hg/ directory.
425 hgpath = wdirvfs.join(b'.hg')
425 hgpath = wdirvfs.join(b'.hg')
426 hgvfs = vfsmod.vfs(hgpath, cacheaudited=True)
426 hgvfs = vfsmod.vfs(hgpath, cacheaudited=True)
427
427
428 # The .hg/ path should exist and should be a directory. All other
428 # The .hg/ path should exist and should be a directory. All other
429 # cases are errors.
429 # cases are errors.
430 if not hgvfs.isdir():
430 if not hgvfs.isdir():
431 try:
431 try:
432 hgvfs.stat()
432 hgvfs.stat()
433 except OSError as e:
433 except OSError as e:
434 if e.errno != errno.ENOENT:
434 if e.errno != errno.ENOENT:
435 raise
435 raise
436
436
437 raise error.RepoError(_(b'repository %s not found') % path)
437 raise error.RepoError(_(b'repository %s not found') % path)
438
438
439 # .hg/requires file contains a newline-delimited list of
439 # .hg/requires file contains a newline-delimited list of
440 # features/capabilities the opener (us) must have in order to use
440 # features/capabilities the opener (us) must have in order to use
441 # the repository. This file was introduced in Mercurial 0.9.2,
441 # the repository. This file was introduced in Mercurial 0.9.2,
442 # which means very old repositories may not have one. We assume
442 # which means very old repositories may not have one. We assume
443 # a missing file translates to no requirements.
443 # a missing file translates to no requirements.
444 try:
444 try:
445 requirements = set(hgvfs.read(b'requires').splitlines())
445 requirements = set(hgvfs.read(b'requires').splitlines())
446 except IOError as e:
446 except IOError as e:
447 if e.errno != errno.ENOENT:
447 if e.errno != errno.ENOENT:
448 raise
448 raise
449 requirements = set()
449 requirements = set()
450
450
451 # The .hg/hgrc file may load extensions or contain config options
451 # The .hg/hgrc file may load extensions or contain config options
452 # that influence repository construction. Attempt to load it and
452 # that influence repository construction. Attempt to load it and
453 # process any new extensions that it may have pulled in.
453 # process any new extensions that it may have pulled in.
454 if loadhgrc(ui, wdirvfs, hgvfs, requirements):
454 if loadhgrc(ui, wdirvfs, hgvfs, requirements):
455 afterhgrcload(ui, wdirvfs, hgvfs, requirements)
455 afterhgrcload(ui, wdirvfs, hgvfs, requirements)
456 extensions.loadall(ui)
456 extensions.loadall(ui)
457 extensions.populateui(ui)
457 extensions.populateui(ui)
458
458
459 # Set of module names of extensions loaded for this repository.
459 # Set of module names of extensions loaded for this repository.
460 extensionmodulenames = {m.__name__ for n, m in extensions.extensions(ui)}
460 extensionmodulenames = {m.__name__ for n, m in extensions.extensions(ui)}
461
461
462 supportedrequirements = gathersupportedrequirements(ui)
462 supportedrequirements = gathersupportedrequirements(ui)
463
463
464 # We first validate the requirements are known.
464 # We first validate the requirements are known.
465 ensurerequirementsrecognized(requirements, supportedrequirements)
465 ensurerequirementsrecognized(requirements, supportedrequirements)
466
466
467 # Then we validate that the known set is reasonable to use together.
467 # Then we validate that the known set is reasonable to use together.
468 ensurerequirementscompatible(ui, requirements)
468 ensurerequirementscompatible(ui, requirements)
469
469
470 # TODO there are unhandled edge cases related to opening repositories with
470 # TODO there are unhandled edge cases related to opening repositories with
471 # shared storage. If storage is shared, we should also test for requirements
471 # shared storage. If storage is shared, we should also test for requirements
472 # compatibility in the pointed-to repo. This entails loading the .hg/hgrc in
472 # compatibility in the pointed-to repo. This entails loading the .hg/hgrc in
473 # that repo, as that repo may load extensions needed to open it. This is a
473 # that repo, as that repo may load extensions needed to open it. This is a
474 # bit complicated because we don't want the other hgrc to overwrite settings
474 # bit complicated because we don't want the other hgrc to overwrite settings
475 # in this hgrc.
475 # in this hgrc.
476 #
476 #
477 # This bug is somewhat mitigated by the fact that we copy the .hg/requires
477 # This bug is somewhat mitigated by the fact that we copy the .hg/requires
478 # file when sharing repos. But if a requirement is added after the share is
478 # file when sharing repos. But if a requirement is added after the share is
479 # performed, thereby introducing a new requirement for the opener, we
479 # performed, thereby introducing a new requirement for the opener, we
480 # will not see that and could encounter a run-time error interacting with
480 # will not see that and could encounter a run-time error interacting with
481 # that shared store since it has an unknown-to-us requirement.
481 # that shared store since it has an unknown-to-us requirement.
482
482
483 # At this point, we know we should be capable of opening the repository.
483 # At this point, we know we should be capable of opening the repository.
484 # Now get on with doing that.
484 # Now get on with doing that.
485
485
486 features = set()
486 features = set()
487
487
488 # The "store" part of the repository holds versioned data. How it is
488 # The "store" part of the repository holds versioned data. How it is
489 # accessed is determined by various requirements. The ``shared`` or
489 # accessed is determined by various requirements. The ``shared`` or
490 # ``relshared`` requirements indicate the store lives in the path contained
490 # ``relshared`` requirements indicate the store lives in the path contained
491 # in the ``.hg/sharedpath`` file. This is an absolute path for
491 # in the ``.hg/sharedpath`` file. This is an absolute path for
492 # ``shared`` and relative to ``.hg/`` for ``relshared``.
492 # ``shared`` and relative to ``.hg/`` for ``relshared``.
493 if b'shared' in requirements or b'relshared' in requirements:
493 if b'shared' in requirements or b'relshared' in requirements:
494 sharedpath = hgvfs.read(b'sharedpath').rstrip(b'\n')
494 sharedpath = hgvfs.read(b'sharedpath').rstrip(b'\n')
495 if b'relshared' in requirements:
495 if b'relshared' in requirements:
496 sharedpath = hgvfs.join(sharedpath)
496 sharedpath = hgvfs.join(sharedpath)
497
497
498 sharedvfs = vfsmod.vfs(sharedpath, realpath=True)
498 sharedvfs = vfsmod.vfs(sharedpath, realpath=True)
499
499
500 if not sharedvfs.exists():
500 if not sharedvfs.exists():
501 raise error.RepoError(_(b'.hg/sharedpath points to nonexistent '
501 raise error.RepoError(_(b'.hg/sharedpath points to nonexistent '
502 b'directory %s') % sharedvfs.base)
502 b'directory %s') % sharedvfs.base)
503
503
504 features.add(repository.REPO_FEATURE_SHARED_STORAGE)
504 features.add(repository.REPO_FEATURE_SHARED_STORAGE)
505
505
506 storebasepath = sharedvfs.base
506 storebasepath = sharedvfs.base
507 cachepath = sharedvfs.join(b'cache')
507 cachepath = sharedvfs.join(b'cache')
508 else:
508 else:
509 storebasepath = hgvfs.base
509 storebasepath = hgvfs.base
510 cachepath = hgvfs.join(b'cache')
510 cachepath = hgvfs.join(b'cache')
511 wcachepath = hgvfs.join(b'wcache')
511 wcachepath = hgvfs.join(b'wcache')
512
512
513
513
514 # The store has changed over time and the exact layout is dictated by
514 # The store has changed over time and the exact layout is dictated by
515 # requirements. The store interface abstracts differences across all
515 # requirements. The store interface abstracts differences across all
516 # of them.
516 # of them.
517 store = makestore(requirements, storebasepath,
517 store = makestore(requirements, storebasepath,
518 lambda base: vfsmod.vfs(base, cacheaudited=True))
518 lambda base: vfsmod.vfs(base, cacheaudited=True))
519 hgvfs.createmode = store.createmode
519 hgvfs.createmode = store.createmode
520
520
521 storevfs = store.vfs
521 storevfs = store.vfs
522 storevfs.options = resolvestorevfsoptions(ui, requirements, features)
522 storevfs.options = resolvestorevfsoptions(ui, requirements, features)
523
523
524 # The cache vfs is used to manage cache files.
524 # The cache vfs is used to manage cache files.
525 cachevfs = vfsmod.vfs(cachepath, cacheaudited=True)
525 cachevfs = vfsmod.vfs(cachepath, cacheaudited=True)
526 cachevfs.createmode = store.createmode
526 cachevfs.createmode = store.createmode
527 # The cache vfs is used to manage cache files related to the working copy
527 # The cache vfs is used to manage cache files related to the working copy
528 wcachevfs = vfsmod.vfs(wcachepath, cacheaudited=True)
528 wcachevfs = vfsmod.vfs(wcachepath, cacheaudited=True)
529 wcachevfs.createmode = store.createmode
529 wcachevfs.createmode = store.createmode
530
530
531 # Now resolve the type for the repository object. We do this by repeatedly
531 # Now resolve the type for the repository object. We do this by repeatedly
532 # calling a factory function to produce types for specific aspects of the
532 # calling a factory function to produce types for specific aspects of the
533 # repo's operation. The aggregate returned types are used as base classes
533 # repo's operation. The aggregate returned types are used as base classes
534 # for a dynamically-derived type, which will represent our new repository.
534 # for a dynamically-derived type, which will represent our new repository.
535
535
536 bases = []
536 bases = []
537 extrastate = {}
537 extrastate = {}
538
538
539 for iface, fn in REPO_INTERFACES:
539 for iface, fn in REPO_INTERFACES:
540 # We pass all potentially useful state to give extensions tons of
540 # We pass all potentially useful state to give extensions tons of
541 # flexibility.
541 # flexibility.
542 typ = fn()(ui=ui,
542 typ = fn()(ui=ui,
543 intents=intents,
543 intents=intents,
544 requirements=requirements,
544 requirements=requirements,
545 features=features,
545 features=features,
546 wdirvfs=wdirvfs,
546 wdirvfs=wdirvfs,
547 hgvfs=hgvfs,
547 hgvfs=hgvfs,
548 store=store,
548 store=store,
549 storevfs=storevfs,
549 storevfs=storevfs,
550 storeoptions=storevfs.options,
550 storeoptions=storevfs.options,
551 cachevfs=cachevfs,
551 cachevfs=cachevfs,
552 wcachevfs=wcachevfs,
552 wcachevfs=wcachevfs,
553 extensionmodulenames=extensionmodulenames,
553 extensionmodulenames=extensionmodulenames,
554 extrastate=extrastate,
554 extrastate=extrastate,
555 baseclasses=bases)
555 baseclasses=bases)
556
556
557 if not isinstance(typ, type):
557 if not isinstance(typ, type):
558 raise error.ProgrammingError('unable to construct type for %s' %
558 raise error.ProgrammingError('unable to construct type for %s' %
559 iface)
559 iface)
560
560
561 bases.append(typ)
561 bases.append(typ)
562
562
563 # type() allows you to use characters in type names that wouldn't be
563 # type() allows you to use characters in type names that wouldn't be
564 # recognized as Python symbols in source code. We abuse that to add
564 # recognized as Python symbols in source code. We abuse that to add
565 # rich information about our constructed repo.
565 # rich information about our constructed repo.
566 name = pycompat.sysstr(b'derivedrepo:%s<%s>' % (
566 name = pycompat.sysstr(b'derivedrepo:%s<%s>' % (
567 wdirvfs.base,
567 wdirvfs.base,
568 b','.join(sorted(requirements))))
568 b','.join(sorted(requirements))))
569
569
570 cls = type(name, tuple(bases), {})
570 cls = type(name, tuple(bases), {})
571
571
572 return cls(
572 return cls(
573 baseui=baseui,
573 baseui=baseui,
574 ui=ui,
574 ui=ui,
575 origroot=path,
575 origroot=path,
576 wdirvfs=wdirvfs,
576 wdirvfs=wdirvfs,
577 hgvfs=hgvfs,
577 hgvfs=hgvfs,
578 requirements=requirements,
578 requirements=requirements,
579 supportedrequirements=supportedrequirements,
579 supportedrequirements=supportedrequirements,
580 sharedpath=storebasepath,
580 sharedpath=storebasepath,
581 store=store,
581 store=store,
582 cachevfs=cachevfs,
582 cachevfs=cachevfs,
583 wcachevfs=wcachevfs,
583 wcachevfs=wcachevfs,
584 features=features,
584 features=features,
585 intents=intents)
585 intents=intents)
586
586
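A direct-construction sketch, assuming a loaded ``ui`` and an existing
repository on disk (normal callers reach this function indirectly, e.g. via
``hg.repository()``):

    from mercurial import localrepo, ui as uimod

    baseui = uimod.ui.load()
    repo = localrepo.makelocalrepository(baseui, b'/path/to/repo')
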
587 def loadhgrc(ui, wdirvfs, hgvfs, requirements):
587 def loadhgrc(ui, wdirvfs, hgvfs, requirements):
588 """Load hgrc files/content into a ui instance.
588 """Load hgrc files/content into a ui instance.
589
589
590 This is called during repository opening to load any additional
590 This is called during repository opening to load any additional
591 config files or settings relevant to the current repository.
591 config files or settings relevant to the current repository.
592
592
593 Returns a bool indicating whether any additional configs were loaded.
593 Returns a bool indicating whether any additional configs were loaded.
594
594
595 Extensions should monkeypatch this function to modify how per-repo
595 Extensions should monkeypatch this function to modify how per-repo
596 configs are loaded. For example, an extension may wish to pull in
596 configs are loaded. For example, an extension may wish to pull in
597 configs from alternate files or sources.
597 configs from alternate files or sources.
598 """
598 """
599 try:
599 try:
600 ui.readconfig(hgvfs.join(b'hgrc'), root=wdirvfs.base)
600 ui.readconfig(hgvfs.join(b'hgrc'), root=wdirvfs.base)
601 return True
601 return True
602 except IOError:
602 except IOError:
603 return False
603 return False
604
604
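A hedged sketch of the monkeypatching described in the docstring; the extra
``hgrc-extra`` file name is hypothetical:

    from mercurial import extensions, localrepo

    def _loadhgrc(orig, ui, wdirvfs, hgvfs, requirements):
        loaded = orig(ui, wdirvfs, hgvfs, requirements)
        try:
            # hypothetical per-repo config read alongside .hg/hgrc
            ui.readconfig(hgvfs.join(b'hgrc-extra'), root=wdirvfs.base)
            return True
        except IOError:
            return loaded

    def uisetup(ui):
        extensions.wrapfunction(localrepo, 'loadhgrc', _loadhgrc)
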
605 def afterhgrcload(ui, wdirvfs, hgvfs, requirements):
605 def afterhgrcload(ui, wdirvfs, hgvfs, requirements):
606 """Perform additional actions after .hg/hgrc is loaded.
606 """Perform additional actions after .hg/hgrc is loaded.
607
607
608 This function is called during repository loading immediately after
608 This function is called during repository loading immediately after
609 the .hg/hgrc file is loaded and before per-repo extensions are loaded.
609 the .hg/hgrc file is loaded and before per-repo extensions are loaded.
610
610
611 The function can be used to validate configs, automatically add
611 The function can be used to validate configs, automatically add
612 options (including extensions) based on requirements, etc.
612 options (including extensions) based on requirements, etc.
613 """
613 """
614
614
615 # Map of requirements to list of extensions to load automatically when
615 # Map of requirements to list of extensions to load automatically when
616 # requirement is present.
616 # requirement is present.
617 autoextensions = {
617 autoextensions = {
618 b'largefiles': [b'largefiles'],
618 b'largefiles': [b'largefiles'],
619 b'lfs': [b'lfs'],
619 b'lfs': [b'lfs'],
620 }
620 }
621
621
622 for requirement, names in sorted(autoextensions.items()):
622 for requirement, names in sorted(autoextensions.items()):
623 if requirement not in requirements:
623 if requirement not in requirements:
624 continue
624 continue
625
625
626 for name in names:
626 for name in names:
627 if not ui.hasconfig(b'extensions', name):
627 if not ui.hasconfig(b'extensions', name):
628 ui.setconfig(b'extensions', name, b'', source='autoload')
628 ui.setconfig(b'extensions', name, b'', source='autoload')
629
629
630 def gathersupportedrequirements(ui):
630 def gathersupportedrequirements(ui):
631 """Determine the complete set of recognized requirements."""
631 """Determine the complete set of recognized requirements."""
632 # Start with all requirements supported by this file.
632 # Start with all requirements supported by this file.
633 supported = set(localrepository._basesupported)
633 supported = set(localrepository._basesupported)
634
634
635 # Execute ``featuresetupfuncs`` entries if they belong to an extension
635 # Execute ``featuresetupfuncs`` entries if they belong to an extension
636 # relevant to this ui instance.
636 # relevant to this ui instance.
637 modules = {m.__name__ for n, m in extensions.extensions(ui)}
637 modules = {m.__name__ for n, m in extensions.extensions(ui)}
638
638
639 for fn in featuresetupfuncs:
639 for fn in featuresetupfuncs:
640 if fn.__module__ in modules:
640 if fn.__module__ in modules:
641 fn(ui, supported)
641 fn(ui, supported)
642
642
643 # Add derived requirements from registered compression engines.
643 # Add derived requirements from registered compression engines.
644 for name in util.compengines:
644 for name in util.compengines:
645 engine = util.compengines[name]
645 engine = util.compengines[name]
646 if engine.revlogheader():
646 if engine.revlogheader():
647 supported.add(b'exp-compression-%s' % name)
647 supported.add(b'exp-compression-%s' % name)
648
648
649 return supported
649 return supported
650
650
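An extension advertises support for a custom requirement by registering one
of the ``featuresetupfuncs`` consulted above; a sketch with a hypothetical
requirement name:

    from mercurial import localrepo

    def featuresetup(ui, supported):
        supported.add(b'exp-myfeature')    # hypothetical requirement

    def uisetup(ui):
        localrepo.featuresetupfuncs.add(featuresetup)
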
651 def ensurerequirementsrecognized(requirements, supported):
651 def ensurerequirementsrecognized(requirements, supported):
652 """Validate that a set of local requirements is recognized.
652 """Validate that a set of local requirements is recognized.
653
653
654 Receives a set of requirements. Raises an ``error.RepoError`` if there
654 Receives a set of requirements. Raises an ``error.RepoError`` if there
655 exists any requirement in that set that currently loaded code doesn't
655 exists any requirement in that set that currently loaded code doesn't
656 recognize.
656 recognize.
657
657
658 Returns ``None`` if all requirements are recognized.
658 Returns ``None`` if all requirements are recognized.
659 """
659 """
660 missing = set()
660 missing = set()
661
661
662 for requirement in requirements:
662 for requirement in requirements:
663 if requirement in supported:
663 if requirement in supported:
664 continue
664 continue
665
665
666 if not requirement or not requirement[0:1].isalnum():
666 if not requirement or not requirement[0:1].isalnum():
667 raise error.RequirementError(_(b'.hg/requires file is corrupt'))
667 raise error.RequirementError(_(b'.hg/requires file is corrupt'))
668
668
669 missing.add(requirement)
669 missing.add(requirement)
670
670
671 if missing:
671 if missing:
672 raise error.RequirementError(
672 raise error.RequirementError(
673 _(b'repository requires features unknown to this Mercurial: %s') %
673 _(b'repository requires features unknown to this Mercurial: %s') %
674 b' '.join(sorted(missing)),
674 b' '.join(sorted(missing)),
675 hint=_(b'see https://mercurial-scm.org/wiki/MissingRequirement '
675 hint=_(b'see https://mercurial-scm.org/wiki/MissingRequirement '
676 b'for more information'))
676 b'for more information'))
677
677
678 def ensurerequirementscompatible(ui, requirements):
678 def ensurerequirementscompatible(ui, requirements):
679 """Validates that a set of recognized requirements is mutually compatible.
679 """Validates that a set of recognized requirements is mutually compatible.
680
680
681 Some requirements may not be compatible with others or require
681 Some requirements may not be compatible with others or require
682 config options that aren't enabled. This function is called during
682 config options that aren't enabled. This function is called during
683 repository opening to ensure that the set of requirements needed
683 repository opening to ensure that the set of requirements needed
684 to open a repository is sane and compatible with config options.
684 to open a repository is sane and compatible with config options.
685
685
686 Extensions can monkeypatch this function to perform additional
686 Extensions can monkeypatch this function to perform additional
687 checking.
687 checking.
688
688
689 ``error.RepoError`` should be raised on failure.
689 ``error.RepoError`` should be raised on failure.
690 """
690 """
691 if b'exp-sparse' in requirements and not sparse.enabled:
691 if b'exp-sparse' in requirements and not sparse.enabled:
692 raise error.RepoError(_(b'repository is using sparse feature but '
692 raise error.RepoError(_(b'repository is using sparse feature but '
693 b'sparse is not enabled; enable the '
693 b'sparse is not enabled; enable the '
694 b'"sparse" extensions to access'))
694 b'"sparse" extensions to access'))
695
695
696 def makestore(requirements, path, vfstype):
696 def makestore(requirements, path, vfstype):
697 """Construct a storage object for a repository."""
697 """Construct a storage object for a repository."""
698 if b'store' in requirements:
698 if b'store' in requirements:
699 if b'fncache' in requirements:
699 if b'fncache' in requirements:
700 return storemod.fncachestore(path, vfstype,
700 return storemod.fncachestore(path, vfstype,
701 b'dotencode' in requirements)
701 b'dotencode' in requirements)
702
702
703 return storemod.encodedstore(path, vfstype)
703 return storemod.encodedstore(path, vfstype)
704
704
705 return storemod.basicstore(path, vfstype)
705 return storemod.basicstore(path, vfstype)
706
706
707 def resolvestorevfsoptions(ui, requirements, features):
707 def resolvestorevfsoptions(ui, requirements, features):
708 """Resolve the options to pass to the store vfs opener.
708 """Resolve the options to pass to the store vfs opener.
709
709
710 The returned dict is used to influence behavior of the storage layer.
710 The returned dict is used to influence behavior of the storage layer.
711 """
711 """
712 options = {}
712 options = {}
713
713
714 if b'treemanifest' in requirements:
714 if b'treemanifest' in requirements:
715 options[b'treemanifest'] = True
715 options[b'treemanifest'] = True
716
716
717 # experimental config: format.manifestcachesize
717 # experimental config: format.manifestcachesize
718 manifestcachesize = ui.configint(b'format', b'manifestcachesize')
718 manifestcachesize = ui.configint(b'format', b'manifestcachesize')
    if manifestcachesize is not None:
        options[b'manifestcachesize'] = manifestcachesize

    # In the absence of another requirement superseding a revlog-related
    # requirement, we have to assume the repo is using revlog version 0.
    # This revlog format is super old and we don't bother trying to parse
    # opener options for it because those options wouldn't do anything
    # meaningful on such old repos.
    if b'revlogv1' in requirements or REVLOGV2_REQUIREMENT in requirements:
        options.update(resolverevlogstorevfsoptions(ui, requirements, features))

    return options

def resolverevlogstorevfsoptions(ui, requirements, features):
    """Resolve opener options specific to revlogs."""

    options = {}
    options[b'flagprocessors'] = {}

    if b'revlogv1' in requirements:
        options[b'revlogv1'] = True
    if REVLOGV2_REQUIREMENT in requirements:
        options[b'revlogv2'] = True

    if b'generaldelta' in requirements:
        options[b'generaldelta'] = True

    # experimental config: format.chunkcachesize
    chunkcachesize = ui.configint(b'format', b'chunkcachesize')
    if chunkcachesize is not None:
        options[b'chunkcachesize'] = chunkcachesize

    deltabothparents = ui.configbool(b'storage',
                                     b'revlog.optimize-delta-parent-choice')
    options[b'deltabothparents'] = deltabothparents

    options[b'lazydeltabase'] = not scmutil.gddeltaconfig(ui)

    chainspan = ui.configbytes(b'experimental', b'maxdeltachainspan')
    if 0 <= chainspan:
        options[b'maxdeltachainspan'] = chainspan

    mmapindexthreshold = ui.configbytes(b'experimental',
                                        b'mmapindexthreshold')
    if mmapindexthreshold is not None:
        options[b'mmapindexthreshold'] = mmapindexthreshold

    withsparseread = ui.configbool(b'experimental', b'sparse-read')
    srdensitythres = float(ui.config(b'experimental',
                                     b'sparse-read.density-threshold'))
    srmingapsize = ui.configbytes(b'experimental',
                                  b'sparse-read.min-gap-size')
    options[b'with-sparse-read'] = withsparseread
    options[b'sparse-read-density-threshold'] = srdensitythres
    options[b'sparse-read-min-gap-size'] = srmingapsize

    sparserevlog = SPARSEREVLOG_REQUIREMENT in requirements
    options[b'sparse-revlog'] = sparserevlog
    if sparserevlog:
        options[b'generaldelta'] = True

    maxchainlen = None
    if sparserevlog:
        maxchainlen = revlogconst.SPARSE_REVLOG_MAX_CHAIN_LENGTH
    # experimental config: format.maxchainlen
    maxchainlen = ui.configint(b'format', b'maxchainlen', maxchainlen)
    if maxchainlen is not None:
        options[b'maxchainlen'] = maxchainlen

    for r in requirements:
        if r.startswith(b'exp-compression-'):
            options[b'compengine'] = r[len(b'exp-compression-'):]

    if repository.NARROW_REQUIREMENT in requirements:
        options[b'enableellipsis'] = True

    return options

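# Illustrative sketch (not part of the original module): for a repository
# whose .hg/requires lists 'revlogv1', 'generaldelta' and the sparse-revlog
# requirement, the options resolved above would contain entries along the
# lines of {b'revlogv1': True, b'generaldelta': True, b'sparse-revlog': True,
# ...}, plus whatever the format/storage/experimental config sections
# contribute; the store vfs later hands these options to each revlog.
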
def makemain(**kwargs):
    """Produce a type conforming to ``ilocalrepositorymain``."""
    return localrepository

@interfaceutil.implementer(repository.ilocalrepositoryfilestorage)
class revlogfilestorage(object):
    """File storage when using revlogs."""

    def file(self, path):
        if path[0] == b'/':
            path = path[1:]

        return filelog.filelog(self.svfs, path)

@interfaceutil.implementer(repository.ilocalrepositoryfilestorage)
class revlognarrowfilestorage(object):
    """File storage when using revlogs and narrow files."""

    def file(self, path):
        if path[0] == b'/':
            path = path[1:]

        return filelog.narrowfilelog(self.svfs, path, self._storenarrowmatch)

def makefilestorage(requirements, features, **kwargs):
    """Produce a type conforming to ``ilocalrepositoryfilestorage``."""
    features.add(repository.REPO_FEATURE_REVLOG_FILE_STORAGE)
    features.add(repository.REPO_FEATURE_STREAM_CLONE)

    if repository.NARROW_REQUIREMENT in requirements:
        return revlognarrowfilestorage
    else:
        return revlogfilestorage

# List of repository interfaces and factory functions for them. Each
# will be called in order during ``makelocalrepository()`` to iteratively
# derive the final type for a local repository instance. We capture the
# function as a lambda so we don't hold a reference and the module-level
# functions can be wrapped.
REPO_INTERFACES = [
    (repository.ilocalrepositorymain, lambda: makemain),
    (repository.ilocalrepositoryfilestorage, lambda: makefilestorage),
]

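# Rough sketch (assumed, simplified; the real composition happens inside
# ``makelocalrepository()``, which passes much more state to each factory):
# every factory returns a class, and the final repository type is derived by
# combining them, conceptually similar to
#
#   bases = [factory()(requirements=..., features=..., **more)
#            for _iface, factory in REPO_INTERFACES]
#   cls = type(r'derivedrepo', tuple(bases), {})
#
# which is why extensions can wrap ``makemain``/``makefilestorage`` to inject
# their own base classes.
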
@interfaceutil.implementer(repository.ilocalrepositorymain)
class localrepository(object):
    """Main class for representing local repositories.

    All local repositories are instances of this class.

    Constructed on its own, instances of this class are not usable as
    repository objects. To obtain a usable repository object, call
    ``hg.repository()``, ``localrepo.instance()``, or
    ``localrepo.makelocalrepository()``. The latter is the lowest-level.
    ``instance()`` adds support for creating new repositories.
    ``hg.repository()`` adds more extension integration, including calling
    ``reposetup()``. Generally speaking, ``hg.repository()`` should be
    used.
    """

    # obsolete experimental requirements:
    #  - manifestv2: An experimental new manifest format that allowed
    #    for stem compression of long paths. Experiment ended up not
    #    being successful (repository sizes went up due to worse delta
    #    chains), and the code was deleted in 4.6.
    supportedformats = {
        'revlogv1',
        'generaldelta',
        'treemanifest',
        REVLOGV2_REQUIREMENT,
        SPARSEREVLOG_REQUIREMENT,
    }
    _basesupported = supportedformats | {
        'store',
        'fncache',
        'shared',
        'relshared',
        'dotencode',
        'exp-sparse',
        'internal-phase'
    }

    # list of prefixes for files which can be written without 'wlock'
    # Extensions should extend this list when needed
    _wlockfreeprefix = {
        # We might consider requiring 'wlock' for the next
        # two, but pretty much all the existing code assumes
        # wlock is not needed so we keep them excluded for
        # now.
        'hgrc',
        'requires',
        # XXX cache is a complicated business; someone
        # should investigate this in depth at some point
        'cache/',
        # XXX shouldn't dirstate be covered by the wlock?
        'dirstate',
        # XXX bisect was still a bit too messy at the time
        # this changeset was introduced. Someone should fix
        # the remaining bit and drop this line
        'bisect.state',
    }

    def __init__(self, baseui, ui, origroot, wdirvfs, hgvfs, requirements,
                 supportedrequirements, sharedpath, store, cachevfs, wcachevfs,
                 features, intents=None):
        """Create a new local repository instance.

        Most callers should use ``hg.repository()``, ``localrepo.instance()``,
        or ``localrepo.makelocalrepository()`` for obtaining a new repository
        object.

        Arguments:

        baseui
           ``ui.ui`` instance that ``ui`` argument was based off of.

        ui
           ``ui.ui`` instance for use by the repository.

        origroot
           ``bytes`` path to working directory root of this repository.

        wdirvfs
           ``vfs.vfs`` rooted at the working directory.

        hgvfs
           ``vfs.vfs`` rooted at .hg/

        requirements
           ``set`` of bytestrings representing repository opening requirements.

        supportedrequirements
           ``set`` of bytestrings representing repository requirements that we
           know how to open. May be a superset of ``requirements``.

        sharedpath
           ``bytes`` Defining path to storage base directory. Points to a
           ``.hg/`` directory somewhere.

        store
           ``store.basicstore`` (or derived) instance providing access to
           versioned storage.

        cachevfs
           ``vfs.vfs`` used for cache files.

        wcachevfs
           ``vfs.vfs`` used for cache files related to the working copy.

        features
           ``set`` of bytestrings defining features/capabilities of this
           instance.

        intents
           ``set`` of system strings indicating what this repo will be used
           for.
        """
        self.baseui = baseui
        self.ui = ui
        self.origroot = origroot
        # vfs rooted at working directory.
        self.wvfs = wdirvfs
        self.root = wdirvfs.base
        # vfs rooted at .hg/. Used to access most non-store paths.
        self.vfs = hgvfs
        self.path = hgvfs.base
        self.requirements = requirements
        self.supported = supportedrequirements
        self.sharedpath = sharedpath
        self.store = store
        self.cachevfs = cachevfs
        self.wcachevfs = wcachevfs
        self.features = features

        self.filtername = None

        if (self.ui.configbool('devel', 'all-warnings') or
            self.ui.configbool('devel', 'check-locks')):
            self.vfs.audit = self._getvfsward(self.vfs.audit)
        # A list of callbacks to shape the phase if no data were found.
        # Callbacks are in the form: func(repo, roots) --> processed root.
        # This list is to be filled by extensions during repo setup.
        self._phasedefaults = []

        color.setup(self.ui)

        self.spath = self.store.path
        self.svfs = self.store.vfs
        self.sjoin = self.store.join
        if (self.ui.configbool('devel', 'all-warnings') or
            self.ui.configbool('devel', 'check-locks')):
            if util.safehasattr(self.svfs, 'vfs'): # this is filtervfs
                self.svfs.vfs.audit = self._getsvfsward(self.svfs.vfs.audit)
            else: # standard vfs
                self.svfs.audit = self._getsvfsward(self.svfs.audit)

        self._dirstatevalidatewarned = False

        self._branchcaches = branchmap.BranchMapCache()
        self._revbranchcache = None
        self._filterpats = {}
        self._datafilters = {}
        self._transref = self._lockref = self._wlockref = None

        # A cache for various files under .hg/ that tracks file changes,
        # (used by the filecache decorator)
        #
        # Maps a property name to its util.filecacheentry
        self._filecache = {}

        # hold sets of revision to be filtered
        # should be cleared when something might have changed the filter value:
        # - new changesets,
        # - phase change,
        # - new obsolescence marker,
        # - working directory parent change,
        # - bookmark changes
        self.filteredrevcache = {}

        # post-dirstate-status hooks
        self._postdsstatus = []

        # generic mapping between names and nodes
        self.names = namespaces.namespaces()

        # Key to signature value.
        self._sparsesignaturecache = {}
        # Signature to cached matcher instance.
        self._sparsematchercache = {}

    def _getvfsward(self, origfunc):
        """build a ward for self.vfs"""
        rref = weakref.ref(self)
        def checkvfs(path, mode=None):
            ret = origfunc(path, mode=mode)
            repo = rref()
            if (repo is None
                or not util.safehasattr(repo, '_wlockref')
                or not util.safehasattr(repo, '_lockref')):
                return
            if mode in (None, 'r', 'rb'):
                return
            if path.startswith(repo.path):
                # truncate name relative to the repository (.hg)
                path = path[len(repo.path) + 1:]
            if path.startswith('cache/'):
                msg = 'accessing cache with vfs instead of cachevfs: "%s"'
                repo.ui.develwarn(msg % path, stacklevel=3, config="cache-vfs")
            if path.startswith('journal.') or path.startswith('undo.'):
                # journal is covered by 'lock'
                if repo._currentlock(repo._lockref) is None:
                    repo.ui.develwarn('write with no lock: "%s"' % path,
                                      stacklevel=3, config='check-locks')
            elif repo._currentlock(repo._wlockref) is None:
                # rest of vfs files are covered by 'wlock'
                #
                # exclude special files
                for prefix in self._wlockfreeprefix:
                    if path.startswith(prefix):
                        return
                repo.ui.develwarn('write with no wlock: "%s"' % path,
                                  stacklevel=3, config='check-locks')
            return ret
        return checkvfs

    def _getsvfsward(self, origfunc):
        """build a ward for self.svfs"""
        rref = weakref.ref(self)
        def checksvfs(path, mode=None):
            ret = origfunc(path, mode=mode)
            repo = rref()
            if repo is None or not util.safehasattr(repo, '_lockref'):
                return
            if mode in (None, 'r', 'rb'):
                return
            if path.startswith(repo.sharedpath):
                # truncate name relative to the repository (.hg)
                path = path[len(repo.sharedpath) + 1:]
            if repo._currentlock(repo._lockref) is None:
                repo.ui.develwarn('write with no lock: "%s"' % path,
                                  stacklevel=4)
            return ret
        return checksvfs

    def close(self):
        self._writecaches()

    def _writecaches(self):
        if self._revbranchcache:
            self._revbranchcache.write()

    def _restrictcapabilities(self, caps):
        if self.ui.configbool('experimental', 'bundle2-advertise'):
            caps = set(caps)
            capsblob = bundle2.encodecaps(bundle2.getrepocaps(self,
                                                              role='client'))
            caps.add('bundle2=' + urlreq.quote(capsblob))
        return caps

    def _writerequirements(self):
        scmutil.writerequires(self.vfs, self.requirements)

    # Don't cache auditor/nofsauditor, or you'll end up with reference cycle:
    # self -> auditor -> self._checknested -> self

    @property
    def auditor(self):
        # This is only used by context.workingctx.match in order to
        # detect files in subrepos.
        return pathutil.pathauditor(self.root, callback=self._checknested)

    @property
    def nofsauditor(self):
        # This is only used by context.basectx.match in order to detect
        # files in subrepos.
        return pathutil.pathauditor(self.root, callback=self._checknested,
                                    realfs=False, cached=True)

    def _checknested(self, path):
        """Determine if path is a legal nested repository."""
        if not path.startswith(self.root):
            return False
        subpath = path[len(self.root) + 1:]
        normsubpath = util.pconvert(subpath)

        # XXX: Checking against the current working copy is wrong in
        # the sense that it can reject things like
        #
        #   $ hg cat -r 10 sub/x.txt
        #
        # if sub/ is no longer a subrepository in the working copy
        # parent revision.
        #
        # However, it can of course also allow things that would have
        # been rejected before, such as the above cat command if sub/
        # is a subrepository now, but was a normal directory before.
        # The old path auditor would have rejected by mistake since it
        # panics when it sees sub/.hg/.
        #
        # All in all, checking against the working copy seems sensible
        # since we want to prevent access to nested repositories on
        # the filesystem *now*.
        ctx = self[None]
        parts = util.splitpath(subpath)
        while parts:
            prefix = '/'.join(parts)
            if prefix in ctx.substate:
                if prefix == normsubpath:
                    return True
                else:
                    sub = ctx.sub(prefix)
                    return sub.checknested(subpath[len(prefix) + 1:])
            else:
                parts.pop()
        return False

    def peer(self):
        return localpeer(self) # not cached to avoid reference cycle

    def unfiltered(self):
        """Return unfiltered version of the repository

        Intended to be overwritten by filtered repo."""
        return self

    def filtered(self, name, visibilityexceptions=None):
        """Return a filtered version of a repository"""
        cls = repoview.newtype(self.unfiltered().__class__)
        return cls(self, name, visibilityexceptions)

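    # Usage sketch (illustrative): ``repo.filtered('visible')`` returns a
    # repoview with hidden (e.g. obsolete) changesets filtered out; the
    # per-filter revision sets are cached in ``filteredrevcache`` above.
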
    @repofilecache('bookmarks', 'bookmarks.current')
    def _bookmarks(self):
        return bookmarks.bmstore(self)

    @property
    def _activebookmark(self):
        return self._bookmarks.active

    # _phasesets depend on changelog. what we need is to call
    # _phasecache.invalidate() if '00changelog.i' was changed, but it
    # can't be easily expressed in filecache mechanism.
    @storecache('phaseroots', '00changelog.i')
    def _phasecache(self):
        return phases.phasecache(self, self._phasedefaults)

    @storecache('obsstore')
    def obsstore(self):
        return obsolete.makestore(self.ui, self)

    @storecache('00changelog.i')
    def changelog(self):
        return changelog.changelog(self.svfs,
                                   trypending=txnutil.mayhavepending(self.root))

    @storecache('00manifest.i')
    def manifestlog(self):
        rootstore = manifest.manifestrevlog(self.svfs)
        return manifest.manifestlog(self.svfs, self, rootstore,
                                    self._storenarrowmatch)

    @repofilecache('dirstate')
    def dirstate(self):
        return self._makedirstate()

    def _makedirstate(self):
        """Extension point for wrapping the dirstate per-repo."""
        sparsematchfn = lambda: sparse.matcher(self)

        return dirstate.dirstate(self.vfs, self.ui, self.root,
                                 self._dirstatevalidate, sparsematchfn)

    def _dirstatevalidate(self, node):
        try:
            self.changelog.rev(node)
            return node
        except error.LookupError:
            if not self._dirstatevalidatewarned:
                self._dirstatevalidatewarned = True
                self.ui.warn(_("warning: ignoring unknown"
                               " working parent %s!\n") % short(node))
            return nullid

    @storecache(narrowspec.FILENAME)
    def narrowpats(self):
        """matcher patterns for this repository's narrowspec

        A tuple of (includes, excludes).
        """
        return narrowspec.load(self)

    @storecache(narrowspec.FILENAME)
    def _storenarrowmatch(self):
        if repository.NARROW_REQUIREMENT not in self.requirements:
            return matchmod.always(self.root, '')
        include, exclude = self.narrowpats
        return narrowspec.match(self.root, include=include, exclude=exclude)

    @storecache(narrowspec.FILENAME)
    def _narrowmatch(self):
        if repository.NARROW_REQUIREMENT not in self.requirements:
            return matchmod.always(self.root, '')
        narrowspec.checkworkingcopynarrowspec(self)
        include, exclude = self.narrowpats
        return narrowspec.match(self.root, include=include, exclude=exclude)

    def narrowmatch(self, match=None, includeexact=False):
        """matcher corresponding to the repo's narrowspec

        If `match` is given, then that will be intersected with the narrow
        matcher.

        If `includeexact` is True, then any exact matches from `match` will
        be included even if they're outside the narrowspec.
        """
        if match:
            if includeexact and not self._narrowmatch.always():
                # do not exclude explicitly-specified paths so that they can
                # be warned later on
                em = matchmod.exact(None, None, match.files())
                nm = matchmod.unionmatcher([self._narrowmatch, em])
                return matchmod.intersectmatchers(match, nm)
            return matchmod.intersectmatchers(match, self._narrowmatch)
        return self._narrowmatch

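    # Illustrative sketch: with a narrowspec that includes only ``src/``,
    # ``repo.narrowmatch()`` matches ``src/a.py`` but not ``docs/a.txt``;
    # passing ``includeexact=True`` with a matcher whose ``files()`` names
    # ``docs/a.txt`` explicitly would still let that exact path through.
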
    def setnarrowpats(self, newincludes, newexcludes):
        narrowspec.save(self, newincludes, newexcludes)
        self.invalidate(clearfilecache=True)

    def __getitem__(self, changeid):
        if changeid is None:
            return context.workingctx(self)
        if isinstance(changeid, context.basectx):
            return changeid
        if isinstance(changeid, slice):
            # wdirrev isn't contiguous so the slice shouldn't include it
            return [self[i]
                    for i in pycompat.xrange(*changeid.indices(len(self)))
                    if i not in self.changelog.filteredrevs]
        try:
            if isinstance(changeid, int):
                node = self.changelog.node(changeid)
                rev = changeid
            elif changeid == 'null':
                node = nullid
                rev = nullrev
            elif changeid == 'tip':
                node = self.changelog.tip()
                rev = self.changelog.rev(node)
            elif changeid == '.':
                # this is a hack to delay/avoid loading obsmarkers
                # when we know that '.' won't be hidden
                node = self.dirstate.p1()
                rev = self.unfiltered().changelog.rev(node)
            elif len(changeid) == 20:
                try:
                    node = changeid
                    rev = self.changelog.rev(changeid)
                except error.FilteredLookupError:
                    changeid = hex(changeid) # for the error message
                    raise
                except LookupError:
                    # check if it might have come from damaged dirstate
                    #
                    # XXX we could avoid the unfiltered if we had a recognizable
                    # exception for filtered changeset access
                    if (self.local()
                        and changeid in self.unfiltered().dirstate.parents()):
                        msg = _("working directory has unknown parent '%s'!")
                        raise error.Abort(msg % short(changeid))
                    changeid = hex(changeid) # for the error message
                    raise

            elif len(changeid) == 40:
                node = bin(changeid)
                rev = self.changelog.rev(node)
            else:
                raise error.ProgrammingError(
                    "unsupported changeid '%s' of type %s" %
                    (changeid, type(changeid)))

            return context.changectx(self, rev, node)

        except (error.FilteredIndexError, error.FilteredLookupError):
            raise error.FilteredRepoLookupError(_("filtered revision '%s'")
                                                % pycompat.bytestr(changeid))
        except (IndexError, LookupError):
            raise error.RepoLookupError(
                _("unknown revision '%s'") % pycompat.bytestr(changeid))
        except error.WdirUnsupported:
            return context.workingctx(self)

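    # Usage sketch (illustrative): ``repo[None]`` is the working directory
    # context, ``repo[0]`` the first revision, ``repo['tip']`` the tip, and
    # ``repo['.']`` the working directory's first parent; a 20-byte binary
    # node or a 40-character hex string resolves to the matching changectx.
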
    def __contains__(self, changeid):
        """True if the given changeid exists

        error.AmbiguousPrefixLookupError is raised if an ambiguous node is
        specified.
        """
        try:
            self[changeid]
            return True
        except error.RepoLookupError:
            return False

    def __nonzero__(self):
        return True

    __bool__ = __nonzero__

    def __len__(self):
        # no need to pay the cost of repoview.changelog
        unfi = self.unfiltered()
        return len(unfi.changelog)

    def __iter__(self):
        return iter(self.changelog)

    def revs(self, expr, *args):
        '''Find revisions matching a revset.

        The revset is specified as a string ``expr`` that may contain
        %-formatting to escape certain types. See ``revsetlang.formatspec``.

        Revset aliases from the configuration are not expanded. To expand
        user aliases, consider calling ``scmutil.revrange()`` or
        ``repo.anyrevs([expr], user=True)``.

        Returns a revset.abstractsmartset, which is a list-like interface
        that contains integer revisions.
        '''
        tree = revsetlang.spectree(expr, *args)
        return revset.makematcher(tree)(self)

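    # Usage sketch (illustrative): ``repo.revs('heads(%ld)', [0, 1, 2])``
    # escapes the list of revisions via %-formatting and returns the subset
    # that are heads; iterate the returned smartset for integer revisions.
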
    def set(self, expr, *args):
        '''Find revisions matching a revset and emit changectx instances.

        This is a convenience wrapper around ``revs()`` that iterates the
        result and is a generator of changectx instances.

        Revset aliases from the configuration are not expanded. To expand
        user aliases, consider calling ``scmutil.revrange()``.
        '''
        for r in self.revs(expr, *args):
            yield self[r]

    def anyrevs(self, specs, user=False, localalias=None):
        '''Find revisions matching one of the given revsets.

        Revset aliases from the configuration are not expanded by default. To
        expand user aliases, specify ``user=True``. To provide some local
        definitions overriding user aliases, set ``localalias`` to
        ``{name: definitionstring}``.
        '''
        if user:
            m = revset.matchany(self.ui, specs,
                                lookup=revset.lookupfn(self),
                                localalias=localalias)
        else:
            m = revset.matchany(None, specs, localalias=localalias)
        return m(self)

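    # Usage sketch (illustrative): ``repo.anyrevs(['drafts()'], user=True,
    # localalias={'drafts': 'draft() - obsolete()'})`` resolves ``drafts``
    # from ``localalias`` first, falling back to the user's configured
    # revset aliases.
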
    def url(self):
        return 'file:' + self.root

    def hook(self, name, throw=False, **args):
        """Call a hook, passing this repo instance.

        This is a convenience method to aid invoking hooks. Extensions likely
        won't call this unless they have registered a custom hook or are
        replacing code that is expected to call a hook.
        """
        return hook.hook(self.ui, self, name, throw, **args)

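    # Usage sketch (illustrative): ``repo.hook('pretxncommit', throw=True,
    # node=hex(node), parent1=hex(p1), parent2=hex(p2))`` runs the hooks
    # configured for that name; ``throw=True`` aborts the operation if a
    # hook fails.
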
    @filteredpropertycache
    def _tagscache(self):
        '''Returns a tagscache object that contains various tags-related
        caches.'''

        # This simplifies its cache management by having one decorated
        # function (this one) and the rest simply fetch things from it.
        class tagscache(object):
            def __init__(self):
                # These two define the set of tags for this repository. tags
                # maps tag name to node; tagtypes maps tag name to 'global' or
                # 'local'. (Global tags are defined by .hgtags across all
                # heads, and local tags are defined in .hg/localtags.)
                # They constitute the in-memory cache of tags.
                self.tags = self.tagtypes = None

                self.nodetagscache = self.tagslist = None

        cache = tagscache()
        cache.tags, cache.tagtypes = self._findtags()

        return cache

    def tags(self):
        '''return a mapping of tag to node'''
        t = {}
        if self.changelog.filteredrevs:
            tags, tt = self._findtags()
        else:
            tags = self._tagscache.tags
        rev = self.changelog.rev
        for k, v in tags.iteritems():
            try:
                # ignore tags to unknown nodes
                rev(v)
                t[k] = v
            except (error.LookupError, ValueError):
                pass
        return t

    def _findtags(self):
        '''Do the hard work of finding tags. Return a pair of dicts
        (tags, tagtypes) where tags maps tag name to node, and tagtypes
        maps tag name to a string like \'global\' or \'local\'.
        Subclasses or extensions are free to add their own tags, but
        should be aware that the returned dicts will be retained for the
        duration of the localrepo object.'''

        # XXX what tagtype should subclasses/extensions use? Currently
        # mq and bookmarks add tags, but do not set the tagtype at all.
        # Should each extension invent its own tag type? Should there
        # be one tagtype for all such "virtual" tags? Or is the status
        # quo fine?

        # map tag name to (node, hist)
        alltags = tagsmod.findglobaltags(self.ui, self)
        # map tag name to tag type
        tagtypes = dict((tag, 'global') for tag in alltags)

        tagsmod.readlocaltags(self.ui, self, alltags, tagtypes)

        # Build the return dicts. Have to re-encode tag names because
        # the tags module always uses UTF-8 (in order not to lose info
        # writing to the cache), but the rest of Mercurial wants them in
        # local encoding.
        tags = {}
        for (name, (node, hist)) in alltags.iteritems():
            if node != nullid:
                tags[encoding.tolocal(name)] = node
        tags['tip'] = self.changelog.tip()
        tagtypes = dict([(encoding.tolocal(name), value)
                         for (name, value) in tagtypes.iteritems()])
        return (tags, tagtypes)

    def tagtype(self, tagname):
        '''
        return the type of the given tag. result can be:

        'local'  : a local tag
        'global' : a global tag
        None     : tag does not exist
        '''

        return self._tagscache.tagtypes.get(tagname)

    def tagslist(self):
        '''return a list of tags ordered by revision'''
        if not self._tagscache.tagslist:
            l = []
            for t, n in self.tags().iteritems():
                l.append((self.changelog.rev(n), t, n))
            self._tagscache.tagslist = [(t, n) for r, t, n in sorted(l)]

        return self._tagscache.tagslist

    def nodetags(self, node):
        '''return the tags associated with a node'''
        if not self._tagscache.nodetagscache:
            nodetagscache = {}
            for t, n in self._tagscache.tags.iteritems():
                nodetagscache.setdefault(n, []).append(t)
            for tags in nodetagscache.itervalues():
                tags.sort()
            self._tagscache.nodetagscache = nodetagscache
        return self._tagscache.nodetagscache.get(node, [])

    def nodebookmarks(self, node):
        """return the list of bookmarks pointing to the specified node"""
        return self._bookmarks.names(node)

    def branchmap(self):
        '''returns a dictionary {branch: [branchheads]} with branchheads
        ordered by increasing revision number'''
        return self._branchcaches[self]

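    # Usage sketch (illustrative): ``repo.branchmap()['default']`` would give
    # the head nodes on the ``default`` branch, ordered by increasing
    # revision; ``branchtip()`` below returns just the tip node for a branch.
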
1525 @unfilteredmethod
1525 @unfilteredmethod
1526 def revbranchcache(self):
1526 def revbranchcache(self):
1527 if not self._revbranchcache:
1527 if not self._revbranchcache:
1528 self._revbranchcache = branchmap.revbranchcache(self.unfiltered())
1528 self._revbranchcache = branchmap.revbranchcache(self.unfiltered())
1529 return self._revbranchcache
1529 return self._revbranchcache
1530
1530
1531 def branchtip(self, branch, ignoremissing=False):
1531 def branchtip(self, branch, ignoremissing=False):
1532 '''return the tip node for a given branch
1532 '''return the tip node for a given branch
1533
1533
1534 If ignoremissing is True, then this method will not raise an error.
1534 If ignoremissing is True, then this method will not raise an error.
1535 This is helpful for callers that only expect None for a missing branch
1535 This is helpful for callers that only expect None for a missing branch
1536 (e.g. namespace).
1536 (e.g. namespace).
1537
1537
1538 '''
1538 '''
1539 try:
1539 try:
1540 return self.branchmap().branchtip(branch)
1540 return self.branchmap().branchtip(branch)
1541 except KeyError:
1541 except KeyError:
1542 if not ignoremissing:
1542 if not ignoremissing:
1543 raise error.RepoLookupError(_("unknown branch '%s'") % branch)
1543 raise error.RepoLookupError(_("unknown branch '%s'") % branch)
1544 else:
1544 else:
1545 pass
1545 pass
1546
1546
1547 def lookup(self, key):
1547 def lookup(self, key):
1548 return scmutil.revsymbol(self, key).node()
1548 return scmutil.revsymbol(self, key).node()
1549
1549
1550 def lookupbranch(self, key):
1550 def lookupbranch(self, key):
1551 if key in self.branchmap():
1551 if key in self.branchmap():
1552 return key
1552 return key
1553
1553
1554 return scmutil.revsymbol(self, key).branch()
1554 return scmutil.revsymbol(self, key).branch()
1555
1555
1556 def known(self, nodes):
1556 def known(self, nodes):
1557 cl = self.changelog
1557 cl = self.changelog
1558 nm = cl.nodemap
1558 nm = cl.nodemap
1559 filtered = cl.filteredrevs
1559 filtered = cl.filteredrevs
1560 result = []
1560 result = []
1561 for n in nodes:
1561 for n in nodes:
1562 r = nm.get(n)
1562 r = nm.get(n)
1563 resp = not (r is None or r in filtered)
1563 resp = not (r is None or r in filtered)
1564 result.append(resp)
1564 result.append(resp)
1565 return result
1565 return result
1566
1566
1567 def local(self):
1567 def local(self):
1568 return self
1568 return self
1569
1569
1570 def publishing(self):
1570 def publishing(self):
1571 # it's safe (and desirable) to trust the publish flag unconditionally
1571 # it's safe (and desirable) to trust the publish flag unconditionally
1572 # so that we don't finalize changes shared between users via ssh or nfs
1572 # so that we don't finalize changes shared between users via ssh or nfs
1573 return self.ui.configbool('phases', 'publish', untrusted=True)
1573 return self.ui.configbool('phases', 'publish', untrusted=True)
1574
1574
1575 def cancopy(self):
1575 def cancopy(self):
1576 # so statichttprepo's override of local() works
1576 # so statichttprepo's override of local() works
1577 if not self.local():
1577 if not self.local():
1578 return False
1578 return False
1579 if not self.publishing():
1579 if not self.publishing():
1580 return True
1580 return True
1581 # if publishing we can't copy if there is filtered content
1581 # if publishing we can't copy if there is filtered content
1582 return not self.filtered('visible').changelog.filteredrevs
1582 return not self.filtered('visible').changelog.filteredrevs
1583
1583
1584 def shared(self):
1584 def shared(self):
1585 '''the type of shared repository (None if not shared)'''
1585 '''the type of shared repository (None if not shared)'''
1586 if self.sharedpath != self.path:
1586 if self.sharedpath != self.path:
1587 return 'store'
1587 return 'store'
1588 return None
1588 return None
1589
1589
    def wjoin(self, f, *insidef):
        return self.vfs.reljoin(self.root, f, *insidef)

    def setparents(self, p1, p2=nullid):
        with self.dirstate.parentchange():
            copies = self.dirstate.setparents(p1, p2)
            pctx = self[p1]
            if copies:
                # Adjust copy records: the dirstate cannot do it itself, as
                # it requires access to the parents' manifests. Preserve them
                # only for entries added to the first parent.
                for f in copies:
                    if f not in pctx and copies[f] in pctx:
                        self.dirstate.copy(copies[f], f)
            if p2 == nullid:
                for f, s in sorted(self.dirstate.copies().items()):
                    if f not in pctx and s not in pctx:
                        self.dirstate.copy(None, f)

    def filectx(self, path, changeid=None, fileid=None, changectx=None):
        """changeid must be a changeset revision, if specified.
        fileid can be a file revision or node."""
        return context.filectx(self, path, changeid, fileid,
                               changectx=changectx)

    def getcwd(self):
        return self.dirstate.getcwd()

    def pathto(self, f, cwd=None):
        return self.dirstate.pathto(f, cwd)

    def _loadfilter(self, filter):
        if filter not in self._filterpats:
            l = []
            for pat, cmd in self.ui.configitems(filter):
                if cmd == '!':
                    continue
                mf = matchmod.match(self.root, '', [pat])
                fn = None
                params = cmd
                for name, filterfn in self._datafilters.iteritems():
                    if cmd.startswith(name):
                        fn = filterfn
                        params = cmd[len(name):].lstrip()
                        break
                if not fn:
                    fn = lambda s, c, **kwargs: procutil.filter(s, c)
                # Wrap old filters not supporting keyword arguments
                if not pycompat.getargspec(fn)[2]:
                    oldfn = fn
                    fn = lambda s, c, **kwargs: oldfn(s, c)
                l.append((mf, fn, params))
            self._filterpats[filter] = l
        return self._filterpats[filter]

    def _filter(self, filterpats, filename, data):
        for mf, fn, cmd in filterpats:
            if mf(filename):
                self.ui.debug("filtering %s through %s\n" % (filename, cmd))
                data = fn(data, cmd, ui=self.ui, repo=self, filename=filename)
                break

        return data

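    # Illustrative sketch (not part of the original source): with an hgrc
    # section such as
    #
    #   [encode]
    #   *.txt = tr -d '\r'
    #
    # _loadfilter('encode') would build one (matcher, fn, params) triple for
    # the pattern, and _filter() would pipe the data of any matching file
    # through the shell command via procutil.filter().
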
    @unfilteredpropertycache
    def _encodefilterpats(self):
        return self._loadfilter('encode')

    @unfilteredpropertycache
    def _decodefilterpats(self):
        return self._loadfilter('decode')

    def adddatafilter(self, name, filter):
        self._datafilters[name] = filter

1666 if self.wvfs.islink(filename):
1666 if self.wvfs.islink(filename):
1667 data = self.wvfs.readlink(filename)
1667 data = self.wvfs.readlink(filename)
1668 else:
1668 else:
1669 data = self.wvfs.read(filename)
1669 data = self.wvfs.read(filename)
1670 return self._filter(self._encodefilterpats, filename, data)
1670 return self._filter(self._encodefilterpats, filename, data)
1671
1671
1672 def wwrite(self, filename, data, flags, backgroundclose=False, **kwargs):
1672 def wwrite(self, filename, data, flags, backgroundclose=False, **kwargs):
1673 """write ``data`` into ``filename`` in the working directory
1673 """write ``data`` into ``filename`` in the working directory
1674
1674
1675 This returns length of written (maybe decoded) data.
1675 This returns length of written (maybe decoded) data.
1676 """
1676 """
1677 data = self._filter(self._decodefilterpats, filename, data)
1677 data = self._filter(self._decodefilterpats, filename, data)
1678 if 'l' in flags:
1678 if 'l' in flags:
1679 self.wvfs.symlink(data, filename)
1679 self.wvfs.symlink(data, filename)
1680 else:
1680 else:
1681 self.wvfs.write(filename, data, backgroundclose=backgroundclose,
1681 self.wvfs.write(filename, data, backgroundclose=backgroundclose,
1682 **kwargs)
1682 **kwargs)
1683 if 'x' in flags:
1683 if 'x' in flags:
1684 self.wvfs.setflags(filename, False, True)
1684 self.wvfs.setflags(filename, False, True)
1685 else:
1685 else:
1686 self.wvfs.setflags(filename, False, False)
1686 self.wvfs.setflags(filename, False, False)
1687 return len(data)
1687 return len(data)
1688
1688
1689 def wwritedata(self, filename, data):
1689 def wwritedata(self, filename, data):
1690 return self._filter(self._decodefilterpats, filename, data)
1690 return self._filter(self._decodefilterpats, filename, data)
1691
1691
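    # Illustrative usage (not part of the original source): the ``flags``
    # argument carries the manifest flags handled above, e.g.
    #
    #   repo.wwrite('plain.txt', b'data', '')        # regular file
    #   repo.wwrite('run.sh', b'#!/bin/sh\n', 'x')   # executable file
    #   repo.wwrite('link', b'target', 'l')          # symlink to "target"
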
    def currenttransaction(self):
        """return the current transaction or None if none exists"""
        if self._transref:
            tr = self._transref()
        else:
            tr = None

        if tr and tr.running():
            return tr
        return None

    def transaction(self, desc, report=None):
        if (self.ui.configbool('devel', 'all-warnings')
            or self.ui.configbool('devel', 'check-locks')):
            if self._currentlock(self._lockref) is None:
                raise error.ProgrammingError('transaction requires locking')
        tr = self.currenttransaction()
        if tr is not None:
            return tr.nest(name=desc)

        # abort here if the journal already exists
        if self.svfs.exists("journal"):
            raise error.RepoError(
                _("abandoned transaction found"),
                hint=_("run 'hg recover' to clean up transaction"))

        idbase = "%.40f#%f" % (random.random(), time.time())
        ha = hex(hashlib.sha1(idbase).digest())
        txnid = 'TXN:' + ha
        self.hook('pretxnopen', throw=True, txnname=desc, txnid=txnid)

        self._writejournal(desc)
        renames = [(vfs, x, undoname(x)) for vfs, x in self._journalfiles()]
        if report:
            rp = report
        else:
            rp = self.ui.warn
        vfsmap = {'plain': self.vfs, 'store': self.svfs} # root of .hg/
        # we must avoid a cyclic reference between repo and transaction.
        reporef = weakref.ref(self)
        # Code to track tag movement
        #
        # Since tags are all handled as file content, it is actually quite
        # hard to track these movements from a code perspective. So we fall
        # back to tracking at the repository level. One could envision
        # tracking changes to the '.hgtags' file through changegroup apply,
        # but that fails to cope with cases where a transaction exposes new
        # heads without a changegroup being involved (eg: phase movement).
        #
        # For now, we gate the feature behind a flag since it likely comes
        # with performance impacts. The current code runs more often than
        # needed and does not use caches as much as it could. The current
        # focus is on the behavior of the feature, so we disable it by
        # default. The flag will be removed when we are happy with the
        # performance impact.
        #
        # Once this feature is no longer experimental, move the following
        # documentation to the appropriate help section:
        #
        # The ``HG_TAG_MOVED`` variable will be set if the transaction touched
        # tags (new, changed or deleted tags). In addition, the details of
        # these changes are made available in a file at:
        #     ``REPOROOT/.hg/changes/tags.changes``.
        # Make sure you check for HG_TAG_MOVED before reading that file as it
        # might exist from a previous transaction even if no tags were touched
        # in this one. Changes are recorded in a line-based format::
        #
        #   <action> <hex-node> <tag-name>\n
        #
        # Actions are defined as follows:
        #   "-R": tag is removed,
        #   "+A": tag is added,
        #   "-M": tag is moved (old value),
        #   "+M": tag is moved (new value),
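        # For example (illustrative values, not part of the original source),
        # moving tag "v1.0" to a new node would record two lines::
        #
        #   -M 6f4752ea91bd89a6095f9fe06b9d3fcbee09f44c v1.0
        #   +M a3c1f2e4d5b69780123456789abcdef012345678 v1.0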
        tracktags = lambda x: None
        # experimental config: experimental.hook-track-tags
        shouldtracktags = self.ui.configbool('experimental', 'hook-track-tags')
        if desc != 'strip' and shouldtracktags:
            oldheads = self.changelog.headrevs()
            def tracktags(tr2):
                repo = reporef()
                oldfnodes = tagsmod.fnoderevs(repo.ui, repo, oldheads)
                newheads = repo.changelog.headrevs()
                newfnodes = tagsmod.fnoderevs(repo.ui, repo, newheads)
                # note: we compare lists here. As we do it only once,
                # building a set would not be cheaper.
                changes = tagsmod.difftags(repo.ui, repo, oldfnodes, newfnodes)
                if changes:
                    tr2.hookargs['tag_moved'] = '1'
                    with repo.vfs('changes/tags.changes', 'w',
                                  atomictemp=True) as changesfile:
                        # note: we do not register the file to the transaction
                        # because we need it to still exist when the
                        # transaction is closed (for txnclose hooks)
                        tagsmod.writediff(changesfile, changes)
        def validate(tr2):
            """will run pre-closing hooks"""
            # XXX the transaction API is a bit lacking here so we take a hacky
            # path for now
            #
            # We cannot add this as a "pending" hook since the 'tr.hookargs'
            # dict is copied before these run. In addition, we need the data
            # available to in-memory hooks too.
            #
            # Moreover, we also need to make sure this runs before txnclose
            # hooks and there is no "pending" mechanism that would execute
            # logic only if hooks are about to run.
            #
            # Fixing this limitation of the transaction is also needed to
            # track other families of changes (bookmarks, phases,
            # obsolescence).
            #
            # This will have to be fixed before we remove the experimental
            # gating.
            tracktags(tr2)
            repo = reporef()
            if repo.ui.configbool('experimental', 'single-head-per-branch'):
                scmutil.enforcesinglehead(repo, tr2, desc)
            if hook.hashook(repo.ui, 'pretxnclose-bookmark'):
                for name, (old, new) in sorted(tr.changes['bookmarks'].items()):
                    args = tr.hookargs.copy()
                    args.update(bookmarks.preparehookargs(name, old, new))
                    repo.hook('pretxnclose-bookmark', throw=True,
                              txnname=desc,
                              **pycompat.strkwargs(args))
            if hook.hashook(repo.ui, 'pretxnclose-phase'):
                cl = repo.unfiltered().changelog
                for rev, (old, new) in tr.changes['phases'].items():
                    args = tr.hookargs.copy()
                    node = hex(cl.node(rev))
                    args.update(phases.preparehookargs(node, old, new))
                    repo.hook('pretxnclose-phase', throw=True, txnname=desc,
                              **pycompat.strkwargs(args))

            repo.hook('pretxnclose', throw=True,
                      txnname=desc, **pycompat.strkwargs(tr.hookargs))
        def releasefn(tr, success):
            repo = reporef()
            if success:
                # this should be explicitly invoked here, because in-memory
                # changes aren't written out when closing the transaction if
                # tr.addfilegenerator (via dirstate.write or so) wasn't
                # invoked while the transaction was running
                repo.dirstate.write(None)
            else:
                # discard all changes (including ones already written
                # out) in this transaction
                narrowspec.restorebackup(self, 'journal.narrowspec')
                narrowspec.restorewcbackup(self, 'journal.narrowspec.dirstate')
                repo.dirstate.restorebackup(None, 'journal.dirstate')

                repo.invalidate(clearfilecache=True)

        tr = transaction.transaction(rp, self.svfs, vfsmap,
                                     "journal",
                                     "undo",
                                     aftertrans(renames),
                                     self.store.createmode,
                                     validator=validate,
                                     releasefn=releasefn,
                                     checkambigfiles=_cachedfiles,
                                     name=desc)
        tr.changes['origrepolen'] = len(self)
        tr.changes['obsmarkers'] = set()
        tr.changes['phases'] = {}
        tr.changes['bookmarks'] = {}

        tr.hookargs['txnid'] = txnid
        # note: writing the fncache only during finalize means that the file
        # is outdated when running hooks. As fncache is used for streaming
        # clones, this is not expected to break anything that happens during
        # the hooks.
        tr.addfinalize('flush-fncache', self.store.write)
        def txnclosehook(tr2):
            """To be run if transaction is successful, will schedule a hook run
            """
            # Don't reference tr2 in hook() so we don't hold a reference.
            # This reduces memory consumption when there are multiple
            # transactions per lock. This can likely go away if issue5045
            # fixes the function accumulation.
            hookargs = tr2.hookargs

            def hookfunc():
                repo = reporef()
                if hook.hashook(repo.ui, 'txnclose-bookmark'):
                    bmchanges = sorted(tr.changes['bookmarks'].items())
                    for name, (old, new) in bmchanges:
                        args = tr.hookargs.copy()
                        args.update(bookmarks.preparehookargs(name, old, new))
                        repo.hook('txnclose-bookmark', throw=False,
                                  txnname=desc, **pycompat.strkwargs(args))

                if hook.hashook(repo.ui, 'txnclose-phase'):
                    cl = repo.unfiltered().changelog
                    phasemv = sorted(tr.changes['phases'].items())
                    for rev, (old, new) in phasemv:
                        args = tr.hookargs.copy()
                        node = hex(cl.node(rev))
                        args.update(phases.preparehookargs(node, old, new))
                        repo.hook('txnclose-phase', throw=False, txnname=desc,
                                  **pycompat.strkwargs(args))

                repo.hook('txnclose', throw=False, txnname=desc,
                          **pycompat.strkwargs(hookargs))
            reporef()._afterlock(hookfunc)
        tr.addfinalize('txnclose-hook', txnclosehook)
        # Include a leading "-" to make it happen before the transaction
        # summary reports registered via scmutil.registersummarycallback()
        # whose names are 00-txnreport etc. That way, the caches will be warm
        # when the callbacks run.
        tr.addpostclose('-warm-cache', self._buildcacheupdater(tr))
        def txnaborthook(tr2):
            """To be run if transaction is aborted
            """
            reporef().hook('txnabort', throw=False, txnname=desc,
                           **pycompat.strkwargs(tr2.hookargs))
        tr.addabort('txnabort-hook', txnaborthook)
        # avoid eager cache invalidation. in-memory data should be identical
        # to stored data if transaction has no error.
        tr.addpostclose('refresh-filecachestats', self._refreshfilecachestats)
        self._transref = weakref.ref(tr)
        scmutil.registersummarycallback(self, tr, desc)
        return tr
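    # Illustrative usage (not part of the original source): callers acquire
    # the store lock before opening a transaction, e.g.
    #
    #   with repo.lock():
    #       with repo.transaction('my-operation') as tr:
    #           ...  # mutate the store; an exception aborts and rolls back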

    def _journalfiles(self):
        return ((self.svfs, 'journal'),
                (self.svfs, 'journal.narrowspec'),
                (self.vfs, 'journal.narrowspec.dirstate'),
                (self.vfs, 'journal.dirstate'),
                (self.vfs, 'journal.branch'),
                (self.vfs, 'journal.desc'),
                (self.vfs, 'journal.bookmarks'),
                (self.svfs, 'journal.phaseroots'))

    def undofiles(self):
        return [(vfs, undoname(x)) for vfs, x in self._journalfiles()]
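
    # For illustration (not part of the original source): undoname() maps a
    # journal file name to its post-transaction backup name, e.g.
    #
    #   undoname('journal.dirstate') -> 'undo.dirstate'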

    @unfilteredmethod
    def _writejournal(self, desc):
        self.dirstate.savebackup(None, 'journal.dirstate')
        narrowspec.savewcbackup(self, 'journal.narrowspec.dirstate')
        narrowspec.savebackup(self, 'journal.narrowspec')
        self.vfs.write("journal.branch",
                       encoding.fromlocal(self.dirstate.branch()))
        self.vfs.write("journal.desc",
                       "%d\n%s\n" % (len(self), desc))
        self.vfs.write("journal.bookmarks",
                       self.vfs.tryread("bookmarks"))
        self.svfs.write("journal.phaseroots",
                        self.svfs.tryread("phaseroots"))

    def recover(self):
        with self.lock():
            if self.svfs.exists("journal"):
                self.ui.status(_("rolling back interrupted transaction\n"))
                vfsmap = {'': self.svfs,
                          'plain': self.vfs,}
                transaction.rollback(self.svfs, vfsmap, "journal",
                                     self.ui.warn,
                                     checkambigfiles=_cachedfiles)
                self.invalidate()
                return True
            else:
                self.ui.warn(_("no interrupted transaction available\n"))
                return False

    def rollback(self, dryrun=False, force=False):
        wlock = lock = dsguard = None
        try:
            wlock = self.wlock()
            lock = self.lock()
            if self.svfs.exists("undo"):
                dsguard = dirstateguard.dirstateguard(self, 'rollback')

                return self._rollback(dryrun, force, dsguard)
            else:
                self.ui.warn(_("no rollback information available\n"))
                return 1
        finally:
            release(dsguard, lock, wlock)

    @unfilteredmethod # Until we get smarter cache management
    def _rollback(self, dryrun, force, dsguard):
        ui = self.ui
        try:
            args = self.vfs.read('undo.desc').splitlines()
            (oldlen, desc, detail) = (int(args[0]), args[1], None)
            if len(args) >= 3:
                detail = args[2]
            oldtip = oldlen - 1

            if detail and ui.verbose:
                msg = (_('repository tip rolled back to revision %d'
                         ' (undo %s: %s)\n')
                       % (oldtip, desc, detail))
            else:
                msg = (_('repository tip rolled back to revision %d'
                         ' (undo %s)\n')
                       % (oldtip, desc))
        except IOError:
            msg = _('rolling back unknown transaction\n')
            desc = None

        if not force and self['.'] != self['tip'] and desc == 'commit':
            raise error.Abort(
                _('rollback of last commit while not checked out '
                  'may lose data'), hint=_('use -f to force'))

        ui.status(msg)
        if dryrun:
            return 0

        parents = self.dirstate.parents()
        self.destroying()
        vfsmap = {'plain': self.vfs, '': self.svfs}
        transaction.rollback(self.svfs, vfsmap, 'undo', ui.warn,
                             checkambigfiles=_cachedfiles)
        if self.vfs.exists('undo.bookmarks'):
            self.vfs.rename('undo.bookmarks', 'bookmarks', checkambig=True)
        if self.svfs.exists('undo.phaseroots'):
            self.svfs.rename('undo.phaseroots', 'phaseroots', checkambig=True)
        self.invalidate()

        parentgone = any(p not in self.changelog.nodemap for p in parents)
        if parentgone:
            # prevent dirstateguard from overwriting the already restored one
            dsguard.close()

            narrowspec.restorebackup(self, 'undo.narrowspec')
            narrowspec.restorewcbackup(self, 'undo.narrowspec.dirstate')
            self.dirstate.restorebackup(None, 'undo.dirstate')
            try:
                branch = self.vfs.read('undo.branch')
                self.dirstate.setbranch(encoding.tolocal(branch))
            except IOError:
                ui.warn(_('named branch could not be reset: '
                          'current branch is still \'%s\'\n')
                        % self.dirstate.branch())

            parents = tuple([p.rev() for p in self[None].parents()])
            if len(parents) > 1:
                ui.status(_('working directory now based on '
                            'revisions %d and %d\n') % parents)
            else:
                ui.status(_('working directory now based on '
                            'revision %d\n') % parents)
            mergemod.mergestate.clean(self, self['.'].node())

        # TODO: if we know which new heads may result from this rollback, pass
        # them to destroy(), which will prevent the branchhead cache from being
        # invalidated.
        self.destroyed()
        return 0

    def _buildcacheupdater(self, newtransaction):
        """called during a transaction to build the callback updating caches

        Lives on the repository to help extensions that might want to augment
        this logic. For this purpose, the created transaction is passed to
        the method.
        """
        # we must avoid a cyclic reference between repo and transaction.
        reporef = weakref.ref(self)
        def updater(tr):
            repo = reporef()
            repo.updatecaches(tr)
        return updater

    @unfilteredmethod
    def updatecaches(self, tr=None, full=False):
        """warm appropriate caches

        If this function is called after a transaction has closed, the
        transaction will be available in the 'tr' argument. This can be used
        to selectively update caches relevant to the changes in that
        transaction.

        If 'full' is set, make sure all caches the function knows about have
        up-to-date data. Even the ones usually loaded more lazily.
        """
        if tr is not None and tr.hookargs.get('source') == 'strip':
            # During strip, many caches are invalid but
            # a later call to `destroyed` will refresh them.
            return

        if tr is None or tr.changes['origrepolen'] < len(self):
            # accessing the 'served' branchmap should refresh all the others,
            self.ui.debug('updating the branch cache\n')
            self.filtered('served').branchmap()

        if full:
            rbc = self.revbranchcache()
            for r in self.changelog:
                rbc.branchinfo(r)
            rbc.write()

            # ensure the working copy parents are in the manifestfulltextcache
            for ctx in self['.'].parents():
                ctx.manifest() # accessing the manifest is enough

    def invalidatecaches(self):

        if r'_tagscache' in vars(self):
            # can't use delattr on proxy
            del self.__dict__[r'_tagscache']

        self._branchcaches.clear()
        self.invalidatevolatilesets()
        self._sparsesignaturecache.clear()

    def invalidatevolatilesets(self):
        self.filteredrevcache.clear()
        obsolete.clearobscaches(self)
2103 def invalidatedirstate(self):
2103 def invalidatedirstate(self):
2104 '''Invalidates the dirstate, causing the next call to dirstate
2104 '''Invalidates the dirstate, causing the next call to dirstate
2105 to check if it was modified since the last time it was read,
2105 to check if it was modified since the last time it was read,
2106 rereading it if it has.
2106 rereading it if it has.
2107
2107
2108 This is different to dirstate.invalidate() that it doesn't always
2108 This is different to dirstate.invalidate() that it doesn't always
2109 rereads the dirstate. Use dirstate.invalidate() if you want to
2109 rereads the dirstate. Use dirstate.invalidate() if you want to
2110 explicitly read the dirstate again (i.e. restoring it to a previous
2110 explicitly read the dirstate again (i.e. restoring it to a previous
2111 known good state).'''
2111 known good state).'''
2112 if hasunfilteredcache(self, r'dirstate'):
2112 if hasunfilteredcache(self, r'dirstate'):
2113 for k in self.dirstate._filecache:
2113 for k in self.dirstate._filecache:
2114 try:
2114 try:
2115 delattr(self.dirstate, k)
2115 delattr(self.dirstate, k)
2116 except AttributeError:
2116 except AttributeError:
2117 pass
2117 pass
2118 delattr(self.unfiltered(), r'dirstate')
2118 delattr(self.unfiltered(), r'dirstate')

    def invalidate(self, clearfilecache=False):
        '''Invalidates both store and non-store parts other than dirstate

        If a transaction is running, invalidation of store is omitted,
        because discarding in-memory changes might cause inconsistency
        (e.g. an incomplete fncache causes unintentional failure, but
        a redundant one doesn't).
        '''
        unfiltered = self.unfiltered() # all file caches are stored unfiltered
        for k in list(self._filecache.keys()):
            # dirstate is invalidated separately in invalidatedirstate()
            if k == 'dirstate':
                continue
            if (k == 'changelog' and
                self.currenttransaction() and
                self.changelog._delayed):
                # The changelog object may store unwritten revisions. We don't
                # want to lose them.
                # TODO: Solve the problem instead of working around it.
                continue

            if clearfilecache:
                del self._filecache[k]
            try:
                delattr(unfiltered, k)
            except AttributeError:
                pass
        self.invalidatecaches()
        if not self.currenttransaction():
            # TODO: Changing contents of store outside transaction
            # causes inconsistency. We should make in-memory store
            # changes detectable, and abort if changed.
            self.store.invalidatecaches()

    def invalidateall(self):
        '''Fully invalidates both store and non-store parts, causing the
        subsequent operation to reread any outside changes.'''
        # extensions should hook this to invalidate their caches
        self.invalidate()
        self.invalidatedirstate()

    @unfilteredmethod
    def _refreshfilecachestats(self, tr):
        """Reload stats of cached files so that they are flagged as valid"""
        for k, ce in self._filecache.items():
            k = pycompat.sysstr(k)
            if k == r'dirstate' or k not in self.__dict__:
                continue
            ce.refresh()

    def _lock(self, vfs, lockname, wait, releasefn, acquirefn, desc,
              inheritchecker=None, parentenvvar=None):
        parentlock = None
        # the contents of parentenvvar are used by the underlying lock to
        # determine whether it can be inherited
        if parentenvvar is not None:
            parentlock = encoding.environ.get(parentenvvar)

        timeout = 0
        warntimeout = 0
        if wait:
            timeout = self.ui.configint("ui", "timeout")
            warntimeout = self.ui.configint("ui", "timeout.warn")
        # internal config: ui.signal-safe-lock
        signalsafe = self.ui.configbool('ui', 'signal-safe-lock')

        l = lockmod.trylock(self.ui, vfs, lockname, timeout, warntimeout,
                            releasefn=releasefn,
                            acquirefn=acquirefn, desc=desc,
                            inheritchecker=inheritchecker,
                            parentlock=parentlock,
                            signalsafe=signalsafe)
        return l

    def _afterlock(self, callback):
        """add a callback to be run when the repository is fully unlocked

        The callback will be executed when the outermost lock is released
        (with wlock being higher level than 'lock')."""
        for ref in (self._wlockref, self._lockref):
            l = ref and ref()
            if l and l.held:
                l.postrelease.append(callback)
                break
        else: # no lock has been found.
            callback()
2207 def lock(self, wait=True):
2207 def lock(self, wait=True):
2208 '''Lock the repository store (.hg/store) and return a weak reference
2208 '''Lock the repository store (.hg/store) and return a weak reference
2209 to the lock. Use this before modifying the store (e.g. committing or
2209 to the lock. Use this before modifying the store (e.g. committing or
2210 stripping). If you are opening a transaction, get a lock as well.)
2210 stripping). If you are opening a transaction, get a lock as well.)
2211
2211
2212 If both 'lock' and 'wlock' must be acquired, ensure you always acquires
2212 If both 'lock' and 'wlock' must be acquired, ensure you always acquires
2213 'wlock' first to avoid a dead-lock hazard.'''
2213 'wlock' first to avoid a dead-lock hazard.'''
2214 l = self._currentlock(self._lockref)
2214 l = self._currentlock(self._lockref)
2215 if l is not None:
2215 if l is not None:
2216 l.lock()
2216 l.lock()
2217 return l
2217 return l
2218
2218
2219 l = self._lock(self.svfs, "lock", wait, None,
2219 l = self._lock(self.svfs, "lock", wait, None,
2220 self.invalidate, _('repository %s') % self.origroot)
2220 self.invalidate, _('repository %s') % self.origroot)
2221 self._lockref = weakref.ref(l)
2221 self._lockref = weakref.ref(l)
2222 return l
2222 return l

    def _wlockchecktransaction(self):
        if self.currenttransaction() is not None:
            raise error.LockInheritanceContractViolation(
                'wlock cannot be inherited in the middle of a transaction')

    def wlock(self, wait=True):
        '''Lock the non-store parts of the repository (everything under
        .hg except .hg/store) and return a weak reference to the lock.

        Use this before modifying files in .hg.

        If both 'lock' and 'wlock' must be acquired, ensure you always
        acquire 'wlock' first to avoid a dead-lock hazard.'''
        l = self._wlockref and self._wlockref()
        if l is not None and l.held:
            l.lock()
            return l

        # We do not need to check for non-waiting lock acquisition. Such
        # acquisition would not cause a dead-lock as it would just fail.
        if wait and (self.ui.configbool('devel', 'all-warnings')
                     or self.ui.configbool('devel', 'check-locks')):
            if self._currentlock(self._lockref) is not None:
                self.ui.develwarn('"wlock" acquired after "lock"')

        def unlock():
            if self.dirstate.pendingparentchange():
                self.dirstate.invalidate()
            else:
                self.dirstate.write(None)

            self._filecache['dirstate'].refresh()

        l = self._lock(self.vfs, "wlock", wait, unlock,
                       self.invalidatedirstate, _('working directory of %s') %
                       self.origroot,
                       inheritchecker=self._wlockchecktransaction,
                       parentenvvar='HG_WLOCK_LOCKER')
        self._wlockref = weakref.ref(l)
        return l
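
    # Illustrative ordering (not part of the original source): callers that
    # need both locks follow the rule documented above, e.g.
    #
    #   with repo.wlock(), repo.lock():
    #       ...  # working-directory and store mutations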

    def _currentlock(self, lockref):
        """Returns the lock if it's held, or None if it's not."""
        if lockref is None:
            return None
        l = lockref()
        if l is None or not l.held:
            return None
        return l

    def currentwlock(self):
        """Returns the wlock if it's held, or None if it's not."""
        return self._currentlock(self._wlockref)

    def _filecommit(self, fctx, manifest1, manifest2, linkrev, tr, changelist):
        """
        commit an individual file as part of a larger transaction
        """

        fname = fctx.path()
        fparent1 = manifest1.get(fname, nullid)
        fparent2 = manifest2.get(fname, nullid)
        if isinstance(fctx, context.filectx):
            node = fctx.filenode()
            if node in [fparent1, fparent2]:
                self.ui.debug('reusing %s filelog entry\n' % fname)
                if manifest1.flags(fname) != fctx.flags():
                    changelist.append(fname)
                return node

        flog = self.file(fname)
        meta = {}
        copy = fctx.renamed()
        if copy and copy[0] != fname:
            # Mark the new revision of this file as a copy of another
            # file. This copy data will effectively act as a parent
            # of this new revision. If this is a merge, the first
            # parent will be the nullid (meaning "look up the copy data")
            # and the second one will be the other parent. For example:
            #
            #    0 --- 1 --- 3   rev1 changes file foo
            #     \       /      rev2 renames foo to bar and changes it
            #      \- 2 -/       rev3 should have bar with all changes and
            #                         should record that bar descends from
            #                         bar in rev2 and foo in rev1
            #
            # this allows this merge to succeed:
            #
            #    0 --- 1 --- 3   rev4 reverts the content change from rev2
            #     \       /      merging rev3 and rev4 should use bar@rev2
            #      \- 2 --- 4    as the merge base
            #

            cfname = copy[0]
            crev = manifest1.get(cfname)
            newfparent = fparent2

            if manifest2: # branch merge
                if fparent2 == nullid or crev is None: # copied on remote side
                    if cfname in manifest2:
                        crev = manifest2[cfname]
                        newfparent = fparent1

            # Here, we used to search backwards through history to try to find
            # where the file copy came from if the source of a copy was not in
            # the parent directory. However, this doesn't actually make sense
            # to do (what does a copy from something not in your working copy
            # even mean?) and it causes bugs (eg, issue4476). Instead, we will
            # warn the user that copy information was dropped, so if they
            # didn't expect this outcome it can be fixed, but this is the
            # correct behavior in this circumstance.

            if crev:
                self.ui.debug(" %s: copy %s:%s\n" % (fname, cfname, hex(crev)))
                meta["copy"] = cfname
                meta["copyrev"] = hex(crev)
                fparent1, fparent2 = nullid, newfparent
            else:
                self.ui.warn(_("warning: can't find ancestor for '%s' "
                               "copied from '%s'!\n") % (fname, cfname))

        elif fparent1 == nullid:
            fparent1, fparent2 = fparent2, nullid
        elif fparent2 != nullid:
            # is one parent an ancestor of the other?
            fparentancestors = flog.commonancestorsheads(fparent1, fparent2)
            if fparent1 in fparentancestors:
                fparent1, fparent2 = fparent2, nullid
            elif fparent2 in fparentancestors:
                fparent2 = nullid

        # is the file changed?
        text = fctx.data()
        if fparent2 != nullid or flog.cmp(fparent1, text) or meta:
            changelist.append(fname)
            return flog.add(text, meta, tr, linkrev, fparent1, fparent2)
        # are just the flags changed during merge?
        elif fname in manifest1 and manifest1.flags(fname) != fctx.flags():
            changelist.append(fname)

        return fparent1
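
    # Illustrative result (not part of the original source): for a file
    # committed as a rename of "foo" to "bar", the filelog metadata built
    # above would look like
    #
    #   meta = {"copy": "foo", "copyrev": "<40-hex filelog node of foo>"}
    #
    # with fparent1 set to nullid so readers look up the copy data.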

    def checkcommitpatterns(self, wctx, vdirs, match, status, fail):
        """check for commit arguments that aren't committable"""
        if match.isexact() or match.prefix():
            matched = set(status.modified + status.added + status.removed)

            for f in match.files():
                f = self.dirstate.normalize(f)
                if f == '.' or f in matched or f in wctx.substate:
                    continue
                if f in status.deleted:
                    fail(f, _('file not found!'))
                if f in vdirs: # visited directory
                    d = f + '/'
                    for mf in matched:
                        if mf.startswith(d):
                            break
                    else:
                        fail(f, _("no match under directory!"))
                elif f not in self.dirstate:
                    fail(f, _("file not tracked!"))
2386
2386
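Note the `for ... else` above: the `else` arm runs only when the loop finishes without `break`, i.e. when no committed file falls under the named directory. A quick illustration of the idiom:

matched = ('a/x', 'b/y')
d = 'c/'
for mf in matched:
    if mf.startswith(d):
        break
else:
    print('no match under directory!')  # reached: no file is under c/
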
    @unfilteredmethod
    def commit(self, text="", user=None, date=None, match=None, force=False,
               editor=False, extra=None):
        """Add a new revision to current repository.

        Revision information is gathered from the working directory,
        match can be used to filter the committed files. If editor is
        supplied, it is called to get a commit message.
        """
        if extra is None:
            extra = {}

        def fail(f, msg):
            raise error.Abort('%s: %s' % (f, msg))

        if not match:
            match = matchmod.always(self.root, '')

        if not force:
            vdirs = []
            match.explicitdir = vdirs.append
            match.bad = fail

        # lock() for recent changelog (see issue4368)
        with self.wlock(), self.lock():
            wctx = self[None]
            merge = len(wctx.parents()) > 1

            if not force and merge and not match.always():
                raise error.Abort(_('cannot partially commit a merge '
                                    '(do not specify files or patterns)'))

            status = self.status(match=match, clean=force)
            if force:
                status.modified.extend(status.clean) # mq may commit clean files

            # check subrepos
            subs, commitsubs, newstate = subrepoutil.precommit(
                self.ui, wctx, status, match, force=force)

            # make sure all explicit patterns are matched
            if not force:
                self.checkcommitpatterns(wctx, vdirs, match, status, fail)

            cctx = context.workingcommitctx(self, status,
                                            text, user, date, extra)

            # internal config: ui.allowemptycommit
            allowemptycommit = (wctx.branch() != wctx.p1().branch()
                                or extra.get('close') or merge or cctx.files()
                                or self.ui.configbool('ui', 'allowemptycommit'))
            if not allowemptycommit:
                return None

            if merge and cctx.deleted():
                raise error.Abort(_("cannot commit merge with missing files"))

            ms = mergemod.mergestate.read(self)
            mergeutil.checkunresolved(ms)

            if editor:
                cctx._text = editor(self, cctx, subs)
            edited = (text != cctx._text)

            # Save commit message in case this transaction gets rolled back
            # (e.g. by a pretxncommit hook). Leave the content alone on
            # the assumption that the user will use the same editor again.
            msgfn = self.savecommitmessage(cctx._text)

            # commit subs and write new state
            if subs:
                for s in sorted(commitsubs):
                    sub = wctx.sub(s)
                    self.ui.status(_('committing subrepository %s\n') %
                                   subrepoutil.subrelpath(sub))
                    sr = sub.commit(cctx._text, user, date)
                    newstate[s] = (newstate[s][0], sr)
                subrepoutil.writestate(self, newstate)

            p1, p2 = self.dirstate.parents()
            hookp1, hookp2 = hex(p1), (p2 != nullid and hex(p2) or '')
            try:
                self.hook("precommit", throw=True, parent1=hookp1,
                          parent2=hookp2)
                with self.transaction('commit'):
                    ret = self.commitctx(cctx, True)
                    # update bookmarks, dirstate and mergestate
                    bookmarks.update(self, [p1, p2], ret)
                    cctx.markcommitted(ret)
                    ms.reset()
            except: # re-raises
                if edited:
                    self.ui.write(
                        _('note: commit message saved in %s\n') % msgfn)
                raise

            def commithook(node=hex(ret), parent1=hookp1, parent2=hookp2):
                # hack for commands that use a temporary commit (eg: histedit);
                # if the temporary commit was stripped before the hook runs,
                # skip firing it
                if self.changelog.hasnode(ret):
                    self.hook("commit", node=node, parent1=parent1,
                              parent2=parent2)
            self._afterlock(commithook)
            return ret

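For orientation, a hedged sketch of driving commit() through the internal API (this is not a stable interface, and Mercurial's py3 code wants bytes for paths and messages):

from mercurial import hg, ui as uimod

ui = uimod.ui.load()
repo = hg.repository(ui, b'.')
# returns the new changeset's node, or None if nothing was committed
node = repo.commit(text=b'example commit',
                   user=b'alice <alice@example.com>')
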
    @unfilteredmethod
    def commitctx(self, ctx, error=False):
        """Add a new revision to current repository.
        Revision information is passed via the context argument.

        ctx.files() should list all files involved in this commit, i.e.
        modified/added/removed files. On merge, it may be wider than the
        ctx.files() to be committed, since any file nodes derived directly
        from p1 or p2 are excluded from the committed ctx.files().
        """

        p1, p2 = ctx.p1(), ctx.p2()
        user = ctx.user()

        with self.lock(), self.transaction("commit") as tr:
            trp = weakref.proxy(tr)

            if ctx.manifestnode():
                # reuse an existing manifest revision
                self.ui.debug('reusing known manifest\n')
                mn = ctx.manifestnode()
                files = ctx.files()
            elif ctx.files():
                m1ctx = p1.manifestctx()
                m2ctx = p2.manifestctx()
                mctx = m1ctx.copy()

                m = mctx.read()
                m1 = m1ctx.read()
                m2 = m2ctx.read()

                # check in files
                added = []
                changed = []
                removed = list(ctx.removed())
                linkrev = len(self)
                self.ui.note(_("committing files:\n"))
                for f in sorted(ctx.modified() + ctx.added()):
                    self.ui.note(f + "\n")
                    try:
                        fctx = ctx[f]
                        if fctx is None:
                            removed.append(f)
                        else:
                            added.append(f)
                            m[f] = self._filecommit(fctx, m1, m2, linkrev,
                                                    trp, changed)
                            m.setflag(f, fctx.flags())
                    except OSError:
                        self.ui.warn(_("trouble committing %s!\n") % f)
                        raise
                    except IOError as inst:
                        errcode = getattr(inst, 'errno', errno.ENOENT)
                        if error or errcode and errcode != errno.ENOENT:
                            self.ui.warn(_("trouble committing %s!\n") % f)
                        raise

                # update manifest
                removed = [f for f in sorted(removed) if f in m1 or f in m2]
                drop = [f for f in removed if f in m]
                for f in drop:
                    del m[f]
                files = changed + removed
                md = None
                if not files:
                    # if no "files" actually changed in terms of the changelog,
                    # try hard to detect unmodified manifest entry so that the
                    # exact same commit can be reproduced later on convert.
                    md = m1.diff(m, scmutil.matchfiles(self, ctx.files()))
                if not files and md:
                    self.ui.debug('not reusing manifest (no file change in '
                                  'changelog, but manifest differs)\n')
                if files or md:
                    self.ui.note(_("committing manifest\n"))
                    # we're using narrowmatch here since it's already applied at
                    # other stages (such as dirstate.walk), so we're already
                    # ignoring things outside of narrowspec in most cases. The
                    # one case where we might have files outside the narrowspec
                    # at this point is merges, and we already error out in the
                    # case where the merge has files outside of the narrowspec,
                    # so this is safe.
                    mn = mctx.write(trp, linkrev,
                                    p1.manifestnode(), p2.manifestnode(),
                                    added, drop, match=self.narrowmatch())
                else:
                    self.ui.debug('reusing manifest from p1 (listed files '
                                  'actually unchanged)\n')
                    mn = p1.manifestnode()
            else:
                self.ui.debug('reusing manifest from p1 (no file change)\n')
                mn = p1.manifestnode()
                files = []

            # update changelog
            self.ui.note(_("committing changelog\n"))
            self.changelog.delayupdate(tr)
            n = self.changelog.add(mn, files, ctx.description(),
                                   trp, p1.node(), p2.node(),
                                   user, ctx.date(), ctx.extra().copy())
            xp1, xp2 = p1.hex(), p2 and p2.hex() or ''
            self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1,
                      parent2=xp2)
            # put the new commit in the proper phase
            targetphase = subrepoutil.newcommitphase(self.ui, ctx)
            if targetphase:
                # retracting the boundary does not alter the parent changeset;
                # if a parent has a higher phase, the resulting phase will
                # be compliant anyway
                #
                # if the minimal phase was 0 we don't need to retract anything
                phases.registernew(self, tr, targetphase, [n])
            return n

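The manifest handling above boils down to a three-way decision; a compact sketch of just that choice (illustrative only, not hg API):

def manifeststrategy(ctx, files, md):
    """Return which of the three branches above applies."""
    if ctx.manifestnode():
        return 'reuse known manifest'   # caller supplied one: trust it
    if files or md:
        return 'write new manifest'     # content (or just flags) changed
    return 'reuse manifest from p1'     # nothing changed at all
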
    @unfilteredmethod
    def destroying(self):
        '''Inform the repository that nodes are about to be destroyed.
        Intended for use by strip and rollback, so there's a common
        place for anything that has to be done before destroying history.

        This is mostly useful for saving state that is in memory and waiting
        to be flushed when the current lock is released. Because a call to
        destroyed is imminent, the repo will be invalidated causing those
        changes to stay in memory (waiting for the next unlock), or vanish
        completely.
        '''
        # When using the same lock to commit and strip, the phasecache is left
        # dirty after committing. Then when we strip, the repo is invalidated,
        # causing those changes to disappear.
        if '_phasecache' in vars(self):
            self._phasecache.write()

    @unfilteredmethod
    def destroyed(self):
        '''Inform the repository that nodes have been destroyed.
        Intended for use by strip and rollback, so there's a common
        place for anything that has to be done after destroying history.
        '''
        # When one tries to:
        # 1) destroy nodes thus calling this method (e.g. strip)
        # 2) use phasecache somewhere (e.g. commit)
        #
        # then 2) will fail because the phasecache contains nodes that were
        # removed. We can either remove phasecache from the filecache,
        # causing it to reload next time it is accessed, or simply filter
        # the removed nodes now and write the updated cache.
        self._phasecache.filterunknown(self)
        self._phasecache.write()

        # refresh all repository caches
        self.updatecaches()

        # Ensure the persistent tag cache is updated. Doing it now
        # means that the tag cache only has to worry about destroyed
        # heads immediately after a strip/rollback. That in turn
        # guarantees that "cachetip == currenttip" (comparing both rev
        # and node) always means no nodes have been added or destroyed.

        # XXX this is suboptimal when qrefresh'ing: we strip the current
        # head, refresh the tag cache, then immediately add a new head.
        # But I think doing it this way is necessary for the "instant
        # tag cache retrieval" case to work.
        self.invalidate()

    def status(self, node1='.', node2=None, match=None,
               ignored=False, clean=False, unknown=False,
               listsubrepos=False):
        '''a convenience method that calls node1.status(node2)'''
        return self[node1].status(node2, match, ignored, clean, unknown,
                                  listsubrepos)

    def addpostdsstatus(self, ps):
        """Add a callback to run within the wlock, at the point at which status
        fixups happen.

        On status completion, callback(wctx, status) will be called with the
        wlock held, unless the dirstate has changed from underneath or the wlock
        couldn't be grabbed.

        Callbacks should not capture and use a cached copy of the dirstate --
        it might change in the meanwhile. Instead, they should access the
        dirstate via wctx.repo().dirstate.

        This list is emptied out after each status run -- extensions should
        make sure they add to this list each time dirstate.status is called.
        Extensions should also make sure they don't call this for statuses
        that don't involve the dirstate.
        """

        # The list is located here for uniqueness reasons -- it is actually
        # managed by the workingctx, but that isn't unique per-repo.
        self._postdsstatus.append(ps)

    def postdsstatus(self):
        """Used by workingctx to get the list of post-dirstate-status hooks."""
        return self._postdsstatus

    def clearpostdsstatus(self):
        """Used by workingctx to clear post-dirstate-status hooks."""
        del self._postdsstatus[:]

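A hedged sketch of the callback protocol documented above (the fsmonitor extension uses this mechanism; the names here are illustrative):

def _poststatus(wctx, status):
    # runs with the wlock held, after status fixups; always reach the
    # dirstate through wctx.repo() rather than a captured reference
    repo = wctx.repo()
    repo.ui.debug(b'%d files modified\n' % len(status.modified))

# given some repo object: the list is cleared after every status run,
# so an extension re-registers the callback each time it triggers
# dirstate.status
repo.addpostdsstatus(_poststatus)
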
    def heads(self, start=None):
        if start is None:
            cl = self.changelog
            headrevs = reversed(cl.headrevs())
            return [cl.node(rev) for rev in headrevs]

        heads = self.changelog.heads(start)
        # sort the output in rev descending order
        return sorted(heads, key=self.changelog.rev, reverse=True)

    def branchheads(self, branch=None, start=None, closed=False):
        '''return a (possibly filtered) list of heads for the given branch

        Heads are returned in topological order, from newest to oldest.
        If branch is None, use the dirstate branch.
        If start is not None, return only heads reachable from start.
        If closed is True, return heads that are marked as closed as well.
        '''
        if branch is None:
            branch = self[None].branch()
        branches = self.branchmap()
        if branch not in branches:
            return []
        # the cache returns heads ordered lowest to highest
        bheads = list(reversed(branches.branchheads(branch, closed=closed)))
        if start is not None:
            # filter out the heads that cannot be reached from startrev
            fbheads = set(self.changelog.nodesbetween([start], bheads)[2])
            bheads = [h for h in bheads if h in fbheads]
        return bheads

    def branches(self, nodes):
        if not nodes:
            nodes = [self.changelog.tip()]
        b = []
        for n in nodes:
            t = n
            while True:
                p = self.changelog.parents(n)
                if p[1] != nullid or p[0] == nullid:
                    b.append((t, n, p[0], p[1]))
                    break
                n = p[0]
        return b

    def between(self, pairs):
        r = []

        for top, bottom in pairs:
            n, l, i = top, [], 0
            f = 1

            while n != bottom and n != nullid:
                p = self.changelog.parents(n)[0]
                if i == f:
                    l.append(n)
                    f = f * 2
                n = p
                i += 1

            r.append(l)

        return r

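between() samples nodes at exponentially growing first-parent distances below each top (1, 2, 4, 8, ...), stopping at bottom, which keeps replies small for the legacy discovery protocol. The same logic over a plain parent dict (a hedged standalone sketch, not hg API):

def samplebetween(parentof, top, bottom):
    """Collect nodes 1, 2, 4, 8, ... first-parent steps below top."""
    n, sampled, i, f = top, [], 0, 1
    while n != bottom and n is not None:
        p = parentof.get(n)
        if i == f:
            sampled.append(n)
            f *= 2
        n = p
        i += 1
    return sampled

# e.g. a linear history e -> d -> c -> b -> a:
parents = {'e': 'd', 'd': 'c', 'c': 'b', 'b': 'a', 'a': None}
print(samplebetween(parents, 'e', 'a'))  # ['d', 'c']
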
    def checkpush(self, pushop):
        """Extensions can override this function if additional checks have
        to be performed before pushing, or call it if they override push
        command.
        """

    @unfilteredpropertycache
    def prepushoutgoinghooks(self):
        """Return a util.hooks object whose hooks are called with a pushop
        (carrying repo, remote and outgoing attributes) before changesets
        are pushed.
        """
        return util.hooks()

    def pushkey(self, namespace, key, old, new):
        try:
            tr = self.currenttransaction()
            hookargs = {}
            if tr is not None:
                hookargs.update(tr.hookargs)
            hookargs = pycompat.strkwargs(hookargs)
            hookargs[r'namespace'] = namespace
            hookargs[r'key'] = key
            hookargs[r'old'] = old
            hookargs[r'new'] = new
            self.hook('prepushkey', throw=True, **hookargs)
        except error.HookAbort as exc:
            self.ui.write_err(_("pushkey-abort: %s\n") % exc)
            if exc.hint:
                self.ui.write_err(_("(%s)\n") % exc.hint)
            return False
        self.ui.debug('pushing key for "%s:%s"\n' % (namespace, key))
        ret = pushkey.push(self, namespace, key, old, new)
        def runhook():
            self.hook('pushkey', namespace=namespace, key=key, old=old, new=new,
                      ret=ret)
        self._afterlock(runhook)
        return ret

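The prepushkey hook above can veto the update. A hedged sketch of an in-process Python hook (a hypothetical myhooks.py, wired up via a config line such as "prepushkey.check = python:/path/to/myhooks.py:rejecttmp" in the [hooks] section); for pre-hooks a truthy return value aborts:

def rejecttmp(ui, repo, namespace=None, key=None, old=None, new=None,
              **kwargs):
    # refuse bookmarks pushed under a tmp/ prefix
    if namespace == b'bookmarks' and key and key.startswith(b'tmp/'):
        ui.warn(b'temporary bookmarks are private\n')
        return True   # truthy result -> HookAbort -> pushkey returns False
    return False
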
    def listkeys(self, namespace):
        self.hook('prelistkeys', throw=True, namespace=namespace)
        self.ui.debug('listing keys for "%s"\n' % namespace)
        values = pushkey.list(self, namespace)
        self.hook('listkeys', namespace=namespace, values=values)
        return values

    def debugwireargs(self, one, two, three=None, four=None, five=None):
        '''used to test argument passing over the wire'''
        return "%s %s %s %s %s" % (one, two, pycompat.bytestr(three),
                                   pycompat.bytestr(four),
                                   pycompat.bytestr(five))

    def savecommitmessage(self, text):
        fp = self.vfs('last-message.txt', 'wb')
        try:
            fp.write(text)
        finally:
            fp.close()
        return self.pathto(fp.name[len(self.root) + 1:])

# used to avoid circular references so destructors work
def aftertrans(files):
    renamefiles = [tuple(t) for t in files]
    def a():
        for vfs, src, dest in renamefiles:
            # if src and dest refer to the same file, vfs.rename is a no-op,
            # leaving both src and dest on disk. delete dest to make sure
            # the rename cannot end up being such a no-op.
            vfs.tryunlink(dest)
            try:
                vfs.rename(src, dest)
            except OSError: # journal file does not yet exist
                pass
    return a

def undoname(fn):
    base, name = os.path.split(fn)
    assert name.startswith('journal')
    return os.path.join(base, name.replace('journal', 'undo', 1))

def instance(ui, path, create, intents=None, createopts=None):
    localpath = util.urllocalpath(path)
    if create:
        createrepository(ui, localpath, createopts=createopts)

    return makelocalrepository(ui, localpath, intents=intents)

def islocal(path):
    return True

def defaultcreateopts(ui, createopts=None):
    """Populate the default creation options for a repository.

    A dictionary of explicitly requested creation options can be passed
    in. Missing keys will be populated.
    """
    createopts = dict(createopts or {})

    if 'backend' not in createopts:
        # experimental config: storage.new-repo-backend
        createopts['backend'] = ui.config('storage', 'new-repo-backend')

    return createopts

def newreporequirements(ui, createopts):
    """Determine the set of requirements for a new local repository.

    Extensions can wrap this function to specify custom requirements for
    new repositories.
    """
    # If the repo is being created from a shared repository, we copy
    # its requirements.
    if 'sharedrepo' in createopts:
        requirements = set(createopts['sharedrepo'].requirements)
        if createopts.get('sharedrelative'):
            requirements.add('relshared')
        else:
            requirements.add('shared')

        return requirements

    if 'backend' not in createopts:
        raise error.ProgrammingError('backend key not present in createopts; '
                                     'was defaultcreateopts() called?')

    if createopts['backend'] != 'revlogv1':
        raise error.Abort(_('unable to determine repository requirements for '
                            'storage backend: %s') % createopts['backend'])

    requirements = {'revlogv1'}
    if ui.configbool('format', 'usestore'):
        requirements.add('store')
        if ui.configbool('format', 'usefncache'):
            requirements.add('fncache')
            if ui.configbool('format', 'dotencode'):
                requirements.add('dotencode')

    compengine = ui.config('experimental', 'format.compression')
    if compengine not in util.compengines:
        raise error.Abort(_('compression engine %s defined by '
                            'experimental.format.compression not available') %
                          compengine,
                          hint=_('run "hg debuginstall" to list available '
                                 'compression engines'))

    # zlib is the historical default and doesn't need an explicit requirement.
    if compengine != 'zlib':
        requirements.add('exp-compression-%s' % compengine)

    if scmutil.gdinitconfig(ui):
        requirements.add('generaldelta')
    if ui.configbool('format', 'sparse-revlog'):
        requirements.add(SPARSEREVLOG_REQUIREMENT)
    if ui.configbool('experimental', 'treemanifest'):
        requirements.add('treemanifest')

    revlogv2 = ui.config('experimental', 'revlogv2')
    if revlogv2 == 'enable-unstable-format-and-corrupt-my-data':
        requirements.remove('revlogv1')
        # generaldelta is implied by revlogv2.
        requirements.discard('generaldelta')
        requirements.add(REVLOGV2_REQUIREMENT)
    # experimental config: format.internal-phase
    if ui.configbool('format', 'internal-phase'):
        requirements.add('internal-phase')

    if createopts.get('narrowfiles'):
        requirements.add(repository.NARROW_REQUIREMENT)

    if createopts.get('lfs'):
        requirements.add('lfs')

    return requirements

def filterknowncreateopts(ui, createopts):
    """Filters a dict of repo creation options against options that are known.

    Receives a dict of repo creation options and returns a dict of those
    options that we don't know how to handle.

    This function is called as part of repository creation. If the
    returned dict contains any items, repository creation will not
    be allowed, as it means there was a request to create a repository
    with options not recognized by loaded code.

    Extensions can wrap this function to filter out creation options
    they know how to handle.
    """
    known = {
        'backend',
        'lfs',
        'narrowfiles',
        'sharedrepo',
        'sharedrelative',
        'shareditems',
        'shallowfilestore',
    }

    return {k: v for k, v in createopts.items() if k not in known}

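Together, newreporequirements() and filterknowncreateopts() give extensions a seam for custom repository formats: add the requirement in the former, claim the creation option in the latter. A hedged sketch of an extension doing both (the 'myfeature' option and 'exp-myfeature' requirement are invented for illustration):

from mercurial import extensions, localrepo

def _newreporequirements(orig, ui, createopts):
    requirements = orig(ui, createopts)
    if createopts.get('myfeature'):
        requirements.add('exp-myfeature')
    return requirements

def _filterknowncreateopts(orig, ui, createopts):
    unknown = orig(ui, createopts)
    unknown.pop('myfeature', None)  # we know how to handle this one
    return unknown

def extsetup(ui):
    extensions.wrapfunction(localrepo, 'newreporequirements',
                            _newreporequirements)
    extensions.wrapfunction(localrepo, 'filterknowncreateopts',
                            _filterknowncreateopts)
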
def createrepository(ui, path, createopts=None):
    """Create a new repository in a vfs.

    ``path`` path to the new repo's working directory.
    ``createopts`` options for the new repository.

    The following keys for ``createopts`` are recognized:

    backend
       The storage backend to use.
    lfs
       Repository will be created with ``lfs`` requirement. The lfs extension
       will automatically be loaded when the repository is accessed.
    narrowfiles
       Set up repository to support narrow file storage.
    sharedrepo
       Repository object from which storage should be shared.
    sharedrelative
       Boolean indicating if the path to the shared repo should be
       stored as relative. By default, the pointer to the "parent" repo
       is stored as an absolute path.
    shareditems
       Set of items to share to the new repository (in addition to storage).
    shallowfilestore
       Indicates that storage for files should be shallow (not all ancestor
       revisions are known).
    """
    createopts = defaultcreateopts(ui, createopts=createopts)

    unknownopts = filterknowncreateopts(ui, createopts)

    if not isinstance(unknownopts, dict):
        raise error.ProgrammingError('filterknowncreateopts() did not return '
                                     'a dict')

    if unknownopts:
        raise error.Abort(_('unable to create repository because of unknown '
                            'creation option: %s') %
                          ', '.join(sorted(unknownopts)),
                          hint=_('is a required extension not loaded?'))

    requirements = newreporequirements(ui, createopts=createopts)

    wdirvfs = vfsmod.vfs(path, expandpath=True, realpath=True)

    hgvfs = vfsmod.vfs(wdirvfs.join(b'.hg'))
    if hgvfs.exists():
        raise error.RepoError(_('repository %s already exists') % path)

    if 'sharedrepo' in createopts:
        sharedpath = createopts['sharedrepo'].sharedpath

        if createopts.get('sharedrelative'):
            try:
                sharedpath = os.path.relpath(sharedpath, hgvfs.base)
            except (IOError, ValueError) as e:
                # ValueError is raised on Windows if the drive letters differ
                # on each path.
                raise error.Abort(_('cannot calculate relative path'),
                                  hint=stringutil.forcebytestr(e))

    if not wdirvfs.exists():
        wdirvfs.makedirs()

    hgvfs.makedir(notindexed=True)
    if 'sharedrepo' not in createopts:
        hgvfs.mkdir(b'cache')
        hgvfs.mkdir(b'wcache')

    if b'store' in requirements and 'sharedrepo' not in createopts:
        hgvfs.mkdir(b'store')

        # We create an invalid changelog outside the store so very old
        # Mercurial versions (which didn't know about the requirements
        # file) encounter an error on reading the changelog. This
        # effectively locks out old clients and prevents them from
        # mucking with a repo in an unknown format.
        #
        # The revlog header has version 2, which won't be recognized by
        # such old clients.
        hgvfs.append(b'00changelog.i',
                     b'\0\0\0\2 dummy changelog to prevent using the old repo '
                     b'layout')

    scmutil.writerequires(hgvfs, requirements)

    # Write out file telling readers where to find the shared store.
    if 'sharedrepo' in createopts:
        hgvfs.write(b'sharedpath', sharedpath)

    if createopts.get('shareditems'):
        shared = b'\n'.join(sorted(createopts['shareditems'])) + b'\n'
        hgvfs.write(b'shared', shared)

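End to end, repository creation is normally reached through hg.repository() and instance() above rather than by calling createrepository() directly; a hedged sketch:

from mercurial import hg, ui as uimod

ui = uimod.ui.load()
# runs createrepository() under the hood, then opens the result
repo = hg.repository(ui, b'/tmp/demo-repo', create=True)
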
def poisonrepository(repo):
    """Poison a repository instance so it can no longer be used."""
    # Perform any cleanup on the instance.
    repo.close()

    # Our strategy is to replace the type of the object with one that
    # has all attribute lookups result in error.
    #
    # But we have to allow the close() method because some constructors
    # of repos call close() on repo references.
    class poisonedrepository(object):
        def __getattribute__(self, item):
            if item == r'close':
                return object.__getattribute__(self, item)

            raise error.ProgrammingError('repo instances should not be used '
                                         'after unshare')

        def close(self):
            pass

    # We may have a repoview, which intercepts __setattr__. So be sure
    # we operate at the lowest level possible.
    object.__setattr__(repo, r'__class__', poisonedrepository)
@@ -1,1374 +1,1371 @@
# match.py - filename matching
#
# Copyright 2008, 2009 Matt Mackall <mpm@selenic.com> and others
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import, print_function

import copy
import itertools
import os
import re

from .i18n import _
from . import (
    encoding,
    error,
    pathutil,
    pycompat,
    util,
)
from .utils import (
    stringutil,
)

allpatternkinds = ('re', 'glob', 'path', 'relglob', 'relpath', 'relre',
                   'rootglob',
                   'listfile', 'listfile0', 'set', 'include', 'subinclude',
                   'rootfilesin')
cwdrelativepatternkinds = ('relpath', 'glob')

propertycache = util.propertycache

def _rematcher(regex):
    '''compile the regexp with the best available regexp engine and return a
    matcher function'''
    m = util.re.compile(regex)
    try:
        # slightly faster, provided by facebook's re2 bindings
        return m.test_match
    except AttributeError:
        return m.match

-def _expandsets(root, cwd, kindpats, ctx, listsubrepos, badfn):
+def _expandsets(kindpats, ctx, listsubrepos, badfn):
    '''Returns the kindpats list with the 'set' patterns expanded to matchers'''
    matchers = []
    other = []

    for kind, pat, source in kindpats:
        if kind == 'set':
            if ctx is None:
                raise error.ProgrammingError("fileset expression with no "
                                             "context")
            matchers.append(ctx.matchfileset(pat, badfn=badfn))

            if listsubrepos:
                for subpath in ctx.substate:
                    sm = ctx.sub(subpath).matchfileset(pat, badfn=badfn)
-                    pm = prefixdirmatcher(root, cwd, subpath, sm, badfn=badfn)
+                    pm = prefixdirmatcher(subpath, sm, badfn=badfn)
                    matchers.append(pm)

            continue
        other.append((kind, pat, source))
    return matchers, other

def _expandsubinclude(kindpats, root):
    '''Returns the list of subinclude matcher args and the kindpats without the
    subincludes in it.'''
    relmatchers = []
    other = []

    for kind, pat, source in kindpats:
        if kind == 'subinclude':
            sourceroot = pathutil.dirname(util.normpath(source))
            pat = util.pconvert(pat)
            path = pathutil.join(sourceroot, pat)

            newroot = pathutil.dirname(path)
            matcherargs = (newroot, '', [], ['include:%s' % path])

            prefix = pathutil.canonpath(root, root, newroot)
            if prefix:
                prefix += '/'
            relmatchers.append((prefix, matcherargs))
        else:
            other.append((kind, pat, source))

    return relmatchers, other

def _kindpatsalwaysmatch(kindpats):
    """Checks whether the kindpats match everything, as e.g.
    'relpath:.' does.
    """
    for kind, pat, source in kindpats:
        if pat != '' or kind not in ['relpath', 'glob']:
            return False
    return True

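For instance (kindpats are (kind, pattern, source) triples; this assumes the module's own namespace):

assert _kindpatsalwaysmatch([('relpath', '', None)])       # empty relpath
assert not _kindpatsalwaysmatch([('glob', '*.py', None)])  # a real pattern
assert not _kindpatsalwaysmatch([('re', '', None)])        # wrong kind
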
-def _buildkindpatsmatcher(matchercls, root, cwd, kindpats, ctx=None,
+def _buildkindpatsmatcher(matchercls, root, kindpats, ctx=None,
                          listsubrepos=False, badfn=None):
    matchers = []
-    fms, kindpats = _expandsets(root, cwd, kindpats, ctx=ctx,
+    fms, kindpats = _expandsets(kindpats, ctx=ctx,
                                listsubrepos=listsubrepos, badfn=badfn)
    if kindpats:
-        m = matchercls(root, cwd, kindpats, badfn=badfn)
+        m = matchercls(root, kindpats, badfn=badfn)
        matchers.append(m)
    if fms:
        matchers.extend(fms)
    if not matchers:
-        return nevermatcher(root, cwd, badfn=badfn)
+        return nevermatcher(badfn=badfn)
    if len(matchers) == 1:
        return matchers[0]
    return unionmatcher(matchers)

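This changeset's point is visible in the signatures above: the internal matcher constructors stop taking root and cwd, while the module-level match() entry point below keeps both, since cwd only matters for normalizing cwd-relative patterns. A hedged usage sketch of that public entry point:

from mercurial import match as matchmod

# glob patterns are interpreted relative to cwd; matching happens on
# root-relative paths
m = matchmod.match(b'/repo', b'/repo/src', patterns=[b'glob:*.py'])
print(m(b'src/foo.py'))   # True
print(m(b'docs/foo.py'))  # False
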
116 def match(root, cwd, patterns=None, include=None, exclude=None, default='glob',
116 def match(root, cwd, patterns=None, include=None, exclude=None, default='glob',
117 auditor=None, ctx=None, listsubrepos=False, warn=None,
117 auditor=None, ctx=None, listsubrepos=False, warn=None,
118 badfn=None, icasefs=False):
118 badfn=None, icasefs=False):
119 """build an object to match a set of file patterns
119 """build an object to match a set of file patterns
120
120
121 arguments:
121 arguments:
122 root - the canonical root of the tree you're matching against
122 root - the canonical root of the tree you're matching against
123 cwd - the current working directory, if relevant
123 cwd - the current working directory, if relevant
124 patterns - patterns to find
124 patterns - patterns to find
125 include - patterns to include (unless they are excluded)
125 include - patterns to include (unless they are excluded)
126 exclude - patterns to exclude (even if they are included)
126 exclude - patterns to exclude (even if they are included)
127 default - if a pattern in patterns has no explicit type, assume this one
127 default - if a pattern in patterns has no explicit type, assume this one
128 warn - optional function used for printing warnings
128 warn - optional function used for printing warnings
129 badfn - optional bad() callback for this matcher instead of the default
129 badfn - optional bad() callback for this matcher instead of the default
130 icasefs - make a matcher for wdir on case insensitive filesystems, which
130 icasefs - make a matcher for wdir on case insensitive filesystems, which
131 normalizes the given patterns to the case in the filesystem
131 normalizes the given patterns to the case in the filesystem
132
132
133 a pattern is one of:
133 a pattern is one of:
134 'glob:<glob>' - a glob relative to cwd
134 'glob:<glob>' - a glob relative to cwd
135 're:<regexp>' - a regular expression
135 're:<regexp>' - a regular expression
136 'path:<path>' - a path relative to repository root, which is matched
136 'path:<path>' - a path relative to repository root, which is matched
137 recursively
137 recursively
138 'rootfilesin:<path>' - a path relative to repository root, which is
138 'rootfilesin:<path>' - a path relative to repository root, which is
139 matched non-recursively (will not match subdirectories)
139 matched non-recursively (will not match subdirectories)
140 'relglob:<glob>' - an unrooted glob (*.c matches C files in all dirs)
140 'relglob:<glob>' - an unrooted glob (*.c matches C files in all dirs)
141 'relpath:<path>' - a path relative to cwd
141 'relpath:<path>' - a path relative to cwd
142 'relre:<regexp>' - a regexp that needn't match the start of a name
142 'relre:<regexp>' - a regexp that needn't match the start of a name
143 'set:<fileset>' - a fileset expression
143 'set:<fileset>' - a fileset expression
144 'include:<path>' - a file of patterns to read and include
144 'include:<path>' - a file of patterns to read and include
145 'subinclude:<path>' - a file of patterns to match against files under
145 'subinclude:<path>' - a file of patterns to match against files under
146 the same directory
146 the same directory
147 '<something>' - a pattern of the specified default type
147 '<something>' - a pattern of the specified default type
148 """
148 """
149 normalize = _donormalize
149 normalize = _donormalize
150 if icasefs:
150 if icasefs:
151 dirstate = ctx.repo().dirstate
151 dirstate = ctx.repo().dirstate
152 dsnormalize = dirstate.normalize
152 dsnormalize = dirstate.normalize
153
153
154 def normalize(patterns, default, root, cwd, auditor, warn):
154 def normalize(patterns, default, root, cwd, auditor, warn):
155 kp = _donormalize(patterns, default, root, cwd, auditor, warn)
155 kp = _donormalize(patterns, default, root, cwd, auditor, warn)
156 kindpats = []
156 kindpats = []
157 for kind, pats, source in kp:
157 for kind, pats, source in kp:
158 if kind not in ('re', 'relre'): # regex can't be normalized
158 if kind not in ('re', 'relre'): # regex can't be normalized
159 p = pats
159 p = pats
160 pats = dsnormalize(pats)
160 pats = dsnormalize(pats)
161
161
162 # Preserve the original to handle a case-only rename.
162 # Preserve the original to handle a case-only rename.
163 if p != pats and p in dirstate:
163 if p != pats and p in dirstate:
164 kindpats.append((kind, p, source))
164 kindpats.append((kind, p, source))
165
165
166 kindpats.append((kind, pats, source))
166 kindpats.append((kind, pats, source))
167 return kindpats
167 return kindpats
168
168
169 if patterns:
169 if patterns:
170 kindpats = normalize(patterns, default, root, cwd, auditor, warn)
170 kindpats = normalize(patterns, default, root, cwd, auditor, warn)
171 if _kindpatsalwaysmatch(kindpats):
171 if _kindpatsalwaysmatch(kindpats):
172 m = alwaysmatcher(root, cwd, badfn)
172 m = alwaysmatcher(badfn)
173 else:
173 else:
174 m = _buildkindpatsmatcher(patternmatcher, root, cwd, kindpats,
174 m = _buildkindpatsmatcher(patternmatcher, root, kindpats, ctx=ctx,
175 ctx=ctx, listsubrepos=listsubrepos,
175 listsubrepos=listsubrepos, badfn=badfn)
176 badfn=badfn)
177 else:
176 else:
178 # It's a little strange that no patterns means to match everything.
177 # It's a little strange that no patterns means to match everything.
179 # Consider changing this to match nothing (probably using nevermatcher).
178 # Consider changing this to match nothing (probably using nevermatcher).
180 m = alwaysmatcher(root, cwd, badfn)
179 m = alwaysmatcher(badfn)
181
180
182 if include:
181 if include:
183 kindpats = normalize(include, 'glob', root, cwd, auditor, warn)
182 kindpats = normalize(include, 'glob', root, cwd, auditor, warn)
184 im = _buildkindpatsmatcher(includematcher, root, cwd, kindpats, ctx=ctx,
183 im = _buildkindpatsmatcher(includematcher, root, kindpats, ctx=ctx,
185 listsubrepos=listsubrepos, badfn=None)
184 listsubrepos=listsubrepos, badfn=None)
186 m = intersectmatchers(m, im)
185 m = intersectmatchers(m, im)
187 if exclude:
186 if exclude:
188 kindpats = normalize(exclude, 'glob', root, cwd, auditor, warn)
187 kindpats = normalize(exclude, 'glob', root, cwd, auditor, warn)
189 em = _buildkindpatsmatcher(includematcher, root, cwd, kindpats, ctx=ctx,
188 em = _buildkindpatsmatcher(includematcher, root, kindpats, ctx=ctx,
190 listsubrepos=listsubrepos, badfn=None)
189 listsubrepos=listsubrepos, badfn=None)
191 m = differencematcher(m, em)
190 m = differencematcher(m, em)
192 return m
191 return m
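A hedged usage sketch of the entry point above (paths and patterns are illustrative; assumes a Mercurial checkout is importable, and elides the bytes-vs-str handling Mercurial needs on Python 3):

from mercurial import match as matchmod

m = matchmod.match('/repo', '/repo', patterns=['glob:*.py'],
                   exclude=['path:tests'])
m('setup.py')       # True: matches the glob
m('tests/run.py')   # False: under the excluded subtree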
193
192
194 def exact(root, cwd, files, badfn=None):
193 def exact(root, cwd, files, badfn=None):
195 return exactmatcher(root, cwd, files, badfn=badfn)
194 return exactmatcher(files, badfn=badfn)
196
195
197 def always(root, cwd, badfn=None):
196 def always(root, cwd, badfn=None):
198 return alwaysmatcher(root, cwd, badfn=badfn)
197 return alwaysmatcher(badfn=badfn)
199
198
200 def never(root, cwd, badfn=None):
199 def never(root, cwd, badfn=None):
201 return nevermatcher(root, cwd, badfn=badfn)
200 return nevermatcher(badfn=badfn)
202
201
203 def badmatch(match, badfn):
202 def badmatch(match, badfn):
204 """Make a copy of the given matcher, replacing its bad method with the given
203 """Make a copy of the given matcher, replacing its bad method with the given
205 one.
204 one.
206 """
205 """
207 m = copy.copy(match)
206 m = copy.copy(match)
208 m.bad = badfn
207 m.bad = badfn
209 return m
208 return m
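For example, given some matcher m, a callback (illustrative) that collects complaints instead of printing them:

missing = []
m2 = badmatch(m, lambda f, msg: missing.append((f, msg)))
m2.bad('some/file', 'No such file')   # recorded; the original m is untouched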
210
209
211 def _donormalize(patterns, default, root, cwd, auditor, warn):
210 def _donormalize(patterns, default, root, cwd, auditor, warn):
212 '''Convert 'kind:pat' from the patterns list to tuples with kind and
211 '''Convert 'kind:pat' from the patterns list to tuples with kind and
213 normalized and rooted patterns, and with listfiles expanded.'''
212 normalized and rooted patterns, and with listfiles expanded.'''
214 kindpats = []
213 kindpats = []
215 for kind, pat in [_patsplit(p, default) for p in patterns]:
214 for kind, pat in [_patsplit(p, default) for p in patterns]:
216 if kind in cwdrelativepatternkinds:
215 if kind in cwdrelativepatternkinds:
217 pat = pathutil.canonpath(root, cwd, pat, auditor)
216 pat = pathutil.canonpath(root, cwd, pat, auditor)
218 elif kind in ('relglob', 'path', 'rootfilesin', 'rootglob'):
217 elif kind in ('relglob', 'path', 'rootfilesin', 'rootglob'):
219 pat = util.normpath(pat)
218 pat = util.normpath(pat)
220 elif kind in ('listfile', 'listfile0'):
219 elif kind in ('listfile', 'listfile0'):
221 try:
220 try:
222 files = util.readfile(pat)
221 files = util.readfile(pat)
223 if kind == 'listfile0':
222 if kind == 'listfile0':
224 files = files.split('\0')
223 files = files.split('\0')
225 else:
224 else:
226 files = files.splitlines()
225 files = files.splitlines()
227 files = [f for f in files if f]
226 files = [f for f in files if f]
228 except EnvironmentError:
227 except EnvironmentError:
229 raise error.Abort(_("unable to read file list (%s)") % pat)
228 raise error.Abort(_("unable to read file list (%s)") % pat)
230 for k, p, source in _donormalize(files, default, root, cwd,
229 for k, p, source in _donormalize(files, default, root, cwd,
231 auditor, warn):
230 auditor, warn):
232 kindpats.append((k, p, pat))
231 kindpats.append((k, p, pat))
233 continue
232 continue
234 elif kind == 'include':
233 elif kind == 'include':
235 try:
234 try:
236 fullpath = os.path.join(root, util.localpath(pat))
235 fullpath = os.path.join(root, util.localpath(pat))
237 includepats = readpatternfile(fullpath, warn)
236 includepats = readpatternfile(fullpath, warn)
238 for k, p, source in _donormalize(includepats, default,
237 for k, p, source in _donormalize(includepats, default,
239 root, cwd, auditor, warn):
238 root, cwd, auditor, warn):
240 kindpats.append((k, p, source or pat))
239 kindpats.append((k, p, source or pat))
241 except error.Abort as inst:
240 except error.Abort as inst:
242 raise error.Abort('%s: %s' % (pat, inst[0]))
241 raise error.Abort('%s: %s' % (pat, inst[0]))
243 except IOError as inst:
242 except IOError as inst:
244 if warn:
243 if warn:
245 warn(_("skipping unreadable pattern file '%s': %s\n") %
244 warn(_("skipping unreadable pattern file '%s': %s\n") %
246 (pat, stringutil.forcebytestr(inst.strerror)))
245 (pat, stringutil.forcebytestr(inst.strerror)))
247 continue
246 continue
248 # else: re or relre - which cannot be normalized
247 # else: re or relre - which cannot be normalized
249 kindpats.append((kind, pat, ''))
248 kindpats.append((kind, pat, ''))
250 return kindpats
249 return kindpats
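An illustrative expectation for the normalization above (values assumed, not taken from the test suite): cwd-relative kinds are rooted against cwd, while regexes pass through untouched:

kindpats = _donormalize(['glob:*.c', 're:.*\\.h$'], 'glob',
                        '/repo', '/repo/src', None, None)
# -> [('glob', 'src/*.c', ''), ('re', '.*\\.h$', '')]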
251
250
252 class basematcher(object):
251 class basematcher(object):
253
252
254 def __init__(self, root, cwd, badfn=None):
253 def __init__(self, badfn=None):
255 self._root = root
256 self._cwd = cwd
257 if badfn is not None:
254 if badfn is not None:
258 self.bad = badfn
255 self.bad = badfn
259
256
260 def __call__(self, fn):
257 def __call__(self, fn):
261 return self.matchfn(fn)
258 return self.matchfn(fn)
262 def __iter__(self):
259 def __iter__(self):
263 for f in self._files:
260 for f in self._files:
264 yield f
261 yield f
265 # Callbacks related to how the matcher is used by dirstate.walk.
262 # Callbacks related to how the matcher is used by dirstate.walk.
266 # Subscribers to these events must monkeypatch the matcher object.
263 # Subscribers to these events must monkeypatch the matcher object.
267 def bad(self, f, msg):
264 def bad(self, f, msg):
268 '''Callback from dirstate.walk for each explicit file that can't be
265 '''Callback from dirstate.walk for each explicit file that can't be
269 found/accessed, with an error message.'''
266 found/accessed, with an error message.'''
270
267
271 # If an explicitdir is set, it will be called when an explicitly listed
268 # If an explicitdir is set, it will be called when an explicitly listed
272 # directory is visited.
269 # directory is visited.
273 explicitdir = None
270 explicitdir = None
274
271
275 # If a traversedir is set, it will be called when a directory discovered
272 # If a traversedir is set, it will be called when a directory discovered
276 # by recursive traversal is visited.
273 # by recursive traversal is visited.
277 traversedir = None
274 traversedir = None
278
275
279 @propertycache
276 @propertycache
280 def _files(self):
277 def _files(self):
281 return []
278 return []
282
279
283 def files(self):
280 def files(self):
284 '''Explicitly listed files or patterns or roots:
281 '''Explicitly listed files or patterns or roots:
285 if no patterns or .always(): empty list,
282 if no patterns or .always(): empty list,
286 if exact: list exact files,
283 if exact: list exact files,
287 if not .anypats(): list all files and dirs,
284 if not .anypats(): list all files and dirs,
288 else: optimal roots'''
285 else: optimal roots'''
289 return self._files
286 return self._files
290
287
291 @propertycache
288 @propertycache
292 def _fileset(self):
289 def _fileset(self):
293 return set(self._files)
290 return set(self._files)
294
291
295 def exact(self, f):
292 def exact(self, f):
296 '''Returns True if f is in .files().'''
293 '''Returns True if f is in .files().'''
297 return f in self._fileset
294 return f in self._fileset
298
295
299 def matchfn(self, f):
296 def matchfn(self, f):
300 return False
297 return False
301
298
302 def visitdir(self, dir):
299 def visitdir(self, dir):
303 '''Decides whether a directory should be visited based on whether it
300 '''Decides whether a directory should be visited based on whether it
304 has potential matches in it or one of its subdirectories. This is
301 has potential matches in it or one of its subdirectories. This is
305 based on the match's primary, included, and excluded patterns.
302 based on the match's primary, included, and excluded patterns.
306
303
307 Returns the string 'all' if the given directory and all subdirectories
304 Returns the string 'all' if the given directory and all subdirectories
308 should be visited. Otherwise returns True or False indicating whether
305 should be visited. Otherwise returns True or False indicating whether
309 the given directory should be visited.
306 the given directory should be visited.
310 '''
307 '''
311 return True
308 return True
312
309
313 def visitchildrenset(self, dir):
310 def visitchildrenset(self, dir):
314 '''Decides whether a directory should be visited based on whether it
311 '''Decides whether a directory should be visited based on whether it
315 has potential matches in it or one of its subdirectories, and
312 has potential matches in it or one of its subdirectories, and
316 potentially lists which subdirectories of that directory should be
313 potentially lists which subdirectories of that directory should be
317 visited. This is based on the match's primary, included, and excluded
314 visited. This is based on the match's primary, included, and excluded
318 patterns.
315 patterns.
319
316
320 This function is very similar to 'visitdir', and the following mapping
317 This function is very similar to 'visitdir', and the following mapping
321 can be applied:
318 can be applied:
322
319
323 visitdir | visitchildrenset
320 visitdir | visitchildrenset
324 ----------+-------------------
321 ----------+-------------------
325 False | set()
322 False | set()
326 'all' | 'all'
323 'all' | 'all'
327 True | 'this' OR non-empty set of subdirs -or files- to visit
324 True | 'this' OR non-empty set of subdirs -or files- to visit
328
325
329 Example:
326 Example:
330 Assume matchers ['path:foo/bar', 'rootfilesin:qux'], we would return
327 Assume matchers ['path:foo/bar', 'rootfilesin:qux'], we would return
331 the following values (assuming the implementation of visitchildrenset
328 the following values (assuming the implementation of visitchildrenset
332 is capable of recognizing this; some implementations are not).
329 is capable of recognizing this; some implementations are not).
333
330
334 '.' -> {'foo', 'qux'}
331 '.' -> {'foo', 'qux'}
335 'baz' -> set()
332 'baz' -> set()
336 'foo' -> {'bar'}
333 'foo' -> {'bar'}
337 # Ideally this would be 'all', but since the prefix nature of matchers
334 # Ideally this would be 'all', but since the prefix nature of matchers
338 # is applied to the entire matcher, we have to downgrade this to
335 # is applied to the entire matcher, we have to downgrade this to
339 # 'this' due to the non-prefix 'rootfilesin'-kind matcher being mixed
336 # 'this' due to the non-prefix 'rootfilesin'-kind matcher being mixed
340 # in.
337 # in.
341 'foo/bar' -> 'this'
338 'foo/bar' -> 'this'
342 'qux' -> 'this'
339 'qux' -> 'this'
343
340
344 Important:
341 Important:
345 Most matchers do not know if they're representing files or
342 Most matchers do not know if they're representing files or
346 directories. They see ['path:dir/f'] and don't know whether 'f' is a
343 directories. They see ['path:dir/f'] and don't know whether 'f' is a
347 file or a directory, so visitchildrenset('dir') for most matchers will
344 file or a directory, so visitchildrenset('dir') for most matchers will
348 return {'f'}, but if the matcher knows it's a file (like exactmatcher
345 return {'f'}, but if the matcher knows it's a file (like exactmatcher
349 does), it may return 'this'. Do not rely on the return being a set
346 does), it may return 'this'. Do not rely on the return being a set
350 indicating that there are no files in this dir to investigate (or
347 indicating that there are no files in this dir to investigate (or
351 equivalently, that if there are files to investigate in 'dir' it
348 equivalently, that if there are files to investigate in 'dir' it
352 will always return 'this').
349 will always return 'this').
353 '''
350 '''
354 return 'this'
351 return 'this'
355
352
356 def always(self):
353 def always(self):
357 '''Matcher will match everything and .files() will be empty --
354 '''Matcher will match everything and .files() will be empty --
358 optimization might be possible.'''
355 optimization might be possible.'''
359 return False
356 return False
360
357
361 def isexact(self):
358 def isexact(self):
362 '''Matcher will match exactly the list of files in .files() --
359 '''Matcher will match exactly the list of files in .files() --
363 optimization might be possible.'''
360 optimization might be possible.'''
364 return False
361 return False
365
362
366 def prefix(self):
363 def prefix(self):
367 '''Matcher will match the paths in .files() recursively --
364 '''Matcher will match the paths in .files() recursively --
368 optimization might be possible.'''
365 optimization might be possible.'''
369 return False
366 return False
370
367
371 def anypats(self):
368 def anypats(self):
372 '''None of .always(), .isexact(), and .prefix() is true --
369 '''None of .always(), .isexact(), and .prefix() is true --
373 optimizations will be difficult.'''
370 optimizations will be difficult.'''
374 return not self.always() and not self.isexact() and not self.prefix()
371 return not self.always() and not self.isexact() and not self.prefix()
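As a concrete illustration of this interface, the smallest useful subclass is a bare predicate; predicatematcher (defined below) packages exactly that:

is_c = predicatematcher(lambda f: f.endswith('.c'), predrepr='<*.c files>')
is_c('main.c')        # True
is_c('README')        # False
is_c.visitdir('src')  # True: a bare predicate cannot prune directories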
375
372
376 class alwaysmatcher(basematcher):
373 class alwaysmatcher(basematcher):
377 '''Matches everything.'''
374 '''Matches everything.'''
378
375
379 def __init__(self, root, cwd, badfn=None):
376 def __init__(self, badfn=None):
380 super(alwaysmatcher, self).__init__(root, cwd, badfn)
377 super(alwaysmatcher, self).__init__(badfn)
381
378
382 def always(self):
379 def always(self):
383 return True
380 return True
384
381
385 def matchfn(self, f):
382 def matchfn(self, f):
386 return True
383 return True
387
384
388 def visitdir(self, dir):
385 def visitdir(self, dir):
389 return 'all'
386 return 'all'
390
387
391 def visitchildrenset(self, dir):
388 def visitchildrenset(self, dir):
392 return 'all'
389 return 'all'
393
390
394 def __repr__(self):
391 def __repr__(self):
395 return r'<alwaysmatcher>'
392 return r'<alwaysmatcher>'
396
393
397 class nevermatcher(basematcher):
394 class nevermatcher(basematcher):
398 '''Matches nothing.'''
395 '''Matches nothing.'''
399
396
400 def __init__(self, root, cwd, badfn=None):
397 def __init__(self, badfn=None):
401 super(nevermatcher, self).__init__(root, cwd, badfn)
398 super(nevermatcher, self).__init__(badfn)
402
399
403 # It's a little weird to say that the nevermatcher is an exact matcher
400 # It's a little weird to say that the nevermatcher is an exact matcher
404 # or a prefix matcher, but it seems to make sense to let callers take
401 # or a prefix matcher, but it seems to make sense to let callers take
405 # fast paths based on either. There will be no exact matches, nor any
402 # fast paths based on either. There will be no exact matches, nor any
406 # prefixes (files() returns []), so fast paths iterating over them should
403 # prefixes (files() returns []), so fast paths iterating over them should
407 # be efficient (and correct).
404 # be efficient (and correct).
408 def isexact(self):
405 def isexact(self):
409 return True
406 return True
410
407
411 def prefix(self):
408 def prefix(self):
412 return True
409 return True
413
410
414 def visitdir(self, dir):
411 def visitdir(self, dir):
415 return False
412 return False
416
413
417 def visitchildrenset(self, dir):
414 def visitchildrenset(self, dir):
418 return set()
415 return set()
419
416
420 def __repr__(self):
417 def __repr__(self):
421 return r'<nevermatcher>'
418 return r'<nevermatcher>'
422
419
423 class predicatematcher(basematcher):
420 class predicatematcher(basematcher):
424 """A matcher adapter for a simple boolean function"""
421 """A matcher adapter for a simple boolean function"""
425
422
426 def __init__(self, root, cwd, predfn, predrepr=None, badfn=None):
423 def __init__(self, predfn, predrepr=None, badfn=None):
427 super(predicatematcher, self).__init__(root, cwd, badfn)
424 super(predicatematcher, self).__init__(badfn)
428 self.matchfn = predfn
425 self.matchfn = predfn
429 self._predrepr = predrepr
426 self._predrepr = predrepr
430
427
431 @encoding.strmethod
428 @encoding.strmethod
432 def __repr__(self):
429 def __repr__(self):
433 s = (stringutil.buildrepr(self._predrepr)
430 s = (stringutil.buildrepr(self._predrepr)
434 or pycompat.byterepr(self.matchfn))
431 or pycompat.byterepr(self.matchfn))
435 return '<predicatematcher pred=%s>' % s
432 return '<predicatematcher pred=%s>' % s
436
433
437 class patternmatcher(basematcher):
434 class patternmatcher(basematcher):
438
435
439 def __init__(self, root, cwd, kindpats, badfn=None):
436 def __init__(self, root, kindpats, badfn=None):
440 super(patternmatcher, self).__init__(root, cwd, badfn)
437 super(patternmatcher, self).__init__(badfn)
441
438
442 self._files = _explicitfiles(kindpats)
439 self._files = _explicitfiles(kindpats)
443 self._prefix = _prefix(kindpats)
440 self._prefix = _prefix(kindpats)
444 self._pats, self.matchfn = _buildmatch(kindpats, '$', root)
441 self._pats, self.matchfn = _buildmatch(kindpats, '$', root)
445
442
446 @propertycache
443 @propertycache
447 def _dirs(self):
444 def _dirs(self):
448 return set(util.dirs(self._fileset)) | {'.'}
445 return set(util.dirs(self._fileset)) | {'.'}
449
446
450 def visitdir(self, dir):
447 def visitdir(self, dir):
451 if self._prefix and dir in self._fileset:
448 if self._prefix and dir in self._fileset:
452 return 'all'
449 return 'all'
453 return ('.' in self._fileset or
450 return ('.' in self._fileset or
454 dir in self._fileset or
451 dir in self._fileset or
455 dir in self._dirs or
452 dir in self._dirs or
456 any(parentdir in self._fileset
453 any(parentdir in self._fileset
457 for parentdir in util.finddirs(dir)))
454 for parentdir in util.finddirs(dir)))
458
455
459 def visitchildrenset(self, dir):
456 def visitchildrenset(self, dir):
460 ret = self.visitdir(dir)
457 ret = self.visitdir(dir)
461 if ret is True:
458 if ret is True:
462 return 'this'
459 return 'this'
463 elif not ret:
460 elif not ret:
464 return set()
461 return set()
465 assert ret == 'all'
462 assert ret == 'all'
466 return 'all'
463 return 'all'
467
464
468 def prefix(self):
465 def prefix(self):
469 return self._prefix
466 return self._prefix
470
467
471 @encoding.strmethod
468 @encoding.strmethod
472 def __repr__(self):
469 def __repr__(self):
473 return ('<patternmatcher patterns=%r>' % pycompat.bytestr(self._pats))
470 return ('<patternmatcher patterns=%r>' % pycompat.bytestr(self._pats))
474
471
475 # This is basically a reimplementation of util.dirs that stores the children
472 # This is basically a reimplementation of util.dirs that stores the children
476 # instead of just a count of them, plus a small optional optimization to avoid
473 # instead of just a count of them, plus a small optional optimization to avoid
477 # some directories we don't need.
474 # some directories we don't need.
478 class _dirchildren(object):
475 class _dirchildren(object):
479 def __init__(self, paths, onlyinclude=None):
476 def __init__(self, paths, onlyinclude=None):
480 self._dirs = {}
477 self._dirs = {}
481 self._onlyinclude = onlyinclude or []
478 self._onlyinclude = onlyinclude or []
482 addpath = self.addpath
479 addpath = self.addpath
483 for f in paths:
480 for f in paths:
484 addpath(f)
481 addpath(f)
485
482
486 def addpath(self, path):
483 def addpath(self, path):
487 if path == '.':
484 if path == '.':
488 return
485 return
489 dirs = self._dirs
486 dirs = self._dirs
490 findsplitdirs = _dirchildren._findsplitdirs
487 findsplitdirs = _dirchildren._findsplitdirs
491 for d, b in findsplitdirs(path):
488 for d, b in findsplitdirs(path):
492 if d not in self._onlyinclude:
489 if d not in self._onlyinclude:
493 continue
490 continue
494 dirs.setdefault(d, set()).add(b)
491 dirs.setdefault(d, set()).add(b)
495
492
496 @staticmethod
493 @staticmethod
497 def _findsplitdirs(path):
494 def _findsplitdirs(path):
498 # yields (dirname, basename) tuples, walking back to the root. This is
495 # yields (dirname, basename) tuples, walking back to the root. This is
499 # very similar to util.finddirs, except:
496 # very similar to util.finddirs, except:
500 # - produces a (dirname, basename) tuple, not just 'dirname'
497 # - produces a (dirname, basename) tuple, not just 'dirname'
501 # - includes root dir
498 # - includes root dir
502 # Unlike manifest._splittopdir, this does not suffix `dirname` with a
499 # Unlike manifest._splittopdir, this does not suffix `dirname` with a
503 # slash, and produces '.' for the root instead of ''.
500 # slash, and produces '.' for the root instead of ''.
504 oldpos = len(path)
501 oldpos = len(path)
505 pos = path.rfind('/')
502 pos = path.rfind('/')
506 while pos != -1:
503 while pos != -1:
507 yield path[:pos], path[pos + 1:oldpos]
504 yield path[:pos], path[pos + 1:oldpos]
508 oldpos = pos
505 oldpos = pos
509 pos = path.rfind('/', 0, pos)
506 pos = path.rfind('/', 0, pos)
510 yield '.', path[:oldpos]
507 yield '.', path[:oldpos]
511
508
512 def get(self, path):
509 def get(self, path):
513 return self._dirs.get(path, set())
510 return self._dirs.get(path, set())
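A worked example of the structure above (inputs assumed for illustration); note how onlyinclude prunes directories we never need to answer for:

dc = _dirchildren(['foo/bar/baz.txt'], onlyinclude=['.', 'foo'])
dc.get('.')        # {'foo'}
dc.get('foo')      # {'bar'}
dc.get('foo/bar')  # set(): filtered out by onlyinclude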
514
511
515 class includematcher(basematcher):
512 class includematcher(basematcher):
516
513
517 def __init__(self, root, cwd, kindpats, badfn=None):
514 def __init__(self, root, kindpats, badfn=None):
518 super(includematcher, self).__init__(root, cwd, badfn)
515 super(includematcher, self).__init__(badfn)
519
516
520 self._pats, self.matchfn = _buildmatch(kindpats, '(?:/|$)', root)
517 self._pats, self.matchfn = _buildmatch(kindpats, '(?:/|$)', root)
521 self._prefix = _prefix(kindpats)
518 self._prefix = _prefix(kindpats)
522 roots, dirs, parents = _rootsdirsandparents(kindpats)
519 roots, dirs, parents = _rootsdirsandparents(kindpats)
523 # roots are directories which are recursively included.
520 # roots are directories which are recursively included.
524 self._roots = set(roots)
521 self._roots = set(roots)
525 # dirs are directories which are non-recursively included.
522 # dirs are directories which are non-recursively included.
526 self._dirs = set(dirs)
523 self._dirs = set(dirs)
527 # parents are directories which are non-recursively included because
524 # parents are directories which are non-recursively included because
528 # they are needed to get to items in _dirs or _roots.
525 # they are needed to get to items in _dirs or _roots.
529 self._parents = set(parents)
526 self._parents = set(parents)
530
527
531 def visitdir(self, dir):
528 def visitdir(self, dir):
532 if self._prefix and dir in self._roots:
529 if self._prefix and dir in self._roots:
533 return 'all'
530 return 'all'
534 return ('.' in self._roots or
531 return ('.' in self._roots or
535 dir in self._roots or
532 dir in self._roots or
536 dir in self._dirs or
533 dir in self._dirs or
537 dir in self._parents or
534 dir in self._parents or
538 any(parentdir in self._roots
535 any(parentdir in self._roots
539 for parentdir in util.finddirs(dir)))
536 for parentdir in util.finddirs(dir)))
540
537
541 @propertycache
538 @propertycache
542 def _allparentschildren(self):
539 def _allparentschildren(self):
543 # It may seem odd that we add dirs, roots, and parents, and then
540 # It may seem odd that we add dirs, roots, and parents, and then
544 # restrict to only parents. This is to catch the case of:
541 # restrict to only parents. This is to catch the case of:
545 # dirs = ['foo/bar']
542 # dirs = ['foo/bar']
546 # parents = ['foo']
543 # parents = ['foo']
547 # if we asked for the children of 'foo', but had only added
544 # if we asked for the children of 'foo', but had only added
548 # self._parents, we wouldn't be able to respond ['bar'].
545 # self._parents, we wouldn't be able to respond ['bar'].
549 return _dirchildren(
546 return _dirchildren(
550 itertools.chain(self._dirs, self._roots, self._parents),
547 itertools.chain(self._dirs, self._roots, self._parents),
551 onlyinclude=self._parents)
548 onlyinclude=self._parents)
552
549
553 def visitchildrenset(self, dir):
550 def visitchildrenset(self, dir):
554 if self._prefix and dir in self._roots:
551 if self._prefix and dir in self._roots:
555 return 'all'
552 return 'all'
556 # Note: this does *not* include the 'dir in self._parents' case from
553 # Note: this does *not* include the 'dir in self._parents' case from
557 # visitdir, that's handled below.
554 # visitdir, that's handled below.
558 if ('.' in self._roots or
555 if ('.' in self._roots or
559 dir in self._roots or
556 dir in self._roots or
560 dir in self._dirs or
557 dir in self._dirs or
561 any(parentdir in self._roots
558 any(parentdir in self._roots
562 for parentdir in util.finddirs(dir))):
559 for parentdir in util.finddirs(dir))):
563 return 'this'
560 return 'this'
564
561
565 if dir in self._parents:
562 if dir in self._parents:
566 return self._allparentschildren.get(dir) or set()
563 return self._allparentschildren.get(dir) or set()
567 return set()
564 return set()
568
565
569 @encoding.strmethod
566 @encoding.strmethod
570 def __repr__(self):
567 def __repr__(self):
571 return ('<includematcher includes=%r>' % pycompat.bytestr(self._pats))
568 return ('<includematcher includes=%r>' % pycompat.bytestr(self._pats))
572
569
573 class exactmatcher(basematcher):
570 class exactmatcher(basematcher):
574 '''Matches the input files exactly. They are interpreted as paths, not
571 '''Matches the input files exactly. They are interpreted as paths, not
575 patterns (so no kind-prefixes).
572 patterns (so no kind-prefixes).
576 '''
573 '''
577
574
578 def __init__(self, root, cwd, files, badfn=None):
575 def __init__(self, files, badfn=None):
579 super(exactmatcher, self).__init__(root, cwd, badfn)
576 super(exactmatcher, self).__init__(badfn)
580
577
581 if isinstance(files, list):
578 if isinstance(files, list):
582 self._files = files
579 self._files = files
583 else:
580 else:
584 self._files = list(files)
581 self._files = list(files)
585
582
586 matchfn = basematcher.exact
583 matchfn = basematcher.exact
587
584
588 @propertycache
585 @propertycache
589 def _dirs(self):
586 def _dirs(self):
590 return set(util.dirs(self._fileset)) | {'.'}
587 return set(util.dirs(self._fileset)) | {'.'}
591
588
592 def visitdir(self, dir):
589 def visitdir(self, dir):
593 return dir in self._dirs
590 return dir in self._dirs
594
591
595 def visitchildrenset(self, dir):
592 def visitchildrenset(self, dir):
596 if not self._fileset or dir not in self._dirs:
593 if not self._fileset or dir not in self._dirs:
597 return set()
594 return set()
598
595
599 candidates = self._fileset | self._dirs - {'.'}
596 candidates = self._fileset | self._dirs - {'.'}
600 if dir != '.':
597 if dir != '.':
601 d = dir + '/'
598 d = dir + '/'
602 candidates = set(c[len(d):] for c in candidates if
599 candidates = set(c[len(d):] for c in candidates if
603 c.startswith(d))
600 c.startswith(d))
604 # self._dirs includes all of the directories, recursively, so if
601 # self._dirs includes all of the directories, recursively, so if
605 # we're attempting to match foo/bar/baz.txt, it'll have '.', 'foo',
602 # we're attempting to match foo/bar/baz.txt, it'll have '.', 'foo',
606 # 'foo/bar' in it. Thus we can safely ignore a candidate that has a
603 # 'foo/bar' in it. Thus we can safely ignore a candidate that has a
607 # '/' in it, indicating it's for a subdir-of-a-subdir; the
604 # '/' in it, indicating it's for a subdir-of-a-subdir; the
608 # immediate subdir will be in there without a slash.
605 # immediate subdir will be in there without a slash.
609 ret = {c for c in candidates if '/' not in c}
606 ret = {c for c in candidates if '/' not in c}
610 # We really do not expect ret to be empty, since that would imply that
607 # We really do not expect ret to be empty, since that would imply that
611 # there's something in _dirs that didn't have a file in _fileset.
608 # there's something in _dirs that didn't have a file in _fileset.
612 assert ret
609 assert ret
613 return ret
610 return ret
614
611
615 def isexact(self):
612 def isexact(self):
616 return True
613 return True
617
614
618 @encoding.strmethod
615 @encoding.strmethod
619 def __repr__(self):
616 def __repr__(self):
620 return ('<exactmatcher files=%r>' % self._files)
617 return ('<exactmatcher files=%r>' % self._files)
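For instance (a hedged sketch; the values follow from the definitions above):

em = exactmatcher(['a.txt', 'sub/b.txt'])
em('a.txt')                # True
em('sub')                  # False: entries are exact paths, not prefixes
em.visitchildrenset('.')   # {'a.txt', 'sub'}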
621
618
622 class differencematcher(basematcher):
619 class differencematcher(basematcher):
623 '''Composes two matchers by matching if the first matches and the second
620 '''Composes two matchers by matching if the first matches and the second
624 does not.
621 does not.
625
622
626 The second matcher's non-matching-attributes (root, cwd, bad, explicitdir,
623 The second matcher's non-matching-attributes (bad, explicitdir,
627 traversedir) are ignored.
624 traversedir) are ignored.
628 '''
625 '''
629 def __init__(self, m1, m2):
626 def __init__(self, m1, m2):
630 super(differencematcher, self).__init__(m1._root, m1._cwd)
627 super(differencematcher, self).__init__()
631 self._m1 = m1
628 self._m1 = m1
632 self._m2 = m2
629 self._m2 = m2
633 self.bad = m1.bad
630 self.bad = m1.bad
634 self.explicitdir = m1.explicitdir
631 self.explicitdir = m1.explicitdir
635 self.traversedir = m1.traversedir
632 self.traversedir = m1.traversedir
636
633
637 def matchfn(self, f):
634 def matchfn(self, f):
638 return self._m1(f) and not self._m2(f)
635 return self._m1(f) and not self._m2(f)
639
636
640 @propertycache
637 @propertycache
641 def _files(self):
638 def _files(self):
642 if self.isexact():
639 if self.isexact():
643 return [f for f in self._m1.files() if self(f)]
640 return [f for f in self._m1.files() if self(f)]
644 # If m1 is not an exact matcher, we can't easily figure out the set of
641 # If m1 is not an exact matcher, we can't easily figure out the set of
645 # files, because its files() are not always files. For example, if
642 # files, because its files() are not always files. For example, if
646 # m1 is "path:dir" and m2 is "rootfilesin:.", we don't
643 # m1 is "path:dir" and m2 is "rootfilesin:.", we don't
647 # want to remove "dir" from the set even though it would match m2,
644 # want to remove "dir" from the set even though it would match m2,
648 # because the "dir" in m1 may not be a file.
645 # because the "dir" in m1 may not be a file.
649 return self._m1.files()
646 return self._m1.files()
650
647
651 def visitdir(self, dir):
648 def visitdir(self, dir):
652 if self._m2.visitdir(dir) == 'all':
649 if self._m2.visitdir(dir) == 'all':
653 return False
650 return False
654 elif not self._m2.visitdir(dir):
651 elif not self._m2.visitdir(dir):
655 # m2 does not match dir, we can return 'all' here if possible
652 # m2 does not match dir, we can return 'all' here if possible
656 return self._m1.visitdir(dir)
653 return self._m1.visitdir(dir)
657 return bool(self._m1.visitdir(dir))
654 return bool(self._m1.visitdir(dir))
658
655
659 def visitchildrenset(self, dir):
656 def visitchildrenset(self, dir):
660 m2_set = self._m2.visitchildrenset(dir)
657 m2_set = self._m2.visitchildrenset(dir)
661 if m2_set == 'all':
658 if m2_set == 'all':
662 return set()
659 return set()
663 m1_set = self._m1.visitchildrenset(dir)
660 m1_set = self._m1.visitchildrenset(dir)
664 # Possible values for m1: 'all', 'this', set(...), set()
661 # Possible values for m1: 'all', 'this', set(...), set()
665 # Possible values for m2: 'this', set(...), set()
662 # Possible values for m2: 'this', set(...), set()
666 # If m2 has nothing under here that we care about, return m1, even if
663 # If m2 has nothing under here that we care about, return m1, even if
667 # it's 'all'. This is a change in behavior from visitdir, which would
664 # it's 'all'. This is a change in behavior from visitdir, which would
668 # return True, not 'all', for some reason.
665 # return True, not 'all', for some reason.
669 if not m2_set:
666 if not m2_set:
670 return m1_set
667 return m1_set
671 if m1_set in ['all', 'this']:
668 if m1_set in ['all', 'this']:
672 # Never return 'all' here if m2_set is any kind of non-empty (either
669 # Never return 'all' here if m2_set is any kind of non-empty (either
673 # 'this' or set(foo)), since m2 might return set() for a
670 # 'this' or set(foo)), since m2 might return set() for a
674 # subdirectory.
671 # subdirectory.
675 return 'this'
672 return 'this'
676 # Possible values for m1: set(...), set()
673 # Possible values for m1: set(...), set()
677 # Possible values for m2: 'this', set(...)
674 # Possible values for m2: 'this', set(...)
678 # We ignore m2's set results. They're possibly incorrect:
675 # We ignore m2's set results. They're possibly incorrect:
679 # m1 = path:dir/subdir, m2=rootfilesin:dir, visitchildrenset('.'):
676 # m1 = path:dir/subdir, m2=rootfilesin:dir, visitchildrenset('.'):
680 # m1 returns {'dir'}, m2 returns {'dir'}, if we subtracted we'd
677 # m1 returns {'dir'}, m2 returns {'dir'}, if we subtracted we'd
681 # return set(), which is *not* correct, we still need to visit 'dir'!
678 # return set(), which is *not* correct, we still need to visit 'dir'!
682 return m1_set
679 return m1_set
683
680
684 def isexact(self):
681 def isexact(self):
685 return self._m1.isexact()
682 return self._m1.isexact()
686
683
687 @encoding.strmethod
684 @encoding.strmethod
688 def __repr__(self):
685 def __repr__(self):
689 return ('<differencematcher m1=%r, m2=%r>' % (self._m1, self._m2))
686 return ('<differencematcher m1=%r, m2=%r>' % (self._m1, self._m2))
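A small composition sketch using the predicate adapter from earlier (names illustrative):

m1 = predicatematcher(lambda f: f.endswith('.py'))
m2 = predicatematcher(lambda f: f.startswith('tests/'))
dm = differencematcher(m1, m2)
dm('setup.py')      # True: matched by m1, not excluded by m2
dm('tests/t.py')    # False: m2 wins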
690
687
691 def intersectmatchers(m1, m2):
688 def intersectmatchers(m1, m2):
692 '''Composes two matchers by matching if both of them match.
689 '''Composes two matchers by matching if both of them match.
693
690
694 The second matcher's non-matching-attributes (root, cwd, bad, explicitdir,
691 The second matcher's non-matching-attributes (bad, explicitdir,
695 traversedir) are ignored.
692 traversedir) are ignored.
696 '''
693 '''
697 if m1 is None or m2 is None:
694 if m1 is None or m2 is None:
698 return m1 or m2
695 return m1 or m2
699 if m1.always():
696 if m1.always():
700 m = copy.copy(m2)
697 m = copy.copy(m2)
701 # TODO: Consider encapsulating these things in a class so there's only
698 # TODO: Consider encapsulating these things in a class so there's only
702 # one thing to copy from m1.
699 # one thing to copy from m1.
703 m.bad = m1.bad
700 m.bad = m1.bad
704 m.explicitdir = m1.explicitdir
701 m.explicitdir = m1.explicitdir
705 m.traversedir = m1.traversedir
702 m.traversedir = m1.traversedir
706 return m
703 return m
707 if m2.always():
704 if m2.always():
708 m = copy.copy(m1)
705 m = copy.copy(m1)
709 return m
706 return m
710 return intersectionmatcher(m1, m2)
707 return intersectionmatcher(m1, m2)
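The always() fast paths mean no wrapper object is built when one side matches everything; a sketch, reusing em from the exactmatcher example above:

m = intersectmatchers(alwaysmatcher(), em)
m.isexact()   # True: m is a copy of em, not an intersectionmatcher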
711
708
712 class intersectionmatcher(basematcher):
709 class intersectionmatcher(basematcher):
713 def __init__(self, m1, m2):
710 def __init__(self, m1, m2):
714 super(intersectionmatcher, self).__init__(m1._root, m1._cwd)
711 super(intersectionmatcher, self).__init__()
715 self._m1 = m1
712 self._m1 = m1
716 self._m2 = m2
713 self._m2 = m2
717 self.bad = m1.bad
714 self.bad = m1.bad
718 self.explicitdir = m1.explicitdir
715 self.explicitdir = m1.explicitdir
719 self.traversedir = m1.traversedir
716 self.traversedir = m1.traversedir
720
717
721 @propertycache
718 @propertycache
722 def _files(self):
719 def _files(self):
723 if self.isexact():
720 if self.isexact():
724 m1, m2 = self._m1, self._m2
721 m1, m2 = self._m1, self._m2
725 if not m1.isexact():
722 if not m1.isexact():
726 m1, m2 = m2, m1
723 m1, m2 = m2, m1
727 return [f for f in m1.files() if m2(f)]
724 return [f for f in m1.files() if m2(f)]
728 # If neither m1 nor m2 is an exact matcher, we can't easily intersect
725 # If neither m1 nor m2 is an exact matcher, we can't easily intersect
729 # the set of files, because their files() are not always files. For
726 # the set of files, because their files() are not always files. For
730 # example, if intersecting a matcher "-I glob:foo.txt" with matcher of
727 # example, if intersecting a matcher "-I glob:foo.txt" with matcher of
731 # "path:dir2", we don't want to remove "dir2" from the set.
728 # "path:dir2", we don't want to remove "dir2" from the set.
732 return self._m1.files() + self._m2.files()
729 return self._m1.files() + self._m2.files()
733
730
734 def matchfn(self, f):
731 def matchfn(self, f):
735 return self._m1(f) and self._m2(f)
732 return self._m1(f) and self._m2(f)
736
733
737 def visitdir(self, dir):
734 def visitdir(self, dir):
738 visit1 = self._m1.visitdir(dir)
735 visit1 = self._m1.visitdir(dir)
739 if visit1 == 'all':
736 if visit1 == 'all':
740 return self._m2.visitdir(dir)
737 return self._m2.visitdir(dir)
741 # bool() because visit1=True + visit2='all' should not be 'all'
738 # bool() because visit1=True + visit2='all' should not be 'all'
742 return bool(visit1 and self._m2.visitdir(dir))
739 return bool(visit1 and self._m2.visitdir(dir))
743
740
744 def visitchildrenset(self, dir):
741 def visitchildrenset(self, dir):
745 m1_set = self._m1.visitchildrenset(dir)
742 m1_set = self._m1.visitchildrenset(dir)
746 if not m1_set:
743 if not m1_set:
747 return set()
744 return set()
748 m2_set = self._m2.visitchildrenset(dir)
745 m2_set = self._m2.visitchildrenset(dir)
749 if not m2_set:
746 if not m2_set:
750 return set()
747 return set()
751
748
752 if m1_set == 'all':
749 if m1_set == 'all':
753 return m2_set
750 return m2_set
754 elif m2_set == 'all':
751 elif m2_set == 'all':
755 return m1_set
752 return m1_set
756
753
757 if m1_set == 'this' or m2_set == 'this':
754 if m1_set == 'this' or m2_set == 'this':
758 return 'this'
755 return 'this'
759
756
760 assert isinstance(m1_set, set) and isinstance(m2_set, set)
757 assert isinstance(m1_set, set) and isinstance(m2_set, set)
761 return m1_set.intersection(m2_set)
758 return m1_set.intersection(m2_set)
762
759
763 def always(self):
760 def always(self):
764 return self._m1.always() and self._m2.always()
761 return self._m1.always() and self._m2.always()
765
762
766 def isexact(self):
763 def isexact(self):
767 return self._m1.isexact() or self._m2.isexact()
764 return self._m1.isexact() or self._m2.isexact()
768
765
769 @encoding.strmethod
766 @encoding.strmethod
770 def __repr__(self):
767 def __repr__(self):
771 return ('<intersectionmatcher m1=%r, m2=%r>' % (self._m1, self._m2))
768 return ('<intersectionmatcher m1=%r, m2=%r>' % (self._m1, self._m2))
772
769
773 class subdirmatcher(basematcher):
770 class subdirmatcher(basematcher):
774 """Adapt a matcher to work on a subdirectory only.
771 """Adapt a matcher to work on a subdirectory only.
775
772
776 The paths are remapped to remove/insert the path as needed:
773 The paths are remapped to remove/insert the path as needed:
777
774
778 >>> from . import pycompat
775 >>> from . import pycompat
779 >>> m1 = match(b'root', b'', [b'a.txt', b'sub/b.txt'])
776 >>> m1 = match(b'root', b'', [b'a.txt', b'sub/b.txt'])
780 >>> m2 = subdirmatcher(b'sub', m1)
777 >>> m2 = subdirmatcher(b'sub', m1)
781 >>> bool(m2(b'a.txt'))
778 >>> bool(m2(b'a.txt'))
782 False
779 False
783 >>> bool(m2(b'b.txt'))
780 >>> bool(m2(b'b.txt'))
784 True
781 True
785 >>> bool(m2.matchfn(b'a.txt'))
782 >>> bool(m2.matchfn(b'a.txt'))
786 False
783 False
787 >>> bool(m2.matchfn(b'b.txt'))
784 >>> bool(m2.matchfn(b'b.txt'))
788 True
785 True
789 >>> m2.files()
786 >>> m2.files()
790 ['b.txt']
787 ['b.txt']
791 >>> m2.exact(b'b.txt')
788 >>> m2.exact(b'b.txt')
792 True
789 True
793 >>> def bad(f, msg):
790 >>> def bad(f, msg):
794 ... print(pycompat.sysstr(b"%s: %s" % (f, msg)))
791 ... print(pycompat.sysstr(b"%s: %s" % (f, msg)))
795 >>> m1.bad = bad
792 >>> m1.bad = bad
796 >>> m2.bad(b'x.txt', b'No such file')
793 >>> m2.bad(b'x.txt', b'No such file')
797 sub/x.txt: No such file
794 sub/x.txt: No such file
798 """
795 """
799
796
800 def __init__(self, path, matcher):
797 def __init__(self, path, matcher):
801 super(subdirmatcher, self).__init__(matcher._root, matcher._cwd)
798 super(subdirmatcher, self).__init__()
802 self._path = path
799 self._path = path
803 self._matcher = matcher
800 self._matcher = matcher
804 self._always = matcher.always()
801 self._always = matcher.always()
805
802
806 self._files = [f[len(path) + 1:] for f in matcher._files
803 self._files = [f[len(path) + 1:] for f in matcher._files
807 if f.startswith(path + "/")]
804 if f.startswith(path + "/")]
808
805
809 # If the parent repo had a path to this subrepo and the matcher is
806 # If the parent repo had a path to this subrepo and the matcher is
810 # a prefix matcher, this submatcher always matches.
807 # a prefix matcher, this submatcher always matches.
811 if matcher.prefix():
808 if matcher.prefix():
812 self._always = any(f == path for f in matcher._files)
809 self._always = any(f == path for f in matcher._files)
813
810
814 def bad(self, f, msg):
811 def bad(self, f, msg):
815 self._matcher.bad(self._path + "/" + f, msg)
812 self._matcher.bad(self._path + "/" + f, msg)
816
813
817 def matchfn(self, f):
814 def matchfn(self, f):
818 # Some information is lost in the superclass's constructor, so we
815 # Some information is lost in the superclass's constructor, so we
819 # can not accurately create the matching function for the subdirectory
816 # can not accurately create the matching function for the subdirectory
820 # from the inputs. Instead, we override matchfn() and visitdir() to
817 # from the inputs. Instead, we override matchfn() and visitdir() to
821 # call the original matcher with the subdirectory path prepended.
818 # call the original matcher with the subdirectory path prepended.
822 return self._matcher.matchfn(self._path + "/" + f)
819 return self._matcher.matchfn(self._path + "/" + f)
823
820
824 def visitdir(self, dir):
821 def visitdir(self, dir):
825 if dir == '.':
822 if dir == '.':
826 dir = self._path
823 dir = self._path
827 else:
824 else:
828 dir = self._path + "/" + dir
825 dir = self._path + "/" + dir
829 return self._matcher.visitdir(dir)
826 return self._matcher.visitdir(dir)
830
827
831 def visitchildrenset(self, dir):
828 def visitchildrenset(self, dir):
832 if dir == '.':
829 if dir == '.':
833 dir = self._path
830 dir = self._path
834 else:
831 else:
835 dir = self._path + "/" + dir
832 dir = self._path + "/" + dir
836 return self._matcher.visitchildrenset(dir)
833 return self._matcher.visitchildrenset(dir)
837
834
838 def always(self):
835 def always(self):
839 return self._always
836 return self._always
840
837
841 def prefix(self):
838 def prefix(self):
842 return self._matcher.prefix() and not self._always
839 return self._matcher.prefix() and not self._always
843
840
844 @encoding.strmethod
841 @encoding.strmethod
845 def __repr__(self):
842 def __repr__(self):
846 return ('<subdirmatcher path=%r, matcher=%r>' %
843 return ('<subdirmatcher path=%r, matcher=%r>' %
847 (self._path, self._matcher))
844 (self._path, self._matcher))
848
845
849 class prefixdirmatcher(basematcher):
846 class prefixdirmatcher(basematcher):
850 """Adapt a matcher to work on a parent directory.
847 """Adapt a matcher to work on a parent directory.
851
848
852 The matcher's non-matching-attributes (root, cwd, bad, explicitdir,
849 The matcher's non-matching-attributes (bad, explicitdir, traversedir) are
853 traversedir) are ignored.
850 ignored.
854
851
855 The prefix path should usually be the relative path from the root of
852 The prefix path should usually be the relative path from the root of
856 this matcher to the root of the wrapped matcher.
853 this matcher to the root of the wrapped matcher.
857
854
858 >>> m1 = match(util.localpath(b'root/d/e'), b'f', [b'../a.txt', b'b.txt'])
855 >>> m1 = match(util.localpath(b'root/d/e'), b'f', [b'../a.txt', b'b.txt'])
859 >>> m2 = prefixdirmatcher(b'root', b'd/e/f', b'd/e', m1)
856 >>> m2 = prefixdirmatcher(b'd/e', m1)
860 >>> bool(m2(b'a.txt'),)
857 >>> bool(m2(b'a.txt'),)
861 False
858 False
862 >>> bool(m2(b'd/e/a.txt'))
859 >>> bool(m2(b'd/e/a.txt'))
863 True
860 True
864 >>> bool(m2(b'd/e/b.txt'))
861 >>> bool(m2(b'd/e/b.txt'))
865 False
862 False
866 >>> m2.files()
863 >>> m2.files()
867 ['d/e/a.txt', 'd/e/f/b.txt']
864 ['d/e/a.txt', 'd/e/f/b.txt']
868 >>> m2.exact(b'd/e/a.txt')
865 >>> m2.exact(b'd/e/a.txt')
869 True
866 True
870 >>> m2.visitdir(b'd')
867 >>> m2.visitdir(b'd')
871 True
868 True
872 >>> m2.visitdir(b'd/e')
869 >>> m2.visitdir(b'd/e')
873 True
870 True
874 >>> m2.visitdir(b'd/e/f')
871 >>> m2.visitdir(b'd/e/f')
875 True
872 True
876 >>> m2.visitdir(b'd/e/g')
873 >>> m2.visitdir(b'd/e/g')
877 False
874 False
878 >>> m2.visitdir(b'd/ef')
875 >>> m2.visitdir(b'd/ef')
879 False
876 False
880 """
877 """
881
878
882 def __init__(self, root, cwd, path, matcher, badfn=None):
879 def __init__(self, path, matcher, badfn=None):
883 super(prefixdirmatcher, self).__init__(root, cwd, badfn)
880 super(prefixdirmatcher, self).__init__(badfn)
884 if not path:
881 if not path:
885 raise error.ProgrammingError('prefix path must not be empty')
882 raise error.ProgrammingError('prefix path must not be empty')
886 self._path = path
883 self._path = path
887 self._pathprefix = path + '/'
884 self._pathprefix = path + '/'
888 self._matcher = matcher
885 self._matcher = matcher
889
886
890 @propertycache
887 @propertycache
891 def _files(self):
888 def _files(self):
892 return [self._pathprefix + f for f in self._matcher._files]
889 return [self._pathprefix + f for f in self._matcher._files]
893
890
894 def matchfn(self, f):
891 def matchfn(self, f):
895 if not f.startswith(self._pathprefix):
892 if not f.startswith(self._pathprefix):
896 return False
893 return False
897 return self._matcher.matchfn(f[len(self._pathprefix):])
894 return self._matcher.matchfn(f[len(self._pathprefix):])
898
895
899 @propertycache
896 @propertycache
900 def _pathdirs(self):
897 def _pathdirs(self):
901 return set(util.finddirs(self._path)) | {'.'}
898 return set(util.finddirs(self._path)) | {'.'}
902
899
903 def visitdir(self, dir):
900 def visitdir(self, dir):
904 if dir == self._path:
901 if dir == self._path:
905 return self._matcher.visitdir('.')
902 return self._matcher.visitdir('.')
906 if dir.startswith(self._pathprefix):
903 if dir.startswith(self._pathprefix):
907 return self._matcher.visitdir(dir[len(self._pathprefix):])
904 return self._matcher.visitdir(dir[len(self._pathprefix):])
908 return dir in self._pathdirs
905 return dir in self._pathdirs
909
906
910 def visitchildrenset(self, dir):
907 def visitchildrenset(self, dir):
911 if dir == self._path:
908 if dir == self._path:
912 return self._matcher.visitchildrenset('.')
909 return self._matcher.visitchildrenset('.')
913 if dir.startswith(self._pathprefix):
910 if dir.startswith(self._pathprefix):
914 return self._matcher.visitchildrenset(dir[len(self._pathprefix):])
911 return self._matcher.visitchildrenset(dir[len(self._pathprefix):])
915 if dir in self._pathdirs:
912 if dir in self._pathdirs:
916 return 'this'
913 return 'this'
917 return set()
914 return set()
918
915
919 def isexact(self):
916 def isexact(self):
920 return self._matcher.isexact()
917 return self._matcher.isexact()
921
918
922 def prefix(self):
919 def prefix(self):
923 return self._matcher.prefix()
920 return self._matcher.prefix()
924
921
925 @encoding.strmethod
922 @encoding.strmethod
926 def __repr__(self):
923 def __repr__(self):
927 return ('<prefixdirmatcher path=%r, matcher=%r>'
924 return ('<prefixdirmatcher path=%r, matcher=%r>'
928 % (pycompat.bytestr(self._path), self._matcher))
925 % (pycompat.bytestr(self._path), self._matcher))
929
926
930 class unionmatcher(basematcher):
927 class unionmatcher(basematcher):
931 """A matcher that is the union of several matchers.
928 """A matcher that is the union of several matchers.
932
929
933 The non-matching-attributes (root, cwd, bad, explicitdir, traversedir) are
930 The non-matching-attributes (bad, explicitdir, traversedir) are taken from
934 taken from the first matcher.
931 the first matcher.
935 """
932 """
936
933
937 def __init__(self, matchers):
934 def __init__(self, matchers):
938 m1 = matchers[0]
935 m1 = matchers[0]
939 super(unionmatcher, self).__init__(m1._root, m1._cwd)
936 super(unionmatcher, self).__init__()
940 self.explicitdir = m1.explicitdir
937 self.explicitdir = m1.explicitdir
941 self.traversedir = m1.traversedir
938 self.traversedir = m1.traversedir
942 self._matchers = matchers
939 self._matchers = matchers
943
940
944 def matchfn(self, f):
941 def matchfn(self, f):
945 for match in self._matchers:
942 for match in self._matchers:
946 if match(f):
943 if match(f):
947 return True
944 return True
948 return False
945 return False
949
946
950 def visitdir(self, dir):
947 def visitdir(self, dir):
951 r = False
948 r = False
952 for m in self._matchers:
949 for m in self._matchers:
953 v = m.visitdir(dir)
950 v = m.visitdir(dir)
954 if v == 'all':
951 if v == 'all':
955 return v
952 return v
956 r |= v
953 r |= v
957 return r
954 return r
958
955
959 def visitchildrenset(self, dir):
956 def visitchildrenset(self, dir):
960 r = set()
957 r = set()
961 this = False
958 this = False
962 for m in self._matchers:
959 for m in self._matchers:
963 v = m.visitchildrenset(dir)
960 v = m.visitchildrenset(dir)
964 if not v:
961 if not v:
965 continue
962 continue
966 if v == 'all':
963 if v == 'all':
967 return v
964 return v
968 if this or v == 'this':
965 if this or v == 'this':
969 this = True
966 this = True
970 # don't break, we might have an 'all' in here.
967 # don't break, we might have an 'all' in here.
971 continue
968 continue
972 assert isinstance(v, set)
969 assert isinstance(v, set)
973 r = r.union(v)
970 r = r.union(v)
974 if this:
971 if this:
975 return 'this'
972 return 'this'
976 return r
973 return r
977
974
978 @encoding.strmethod
975 @encoding.strmethod
979 def __repr__(self):
976 def __repr__(self):
980 return ('<unionmatcher matchers=%r>' % self._matchers)
977 return ('<unionmatcher matchers=%r>' % self._matchers)
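A short union sketch (component matchers as in the differencematcher example earlier):

um = unionmatcher([m1, m2])
um('tests/t.py')    # True: m2 matches even though m1 does not
um.visitdir('src')  # True as soon as any component wants to visit it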
981
978
982 def patkind(pattern, default=None):
979 def patkind(pattern, default=None):
983 '''If pattern is 'kind:pat' with a known kind, return kind.'''
980 '''If pattern is 'kind:pat' with a known kind, return kind.'''
984 return _patsplit(pattern, default)[0]
981 return _patsplit(pattern, default)[0]
985
982
986 def _patsplit(pattern, default):
983 def _patsplit(pattern, default):
987 """Split a string into the optional pattern kind prefix and the actual
984 """Split a string into the optional pattern kind prefix and the actual
988 pattern."""
985 pattern."""
989 if ':' in pattern:
986 if ':' in pattern:
990 kind, pat = pattern.split(':', 1)
987 kind, pat = pattern.split(':', 1)
991 if kind in allpatternkinds:
988 if kind in allpatternkinds:
992 return kind, pat
989 return kind, pat
993 return default, pattern
990 return default, pattern
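Two illustrative splits; an unknown prefix falls back to the default kind, which keeps Windows-style paths like 'C:/work' intact:

_patsplit('re:a.*', 'glob')    # ('re', 'a.*')
_patsplit('C:/work', 'glob')   # ('glob', 'C:/work'): 'C' is not a known kind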
994
991
995 def _globre(pat):
992 def _globre(pat):
996 r'''Convert an extended glob string to a regexp string.
993 r'''Convert an extended glob string to a regexp string.
997
994
998 >>> from . import pycompat
995 >>> from . import pycompat
999 >>> def bprint(s):
996 >>> def bprint(s):
1000 ... print(pycompat.sysstr(s))
997 ... print(pycompat.sysstr(s))
1001 >>> bprint(_globre(br'?'))
998 >>> bprint(_globre(br'?'))
1002 .
999 .
1003 >>> bprint(_globre(br'*'))
1000 >>> bprint(_globre(br'*'))
1004 [^/]*
1001 [^/]*
1005 >>> bprint(_globre(br'**'))
1002 >>> bprint(_globre(br'**'))
1006 .*
1003 .*
1007 >>> bprint(_globre(br'**/a'))
1004 >>> bprint(_globre(br'**/a'))
1008 (?:.*/)?a
1005 (?:.*/)?a
1009 >>> bprint(_globre(br'a/**/b'))
1006 >>> bprint(_globre(br'a/**/b'))
1010 a/(?:.*/)?b
1007 a/(?:.*/)?b
1011 >>> bprint(_globre(br'[a*?!^][^b][!c]'))
1008 >>> bprint(_globre(br'[a*?!^][^b][!c]'))
1012 [a*?!^][\^b][^c]
1009 [a*?!^][\^b][^c]
1013 >>> bprint(_globre(br'{a,b}'))
1010 >>> bprint(_globre(br'{a,b}'))
1014 (?:a|b)
1011 (?:a|b)
1015 >>> bprint(_globre(br'.\*\?'))
1012 >>> bprint(_globre(br'.\*\?'))
1016 \.\*\?
1013 \.\*\?
1017 '''
1014 '''
1018 i, n = 0, len(pat)
1015 i, n = 0, len(pat)
1019 res = ''
1016 res = ''
1020 group = 0
1017 group = 0
1021 escape = util.stringutil.regexbytesescapemap.get
1018 escape = util.stringutil.regexbytesescapemap.get
1022 def peek():
1019 def peek():
1023 return i < n and pat[i:i + 1]
1020 return i < n and pat[i:i + 1]
1024 while i < n:
1021 while i < n:
1025 c = pat[i:i + 1]
1022 c = pat[i:i + 1]
1026 i += 1
1023 i += 1
1027 if c not in '*?[{},\\':
1024 if c not in '*?[{},\\':
1028 res += escape(c, c)
1025 res += escape(c, c)
1029 elif c == '*':
1026 elif c == '*':
1030 if peek() == '*':
1027 if peek() == '*':
1031 i += 1
1028 i += 1
1032 if peek() == '/':
1029 if peek() == '/':
1033 i += 1
1030 i += 1
1034 res += '(?:.*/)?'
1031 res += '(?:.*/)?'
1035 else:
1032 else:
1036 res += '.*'
1033 res += '.*'
1037 else:
1034 else:
1038 res += '[^/]*'
1035 res += '[^/]*'
1039 elif c == '?':
1036 elif c == '?':
1040 res += '.'
1037 res += '.'
1041 elif c == '[':
1038 elif c == '[':
1042 j = i
1039 j = i
1043 if j < n and pat[j:j + 1] in '!]':
1040 if j < n and pat[j:j + 1] in '!]':
1044 j += 1
1041 j += 1
1045 while j < n and pat[j:j + 1] != ']':
1042 while j < n and pat[j:j + 1] != ']':
1046 j += 1
1043 j += 1
1047 if j >= n:
1044 if j >= n:
1048 res += '\\['
1045 res += '\\['
1049 else:
1046 else:
1050 stuff = pat[i:j].replace('\\','\\\\')
1047 stuff = pat[i:j].replace('\\','\\\\')
1051 i = j + 1
1048 i = j + 1
1052 if stuff[0:1] == '!':
1049 if stuff[0:1] == '!':
1053 stuff = '^' + stuff[1:]
1050 stuff = '^' + stuff[1:]
1054 elif stuff[0:1] == '^':
1051 elif stuff[0:1] == '^':
1055 stuff = '\\' + stuff
1052 stuff = '\\' + stuff
1056 res = '%s[%s]' % (res, stuff)
1053 res = '%s[%s]' % (res, stuff)
1057 elif c == '{':
1054 elif c == '{':
1058 group += 1
1055 group += 1
1059 res += '(?:'
1056 res += '(?:'
1060 elif c == '}' and group:
1057 elif c == '}' and group:
1061 res += ')'
1058 res += ')'
1062 group -= 1
1059 group -= 1
1063 elif c == ',' and group:
1060 elif c == ',' and group:
1064 res += '|'
1061 res += '|'
1065 elif c == '\\':
1062 elif c == '\\':
1066 p = peek()
1063 p = peek()
1067 if p:
1064 if p:
1068 i += 1
1065 i += 1
1069 res += escape(p, p)
1066 res += escape(p, p)
1070 else:
1067 else:
1071 res += escape(c, c)
1068 res += escape(c, c)
1072 else:
1069 else:
1073 res += escape(c, c)
1070 res += escape(c, c)
1074 return res
1071 return res
1075
1072
1076 def _regex(kind, pat, globsuffix):
1073 def _regex(kind, pat, globsuffix):
1077 '''Convert a (normalized) pattern of any kind into a regular expression.
1074 '''Convert a (normalized) pattern of any kind into a regular expression.
1078 globsuffix is appended to the regexp of globs.'''
1075 globsuffix is appended to the regexp of globs.'''
1079 if not pat:
1076 if not pat:
1080 return ''
1077 return ''
1081 if kind == 're':
1078 if kind == 're':
1082 return pat
1079 return pat
1083 if kind in ('path', 'relpath'):
1080 if kind in ('path', 'relpath'):
1084 if pat == '.':
1081 if pat == '.':
1085 return ''
1082 return ''
1086 return util.stringutil.reescape(pat) + '(?:/|$)'
1083 return util.stringutil.reescape(pat) + '(?:/|$)'
1087 if kind == 'rootfilesin':
1084 if kind == 'rootfilesin':
1088 if pat == '.':
1085 if pat == '.':
1089 escaped = ''
1086 escaped = ''
1090 else:
1087 else:
1091 # Pattern is a directory name.
1088 # Pattern is a directory name.
1092 escaped = util.stringutil.reescape(pat) + '/'
1089 escaped = util.stringutil.reescape(pat) + '/'
1093 # Anything after the pattern must be a non-directory.
1090 # Anything after the pattern must be a non-directory.
1094 return escaped + '[^/]+$'
1091 return escaped + '[^/]+$'
1095 if kind == 'relglob':
1092 if kind == 'relglob':
1096 return '(?:|.*/)' + _globre(pat) + globsuffix
1093 return '(?:|.*/)' + _globre(pat) + globsuffix
1097 if kind == 'relre':
1094 if kind == 'relre':
1098 if pat.startswith('^'):
1095 if pat.startswith('^'):
1099 return pat
1096 return pat
1100 return '.*' + pat
1097 return '.*' + pat
1101 if kind in ('glob', 'rootglob'):
1098 if kind in ('glob', 'rootglob'):
1102 return _globre(pat) + globsuffix
1099 return _globre(pat) + globsuffix
1103 raise error.ProgrammingError('not a regex pattern: %s:%s' % (kind, pat))
1100 raise error.ProgrammingError('not a regex pattern: %s:%s' % (kind, pat))
1104
1101
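Editor's note: a few concrete kind-to-regexp translations implied by the branches above, worked out by hand from the code; the globsuffix shown is the conventional b'$':

    _regex(b'path', b'foo/bar', b'$')     # -> foo/bar(?:/|$)     (file or whole subtree)
    _regex(b'rootfilesin', b'foo', b'$')  # -> foo/[^/]+$         (direct children only)
    _regex(b'glob', b'*.c', b'$')         # -> [^/]*\.c$          (via _globre)
    _regex(b'relglob', b'*.c', b'$')      # -> (?:|.*/)[^/]*\.c$  (match at any depth)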
1105 def _buildmatch(kindpats, globsuffix, root):
1102 def _buildmatch(kindpats, globsuffix, root):
1106 '''Return regexp string and a matcher function for kindpats.
1103 '''Return regexp string and a matcher function for kindpats.
1107 globsuffix is appended to the regexp of globs.'''
1104 globsuffix is appended to the regexp of globs.'''
1108 matchfuncs = []
1105 matchfuncs = []
1109
1106
1110 subincludes, kindpats = _expandsubinclude(kindpats, root)
1107 subincludes, kindpats = _expandsubinclude(kindpats, root)
1111 if subincludes:
1108 if subincludes:
1112 submatchers = {}
1109 submatchers = {}
1113 def matchsubinclude(f):
1110 def matchsubinclude(f):
1114 for prefix, matcherargs in subincludes:
1111 for prefix, matcherargs in subincludes:
1115 if f.startswith(prefix):
1112 if f.startswith(prefix):
1116 mf = submatchers.get(prefix)
1113 mf = submatchers.get(prefix)
1117 if mf is None:
1114 if mf is None:
1118 mf = match(*matcherargs)
1115 mf = match(*matcherargs)
1119 submatchers[prefix] = mf
1116 submatchers[prefix] = mf
1120
1117
1121 if mf(f[len(prefix):]):
1118 if mf(f[len(prefix):]):
1122 return True
1119 return True
1123 return False
1120 return False
1124 matchfuncs.append(matchsubinclude)
1121 matchfuncs.append(matchsubinclude)
1125
1122
1126 regex = ''
1123 regex = ''
1127 if kindpats:
1124 if kindpats:
1128 if all(k == 'rootfilesin' for k, p, s in kindpats):
1125 if all(k == 'rootfilesin' for k, p, s in kindpats):
1129 dirs = {p for k, p, s in kindpats}
1126 dirs = {p for k, p, s in kindpats}
1130 def mf(f):
1127 def mf(f):
1131 i = f.rfind('/')
1128 i = f.rfind('/')
1132 if i >= 0:
1129 if i >= 0:
1133 dir = f[:i]
1130 dir = f[:i]
1134 else:
1131 else:
1135 dir = '.'
1132 dir = '.'
1136 return dir in dirs
1133 return dir in dirs
1137 regex = b'rootfilesin: %s' % stringutil.pprint(list(sorted(dirs)))
1134 regex = b'rootfilesin: %s' % stringutil.pprint(list(sorted(dirs)))
1138 matchfuncs.append(mf)
1135 matchfuncs.append(mf)
1139 else:
1136 else:
1140 regex, mf = _buildregexmatch(kindpats, globsuffix)
1137 regex, mf = _buildregexmatch(kindpats, globsuffix)
1141 matchfuncs.append(mf)
1138 matchfuncs.append(mf)
1142
1139
1143 if len(matchfuncs) == 1:
1140 if len(matchfuncs) == 1:
1144 return regex, matchfuncs[0]
1141 return regex, matchfuncs[0]
1145 else:
1142 else:
1146 return regex, lambda f: any(mf(f) for mf in matchfuncs)
1143 return regex, lambda f: any(mf(f) for mf in matchfuncs)
1147
1144
1148 MAX_RE_SIZE = 20000
1145 MAX_RE_SIZE = 20000
1149
1146
1150 def _joinregexes(regexps):
1147 def _joinregexes(regexps):
1151 """gather multiple regular expressions into a single one"""
1148 """gather multiple regular expressions into a single one"""
1152 return '|'.join(regexps)
1149 return '|'.join(regexps)
1153
1150
1154 def _buildregexmatch(kindpats, globsuffix):
1151 def _buildregexmatch(kindpats, globsuffix):
1155 """Build a match function from a list of kinds and kindpats,
1152 """Build a match function from a list of kinds and kindpats,
1156 return regexp string and a matcher function.
1153 return regexp string and a matcher function.
1157
1154
1158 Test too large input
1155 Test too large input
1159 >>> _buildregexmatch([
1156 >>> _buildregexmatch([
1160 ... (b'relglob', b'?' * MAX_RE_SIZE, b'')
1157 ... (b'relglob', b'?' * MAX_RE_SIZE, b'')
1161 ... ], b'$')
1158 ... ], b'$')
1162 Traceback (most recent call last):
1159 Traceback (most recent call last):
1163 ...
1160 ...
1164 Abort: matcher pattern is too long (20009 bytes)
1161 Abort: matcher pattern is too long (20009 bytes)
1165 """
1162 """
1166 try:
1163 try:
1167 allgroups = []
1164 allgroups = []
1168 regexps = [_regex(k, p, globsuffix) for (k, p, s) in kindpats]
1165 regexps = [_regex(k, p, globsuffix) for (k, p, s) in kindpats]
1169 fullregexp = _joinregexes(regexps)
1166 fullregexp = _joinregexes(regexps)
1170
1167
1171 startidx = 0
1168 startidx = 0
1172 groupsize = 0
1169 groupsize = 0
1173 for idx, r in enumerate(regexps):
1170 for idx, r in enumerate(regexps):
1174 piecesize = len(r)
1171 piecesize = len(r)
1175 if piecesize > MAX_RE_SIZE:
1172 if piecesize > MAX_RE_SIZE:
1176 msg = _("matcher pattern is too long (%d bytes)") % piecesize
1173 msg = _("matcher pattern is too long (%d bytes)") % piecesize
1177 raise error.Abort(msg)
1174 raise error.Abort(msg)
1178 elif (groupsize + piecesize) > MAX_RE_SIZE:
1175 elif (groupsize + piecesize) > MAX_RE_SIZE:
1179 group = regexps[startidx:idx]
1176 group = regexps[startidx:idx]
1180 allgroups.append(_joinregexes(group))
1177 allgroups.append(_joinregexes(group))
1181 startidx = idx
1178 startidx = idx
1182 groupsize = 0
1179 groupsize = 0
1183 groupsize += piecesize + 1
1180 groupsize += piecesize + 1
1184
1181
1185 if startidx == 0:
1182 if startidx == 0:
1186 func = _rematcher(fullregexp)
1183 func = _rematcher(fullregexp)
1187 else:
1184 else:
1188 group = regexps[startidx:]
1185 group = regexps[startidx:]
1189 allgroups.append(_joinregexes(group))
1186 allgroups.append(_joinregexes(group))
1190 allmatchers = [_rematcher(g) for g in allgroups]
1187 allmatchers = [_rematcher(g) for g in allgroups]
1191 func = lambda s: any(m(s) for m in allmatchers)
1188 func = lambda s: any(m(s) for m in allmatchers)
1192 return fullregexp, func
1189 return fullregexp, func
1193 except re.error:
1190 except re.error:
1194 for k, p, s in kindpats:
1191 for k, p, s in kindpats:
1195 try:
1192 try:
1196 _rematcher(_regex(k, p, globsuffix))
1193 _rematcher(_regex(k, p, globsuffix))
1197 except re.error:
1194 except re.error:
1198 if s:
1195 if s:
1199 raise error.Abort(_("%s: invalid pattern (%s): %s") %
1196 raise error.Abort(_("%s: invalid pattern (%s): %s") %
1200 (s, k, p))
1197 (s, k, p))
1201 else:
1198 else:
1202 raise error.Abort(_("invalid pattern (%s): %s") % (k, p))
1199 raise error.Abort(_("invalid pattern (%s): %s") % (k, p))
1203 raise error.Abort(_("invalid pattern"))
1200 raise error.Abort(_("invalid pattern"))
1204
1201
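Editor's note: the grouping dance above exists because some regexp engines reject very large patterns. Below is a standalone sketch of the same packing arithmetic (pure Python, no Mercurial imports). For simplicity it always returns the group list, whereas _buildregexmatch compiles one regexp when no flush happened:

    MAX_RE_SIZE = 20000

    def group_regexps(regexps):
        groups = []
        startidx = 0
        groupsize = 0
        for idx, r in enumerate(regexps):
            piecesize = len(r)
            if piecesize > MAX_RE_SIZE:
                # a single oversized pattern cannot be split, so it is an error
                raise ValueError('matcher pattern is too long (%d bytes)' % piecesize)
            elif groupsize + piecesize > MAX_RE_SIZE:
                groups.append('|'.join(regexps[startidx:idx]))
                startidx = idx
                groupsize = 0
            groupsize += piecesize + 1   # +1 accounts for the '|' joiner
        groups.append('|'.join(regexps[startidx:]))
        return groups

    # three 9000-byte patterns are packed into two compiled groups
    # rather than one giant regexp
    assert [len(g) for g in group_regexps(['a' * 9000] * 3)] == [18001, 9000]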
1205 def _patternrootsanddirs(kindpats):
1202 def _patternrootsanddirs(kindpats):
1206 '''Returns roots and directories corresponding to each pattern.
1203 '''Returns roots and directories corresponding to each pattern.
1207
1204
1208 This calculates the roots and directories exactly matching the patterns and
1205 This calculates the roots and directories exactly matching the patterns and
1209 returns a tuple of (roots, dirs) for each. It does not return other
1206 returns a tuple of (roots, dirs) for each. It does not return other
1210 directories which may also need to be considered, like the parent
1207 directories which may also need to be considered, like the parent
1211 directories.
1208 directories.
1212 '''
1209 '''
1213 r = []
1210 r = []
1214 d = []
1211 d = []
1215 for kind, pat, source in kindpats:
1212 for kind, pat, source in kindpats:
1216 if kind in ('glob', 'rootglob'): # find the non-glob prefix
1213 if kind in ('glob', 'rootglob'): # find the non-glob prefix
1217 root = []
1214 root = []
1218 for p in pat.split('/'):
1215 for p in pat.split('/'):
1219 if '[' in p or '{' in p or '*' in p or '?' in p:
1216 if '[' in p or '{' in p or '*' in p or '?' in p:
1220 break
1217 break
1221 root.append(p)
1218 root.append(p)
1222 r.append('/'.join(root) or '.')
1219 r.append('/'.join(root) or '.')
1223 elif kind in ('relpath', 'path'):
1220 elif kind in ('relpath', 'path'):
1224 r.append(pat or '.')
1221 r.append(pat or '.')
1225 elif kind in ('rootfilesin',):
1222 elif kind in ('rootfilesin',):
1226 d.append(pat or '.')
1223 d.append(pat or '.')
1227 else: # relglob, re, relre
1224 else: # relglob, re, relre
1228 r.append('.')
1225 r.append('.')
1229 return r, d
1226 return r, d
1230
1227
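Editor's note: _patternrootsanddirs has no doctest of its own, so here is one worked example in the style of the other docstrings in this file. Inputs are (kind, pat, source) triples as elsewhere; expected output is shown as a comment:

    _patternrootsanddirs([
        (b'glob', b'g/h/*', b''),       # non-glob prefix is g/h -> roots
        (b'rootfilesin', b'd', b''),    # exact directory -> dirs
        (b'relre', b'x', b'')])         # no usable root: falls back to '.'
    # -> (['g/h', '.'], ['d'])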
1231 def _roots(kindpats):
1228 def _roots(kindpats):
1232 '''Returns root directories to match recursively from the given patterns.'''
1229 '''Returns root directories to match recursively from the given patterns.'''
1233 roots, dirs = _patternrootsanddirs(kindpats)
1230 roots, dirs = _patternrootsanddirs(kindpats)
1234 return roots
1231 return roots
1235
1232
1236 def _rootsdirsandparents(kindpats):
1233 def _rootsdirsandparents(kindpats):
1237 '''Returns roots and exact directories from patterns.
1234 '''Returns roots and exact directories from patterns.
1238
1235
1239 `roots` are directories to match recursively, `dirs` should
1236 `roots` are directories to match recursively, `dirs` should
1240 be matched non-recursively, and `parents` are the implicitly required
1237 be matched non-recursively, and `parents` are the implicitly required
1241 directories to walk to items in either roots or dirs.
1238 directories to walk to items in either roots or dirs.
1242
1239
1243 Returns a tuple of (roots, dirs, parents).
1240 Returns a tuple of (roots, dirs, parents).
1244
1241
1245 >>> _rootsdirsandparents(
1242 >>> _rootsdirsandparents(
1246 ... [(b'glob', b'g/h/*', b''), (b'glob', b'g/h', b''),
1243 ... [(b'glob', b'g/h/*', b''), (b'glob', b'g/h', b''),
1247 ... (b'glob', b'g*', b'')])
1244 ... (b'glob', b'g*', b'')])
1248 (['g/h', 'g/h', '.'], [], ['g', '.'])
1245 (['g/h', 'g/h', '.'], [], ['g', '.'])
1249 >>> _rootsdirsandparents(
1246 >>> _rootsdirsandparents(
1250 ... [(b'rootfilesin', b'g/h', b''), (b'rootfilesin', b'', b'')])
1247 ... [(b'rootfilesin', b'g/h', b''), (b'rootfilesin', b'', b'')])
1251 ([], ['g/h', '.'], ['g', '.'])
1248 ([], ['g/h', '.'], ['g', '.'])
1252 >>> _rootsdirsandparents(
1249 >>> _rootsdirsandparents(
1253 ... [(b'relpath', b'r', b''), (b'path', b'p/p', b''),
1250 ... [(b'relpath', b'r', b''), (b'path', b'p/p', b''),
1254 ... (b'path', b'', b'')])
1251 ... (b'path', b'', b'')])
1255 (['r', 'p/p', '.'], [], ['p', '.'])
1252 (['r', 'p/p', '.'], [], ['p', '.'])
1256 >>> _rootsdirsandparents(
1253 >>> _rootsdirsandparents(
1257 ... [(b'relglob', b'rg*', b''), (b're', b're/', b''),
1254 ... [(b'relglob', b'rg*', b''), (b're', b're/', b''),
1258 ... (b'relre', b'rr', b'')])
1255 ... (b'relre', b'rr', b'')])
1259 (['.', '.', '.'], [], ['.'])
1256 (['.', '.', '.'], [], ['.'])
1260 '''
1257 '''
1261 r, d = _patternrootsanddirs(kindpats)
1258 r, d = _patternrootsanddirs(kindpats)
1262
1259
1263 p = []
1260 p = []
1264 # Append the parents as non-recursive/exact directories, since they must be
1261 # Append the parents as non-recursive/exact directories, since they must be
1265 # scanned to get to either the roots or the other exact directories.
1262 # scanned to get to either the roots or the other exact directories.
1266 p.extend(util.dirs(d))
1263 p.extend(util.dirs(d))
1267 p.extend(util.dirs(r))
1264 p.extend(util.dirs(r))
1268 # util.dirs() does not include the root directory, so add it manually
1265 # util.dirs() does not include the root directory, so add it manually
1269 p.append('.')
1266 p.append('.')
1270
1267
1271 # FIXME: all uses of this function convert these to sets, do so before
1268 # FIXME: all uses of this function convert these to sets, do so before
1272 # returning.
1269 # returning.
1273 # FIXME: no use of this function needs anything in 'roots' and 'dirs'
1270 # FIXME: no use of this function needs anything in 'roots' and 'dirs'
1274 # to also be in 'parents'; consider removing them before returning.
1271 # to also be in 'parents'; consider removing them before returning.
1275 return r, d, p
1272 return r, d, p
1276
1273
1277 def _explicitfiles(kindpats):
1274 def _explicitfiles(kindpats):
1278 '''Returns the potential explicit filenames from the patterns.
1275 '''Returns the potential explicit filenames from the patterns.
1279
1276
1280 >>> _explicitfiles([(b'path', b'foo/bar', b'')])
1277 >>> _explicitfiles([(b'path', b'foo/bar', b'')])
1281 ['foo/bar']
1278 ['foo/bar']
1282 >>> _explicitfiles([(b'rootfilesin', b'foo/bar', b'')])
1279 >>> _explicitfiles([(b'rootfilesin', b'foo/bar', b'')])
1283 []
1280 []
1284 '''
1281 '''
1285 # Keep only the pattern kinds where one can specify filenames (vs only
1282 # Keep only the pattern kinds where one can specify filenames (vs only
1286 # directory names).
1283 # directory names).
1287 filable = [kp for kp in kindpats if kp[0] not in ('rootfilesin',)]
1284 filable = [kp for kp in kindpats if kp[0] not in ('rootfilesin',)]
1288 return _roots(filable)
1285 return _roots(filable)
1289
1286
1290 def _prefix(kindpats):
1287 def _prefix(kindpats):
1291 '''Whether all the patterns match a prefix (i.e. recursively)'''
1288 '''Whether all the patterns match a prefix (i.e. recursively)'''
1292 for kind, pat, source in kindpats:
1289 for kind, pat, source in kindpats:
1293 if kind not in ('path', 'relpath'):
1290 if kind not in ('path', 'relpath'):
1294 return False
1291 return False
1295 return True
1292 return True
1296
1293
1297 _commentre = None
1294 _commentre = None
1298
1295
1299 def readpatternfile(filepath, warn, sourceinfo=False):
1296 def readpatternfile(filepath, warn, sourceinfo=False):
1300 '''parse a pattern file, returning a list of
1297 '''parse a pattern file, returning a list of
1301 patterns. These patterns should be given to compile()
1298 patterns. These patterns should be given to compile()
1302 to be validated and converted into a match function.
1299 to be validated and converted into a match function.
1303
1300
1304 trailing white space is dropped.
1301 trailing white space is dropped.
1305 the escape character is backslash.
1302 the escape character is backslash.
1306 comments start with #.
1303 comments start with #.
1307 empty lines are skipped.
1304 empty lines are skipped.
1308
1305
1309 lines can be of the following formats:
1306 lines can be of the following formats:
1310
1307
1311 syntax: regexp # defaults following lines to non-rooted regexps
1308 syntax: regexp # defaults following lines to non-rooted regexps
1312 syntax: glob # defaults following lines to non-rooted globs
1309 syntax: glob # defaults following lines to non-rooted globs
1313 re:pattern # non-rooted regular expression
1310 re:pattern # non-rooted regular expression
1314 glob:pattern # non-rooted glob
1311 glob:pattern # non-rooted glob
1315 rootglob:pat # rooted glob (same root as ^ in regexps)
1312 rootglob:pat # rooted glob (same root as ^ in regexps)
1316 pattern # pattern of the current default type
1313 pattern # pattern of the current default type
1317
1314
1318 if sourceinfo is set, returns a list of tuples:
1315 if sourceinfo is set, returns a list of tuples:
1319 (pattern, lineno, originalline). This is useful to debug ignore patterns.
1316 (pattern, lineno, originalline). This is useful to debug ignore patterns.
1320 '''
1317 '''
1321
1318
1322 syntaxes = {
1319 syntaxes = {
1323 're': 'relre:',
1320 're': 'relre:',
1324 'regexp': 'relre:',
1321 'regexp': 'relre:',
1325 'glob': 'relglob:',
1322 'glob': 'relglob:',
1326 'rootglob': 'rootglob:',
1323 'rootglob': 'rootglob:',
1327 'include': 'include',
1324 'include': 'include',
1328 'subinclude': 'subinclude',
1325 'subinclude': 'subinclude',
1329 }
1326 }
1330 syntax = 'relre:'
1327 syntax = 'relre:'
1331 patterns = []
1328 patterns = []
1332
1329
1333 fp = open(filepath, 'rb')
1330 fp = open(filepath, 'rb')
1334 for lineno, line in enumerate(util.iterfile(fp), start=1):
1331 for lineno, line in enumerate(util.iterfile(fp), start=1):
1335 if "#" in line:
1332 if "#" in line:
1336 global _commentre
1333 global _commentre
1337 if not _commentre:
1334 if not _commentre:
1338 _commentre = util.re.compile(br'((?:^|[^\\])(?:\\\\)*)#.*')
1335 _commentre = util.re.compile(br'((?:^|[^\\])(?:\\\\)*)#.*')
1339 # remove comments prefixed by an even number of escapes
1336 # remove comments prefixed by an even number of escapes
1340 m = _commentre.search(line)
1337 m = _commentre.search(line)
1341 if m:
1338 if m:
1342 line = line[:m.end(1)]
1339 line = line[:m.end(1)]
1343 # fixup properly escaped comments that survived the above
1340 # fixup properly escaped comments that survived the above
1344 line = line.replace("\\#", "#")
1341 line = line.replace("\\#", "#")
1345 line = line.rstrip()
1342 line = line.rstrip()
1346 if not line:
1343 if not line:
1347 continue
1344 continue
1348
1345
1349 if line.startswith('syntax:'):
1346 if line.startswith('syntax:'):
1350 s = line[7:].strip()
1347 s = line[7:].strip()
1351 try:
1348 try:
1352 syntax = syntaxes[s]
1349 syntax = syntaxes[s]
1353 except KeyError:
1350 except KeyError:
1354 if warn:
1351 if warn:
1355 warn(_("%s: ignoring invalid syntax '%s'\n") %
1352 warn(_("%s: ignoring invalid syntax '%s'\n") %
1356 (filepath, s))
1353 (filepath, s))
1357 continue
1354 continue
1358
1355
1359 linesyntax = syntax
1356 linesyntax = syntax
1360 for s, rels in syntaxes.iteritems():
1357 for s, rels in syntaxes.iteritems():
1361 if line.startswith(rels):
1358 if line.startswith(rels):
1362 linesyntax = rels
1359 linesyntax = rels
1363 line = line[len(rels):]
1360 line = line[len(rels):]
1364 break
1361 break
1365 elif line.startswith(s+':'):
1362 elif line.startswith(s+':'):
1366 linesyntax = rels
1363 linesyntax = rels
1367 line = line[len(s) + 1:]
1364 line = line[len(s) + 1:]
1368 break
1365 break
1369 if sourceinfo:
1366 if sourceinfo:
1370 patterns.append((linesyntax + line, lineno, line))
1367 patterns.append((linesyntax + line, lineno, line))
1371 else:
1368 else:
1372 patterns.append(linesyntax + line)
1369 patterns.append(linesyntax + line)
1373 fp.close()
1370 fp.close()
1374 return patterns
1371 return patterns
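Editor's note: a hypothetical round trip through readpatternfile, assuming the mercurial package is importable (for example, run from a source checkout). The temporary file stands in for a real .hgignore; warn=None is safe here because the warn callback is only invoked for invalid syntax lines, of which there are none:

    import tempfile
    from mercurial import match as matchmod

    fp = tempfile.NamedTemporaryFile(delete=False)
    fp.write(b'syntax: glob\n'      # switch the default to non-rooted globs
             b'*.pyc\n'             # -> relglob:*.pyc
             b're:^build/\n')       # an explicit kind overrides the default
    fp.close()
    print(matchmod.readpatternfile(fp.name, warn=None))
    # expected: [b'relglob:*.pyc', b'relre:^build/']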
@@ -1,700 +1,700 b''
1 # sparse.py - functionality for sparse checkouts
1 # sparse.py - functionality for sparse checkouts
2 #
2 #
3 # Copyright 2014 Facebook, Inc.
3 # Copyright 2014 Facebook, Inc.
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import hashlib
10 import hashlib
11 import os
11 import os
12
12
13 from .i18n import _
13 from .i18n import _
14 from .node import (
14 from .node import (
15 hex,
15 hex,
16 nullid,
16 nullid,
17 )
17 )
18 from . import (
18 from . import (
19 error,
19 error,
20 match as matchmod,
20 match as matchmod,
21 merge as mergemod,
21 merge as mergemod,
22 pathutil,
22 pathutil,
23 pycompat,
23 pycompat,
24 scmutil,
24 scmutil,
25 util,
25 util,
26 )
26 )
27
27
28 # Whether sparse features are enabled. This variable is intended to be
28 # Whether sparse features are enabled. This variable is intended to be
29 # temporary to facilitate porting sparse to core. It should eventually be
29 # temporary to facilitate porting sparse to core. It should eventually be
30 # a per-repo option, possibly a repo requirement.
30 # a per-repo option, possibly a repo requirement.
31 enabled = False
31 enabled = False
32
32
33 def parseconfig(ui, raw, action):
33 def parseconfig(ui, raw, action):
34 """Parse sparse config file content.
34 """Parse sparse config file content.
35
35
36 action is the command which is triggering this read; it can be 'narrow' or 'sparse'
36 action is the command which is triggering this read; it can be 'narrow' or 'sparse'
37
37
38 Returns a tuple of includes, excludes, and profiles.
38 Returns a tuple of includes, excludes, and profiles.
39 """
39 """
40 includes = set()
40 includes = set()
41 excludes = set()
41 excludes = set()
42 profiles = set()
42 profiles = set()
43 current = None
43 current = None
44 havesection = False
44 havesection = False
45
45
46 for line in raw.split('\n'):
46 for line in raw.split('\n'):
47 line = line.strip()
47 line = line.strip()
48 if not line or line.startswith('#'):
48 if not line or line.startswith('#'):
49 # empty or comment line, skip
49 # empty or comment line, skip
50 continue
50 continue
51 elif line.startswith('%include '):
51 elif line.startswith('%include '):
52 line = line[9:].strip()
52 line = line[9:].strip()
53 if line:
53 if line:
54 profiles.add(line)
54 profiles.add(line)
55 elif line == '[include]':
55 elif line == '[include]':
56 if havesection and current != includes:
56 if havesection and current != includes:
57 # TODO pass filename into this API so we can report it.
57 # TODO pass filename into this API so we can report it.
58 raise error.Abort(_('%(action)s config cannot have includes '
58 raise error.Abort(_('%(action)s config cannot have includes '
59 'after excludes') % {'action': action})
59 'after excludes') % {'action': action})
60 havesection = True
60 havesection = True
61 current = includes
61 current = includes
62 continue
62 continue
63 elif line == '[exclude]':
63 elif line == '[exclude]':
64 havesection = True
64 havesection = True
65 current = excludes
65 current = excludes
66 elif line:
66 elif line:
67 if current is None:
67 if current is None:
68 raise error.Abort(_('%(action)s config entry outside of '
68 raise error.Abort(_('%(action)s config entry outside of '
69 'section: %(line)s')
69 'section: %(line)s')
70 % {'action': action, 'line': line},
70 % {'action': action, 'line': line},
71 hint=_('add an [include] or [exclude] line '
71 hint=_('add an [include] or [exclude] line '
72 'to declare the entry type'))
72 'to declare the entry type'))
73
73
74 if line.strip().startswith('/'):
74 if line.strip().startswith('/'):
75 ui.warn(_('warning: %(action)s profile cannot use'
75 ui.warn(_('warning: %(action)s profile cannot use'
76 ' paths starting with /, ignoring %(line)s\n')
76 ' paths starting with /, ignoring %(line)s\n')
77 % {'action': action, 'line': line})
77 % {'action': action, 'line': line})
78 continue
78 continue
79 current.add(line)
79 current.add(line)
80
80
81 return includes, excludes, profiles
81 return includes, excludes, profiles
82
82
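Editor's note: a small illustration of parseconfig, assuming mercurial is importable. The ui argument is only consulted to emit warnings (for entries starting with '/'), so None suffices for a well-formed config like this one:

    from mercurial import sparse

    raw = (b'%include base.sparse\n'
           b'[include]\n'
           b'src/\n'
           b'[exclude]\n'
           b'src/vendor/\n')
    includes, excludes, profiles = sparse.parseconfig(None, raw, b'sparse')
    # includes == {b'src/'}, excludes == {b'src/vendor/'},
    # profiles == {b'base.sparse'}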
83 # Exists as separate function to facilitate monkeypatching.
83 # Exists as separate function to facilitate monkeypatching.
84 def readprofile(repo, profile, changeid):
84 def readprofile(repo, profile, changeid):
85 """Resolve the raw content of a sparse profile file."""
85 """Resolve the raw content of a sparse profile file."""
86 # TODO add some kind of cache here because this incurs a manifest
86 # TODO add some kind of cache here because this incurs a manifest
87 # resolve and can be slow.
87 # resolve and can be slow.
88 return repo.filectx(profile, changeid=changeid).data()
88 return repo.filectx(profile, changeid=changeid).data()
89
89
90 def patternsforrev(repo, rev):
90 def patternsforrev(repo, rev):
91 """Obtain sparse checkout patterns for the given rev.
91 """Obtain sparse checkout patterns for the given rev.
92
92
93 Returns a tuple of iterables representing includes, excludes, and
93 Returns a tuple of iterables representing includes, excludes, and
94 profiles.
94 profiles.
95 """
95 """
96 # Feature isn't enabled. No-op.
96 # Feature isn't enabled. No-op.
97 if not enabled:
97 if not enabled:
98 return set(), set(), set()
98 return set(), set(), set()
99
99
100 raw = repo.vfs.tryread('sparse')
100 raw = repo.vfs.tryread('sparse')
101 if not raw:
101 if not raw:
102 return set(), set(), set()
102 return set(), set(), set()
103
103
104 if rev is None:
104 if rev is None:
105 raise error.Abort(_('cannot parse sparse patterns from working '
105 raise error.Abort(_('cannot parse sparse patterns from working '
106 'directory'))
106 'directory'))
107
107
108 includes, excludes, profiles = parseconfig(repo.ui, raw, 'sparse')
108 includes, excludes, profiles = parseconfig(repo.ui, raw, 'sparse')
109 ctx = repo[rev]
109 ctx = repo[rev]
110
110
111 if profiles:
111 if profiles:
112 visited = set()
112 visited = set()
113 while profiles:
113 while profiles:
114 profile = profiles.pop()
114 profile = profiles.pop()
115 if profile in visited:
115 if profile in visited:
116 continue
116 continue
117
117
118 visited.add(profile)
118 visited.add(profile)
119
119
120 try:
120 try:
121 raw = readprofile(repo, profile, rev)
121 raw = readprofile(repo, profile, rev)
122 except error.ManifestLookupError:
122 except error.ManifestLookupError:
123 msg = (
123 msg = (
124 "warning: sparse profile '%s' not found "
124 "warning: sparse profile '%s' not found "
125 "in rev %s - ignoring it\n" % (profile, ctx))
125 "in rev %s - ignoring it\n" % (profile, ctx))
126 # experimental config: sparse.missingwarning
126 # experimental config: sparse.missingwarning
127 if repo.ui.configbool(
127 if repo.ui.configbool(
128 'sparse', 'missingwarning'):
128 'sparse', 'missingwarning'):
129 repo.ui.warn(msg)
129 repo.ui.warn(msg)
130 else:
130 else:
131 repo.ui.debug(msg)
131 repo.ui.debug(msg)
132 continue
132 continue
133
133
134 pincludes, pexcludes, subprofs = parseconfig(repo.ui, raw, 'sparse')
134 pincludes, pexcludes, subprofs = parseconfig(repo.ui, raw, 'sparse')
135 includes.update(pincludes)
135 includes.update(pincludes)
136 excludes.update(pexcludes)
136 excludes.update(pexcludes)
137 profiles.update(subprofs)
137 profiles.update(subprofs)
138
138
139 profiles = visited
139 profiles = visited
140
140
141 if includes:
141 if includes:
142 includes.add('.hg*')
142 includes.add('.hg*')
143
143
144 return includes, excludes, profiles
144 return includes, excludes, profiles
145
145
146 def activeconfig(repo):
146 def activeconfig(repo):
147 """Determine the active sparse config rules.
147 """Determine the active sparse config rules.
148
148
149 Rules are constructed by reading the current sparse config and bringing in
149 Rules are constructed by reading the current sparse config and bringing in
150 referenced profiles from parents of the working directory.
150 referenced profiles from parents of the working directory.
151 """
151 """
152 revs = [repo.changelog.rev(node) for node in
152 revs = [repo.changelog.rev(node) for node in
153 repo.dirstate.parents() if node != nullid]
153 repo.dirstate.parents() if node != nullid]
154
154
155 allincludes = set()
155 allincludes = set()
156 allexcludes = set()
156 allexcludes = set()
157 allprofiles = set()
157 allprofiles = set()
158
158
159 for rev in revs:
159 for rev in revs:
160 includes, excludes, profiles = patternsforrev(repo, rev)
160 includes, excludes, profiles = patternsforrev(repo, rev)
161 allincludes |= includes
161 allincludes |= includes
162 allexcludes |= excludes
162 allexcludes |= excludes
163 allprofiles |= profiles
163 allprofiles |= profiles
164
164
165 return allincludes, allexcludes, allprofiles
165 return allincludes, allexcludes, allprofiles
166
166
167 def configsignature(repo, includetemp=True):
167 def configsignature(repo, includetemp=True):
168 """Obtain the signature string for the current sparse configuration.
168 """Obtain the signature string for the current sparse configuration.
169
169
170 This is used to construct a cache key for matchers.
170 This is used to construct a cache key for matchers.
171 """
171 """
172 cache = repo._sparsesignaturecache
172 cache = repo._sparsesignaturecache
173
173
174 signature = cache.get('signature')
174 signature = cache.get('signature')
175
175
176 if includetemp:
176 if includetemp:
177 tempsignature = cache.get('tempsignature')
177 tempsignature = cache.get('tempsignature')
178 else:
178 else:
179 tempsignature = '0'
179 tempsignature = '0'
180
180
181 if signature is None or (includetemp and tempsignature is None):
181 if signature is None or (includetemp and tempsignature is None):
182 signature = hex(hashlib.sha1(repo.vfs.tryread('sparse')).digest())
182 signature = hex(hashlib.sha1(repo.vfs.tryread('sparse')).digest())
183 cache['signature'] = signature
183 cache['signature'] = signature
184
184
185 if includetemp:
185 if includetemp:
186 raw = repo.vfs.tryread('tempsparse')
186 raw = repo.vfs.tryread('tempsparse')
187 tempsignature = hex(hashlib.sha1(raw).digest())
187 tempsignature = hex(hashlib.sha1(raw).digest())
188 cache['tempsignature'] = tempsignature
188 cache['tempsignature'] = tempsignature
189
189
190 return '%s %s' % (signature, tempsignature)
190 return '%s %s' % (signature, tempsignature)
191
191
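Editor's note: one concrete data point for the signature format; this is a sketch assuming an existing repo object, not a self-contained unit. tryread of a missing file yields b'', and includetemp=False pins the second field to '0':

    from mercurial import sparse

    sparse.configsignature(repo, includetemp=False)
    # -> b'da39a3ee5e6b4b0d3255bfef95601890afd80709 0' for a repo with no
    #    .hg/sparse file: the sha1 of the empty string, then the '0' placeholder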
192 def writeconfig(repo, includes, excludes, profiles):
192 def writeconfig(repo, includes, excludes, profiles):
193 """Write the sparse config file given a sparse configuration."""
193 """Write the sparse config file given a sparse configuration."""
194 with repo.vfs('sparse', 'wb') as fh:
194 with repo.vfs('sparse', 'wb') as fh:
195 for p in sorted(profiles):
195 for p in sorted(profiles):
196 fh.write('%%include %s\n' % p)
196 fh.write('%%include %s\n' % p)
197
197
198 if includes:
198 if includes:
199 fh.write('[include]\n')
199 fh.write('[include]\n')
200 for i in sorted(includes):
200 for i in sorted(includes):
201 fh.write(i)
201 fh.write(i)
202 fh.write('\n')
202 fh.write('\n')
203
203
204 if excludes:
204 if excludes:
205 fh.write('[exclude]\n')
205 fh.write('[exclude]\n')
206 for e in sorted(excludes):
206 for e in sorted(excludes):
207 fh.write(e)
207 fh.write(e)
208 fh.write('\n')
208 fh.write('\n')
209
209
210 repo._sparsesignaturecache.clear()
210 repo._sparsesignaturecache.clear()
211
211
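Editor's note: for reference, the on-disk layout writeconfig produces for the example sets used in the parseconfig illustration above; profiles come first, then the two sections, so the file reads back to the same sets:

    # writeconfig(repo, {b'src/'}, {b'src/vendor/'}, {b'base.sparse'})
    # leaves .hg/sparse containing:
    #
    #   %include base.sparse
    #   [include]
    #   src/
    #   [exclude]
    #   src/vendor/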
212 def readtemporaryincludes(repo):
212 def readtemporaryincludes(repo):
213 raw = repo.vfs.tryread('tempsparse')
213 raw = repo.vfs.tryread('tempsparse')
214 if not raw:
214 if not raw:
215 return set()
215 return set()
216
216
217 return set(raw.split('\n'))
217 return set(raw.split('\n'))
218
218
219 def writetemporaryincludes(repo, includes):
219 def writetemporaryincludes(repo, includes):
220 repo.vfs.write('tempsparse', '\n'.join(sorted(includes)))
220 repo.vfs.write('tempsparse', '\n'.join(sorted(includes)))
221 repo._sparsesignaturecache.clear()
221 repo._sparsesignaturecache.clear()
222
222
223 def addtemporaryincludes(repo, additional):
223 def addtemporaryincludes(repo, additional):
224 includes = readtemporaryincludes(repo)
224 includes = readtemporaryincludes(repo)
225 for i in additional:
225 for i in additional:
226 includes.add(i)
226 includes.add(i)
227 writetemporaryincludes(repo, includes)
227 writetemporaryincludes(repo, includes)
228
228
229 def prunetemporaryincludes(repo):
229 def prunetemporaryincludes(repo):
230 if not enabled or not repo.vfs.exists('tempsparse'):
230 if not enabled or not repo.vfs.exists('tempsparse'):
231 return
231 return
232
232
233 s = repo.status()
233 s = repo.status()
234 if s.modified or s.added or s.removed or s.deleted:
234 if s.modified or s.added or s.removed or s.deleted:
235 # Still have pending changes. Don't bother trying to prune.
235 # Still have pending changes. Don't bother trying to prune.
236 return
236 return
237
237
238 sparsematch = matcher(repo, includetemp=False)
238 sparsematch = matcher(repo, includetemp=False)
239 dirstate = repo.dirstate
239 dirstate = repo.dirstate
240 actions = []
240 actions = []
241 dropped = []
241 dropped = []
242 tempincludes = readtemporaryincludes(repo)
242 tempincludes = readtemporaryincludes(repo)
243 for file in tempincludes:
243 for file in tempincludes:
244 if file in dirstate and not sparsematch(file):
244 if file in dirstate and not sparsematch(file):
245 message = _('dropping temporarily included sparse files')
245 message = _('dropping temporarily included sparse files')
246 actions.append((file, None, message))
246 actions.append((file, None, message))
247 dropped.append(file)
247 dropped.append(file)
248
248
249 typeactions = mergemod.emptyactions()
249 typeactions = mergemod.emptyactions()
250 typeactions['r'] = actions
250 typeactions['r'] = actions
251 mergemod.applyupdates(repo, typeactions, repo[None], repo['.'], False)
251 mergemod.applyupdates(repo, typeactions, repo[None], repo['.'], False)
252
252
253 # Fix dirstate
253 # Fix dirstate
254 for file in dropped:
254 for file in dropped:
255 dirstate.drop(file)
255 dirstate.drop(file)
256
256
257 repo.vfs.unlink('tempsparse')
257 repo.vfs.unlink('tempsparse')
258 repo._sparsesignaturecache.clear()
258 repo._sparsesignaturecache.clear()
259 msg = _('cleaned up %d temporarily added file(s) from the '
259 msg = _('cleaned up %d temporarily added file(s) from the '
260 'sparse checkout\n')
260 'sparse checkout\n')
261 repo.ui.status(msg % len(tempincludes))
261 repo.ui.status(msg % len(tempincludes))
262
262
263 def forceincludematcher(matcher, includes):
263 def forceincludematcher(matcher, includes):
264 """Returns a matcher that returns true for any of the forced includes
264 """Returns a matcher that returns true for any of the forced includes
265 before testing against the actual matcher."""
265 before testing against the actual matcher."""
266 kindpats = [('path', include, '') for include in includes]
266 kindpats = [('path', include, '') for include in includes]
267 includematcher = matchmod.includematcher('', '', kindpats)
267 includematcher = matchmod.includematcher('', kindpats)
268 return matchmod.unionmatcher([includematcher, matcher])
268 return matchmod.unionmatcher([includematcher, matcher])
269
269
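Editor's note: a sketch of forceincludematcher in action, assuming mercurial is importable; the repository root and file names are made up. The match() factory signature used here is the one visible in matcher() below:

    from mercurial import match as matchmod, sparse

    base = matchmod.match(b'/repo', b'', [], include=[b'path:src'])
    m = sparse.forceincludematcher(base, {b'.hgignore'})
    m(b'.hgignore')   # True: the forced include wins even outside 'src'
    m(b'src/a.c')     # True: everything else falls through to the base matcher
    m(b'docs/x')      # False: matched by neither part of the union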
270 def matcher(repo, revs=None, includetemp=True):
270 def matcher(repo, revs=None, includetemp=True):
271 """Obtain a matcher for sparse working directories for the given revs.
271 """Obtain a matcher for sparse working directories for the given revs.
272
272
273 If multiple revisions are specified, the matcher is the union of all
273 If multiple revisions are specified, the matcher is the union of all
274 revs.
274 revs.
275
275
276 ``includetemp`` indicates whether to use the temporary sparse profile.
276 ``includetemp`` indicates whether to use the temporary sparse profile.
277 """
277 """
278 # If sparse isn't enabled, sparse matcher matches everything.
278 # If sparse isn't enabled, sparse matcher matches everything.
279 if not enabled:
279 if not enabled:
280 return matchmod.always(repo.root, '')
280 return matchmod.always(repo.root, '')
281
281
282 if not revs or revs == [None]:
282 if not revs or revs == [None]:
283 revs = [repo.changelog.rev(node)
283 revs = [repo.changelog.rev(node)
284 for node in repo.dirstate.parents() if node != nullid]
284 for node in repo.dirstate.parents() if node != nullid]
285
285
286 signature = configsignature(repo, includetemp=includetemp)
286 signature = configsignature(repo, includetemp=includetemp)
287
287
288 key = '%s %s' % (signature, ' '.join(map(pycompat.bytestr, revs)))
288 key = '%s %s' % (signature, ' '.join(map(pycompat.bytestr, revs)))
289
289
290 result = repo._sparsematchercache.get(key)
290 result = repo._sparsematchercache.get(key)
291 if result:
291 if result:
292 return result
292 return result
293
293
294 matchers = []
294 matchers = []
295 for rev in revs:
295 for rev in revs:
296 try:
296 try:
297 includes, excludes, profiles = patternsforrev(repo, rev)
297 includes, excludes, profiles = patternsforrev(repo, rev)
298
298
299 if includes or excludes:
299 if includes or excludes:
300 matcher = matchmod.match(repo.root, '', [],
300 matcher = matchmod.match(repo.root, '', [],
301 include=includes, exclude=excludes,
301 include=includes, exclude=excludes,
302 default='relpath')
302 default='relpath')
303 matchers.append(matcher)
303 matchers.append(matcher)
304 except IOError:
304 except IOError:
305 pass
305 pass
306
306
307 if not matchers:
307 if not matchers:
308 result = matchmod.always(repo.root, '')
308 result = matchmod.always(repo.root, '')
309 elif len(matchers) == 1:
309 elif len(matchers) == 1:
310 result = matchers[0]
310 result = matchers[0]
311 else:
311 else:
312 result = matchmod.unionmatcher(matchers)
312 result = matchmod.unionmatcher(matchers)
313
313
314 if includetemp:
314 if includetemp:
315 tempincludes = readtemporaryincludes(repo)
315 tempincludes = readtemporaryincludes(repo)
316 result = forceincludematcher(result, tempincludes)
316 result = forceincludematcher(result, tempincludes)
317
317
318 repo._sparsematchercache[key] = result
318 repo._sparsematchercache[key] = result
319
319
320 return result
320 return result
321
321
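Editor's note: the caching contract above, summarized as illustrative usage; this is not a runnable unit on its own because it needs a real repo object with sparse enabled:

    from mercurial import sparse

    m1 = sparse.matcher(repo)   # computed, stored under (signature, revs)
    m2 = sparse.matcher(repo)   # same config + same dirstate parents -> cache hit
    assert m1 is m2
    # editing .hg/sparse clears repo._sparsesignaturecache via writeconfig(),
    # which changes the signature and therefore the cache key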
322 def filterupdatesactions(repo, wctx, mctx, branchmerge, actions):
322 def filterupdatesactions(repo, wctx, mctx, branchmerge, actions):
323 """Filter updates to only lay out files that match the sparse rules."""
323 """Filter updates to only lay out files that match the sparse rules."""
324 if not enabled:
324 if not enabled:
325 return actions
325 return actions
326
326
327 oldrevs = [pctx.rev() for pctx in wctx.parents()]
327 oldrevs = [pctx.rev() for pctx in wctx.parents()]
328 oldsparsematch = matcher(repo, oldrevs)
328 oldsparsematch = matcher(repo, oldrevs)
329
329
330 if oldsparsematch.always():
330 if oldsparsematch.always():
331 return actions
331 return actions
332
332
333 files = set()
333 files = set()
334 prunedactions = {}
334 prunedactions = {}
335
335
336 if branchmerge:
336 if branchmerge:
337 # If we're merging, use the wctx filter, since we're merging into
337 # If we're merging, use the wctx filter, since we're merging into
338 # the wctx.
338 # the wctx.
339 sparsematch = matcher(repo, [wctx.p1().rev()])
339 sparsematch = matcher(repo, [wctx.p1().rev()])
340 else:
340 else:
341 # If we're updating, use the target context's filter, since we're
341 # If we're updating, use the target context's filter, since we're
342 # moving to the target context.
342 # moving to the target context.
343 sparsematch = matcher(repo, [mctx.rev()])
343 sparsematch = matcher(repo, [mctx.rev()])
344
344
345 temporaryfiles = []
345 temporaryfiles = []
346 for file, action in actions.iteritems():
346 for file, action in actions.iteritems():
347 type, args, msg = action
347 type, args, msg = action
348 files.add(file)
348 files.add(file)
349 if sparsematch(file):
349 if sparsematch(file):
350 prunedactions[file] = action
350 prunedactions[file] = action
351 elif type == 'm':
351 elif type == 'm':
352 temporaryfiles.append(file)
352 temporaryfiles.append(file)
353 prunedactions[file] = action
353 prunedactions[file] = action
354 elif branchmerge:
354 elif branchmerge:
355 if type != 'k':
355 if type != 'k':
356 temporaryfiles.append(file)
356 temporaryfiles.append(file)
357 prunedactions[file] = action
357 prunedactions[file] = action
358 elif type == 'f':
358 elif type == 'f':
359 prunedactions[file] = action
359 prunedactions[file] = action
360 elif file in wctx:
360 elif file in wctx:
361 prunedactions[file] = ('r', args, msg)
361 prunedactions[file] = ('r', args, msg)
362
362
363 if branchmerge and type == mergemod.ACTION_MERGE:
363 if branchmerge and type == mergemod.ACTION_MERGE:
364 f1, f2, fa, move, anc = args
364 f1, f2, fa, move, anc = args
365 if not sparsematch(f1):
365 if not sparsematch(f1):
366 temporaryfiles.append(f1)
366 temporaryfiles.append(f1)
367
367
368 if len(temporaryfiles) > 0:
368 if len(temporaryfiles) > 0:
369 repo.ui.status(_('temporarily included %d file(s) in the sparse '
369 repo.ui.status(_('temporarily included %d file(s) in the sparse '
370 'checkout for merging\n') % len(temporaryfiles))
370 'checkout for merging\n') % len(temporaryfiles))
371 addtemporaryincludes(repo, temporaryfiles)
371 addtemporaryincludes(repo, temporaryfiles)
372
372
373 # Add the new files to the working copy so they can be merged, etc
373 # Add the new files to the working copy so they can be merged, etc
374 actions = []
374 actions = []
375 message = 'temporarily adding to sparse checkout'
375 message = 'temporarily adding to sparse checkout'
376 wctxmanifest = repo[None].manifest()
376 wctxmanifest = repo[None].manifest()
377 for file in temporaryfiles:
377 for file in temporaryfiles:
378 if file in wctxmanifest:
378 if file in wctxmanifest:
379 fctx = repo[None][file]
379 fctx = repo[None][file]
380 actions.append((file, (fctx.flags(), False), message))
380 actions.append((file, (fctx.flags(), False), message))
381
381
382 typeactions = mergemod.emptyactions()
382 typeactions = mergemod.emptyactions()
383 typeactions['g'] = actions
383 typeactions['g'] = actions
384 mergemod.applyupdates(repo, typeactions, repo[None], repo['.'],
384 mergemod.applyupdates(repo, typeactions, repo[None], repo['.'],
385 False)
385 False)
386
386
387 dirstate = repo.dirstate
387 dirstate = repo.dirstate
388 for file, flags, msg in actions:
388 for file, flags, msg in actions:
389 dirstate.normal(file)
389 dirstate.normal(file)
390
390
391 profiles = activeconfig(repo)[2]
391 profiles = activeconfig(repo)[2]
392 changedprofiles = profiles & files
392 changedprofiles = profiles & files
393 # If an active profile changed during the update, refresh the checkout.
393 # If an active profile changed during the update, refresh the checkout.
394 # Don't do this during a branch merge, since all incoming changes should
394 # Don't do this during a branch merge, since all incoming changes should
395 # have been handled by the temporary includes above.
395 # have been handled by the temporary includes above.
396 if changedprofiles and not branchmerge:
396 if changedprofiles and not branchmerge:
397 mf = mctx.manifest()
397 mf = mctx.manifest()
398 for file in mf:
398 for file in mf:
399 old = oldsparsematch(file)
399 old = oldsparsematch(file)
400 new = sparsematch(file)
400 new = sparsematch(file)
401 if not old and new:
401 if not old and new:
402 flags = mf.flags(file)
402 flags = mf.flags(file)
403 prunedactions[file] = ('g', (flags, False), '')
403 prunedactions[file] = ('g', (flags, False), '')
404 elif old and not new:
404 elif old and not new:
405 prunedactions[file] = ('r', [], '')
405 prunedactions[file] = ('r', [], '')
406
406
407 return prunedactions
407 return prunedactions
408
408

409 def refreshwdir(repo, origstatus, origsparsematch, force=False):
409 def refreshwdir(repo, origstatus, origsparsematch, force=False):
410 """Refreshes working directory by taking sparse config into account.
410 """Refreshes working directory by taking sparse config into account.
411
411
412 The old status and sparse matcher are compared against the current sparse
412 The old status and sparse matcher are compared against the current sparse
413 matcher.
413 matcher.
414
414
415 Will abort if a file with pending changes is being excluded or included
415 Will abort if a file with pending changes is being excluded or included
416 unless ``force`` is True.
416 unless ``force`` is True.
417 """
417 """
418 # Verify there are no pending changes
418 # Verify there are no pending changes
419 pending = set()
419 pending = set()
420 pending.update(origstatus.modified)
420 pending.update(origstatus.modified)
421 pending.update(origstatus.added)
421 pending.update(origstatus.added)
422 pending.update(origstatus.removed)
422 pending.update(origstatus.removed)
423 sparsematch = matcher(repo)
423 sparsematch = matcher(repo)
424 abort = False
424 abort = False
425
425
426 for f in pending:
426 for f in pending:
427 if not sparsematch(f):
427 if not sparsematch(f):
428 repo.ui.warn(_("pending changes to '%s'\n") % f)
428 repo.ui.warn(_("pending changes to '%s'\n") % f)
429 abort = not force
429 abort = not force
430
430
431 if abort:
431 if abort:
432 raise error.Abort(_('could not update sparseness due to pending '
432 raise error.Abort(_('could not update sparseness due to pending '
433 'changes'))
433 'changes'))
434
434
435 # Calculate actions
435 # Calculate actions
436 dirstate = repo.dirstate
436 dirstate = repo.dirstate
437 ctx = repo['.']
437 ctx = repo['.']
438 added = []
438 added = []
439 lookup = []
439 lookup = []
440 dropped = []
440 dropped = []
441 mf = ctx.manifest()
441 mf = ctx.manifest()
442 files = set(mf)
442 files = set(mf)
443
443
444 actions = {}
444 actions = {}
445
445
446 for file in files:
446 for file in files:
447 old = origsparsematch(file)
447 old = origsparsematch(file)
448 new = sparsematch(file)
448 new = sparsematch(file)
449 # Add files that are newly included, or that don't exist in
449 # Add files that are newly included, or that don't exist in
450 # the dirstate yet.
450 # the dirstate yet.
451 if (new and not old) or (old and new and not file in dirstate):
451 if (new and not old) or (old and new and not file in dirstate):
452 fl = mf.flags(file)
452 fl = mf.flags(file)
453 if repo.wvfs.exists(file):
453 if repo.wvfs.exists(file):
454 actions[file] = ('e', (fl,), '')
454 actions[file] = ('e', (fl,), '')
455 lookup.append(file)
455 lookup.append(file)
456 else:
456 else:
457 actions[file] = ('g', (fl, False), '')
457 actions[file] = ('g', (fl, False), '')
458 added.append(file)
458 added.append(file)
459 # Drop files that are newly excluded, or that still exist in
459 # Drop files that are newly excluded, or that still exist in
460 # the dirstate.
460 # the dirstate.
461 elif (old and not new) or (not old and not new and file in dirstate):
461 elif (old and not new) or (not old and not new and file in dirstate):
462 dropped.append(file)
462 dropped.append(file)
463 if file not in pending:
463 if file not in pending:
464 actions[file] = ('r', [], '')
464 actions[file] = ('r', [], '')
465
465
466 # Verify there are no pending changes in newly included files
466 # Verify there are no pending changes in newly included files
467 abort = False
467 abort = False
468 for file in lookup:
468 for file in lookup:
469 repo.ui.warn(_("pending changes to '%s'\n") % file)
469 repo.ui.warn(_("pending changes to '%s'\n") % file)
470 abort = not force
470 abort = not force
471 if abort:
471 if abort:
472 raise error.Abort(_('cannot change sparseness due to pending '
472 raise error.Abort(_('cannot change sparseness due to pending '
473 'changes (delete the files or use '
473 'changes (delete the files or use '
474 '--force to bring them back dirty)'))
474 '--force to bring them back dirty)'))
475
475
476 # Check for files that were only in the dirstate.
476 # Check for files that were only in the dirstate.
477 for file, state in dirstate.iteritems():
477 for file, state in dirstate.iteritems():
478 if not file in files:
478 if not file in files:
479 old = origsparsematch(file)
479 old = origsparsematch(file)
480 new = sparsematch(file)
480 new = sparsematch(file)
481 if old and not new:
481 if old and not new:
482 dropped.append(file)
482 dropped.append(file)
483
483
484 # Apply changes to disk
484 # Apply changes to disk
485 typeactions = mergemod.emptyactions()
485 typeactions = mergemod.emptyactions()
486 for f, (m, args, msg) in actions.iteritems():
486 for f, (m, args, msg) in actions.iteritems():
487 typeactions[m].append((f, args, msg))
487 typeactions[m].append((f, args, msg))
488
488
489 mergemod.applyupdates(repo, typeactions, repo[None], repo['.'], False)
489 mergemod.applyupdates(repo, typeactions, repo[None], repo['.'], False)
490
490
491 # Fix dirstate
491 # Fix dirstate
492 for file in added:
492 for file in added:
493 dirstate.normal(file)
493 dirstate.normal(file)
494
494
495 for file in dropped:
495 for file in dropped:
496 dirstate.drop(file)
496 dirstate.drop(file)
497
497
498 for file in lookup:
498 for file in lookup:
499 # File exists on disk, and we're bringing it back in an unknown state.
499 # File exists on disk, and we're bringing it back in an unknown state.
500 dirstate.normallookup(file)
500 dirstate.normallookup(file)
501
501
502 return added, dropped, lookup
502 return added, dropped, lookup
503
503
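Editor's note: the add/drop decision in refreshwdir condenses to the following table (old = matched by the previous sparse config, new = matched by the current one); presented as a reading aid, not as code from the changeset:

    old   new   in dirstate   action
    ----  ----  -----------   -----------------------------------------------
    no    yes   either        'g' (get), or 'e' + lookup if already on disk
    yes   yes   no            'g'/'e' as above (file missing from dirstate)
    yes   no    either        dropped; 'r' (remove) unless changes are pending
    no    no    yes           dropped; 'r' unless changes are pending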
504 def aftercommit(repo, node):
504 def aftercommit(repo, node):
505 """Perform actions after a working directory commit."""
505 """Perform actions after a working directory commit."""
506 # This function is called unconditionally, even if sparse isn't
506 # This function is called unconditionally, even if sparse isn't
507 # enabled.
507 # enabled.
508 ctx = repo[node]
508 ctx = repo[node]
509
509
510 profiles = patternsforrev(repo, ctx.rev())[2]
510 profiles = patternsforrev(repo, ctx.rev())[2]
511
511
512 # profiles will only have data if sparse is enabled.
512 # profiles will only have data if sparse is enabled.
513 if profiles & set(ctx.files()):
513 if profiles & set(ctx.files()):
514 origstatus = repo.status()
514 origstatus = repo.status()
515 origsparsematch = matcher(repo)
515 origsparsematch = matcher(repo)
516 refreshwdir(repo, origstatus, origsparsematch, force=True)
516 refreshwdir(repo, origstatus, origsparsematch, force=True)
517
517
518 prunetemporaryincludes(repo)
518 prunetemporaryincludes(repo)
519
519
520 def _updateconfigandrefreshwdir(repo, includes, excludes, profiles,
520 def _updateconfigandrefreshwdir(repo, includes, excludes, profiles,
521 force=False, removing=False):
521 force=False, removing=False):
522 """Update the sparse config and working directory state."""
522 """Update the sparse config and working directory state."""
523 raw = repo.vfs.tryread('sparse')
523 raw = repo.vfs.tryread('sparse')
524 oldincludes, oldexcludes, oldprofiles = parseconfig(repo.ui, raw, 'sparse')
524 oldincludes, oldexcludes, oldprofiles = parseconfig(repo.ui, raw, 'sparse')
525
525
526 oldstatus = repo.status()
526 oldstatus = repo.status()
527 oldmatch = matcher(repo)
527 oldmatch = matcher(repo)
528 oldrequires = set(repo.requirements)
528 oldrequires = set(repo.requirements)
529
529
530 # TODO remove this try..except once the matcher integrates better
530 # TODO remove this try..except once the matcher integrates better
531 # with dirstate. We currently have to write the updated config
531 # with dirstate. We currently have to write the updated config
532 # because that will invalidate the matcher cache and force a
532 # because that will invalidate the matcher cache and force a
533 # re-read. We ideally want to update the cached matcher on the
533 # re-read. We ideally want to update the cached matcher on the
534 # repo instance then flush the new config to disk once wdir is
534 # repo instance then flush the new config to disk once wdir is
535 # updated. But this requires massive rework to matcher() and its
535 # updated. But this requires massive rework to matcher() and its
536 # consumers.
536 # consumers.
537
537
538 if 'exp-sparse' in oldrequires and removing:
538 if 'exp-sparse' in oldrequires and removing:
539 repo.requirements.discard('exp-sparse')
539 repo.requirements.discard('exp-sparse')
540 scmutil.writerequires(repo.vfs, repo.requirements)
540 scmutil.writerequires(repo.vfs, repo.requirements)
541 elif 'exp-sparse' not in oldrequires:
541 elif 'exp-sparse' not in oldrequires:
542 repo.requirements.add('exp-sparse')
542 repo.requirements.add('exp-sparse')
543 scmutil.writerequires(repo.vfs, repo.requirements)
543 scmutil.writerequires(repo.vfs, repo.requirements)
544
544
545 try:
545 try:
546 writeconfig(repo, includes, excludes, profiles)
546 writeconfig(repo, includes, excludes, profiles)
547 return refreshwdir(repo, oldstatus, oldmatch, force=force)
547 return refreshwdir(repo, oldstatus, oldmatch, force=force)
548 except Exception:
548 except Exception:
549 if repo.requirements != oldrequires:
549 if repo.requirements != oldrequires:
550 repo.requirements.clear()
550 repo.requirements.clear()
551 repo.requirements |= oldrequires
551 repo.requirements |= oldrequires
552 scmutil.writerequires(repo.vfs, repo.requirements)
552 scmutil.writerequires(repo.vfs, repo.requirements)
553 writeconfig(repo, oldincludes, oldexcludes, oldprofiles)
553 writeconfig(repo, oldincludes, oldexcludes, oldprofiles)
554 raise
554 raise
555
555
556 def clearrules(repo, force=False):
556 def clearrules(repo, force=False):
557 """Clears include/exclude rules from the sparse config.
557 """Clears include/exclude rules from the sparse config.
558
558
559 The remaining sparse config only has profiles, if defined. The working
559 The remaining sparse config only has profiles, if defined. The working
560 directory is refreshed, as needed.
560 directory is refreshed, as needed.
561 """
561 """
562 with repo.wlock():
562 with repo.wlock():
563 raw = repo.vfs.tryread('sparse')
563 raw = repo.vfs.tryread('sparse')
564 includes, excludes, profiles = parseconfig(repo.ui, raw, 'sparse')
564 includes, excludes, profiles = parseconfig(repo.ui, raw, 'sparse')
565
565
566 if not includes and not excludes:
566 if not includes and not excludes:
567 return
567 return
568
568
569 _updateconfigandrefreshwdir(repo, set(), set(), profiles, force=force)
569 _updateconfigandrefreshwdir(repo, set(), set(), profiles, force=force)
570
570
def importfromfiles(repo, opts, paths, force=False):
    """Import sparse config rules from files.

    The updated sparse config is written out and the working directory
    is refreshed, as needed.
    """
    with repo.wlock():
        # read current configuration
        raw = repo.vfs.tryread('sparse')
        includes, excludes, profiles = parseconfig(repo.ui, raw, 'sparse')
        aincludes, aexcludes, aprofiles = activeconfig(repo)

        # Import rules on top; only take in rules that are not yet
        # part of the active rules.
        changed = False
        for p in paths:
            with util.posixfile(util.expandpath(p), mode='rb') as fh:
                raw = fh.read()

            iincludes, iexcludes, iprofiles = parseconfig(repo.ui, raw,
                                                          'sparse')
            oldsize = len(includes) + len(excludes) + len(profiles)
            includes.update(iincludes - aincludes)
            excludes.update(iexcludes - aexcludes)
            profiles.update(iprofiles - aprofiles)
            if len(includes) + len(excludes) + len(profiles) > oldsize:
                changed = True

        profilecount = includecount = excludecount = 0
        fcounts = (0, 0, 0)

        if changed:
            profilecount = len(profiles - aprofiles)
            includecount = len(includes - aincludes)
            excludecount = len(excludes - aexcludes)

            fcounts = map(len, _updateconfigandrefreshwdir(
                repo, includes, excludes, profiles, force=force))

        printchanges(repo.ui, opts, profilecount, includecount, excludecount,
                     *fcounts)

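# Illustrative usage: merging rules from an on-disk sparse file into the
# current config; the path is expanded via util.expandpath before reading
# (the file name here is hypothetical):
#
#   sparse.importfromfiles(repo, {}, ['~/profiles/base.sparse'])
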
def updateconfig(repo, pats, opts, include=False, exclude=False, reset=False,
                 delete=False, enableprofile=False, disableprofile=False,
                 force=False, usereporootpaths=False):
    """Perform a sparse config update.

    Only one of the actions may be performed.

    The new config is written out and a working directory refresh is performed.
    """
    with repo.wlock():
        raw = repo.vfs.tryread('sparse')
        oldinclude, oldexclude, oldprofiles = parseconfig(repo.ui, raw,
                                                          'sparse')

        if reset:
            newinclude = set()
            newexclude = set()
            newprofiles = set()
        else:
            newinclude = set(oldinclude)
            newexclude = set(oldexclude)
            newprofiles = set(oldprofiles)

        if any(os.path.isabs(pat) for pat in pats):
            raise error.Abort(_('paths cannot be absolute'))

        if not usereporootpaths:
            # let's treat paths as relative to cwd
            root, cwd = repo.root, repo.getcwd()
            abspats = []
            for kindpat in pats:
                kind, pat = matchmod._patsplit(kindpat, None)
                if kind in matchmod.cwdrelativepatternkinds or kind is None:
                    ap = (kind + ':' if kind else '') +\
                        pathutil.canonpath(root, cwd, pat)
                    abspats.append(ap)
                else:
                    abspats.append(kindpat)
            pats = abspats

        if include:
            newinclude.update(pats)
        elif exclude:
            newexclude.update(pats)
        elif enableprofile:
            newprofiles.update(pats)
        elif disableprofile:
            newprofiles.difference_update(pats)
        elif delete:
            newinclude.difference_update(pats)
            newexclude.difference_update(pats)

        profilecount = (len(newprofiles - oldprofiles) -
                        len(oldprofiles - newprofiles))
        includecount = (len(newinclude - oldinclude) -
                        len(oldinclude - newinclude))
        excludecount = (len(newexclude - oldexclude) -
                        len(oldexclude - newexclude))

        fcounts = map(len, _updateconfigandrefreshwdir(
            repo, newinclude, newexclude, newprofiles, force=force,
            removing=reset))

        printchanges(repo.ui, opts, profilecount, includecount,
                     excludecount, *fcounts)

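# Illustrative usage: programmatically adding an include rule, roughly
# what a sparse command's --include action boils down to (the pattern is
# hypothetical):
#
#   sparse.updateconfig(repo, ['lib/'], {}, include=True)
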
def printchanges(ui, opts, profilecount=0, includecount=0, excludecount=0,
                 added=0, dropped=0, conflicting=0):
    """Print output summarizing sparse config changes."""
    with ui.formatter('sparse', opts) as fm:
        fm.startitem()
        fm.condwrite(ui.verbose, 'profiles_added', _('Profiles changed: %d\n'),
                     profilecount)
        fm.condwrite(ui.verbose, 'include_rules_added',
                     _('Include rules changed: %d\n'), includecount)
        fm.condwrite(ui.verbose, 'exclude_rules_added',
                     _('Exclude rules changed: %d\n'), excludecount)

        # In 'plain' verbose mode, mergemod.applyupdates already outputs what
        # files are added or removed outside of the templating formatter
        # framework. No point in repeating ourselves in that case.
        if not fm.isplain():
            fm.condwrite(ui.verbose, 'files_added', _('Files added: %d\n'),
                         added)
            fm.condwrite(ui.verbose, 'files_dropped', _('Files dropped: %d\n'),
                         dropped)
            fm.condwrite(ui.verbose, 'files_conflicting',
                         _('Files conflicting: %d\n'), conflicting)
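
# With ui.verbose set (e.g. --verbose), the condwrite() calls above emit
# output along these lines (counts illustrative):
#
#   Profiles changed: 0
#   Include rules changed: 1
#   Exclude rules changed: 0
#   Files added: 12
#   Files dropped: 0
#   Files conflicting: 0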
@@ -1,1840 +1,1839 @@
# subrepo.py - sub-repository classes and factory
#
# Copyright 2009-2010 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import copy
import errno
import hashlib
import os
import re
import stat
import subprocess
import sys
import tarfile
import xml.dom.minidom

from .i18n import _
from . import (
    cmdutil,
    encoding,
    error,
    exchange,
    logcmdutil,
    match as matchmod,
    node,
    pathutil,
    phases,
    pycompat,
    scmutil,
    subrepoutil,
    util,
    vfs as vfsmod,
)
from .utils import (
    dateutil,
    procutil,
    stringutil,
)

hg = None
reporelpath = subrepoutil.reporelpath
subrelpath = subrepoutil.subrelpath
_abssource = subrepoutil._abssource
propertycache = util.propertycache

def _expandedabspath(path):
    '''
    get a path or url and if it is a path expand it and return an absolute path
    '''
    expandedpath = util.urllocalpath(util.expandpath(path))
    u = util.url(expandedpath)
    if not u.scheme:
        path = util.normpath(os.path.abspath(u.path))
    return path

def _getstorehashcachename(remotepath):
    '''get a unique filename for the store hash cache of a remote repository'''
    return node.hex(hashlib.sha1(_expandedabspath(remotepath)).digest())[0:12]

class SubrepoAbort(error.Abort):
    """Exception class used to avoid handling a subrepo error more than once"""
    def __init__(self, *args, **kw):
        self.subrepo = kw.pop(r'subrepo', None)
        self.cause = kw.pop(r'cause', None)
        error.Abort.__init__(self, *args, **kw)

def annotatesubrepoerror(func):
    def decoratedmethod(self, *args, **kargs):
        try:
            res = func(self, *args, **kargs)
        except SubrepoAbort as ex:
            # This exception has already been handled
            raise ex
        except error.Abort as ex:
            subrepo = subrelpath(self)
            errormsg = (stringutil.forcebytestr(ex) + ' '
                        + _('(in subrepository "%s")') % subrepo)
            # avoid handling this exception by raising a SubrepoAbort exception
            raise SubrepoAbort(errormsg, hint=ex.hint, subrepo=subrepo,
                               cause=sys.exc_info())
        return res
    return decoratedmethod

def _updateprompt(ui, sub, dirty, local, remote):
    if dirty:
        msg = (_(' subrepository sources for %s differ\n'
                 'use (l)ocal source (%s) or (r)emote source (%s)?'
                 '$$ &Local $$ &Remote')
               % (subrelpath(sub), local, remote))
    else:
        msg = (_(' subrepository sources for %s differ (in checked out '
                 'version)\n'
                 'use (l)ocal source (%s) or (r)emote source (%s)?'
                 '$$ &Local $$ &Remote')
               % (subrelpath(sub), local, remote))
    return ui.promptchoice(msg, 0)

def _sanitize(ui, vfs, ignore):
    for dirname, dirs, names in vfs.walk():
        for i, d in enumerate(dirs):
            if d.lower() == ignore:
                del dirs[i]
                break
        if vfs.basename(dirname).lower() != '.hg':
            continue
        for f in names:
            if f.lower() == 'hgrc':
                ui.warn(_("warning: removing potentially hostile 'hgrc' "
                          "in '%s'\n") % vfs.join(dirname))
                vfs.unlink(vfs.reljoin(dirname, f))

def _auditsubrepopath(repo, path):
    # sanity check for potentially unsafe paths such as '~' and '$FOO'
    if path.startswith('~') or '$' in path or util.expandpath(path) != path:
        raise error.Abort(_('subrepo path contains illegal component: %s')
                          % path)
    # auditor doesn't check if the path itself is a symlink
    pathutil.pathauditor(repo.root)(path)
    if repo.wvfs.islink(path):
        raise error.Abort(_("subrepo '%s' traverses symbolic link") % path)

SUBREPO_ALLOWED_DEFAULTS = {
    'hg': True,
    'git': False,
    'svn': False,
}

def _checktype(ui, kind):
    # subrepos.allowed is a master kill switch. If disabled, subrepos are
    # disabled period.
    if not ui.configbool('subrepos', 'allowed', True):
        raise error.Abort(_('subrepos not enabled'),
                          hint=_("see 'hg help config.subrepos' for details"))

    default = SUBREPO_ALLOWED_DEFAULTS.get(kind, False)
    if not ui.configbool('subrepos', '%s:allowed' % kind, default):
        raise error.Abort(_('%s subrepos not allowed') % kind,
                          hint=_("see 'hg help config.subrepos' for details"))

    if kind not in types:
        raise error.Abort(_('unknown subrepo type %s') % kind)

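# Example configuration (illustrative): git and svn subrepos are disabled
# by default per SUBREPO_ALLOWED_DEFAULTS above, so a user opts in from
# their hgrc with the keys this function reads:
#
#   [subrepos]
#   allowed = true
#   git:allowed = true
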
def subrepo(ctx, path, allowwdir=False, allowcreate=True):
    """return instance of the right subrepo class for subrepo in path"""
    # subrepo inherently violates our import layering rules
    # because it wants to make repo objects from deep inside the stack
    # so we manually delay the circular imports to not break
    # scripts that don't use our demand-loading
    global hg
    from . import hg as h
    hg = h

    repo = ctx.repo()
    _auditsubrepopath(repo, path)
    state = ctx.substate[path]
    _checktype(repo.ui, state[2])
    if allowwdir:
        state = (state[0], ctx.subrev(path), state[2])
    return types[state[2]](ctx, path, state[:2], allowcreate)

def nullsubrepo(ctx, path, pctx):
    """return an empty subrepo in pctx for the extant subrepo in ctx"""
    # subrepo inherently violates our import layering rules
    # because it wants to make repo objects from deep inside the stack
    # so we manually delay the circular imports to not break
    # scripts that don't use our demand-loading
    global hg
    from . import hg as h
    hg = h

    repo = ctx.repo()
    _auditsubrepopath(repo, path)
    state = ctx.substate[path]
    _checktype(repo.ui, state[2])
    subrev = ''
    if state[2] == 'hg':
        subrev = "0" * 40
    return types[state[2]](pctx, path, (state[0], subrev), True)

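# For reference: ctx.substate maps each subrepo path to a 3-tuple of
# (source, revision, kind), which is why state[0] and state[2] above are
# the source URL and the subrepo type, and state[:2] seeds the class.
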
# subrepo classes need to implement the following abstract class:

class abstractsubrepo(object):

    def __init__(self, ctx, path):
        """Initialize abstractsubrepo part

        ``ctx`` is the context referring to this subrepository in the
        parent repository.

        ``path`` is the path to this subrepository as seen from the
        innermost repository.
        """
        self.ui = ctx.repo().ui
        self._ctx = ctx
        self._path = path

    def addwebdirpath(self, serverpath, webconf):
        """Add the hgwebdir entries for this subrepo, and any of its subrepos.

        ``serverpath`` is the path component of the URL for this repo.

        ``webconf`` is the dictionary of hgwebdir entries.
        """
        pass

    def storeclean(self, path):
        """
        returns true if the repository has not changed since it was last
        cloned from or pushed to a given repository.
        """
        return False

    def dirty(self, ignoreupdate=False, missing=False):
        """returns true if the dirstate of the subrepo is dirty or does not
        match current stored state. If ignoreupdate is true, only check
        whether the subrepo has uncommitted changes in its dirstate. If missing
        is true, check for deleted files.
        """
        raise NotImplementedError

    def dirtyreason(self, ignoreupdate=False, missing=False):
        """return reason string if it is ``dirty()``

        Returned string should have enough information for the message
        of exception.

        This returns None, otherwise.
        """
        if self.dirty(ignoreupdate=ignoreupdate, missing=missing):
            return _('uncommitted changes in subrepository "%s"'
                     ) % subrelpath(self)

    def bailifchanged(self, ignoreupdate=False, hint=None):
        """raise Abort if subrepository is ``dirty()``
        """
        dirtyreason = self.dirtyreason(ignoreupdate=ignoreupdate,
                                       missing=True)
        if dirtyreason:
            raise error.Abort(dirtyreason, hint=hint)

    def basestate(self):
        """current working directory base state, disregarding .hgsubstate
        state and working directory modifications"""
        raise NotImplementedError

    def checknested(self, path):
        """check if path is a subrepository within this repository"""
        return False

    def commit(self, text, user, date):
        """commit the current changes to the subrepo with the given
        log message. Use given user and date if possible. Return the
        new state of the subrepo.
        """
        raise NotImplementedError

    def phase(self, state):
        """returns phase of specified state in the subrepository.
        """
        return phases.public

    def remove(self):
        """remove the subrepo

        (should verify the dirstate is not dirty first)
        """
        raise NotImplementedError

    def get(self, state, overwrite=False):
        """run whatever commands are needed to put the subrepo into
        this state
        """
        raise NotImplementedError

    def merge(self, state):
        """merge currently-saved state with the new state."""
        raise NotImplementedError

    def push(self, opts):
        """perform whatever action is analogous to 'hg push'

        This may be a no-op on some systems.
        """
        raise NotImplementedError

    def add(self, ui, match, prefix, uipathfn, explicitonly, **opts):
        return []

    def addremove(self, matcher, prefix, uipathfn, opts):
        self.ui.warn("%s: %s" % (prefix, _("addremove is not supported")))
        return 1

    def cat(self, match, fm, fntemplate, prefix, **opts):
        return 1

    def status(self, rev2, **opts):
        return scmutil.status([], [], [], [], [], [], [])

    def diff(self, ui, diffopts, node2, match, prefix, **opts):
        pass

    def outgoing(self, ui, dest, opts):
        return 1

    def incoming(self, ui, source, opts):
        return 1

    def files(self):
        """return filename iterator"""
        raise NotImplementedError

    def filedata(self, name, decode):
        """return file data, optionally passed through repo decoders"""
        raise NotImplementedError

    def fileflags(self, name):
        """return file flags"""
        return ''

    def matchfileset(self, expr, badfn=None):
        """Resolve the fileset expression for this repo"""
        return matchmod.never(self.wvfs.base, '', badfn=badfn)

    def printfiles(self, ui, m, fm, fmt, subrepos):
        """handle the files command for this subrepo"""
        return 1

    def archive(self, archiver, prefix, match=None, decode=True):
        if match is not None:
            files = [f for f in self.files() if match(f)]
        else:
            files = self.files()
        total = len(files)
        relpath = subrelpath(self)
        progress = self.ui.makeprogress(_('archiving (%s)') % relpath,
                                        unit=_('files'), total=total)
        progress.update(0)
        for name in files:
            flags = self.fileflags(name)
            mode = 'x' in flags and 0o755 or 0o644
            symlink = 'l' in flags
            archiver.addfile(prefix + name, mode, symlink,
                             self.filedata(name, decode))
            progress.increment()
        progress.complete()
        return total

    def walk(self, match):
        '''
        walk recursively through the directory tree, finding all files
        matched by the match function
        '''

    def forget(self, match, prefix, uipathfn, dryrun, interactive):
        return ([], [])

    def removefiles(self, matcher, prefix, uipathfn, after, force, subrepos,
                    dryrun, warnings):
        """remove the matched files from the subrepository and the filesystem,
        possibly by force and/or after the file has been removed from the
        filesystem. Return 0 on success, 1 on any warning.
        """
        warnings.append(_("warning: removefiles not implemented (%s)")
                        % self._path)
        return 1

    def revert(self, substate, *pats, **opts):
        self.ui.warn(_('%s: reverting %s subrepos is unsupported\n') \
                     % (substate[0], substate[2]))
        return []

    def shortid(self, revid):
        return revid

    def unshare(self):
        '''
        convert this repository from shared to normal storage.
        '''

    def verify(self):
        '''verify the integrity of the repository. Return 0 on success or
        warning, 1 on any error.
        '''
        return 0

    @propertycache
    def wvfs(self):
        """return vfs to access the working directory of this subrepository
        """
        return vfsmod.vfs(self._ctx.repo().wvfs.join(self._path))

    @propertycache
    def _relpath(self):
        """return path to this subrepository as seen from outermost repository
        """
        return self.wvfs.reljoin(reporelpath(self._ctx.repo()), self._path)

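# A minimal sketch (illustrative only, not a shipped backend) of what a
# new subrepo type has to fill in beyond the defaults above:
#
#   class demosubrepo(abstractsubrepo):
#       def dirty(self, ignoreupdate=False, missing=False):
#           return False              # pretend nothing ever changes
#       def basestate(self):
#           return ''                 # no meaningful revision id
#       def get(self, state, overwrite=False):
#           pass                      # materialize 'state' in the wdir
#
# commit(), remove(), merge(), push(), files() and filedata() likewise
# raise NotImplementedError until a backend overrides them.
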
class hgsubrepo(abstractsubrepo):
    def __init__(self, ctx, path, state, allowcreate):
        super(hgsubrepo, self).__init__(ctx, path)
        self._state = state
        r = ctx.repo()
        root = r.wjoin(path)
        create = allowcreate and not r.wvfs.exists('%s/.hg' % path)
        # repository constructor does expand variables in path, which is
        # unsafe since the subrepo path might come from an untrusted source.
        if os.path.realpath(util.expandpath(root)) != root:
            raise error.Abort(_('subrepo path contains illegal component: %s')
                              % path)
        self._repo = hg.repository(r.baseui, root, create=create)
        if self._repo.root != root:
            raise error.ProgrammingError('failed to reject unsafe subrepo '
                                         'path: %s (expanded to %s)'
                                         % (root, self._repo.root))

        # Propagate the parent's --hidden option
        if r is r.unfiltered():
            self._repo = self._repo.unfiltered()

        self.ui = self._repo.ui
        for s, k in [('ui', 'commitsubrepos')]:
            v = r.ui.config(s, k)
            if v:
                self.ui.setconfig(s, k, v, 'subrepo')
        # internal config: ui._usedassubrepo
        self.ui.setconfig('ui', '_usedassubrepo', 'True', 'subrepo')
        self._initrepo(r, state[0], create)

    @annotatesubrepoerror
    def addwebdirpath(self, serverpath, webconf):
        cmdutil.addwebdirpath(self._repo, subrelpath(self), webconf)

    def storeclean(self, path):
        with self._repo.lock():
            return self._storeclean(path)

    def _storeclean(self, path):
        clean = True
        itercache = self._calcstorehash(path)
        for filehash in self._readstorehashcache(path):
            if filehash != next(itercache, None):
                clean = False
                break
        if clean:
            # if not empty:
            # the cached and current pull states have a different size
            clean = next(itercache, None) is None
        return clean

    def _calcstorehash(self, remotepath):
        '''calculate a unique "store hash"

        This method is used to detect when there are changes that may
        require a push to a given remote path.'''
        # sort the files that will be hashed in increasing (likely) file size
        filelist = ('bookmarks', 'store/phaseroots', 'store/00changelog.i')
        yield '# %s\n' % _expandedabspath(remotepath)
        vfs = self._repo.vfs
        for relname in filelist:
            filehash = node.hex(hashlib.sha1(vfs.tryread(relname)).digest())
            yield '%s = %s\n' % (relname, filehash)

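# The lines yielded above produce a cache file of this shape (hashes are
# illustrative):
#
#   # /absolute/path/to/remote
#   bookmarks = 356a192b7913b04c54574d18c28d46e6395428ab
#   store/phaseroots = da4b9237bacccdf19c0760cab7aec4a8359010b0
#   store/00changelog.i = 77de68daecd823babbb58edb1c8e14d7106e83bb
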
    @propertycache
    def _cachestorehashvfs(self):
        return vfsmod.vfs(self._repo.vfs.join('cache/storehash'))

    def _readstorehashcache(self, remotepath):
        '''read the store hash cache for a given remote repository'''
        cachefile = _getstorehashcachename(remotepath)
        return self._cachestorehashvfs.tryreadlines(cachefile, 'r')

    def _cachestorehash(self, remotepath):
        '''cache the current store hash

        Each remote repo requires its own store hash cache, because a subrepo
        store may be "clean" versus a given remote repo, but not versus another
        '''
        cachefile = _getstorehashcachename(remotepath)
        with self._repo.lock():
            storehash = list(self._calcstorehash(remotepath))
            vfs = self._cachestorehashvfs
            vfs.writelines(cachefile, storehash, mode='wb', notindexed=True)

    def _getctx(self):
        '''fetch the context for this subrepo revision, possibly a workingctx
        '''
        if self._ctx.rev() is None:
            return self._repo[None] # workingctx if parent is workingctx
        else:
            rev = self._state[1]
            return self._repo[rev]

    @annotatesubrepoerror
    def _initrepo(self, parentrepo, source, create):
        self._repo._subparent = parentrepo
        self._repo._subsource = source

        if create:
            lines = ['[paths]\n']

            def addpathconfig(key, value):
                if value:
                    lines.append('%s = %s\n' % (key, value))
                    self.ui.setconfig('paths', key, value, 'subrepo')

            defpath = _abssource(self._repo, abort=False)
            defpushpath = _abssource(self._repo, True, abort=False)
            addpathconfig('default', defpath)
            if defpath != defpushpath:
                addpathconfig('default-push', defpushpath)

            self._repo.vfs.write('hgrc', util.tonativeeol(''.join(lines)))

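# For a freshly created subrepo, the .hg/hgrc written above comes out
# like this (URLs illustrative; default-push only appears when it
# differs from default):
#
#   [paths]
#   default = https://example.com/parent/sub
#   default-push = ssh://example.com/parent/sub
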
    @annotatesubrepoerror
    def add(self, ui, match, prefix, uipathfn, explicitonly, **opts):
        return cmdutil.add(ui, self._repo, match, prefix, uipathfn,
                           explicitonly, **opts)

    @annotatesubrepoerror
    def addremove(self, m, prefix, uipathfn, opts):
        # In the same way as sub directories are processed, once in a subrepo,
        # always enter any of its subrepos. Don't corrupt the options that will
        # be used to process sibling subrepos however.
        opts = copy.copy(opts)
        opts['subrepos'] = True
        return scmutil.addremove(self._repo, m, prefix, uipathfn, opts)

    @annotatesubrepoerror
    def cat(self, match, fm, fntemplate, prefix, **opts):
        rev = self._state[1]
        ctx = self._repo[rev]
        return cmdutil.cat(self.ui, self._repo, ctx, match, fm, fntemplate,
                           prefix, **opts)

    @annotatesubrepoerror
    def status(self, rev2, **opts):
        try:
            rev1 = self._state[1]
            ctx1 = self._repo[rev1]
            ctx2 = self._repo[rev2]
            return self._repo.status(ctx1, ctx2, **opts)
        except error.RepoLookupError as inst:
            self.ui.warn(_('warning: error "%s" in subrepository "%s"\n')
                         % (inst, subrelpath(self)))
            return scmutil.status([], [], [], [], [], [], [])

    @annotatesubrepoerror
    def diff(self, ui, diffopts, node2, match, prefix, **opts):
        try:
            node1 = node.bin(self._state[1])
            # We currently expect node2 to come from substate and be
            # in hex format
            if node2 is not None:
                node2 = node.bin(node2)
            logcmdutil.diffordiffstat(ui, self._repo, diffopts, node1, node2,
                                      match, prefix=prefix, listsubrepos=True,
                                      **opts)
        except error.RepoLookupError as inst:
            self.ui.warn(_('warning: error "%s" in subrepository "%s"\n')
                         % (inst, subrelpath(self)))

    @annotatesubrepoerror
    def archive(self, archiver, prefix, match=None, decode=True):
        self._get(self._state + ('hg',))
        files = self.files()
        if match:
            files = [f for f in files if match(f)]
        rev = self._state[1]
        ctx = self._repo[rev]
        scmutil.prefetchfiles(self._repo, [ctx.rev()],
                              scmutil.matchfiles(self._repo, files))
        total = abstractsubrepo.archive(self, archiver, prefix, match)
        for subpath in ctx.substate:
            s = subrepo(ctx, subpath, True)
            submatch = matchmod.subdirmatcher(subpath, match)
            subprefix = prefix + subpath + '/'
            total += s.archive(archiver, subprefix, submatch,
                               decode)
        return total

    @annotatesubrepoerror
    def dirty(self, ignoreupdate=False, missing=False):
        r = self._state[1]
        if r == '' and not ignoreupdate: # no state recorded
            return True
        w = self._repo[None]
        if r != w.p1().hex() and not ignoreupdate:
            # different version checked out
            return True
        return w.dirty(missing=missing) # working directory changed

    def basestate(self):
        return self._repo['.'].hex()

    def checknested(self, path):
        return self._repo._checknested(self._repo.wjoin(path))

    @annotatesubrepoerror
    def commit(self, text, user, date):
        # don't bother committing in the subrepo if it's only been
        # updated
        if not self.dirty(True):
            return self._repo['.'].hex()
        self.ui.debug("committing subrepo %s\n" % subrelpath(self))
        n = self._repo.commit(text, user, date)
        if not n:
            return self._repo['.'].hex() # different version checked out
        return node.hex(n)

    @annotatesubrepoerror
    def phase(self, state):
        return self._repo[state or '.'].phase()

    @annotatesubrepoerror
    def remove(self):
        # we can't fully delete the repository as it may contain
        # local-only history
        self.ui.note(_('removing subrepo %s\n') % subrelpath(self))
        hg.clean(self._repo, node.nullid, False)

    def _get(self, state):
        source, revision, kind = state
        parentrepo = self._repo._subparent

        if revision in self._repo.unfiltered():
            # Allow shared subrepos tracked at null to set up the sharedpath
            if len(self._repo) != 0 or not parentrepo.shared():
                return True
        self._repo._subsource = source
        srcurl = _abssource(self._repo)

        # Defer creating the peer until after the status message is logged, in
        # case there are network problems.
        getpeer = lambda: hg.peer(self._repo, {}, srcurl)

        if len(self._repo) == 0:
            # use self._repo.vfs instead of self.wvfs to remove .hg only
            self._repo.vfs.rmtree()

            # A remote subrepo could be shared if there is a local copy
            # relative to the parent's share source. But clone pooling doesn't
            # assemble the repos in a tree, so that can't be consistently done.
            # A simpler option is for the user to configure clone pooling, and
            # work with that.
            if parentrepo.shared() and hg.islocal(srcurl):
                self.ui.status(_('sharing subrepo %s from %s\n')
                               % (subrelpath(self), srcurl))
                shared = hg.share(self._repo._subparent.baseui,
                                  getpeer(), self._repo.root,
                                  update=False, bookmarks=False)
                self._repo = shared.local()
            else:
                # TODO: find a common place for this and this code in the
                # share.py wrap of the clone command.
                if parentrepo.shared():
                    pool = self.ui.config('share', 'pool')
                    if pool:
                        pool = util.expandpath(pool)

                    shareopts = {
                        'pool': pool,
                        'mode': self.ui.config('share', 'poolnaming'),
                    }
                else:
                    shareopts = {}

                self.ui.status(_('cloning subrepo %s from %s\n')
                               % (subrelpath(self), util.hidepassword(srcurl)))
                other, cloned = hg.clone(self._repo._subparent.baseui, {},
                                         getpeer(), self._repo.root,
                                         update=False, shareopts=shareopts)
                self._repo = cloned.local()
            self._initrepo(parentrepo, source, create=True)
            self._cachestorehash(srcurl)
        else:
            self.ui.status(_('pulling subrepo %s from %s\n')
                           % (subrelpath(self), util.hidepassword(srcurl)))
            cleansub = self.storeclean(srcurl)
            exchange.pull(self._repo, getpeer())
            if cleansub:
                # keep the repo clean after pull
                self._cachestorehash(srcurl)
        return False

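# The clone branch above honors the standard share-extension pool
# settings when the parent is shared; an hgrc along these lines (path
# illustrative) is what feeds 'pool' and 'mode' into shareopts:
#
#   [share]
#   pool = /srv/hg/pool
#   poolnaming = identity
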
    @annotatesubrepoerror
    def get(self, state, overwrite=False):
        inrepo = self._get(state)
        source, revision, kind = state
        repo = self._repo
        repo.ui.debug("getting subrepo %s\n" % self._path)
        if inrepo:
            urepo = repo.unfiltered()
            ctx = urepo[revision]
            if ctx.hidden():
                urepo.ui.warn(
                    _('revision %s in subrepository "%s" is hidden\n') \
                    % (revision[0:12], self._path))
                repo = urepo
        hg.updaterepo(repo, revision, overwrite)

    @annotatesubrepoerror
    def merge(self, state):
        self._get(state)
        cur = self._repo['.']
        dst = self._repo[state[1]]
        anc = dst.ancestor(cur)

        def mergefunc():
            if anc == cur and dst.branch() == cur.branch():
                self.ui.debug('updating subrepository "%s"\n'
                              % subrelpath(self))
                hg.update(self._repo, state[1])
            elif anc == dst:
                self.ui.debug('skipping subrepository "%s"\n'
                              % subrelpath(self))
            else:
                self.ui.debug('merging subrepository "%s"\n' % subrelpath(self))
                hg.merge(self._repo, state[1], remind=False)

        wctx = self._repo[None]
        if self.dirty():
            if anc != dst:
                if _updateprompt(self.ui, self, wctx.dirty(), cur, dst):
                    mergefunc()
            else:
                mergefunc()
        else:
            mergefunc()

    @annotatesubrepoerror
    def push(self, opts):
        force = opts.get('force')
        newbranch = opts.get('new_branch')
        ssh = opts.get('ssh')

        # push subrepos depth-first for coherent ordering
        c = self._repo['.']
        subs = c.substate # only repos that are committed
        for s in sorted(subs):
            if c.sub(s).push(opts) == 0:
                return False

        dsturl = _abssource(self._repo, True)
        if not force:
            if self.storeclean(dsturl):
                self.ui.status(
                    _('no changes made to subrepo %s since last push to %s\n')
                    % (subrelpath(self), util.hidepassword(dsturl)))
                return None
        self.ui.status(_('pushing subrepo %s to %s\n') %
                       (subrelpath(self), util.hidepassword(dsturl)))
        other = hg.peer(self._repo, {'ssh': ssh}, dsturl)
        res = exchange.push(self._repo, other, force, newbranch=newbranch)

        # the repo is now clean
        self._cachestorehash(dsturl)
        return res.cgresult

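    # The depth-first loop at the top of push() relies on a small
    # return-value protocol: 0/False from a child aborts the whole push,
    # None means "nothing to push", and any other value reports success.
    # A hypothetical stand-alone sketch of that contract ('ctx' and 'opts'
    # stand in for real objects):
    @staticmethod
    def _pushsubs_sketch(ctx, opts):
        for path in sorted(ctx.substate):
            if ctx.sub(path).push(opts) == 0:
                return False  # propagate the child failure upward
        return True
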
    @annotatesubrepoerror
    def outgoing(self, ui, dest, opts):
        if 'rev' in opts or 'branch' in opts:
            opts = copy.copy(opts)
            opts.pop('rev', None)
            opts.pop('branch', None)
        return hg.outgoing(ui, self._repo, _abssource(self._repo, True), opts)

    @annotatesubrepoerror
    def incoming(self, ui, source, opts):
        if 'rev' in opts or 'branch' in opts:
            opts = copy.copy(opts)
            opts.pop('rev', None)
            opts.pop('branch', None)
        return hg.incoming(ui, self._repo, _abssource(self._repo, False), opts)

    @annotatesubrepoerror
    def files(self):
        rev = self._state[1]
        ctx = self._repo[rev]
        return ctx.manifest().keys()

    def filedata(self, name, decode):
        rev = self._state[1]
        data = self._repo[rev][name].data()
        if decode:
            data = self._repo.wwritedata(name, data)
        return data

    def fileflags(self, name):
        rev = self._state[1]
        ctx = self._repo[rev]
        return ctx.flags(name)

    @annotatesubrepoerror
    def printfiles(self, ui, m, fm, fmt, subrepos):
        # If the parent context is a workingctx, use the workingctx here for
        # consistency.
        if self._ctx.rev() is None:
            ctx = self._repo[None]
        else:
            rev = self._state[1]
            ctx = self._repo[rev]
        return cmdutil.files(ui, ctx, m, fm, fmt, subrepos)

    @annotatesubrepoerror
    def matchfileset(self, expr, badfn=None):
        repo = self._repo
        if self._ctx.rev() is None:
            ctx = repo[None]
        else:
            rev = self._state[1]
            ctx = repo[rev]

        matchers = [ctx.matchfileset(expr, badfn=badfn)]

        for subpath in ctx.substate:
            sub = ctx.sub(subpath)

            try:
                sm = sub.matchfileset(expr, badfn=badfn)
                pm = matchmod.prefixdirmatcher(subpath, sm, badfn=badfn)
                matchers.append(pm)
            except error.LookupError:
                self.ui.status(_("skipping missing subrepository: %s\n")
                               % self.wvfs.reljoin(reporelpath(self), subpath))
        if len(matchers) == 1:
            return matchers[0]
        return matchmod.unionmatcher(matchers)

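    # The one-line prefixdirmatcher() call above is the local effect of
    # this changeset: the constructor lost its unused root and cwd
    # arguments. A hedged usage sketch against the new signature, with
    # 'sub' and 'expr' standing in for real objects:
    #
    #   sm = sub.matchfileset(expr)
    #   pm = matchmod.prefixdirmatcher(b'subdir', sm)  # was (root, cwd, ...)
    #   # pm now matches b'subdir/f' exactly when sm matches b'f'
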
    def walk(self, match):
        ctx = self._repo[None]
        return ctx.walk(match)

    @annotatesubrepoerror
    def forget(self, match, prefix, uipathfn, dryrun, interactive):
        return cmdutil.forget(self.ui, self._repo, match, prefix, uipathfn,
                              True, dryrun=dryrun, interactive=interactive)

    @annotatesubrepoerror
    def removefiles(self, matcher, prefix, uipathfn, after, force, subrepos,
                    dryrun, warnings):
        return cmdutil.remove(self.ui, self._repo, matcher, prefix, uipathfn,
                              after, force, subrepos, dryrun)

    @annotatesubrepoerror
    def revert(self, substate, *pats, **opts):
        # reverting a subrepo is a 2 step process:
        # 1. if no_backup is not set, revert all modified
        #    files inside the subrepo
        # 2. update the subrepo to the revision specified in
        #    the corresponding substate dictionary
        self.ui.status(_('reverting subrepo %s\n') % substate[0])
        if not opts.get(r'no_backup'):
            # Revert all files on the subrepo, creating backups
            # Note that this will not recursively revert subrepos
            # We could do it if there was a set:subrepos() predicate
            opts = opts.copy()
            opts[r'date'] = None
            opts[r'rev'] = substate[1]

            self.filerevert(*pats, **opts)

        # Update the repo to the revision specified in the given substate
        if not opts.get(r'dry_run'):
            self.get(substate, overwrite=True)

    def filerevert(self, *pats, **opts):
        ctx = self._repo[opts[r'rev']]
        parents = self._repo.dirstate.parents()
        if opts.get(r'all'):
            pats = ['set:modified()']
        else:
            pats = []
        cmdutil.revert(self.ui, self._repo, ctx, parents, *pats, **opts)

    def shortid(self, revid):
        return revid[:12]

    @annotatesubrepoerror
    def unshare(self):
        # subrepo inherently violates our import layering rules
        # because it wants to make repo objects from deep inside the stack
        # so we manually delay the circular imports to not break
        # scripts that don't use our demand-loading
        global hg
        from . import hg as h
        hg = h

        # Nothing prevents a user from sharing in a repo, and then making that a
        # subrepo. Alternately, the previous unshare attempt may have failed
        # part way through. So recurse whether or not this layer is shared.
        if self._repo.shared():
            self.ui.status(_("unsharing subrepo '%s'\n") % self._relpath)

        hg.unshare(self.ui, self._repo)

    def verify(self):
        try:
            rev = self._state[1]
            ctx = self._repo.unfiltered()[rev]
            if ctx.hidden():
                # Since hidden revisions aren't pushed/pulled, it seems worth an
                # explicit warning.
                ui = self._repo.ui
                ui.warn(_("subrepo '%s' is hidden in revision %s\n") %
                        (self._relpath, node.short(self._ctx.node())))
            return 0
        except error.RepoLookupError:
            # A missing subrepo revision may be a case of needing to pull it, so
            # don't treat this as an error.
            self._repo.ui.warn(_("subrepo '%s' not found in revision %s\n") %
                               (self._relpath, node.short(self._ctx.node())))
            return 0

    @propertycache
    def wvfs(self):
        """return own wvfs for efficiency and consistency
        """
        return self._repo.wvfs

    @propertycache
    def _relpath(self):
        """return path to this subrepository as seen from outermost repository
        """
        # Keep consistent dir separators by avoiding vfs.join(self._path)
        return reporelpath(self._repo)

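# The storeclean()/_cachestorehash() machinery used by hgsubrepo's push()
# and _get() boils down to "hash the store's key files and remember that
# hash per remote URL". A condensed, hypothetical sketch of the idea,
# using plain file access instead of Mercurial's vfs layer and a single
# combined digest (the real code hashes bookmarks, phaseroots and the
# changelog index separately):
def _storehashsketch(repopath):
    import hashlib
    import os
    h = hashlib.sha1()
    for name in ('bookmarks', 'store/phaseroots', 'store/00changelog.i'):
        fname = os.path.join(repopath, '.hg', name)
        if os.path.exists(fname):
            with open(fname, 'rb') as f:
                h.update(f.read())
    return h.hexdigest()
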
class svnsubrepo(abstractsubrepo):
    def __init__(self, ctx, path, state, allowcreate):
        super(svnsubrepo, self).__init__(ctx, path)
        self._state = state
        self._exe = procutil.findexe('svn')
        if not self._exe:
            raise error.Abort(_("'svn' executable not found for subrepo '%s'")
                              % self._path)

    def _svncommand(self, commands, filename='', failok=False):
        cmd = [self._exe]
        extrakw = {}
        if not self.ui.interactive():
            # Making stdin be a pipe should prevent svn from behaving
            # interactively even if we can't pass --non-interactive.
            extrakw[r'stdin'] = subprocess.PIPE
            # Starting in svn 1.5 --non-interactive is a global flag
            # instead of being per-command, but we need to support 1.4 so
            # we have to be intelligent about what commands take
            # --non-interactive.
            if commands[0] in ('update', 'checkout', 'commit'):
                cmd.append('--non-interactive')
        cmd.extend(commands)
        if filename is not None:
            path = self.wvfs.reljoin(self._ctx.repo().origroot,
                                     self._path, filename)
            cmd.append(path)
        env = dict(encoding.environ)
        # Avoid localized output, preserve current locale for everything else.
        lc_all = env.get('LC_ALL')
        if lc_all:
            env['LANG'] = lc_all
            del env['LC_ALL']
        env['LC_MESSAGES'] = 'C'
        p = subprocess.Popen(pycompat.rapply(procutil.tonativestr, cmd),
                             bufsize=-1, close_fds=procutil.closefds,
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                             env=procutil.tonativeenv(env), **extrakw)
        stdout, stderr = map(util.fromnativeeol, p.communicate())
        stderr = stderr.strip()
        if not failok:
            if p.returncode:
                raise error.Abort(stderr or 'exited with code %d'
                                  % p.returncode)
            if stderr:
                self.ui.warn(stderr + '\n')
        return stdout, stderr

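    # The locale handling above is subtle: messages must come out in C so
    # the regexes in commit() and get() can parse them, while LANG keeps
    # the filename encoding of the user's locale. A hypothetical
    # extraction of just that step (illustration only):
    @staticmethod
    def _svnenvsketch(baseenv):
        env = dict(baseenv)
        lc_all = env.get('LC_ALL')
        if lc_all:
            env['LANG'] = lc_all  # preserve filename encoding behaviour
            del env['LC_ALL']
        env['LC_MESSAGES'] = 'C'  # force parseable English messages
        return env
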
    @propertycache
    def _svnversion(self):
        output, err = self._svncommand(['--version', '--quiet'], filename=None)
        m = re.search(br'^(\d+)\.(\d+)', output)
        if not m:
            raise error.Abort(_('cannot retrieve svn tool version'))
        return (int(m.group(1)), int(m.group(2)))

    def _svnmissing(self):
        return not self.wvfs.exists('.svn')

    def _wcrevs(self):
        # Get the working directory revision as well as the last
        # commit revision so we can compare the subrepo state with
        # both. We used to store the working directory one.
        output, err = self._svncommand(['info', '--xml'])
        doc = xml.dom.minidom.parseString(output)
        entries = doc.getElementsByTagName(r'entry')
        lastrev, rev = '0', '0'
        if entries:
            rev = pycompat.bytestr(entries[0].getAttribute(r'revision')) or '0'
            commits = entries[0].getElementsByTagName(r'commit')
            if commits:
                lastrev = pycompat.bytestr(
                    commits[0].getAttribute(r'revision')) or '0'
        return (lastrev, rev)

    def _wcrev(self):
        return self._wcrevs()[0]

    def _wcchanged(self):
        """Return (changes, extchanges, missing) where changes is True
        if the working directory was changed, extchanges is
        True if any of these changes concern an external entry and missing
        is True if any change is a missing entry.
        """
        output, err = self._svncommand(['status', '--xml'])
        externals, changes, missing = [], [], []
        doc = xml.dom.minidom.parseString(output)
        for e in doc.getElementsByTagName(r'entry'):
            s = e.getElementsByTagName(r'wc-status')
            if not s:
                continue
            item = s[0].getAttribute(r'item')
            props = s[0].getAttribute(r'props')
            path = e.getAttribute(r'path').encode('utf8')
            if item == r'external':
                externals.append(path)
            elif item == r'missing':
                missing.append(path)
            if (item not in (r'', r'normal', r'unversioned', r'external')
                or props not in (r'', r'none', r'normal')):
                changes.append(path)
        for path in changes:
            for ext in externals:
                if path == ext or path.startswith(ext + pycompat.ossep):
                    return True, True, bool(missing)
        return bool(changes), False, bool(missing)

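    # For reference, the shape of the 'svn status --xml' entries that
    # _wcchanged() walks (trimmed by hand):
    #
    #   <entry path="foo.c">
    #     <wc-status item="modified" props="none"/>
    #   </entry>
    #
    # Any item/props value outside the "clean" sets above counts as a
    # change; item="external" and item="missing" are collected separately
    # so callers can refuse to commit them.
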
    @annotatesubrepoerror
    def dirty(self, ignoreupdate=False, missing=False):
        if self._svnmissing():
            return self._state[1] != ''
        wcchanged = self._wcchanged()
        changed = wcchanged[0] or (missing and wcchanged[2])
        if not changed:
            if self._state[1] in self._wcrevs() or ignoreupdate:
                return False
        return True

    def basestate(self):
        lastrev, rev = self._wcrevs()
        if lastrev != rev:
            # Last committed rev is not the same as rev. We would
            # like to take lastrev, but we do not know if the subrepo
            # URL exists at lastrev. Test it and fall back to rev if
            # it is not there.
            try:
                self._svncommand(['list', '%s@%s' % (self._state[0], lastrev)])
                return lastrev
            except error.Abort:
                pass
        return rev

    @annotatesubrepoerror
    def commit(self, text, user, date):
        # user and date are out of our hands since svn is centralized
        changed, extchanged, missing = self._wcchanged()
        if not changed:
            return self.basestate()
        if extchanged:
            # Do not try to commit externals
            raise error.Abort(_('cannot commit svn externals'))
        if missing:
            # svn can commit with missing entries but aborting like hg
            # seems a better approach.
            raise error.Abort(_('cannot commit missing svn entries'))
        commitinfo, err = self._svncommand(['commit', '-m', text])
        self.ui.status(commitinfo)
        newrev = re.search('Committed revision ([0-9]+).', commitinfo)
        if not newrev:
            if not commitinfo.strip():
                # Sometimes, our definition of "changed" differs from
                # svn's. For instance, svn ignores missing files
                # when committing. If there are only missing files, no
                # commit is made, no output and no error code.
                raise error.Abort(_('failed to commit svn changes'))
            raise error.Abort(commitinfo.splitlines()[-1])
        newrev = newrev.groups()[0]
        self.ui.status(self._svncommand(['update', '-r', newrev])[0])
        return newrev

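    # commit() above scrapes human-oriented output: a successful 'svn
    # commit' ends with a line such as
    #
    #   Committed revision 6.
    #
    # which the regular expression captures. Forcing LC_MESSAGES=C in
    # _svncommand() is what makes this scrape safe across locales.
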
    @annotatesubrepoerror
    def remove(self):
        if self.dirty():
            self.ui.warn(_('not removing repo %s because '
                           'it has changes.\n') % self._path)
            return
        self.ui.note(_('removing subrepo %s\n') % self._path)

        self.wvfs.rmtree(forcibly=True)
        try:
            pwvfs = self._ctx.repo().wvfs
            pwvfs.removedirs(pwvfs.dirname(self._path))
        except OSError:
            pass

    @annotatesubrepoerror
    def get(self, state, overwrite=False):
        if overwrite:
            self._svncommand(['revert', '--recursive'])
        args = ['checkout']
        if self._svnversion >= (1, 5):
            args.append('--force')
        # The revision must be specified at the end of the URL to properly
        # update to a directory which has since been deleted and recreated.
        args.append('%s@%s' % (state[0], state[1]))

        # SEC: check that the ssh url is safe
        util.checksafessh(state[0])

        status, err = self._svncommand(args, failok=True)
        _sanitize(self.ui, self.wvfs, '.svn')
        if not re.search('Checked out revision [0-9]+.', status):
            if ('is already a working copy for a different URL' in err
                and (self._wcchanged()[:2] == (False, False))):
                # obstructed but clean working copy, so just blow it away.
                self.remove()
                self.get(state, overwrite=False)
                return
            raise error.Abort((status or err).splitlines()[-1])
        self.ui.status(status)

    @annotatesubrepoerror
    def merge(self, state):
        old = self._state[1]
        new = state[1]
        wcrev = self._wcrev()
        if new != wcrev:
            dirty = old == wcrev or self._wcchanged()[0]
            if _updateprompt(self.ui, self, dirty, wcrev, new):
                self.get(state, False)

    def push(self, opts):
        # push is a no-op for SVN
        return True

    @annotatesubrepoerror
    def files(self):
        output = self._svncommand(['list', '--recursive', '--xml'])[0]
        doc = xml.dom.minidom.parseString(output)
        paths = []
        for e in doc.getElementsByTagName(r'entry'):
            kind = pycompat.bytestr(e.getAttribute(r'kind'))
            if kind != 'file':
                continue
            name = r''.join(c.data for c
                            in e.getElementsByTagName(r'name')[0].childNodes
                            if c.nodeType == c.TEXT_NODE)
            paths.append(name.encode('utf8'))
        return paths

    def filedata(self, name, decode):
        return self._svncommand(['cat'], name)[0]


class gitsubrepo(abstractsubrepo):
    def __init__(self, ctx, path, state, allowcreate):
        super(gitsubrepo, self).__init__(ctx, path)
        self._state = state
        self._abspath = ctx.repo().wjoin(path)
        self._subparent = ctx.repo()
        self._ensuregit()

    def _ensuregit(self):
        try:
            self._gitexecutable = 'git'
            out, err = self._gitnodir(['--version'])
        except OSError as e:
            genericerror = _("error executing git for subrepo '%s': %s")
            notfoundhint = _("check git is installed and in your PATH")
            if e.errno != errno.ENOENT:
                raise error.Abort(genericerror % (
                    self._path, encoding.strtolocal(e.strerror)))
            elif pycompat.iswindows:
                try:
                    self._gitexecutable = 'git.cmd'
                    out, err = self._gitnodir(['--version'])
                except OSError as e2:
                    if e2.errno == errno.ENOENT:
                        raise error.Abort(_("couldn't find 'git' or 'git.cmd'"
                                            " for subrepo '%s'") % self._path,
                                          hint=notfoundhint)
                    else:
                        raise error.Abort(genericerror % (self._path,
                                          encoding.strtolocal(e2.strerror)))
            else:
                raise error.Abort(_("couldn't find git for subrepo '%s'")
                                  % self._path, hint=notfoundhint)
        versionstatus = self._checkversion(out)
        if versionstatus == 'unknown':
            self.ui.warn(_('cannot retrieve git version\n'))
        elif versionstatus == 'abort':
            raise error.Abort(_('git subrepo requires at least 1.6.0 or later'))
        elif versionstatus == 'warning':
            self.ui.warn(_('git subrepo requires at least 1.6.0 or later\n'))

    @staticmethod
    def _gitversion(out):
        m = re.search(br'^git version (\d+)\.(\d+)\.(\d+)', out)
        if m:
            return (int(m.group(1)), int(m.group(2)), int(m.group(3)))

        m = re.search(br'^git version (\d+)\.(\d+)', out)
        if m:
            return (int(m.group(1)), int(m.group(2)), 0)

        return -1

    @staticmethod
    def _checkversion(out):
        '''ensure git version is new enough

        >>> _checkversion = gitsubrepo._checkversion
        >>> _checkversion(b'git version 1.6.0')
        'ok'
        >>> _checkversion(b'git version 1.8.5')
        'ok'
        >>> _checkversion(b'git version 1.4.0')
        'abort'
        >>> _checkversion(b'git version 1.5.0')
        'warning'
        >>> _checkversion(b'git version 1.9-rc0')
        'ok'
        >>> _checkversion(b'git version 1.9.0.265.g81cdec2')
        'ok'
        >>> _checkversion(b'git version 1.9.0.GIT')
        'ok'
        >>> _checkversion(b'git version 12345')
        'unknown'
        >>> _checkversion(b'no')
        'unknown'
        '''
        version = gitsubrepo._gitversion(out)
        # git 1.4.0 can't work at all, but 1.5.X can in at least some cases,
        # despite the docstring comment. For now, error on 1.4.0, warn on
        # 1.5.0 but attempt to continue.
        if version == -1:
            return 'unknown'
        if version < (1, 5, 0):
            return 'abort'
        elif version < (1, 6, 0):
            return 'warning'
        return 'ok'

    def _gitcommand(self, commands, env=None, stream=False):
        return self._gitdir(commands, env=env, stream=stream)[0]

    def _gitdir(self, commands, env=None, stream=False):
        return self._gitnodir(commands, env=env, stream=stream,
                              cwd=self._abspath)

    def _gitnodir(self, commands, env=None, stream=False, cwd=None):
        """Calls the git command

        This method tries to call the git command. Versions prior to 1.6.0
        are not supported and will very probably fail.
        """
        self.ui.debug('%s: git %s\n' % (self._relpath, ' '.join(commands)))
        if env is None:
            env = encoding.environ.copy()
        # disable localization for Git output (issue5176)
        env['LC_ALL'] = 'C'
        # fix for Git CVE-2015-7545
        if 'GIT_ALLOW_PROTOCOL' not in env:
            env['GIT_ALLOW_PROTOCOL'] = 'file:git:http:https:ssh'
        # unless ui.quiet is set, print git's stderr,
        # which is mostly progress and useful info
        errpipe = None
        if self.ui.quiet:
            errpipe = open(os.devnull, 'w')
        if self.ui._colormode and len(commands) and commands[0] == "diff":
            # insert the argument in the front,
            # the end of git diff arguments is used for paths
            commands.insert(1, '--color')
        p = subprocess.Popen(pycompat.rapply(procutil.tonativestr,
                                             [self._gitexecutable] + commands),
                             bufsize=-1,
                             cwd=pycompat.rapply(procutil.tonativestr, cwd),
                             env=procutil.tonativeenv(env),
                             close_fds=procutil.closefds,
                             stdout=subprocess.PIPE, stderr=errpipe)
        if stream:
            return p.stdout, None

        retdata = p.stdout.read().strip()
        # wait for the child to exit to avoid race condition.
        p.wait()

        if p.returncode != 0 and p.returncode != 1:
            # there are certain error codes that are ok
            command = commands[0]
            if command in ('cat-file', 'symbolic-ref'):
                return retdata, p.returncode
            # for all others, abort
            raise error.Abort(_('git %s error %d in %s') %
                              (command, p.returncode, self._relpath))

        return retdata, p.returncode

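    # _gitnodir() above multiplexes three outcomes: a streaming file object
    # (stream=True), a tolerated exit code 1 (git diff-index uses 1 for
    # "differences found", not failure), and tolerated nonzero codes for
    # probe commands. A hypothetical restatement of the tolerance rule
    # (illustration only):
    @staticmethod
    def _gitexitok_sketch(command, returncode):
        if returncode in (0, 1):
            return True  # 1 is git's "differences found", not an error
        return command in ('cat-file', 'symbolic-ref')  # probe commands
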
    def _gitmissing(self):
        return not self.wvfs.exists('.git')

    def _gitstate(self):
        return self._gitcommand(['rev-parse', 'HEAD'])

    def _gitcurrentbranch(self):
        current, err = self._gitdir(['symbolic-ref', 'HEAD', '--quiet'])
        if err:
            current = None
        return current

    def _gitremote(self, remote):
        out = self._gitcommand(['remote', 'show', '-n', remote])
        line = out.split('\n')[1]
        i = line.index('URL: ') + len('URL: ')
        return line[i:]

    def _githavelocally(self, revision):
        out, code = self._gitdir(['cat-file', '-e', revision])
        return code == 0

    def _gitisancestor(self, r1, r2):
        base = self._gitcommand(['merge-base', r1, r2])
        return base == r1

    def _gitisbare(self):
        return self._gitcommand(['config', '--bool', 'core.bare']) == 'true'

    def _gitupdatestat(self):
        """This must be run before git diff-index.
        diff-index only looks at changes to file stat;
        this command looks at file contents and updates the stat."""
        self._gitcommand(['update-index', '-q', '--refresh'])

    def _gitbranchmap(self):
        '''returns 2 things:
        a map from git branch to revision
        a map from revision to branches'''
        branch2rev = {}
        rev2branch = {}

        out = self._gitcommand(['for-each-ref', '--format',
                                '%(objectname) %(refname)'])
        for line in out.split('\n'):
            revision, ref = line.split(' ')
            if (not ref.startswith('refs/heads/') and
                not ref.startswith('refs/remotes/')):
                continue
            if ref.startswith('refs/remotes/') and ref.endswith('/HEAD'):
                continue # ignore remote/HEAD redirects
            branch2rev[ref] = revision
            rev2branch.setdefault(revision, []).append(ref)
        return branch2rev, rev2branch

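    # With the format string above, 'git for-each-ref' emits lines like
    #
    #   a29db4f1a5e4... refs/heads/master
    #   a29db4f1a5e4... refs/remotes/origin/master
    #
    # so branch2rev/rev2branch are simply the two directions of that
    # table, with remote HEAD symrefs filtered out.
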
    def _gittracking(self, branches):
        'return map of remote branch to local tracking branch'
        # assumes no more than one local tracking branch for each remote
        tracking = {}
        for b in branches:
            if b.startswith('refs/remotes/'):
                continue
            bname = b.split('/', 2)[2]
            remote = self._gitcommand(['config', 'branch.%s.remote' % bname])
            if remote:
                ref = self._gitcommand(['config', 'branch.%s.merge' % bname])
                tracking['refs/remotes/%s/%s' %
                         (remote, ref.split('/', 2)[2])] = b
        return tracking

    def _abssource(self, source):
        if '://' not in source:
            # recognize the scp syntax as an absolute source
            colon = source.find(':')
            if colon != -1 and '/' not in source[:colon]:
                return source
        self._subsource = source
        return _abssource(self)

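    # The scp-style test above ('host:path' with no slash before the first
    # colon) mirrors how git itself distinguishes 'example.com:repo.git'
    # from a local path. A hypothetical stand-alone predicate for the same
    # rule (illustration only):
    @staticmethod
    def _isscpurl_sketch(source):
        if '://' in source:
            return False  # an explicit URL scheme, not scp syntax
        colon = source.find(':')
        return colon != -1 and '/' not in source[:colon]
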
    def _fetch(self, source, revision):
        if self._gitmissing():
            # SEC: check for safe ssh url
            util.checksafessh(source)

            source = self._abssource(source)
            self.ui.status(_('cloning subrepo %s from %s\n') %
                           (self._relpath, source))
            self._gitnodir(['clone', source, self._abspath])
        if self._githavelocally(revision):
            return
        self.ui.status(_('pulling subrepo %s from %s\n') %
                       (self._relpath, self._gitremote('origin')))
        # try only origin: the originally cloned repo
        self._gitcommand(['fetch'])
        if not self._githavelocally(revision):
            raise error.Abort(_('revision %s does not exist in subrepository '
                                '"%s"\n') % (revision, self._relpath))

    @annotatesubrepoerror
    def dirty(self, ignoreupdate=False, missing=False):
        if self._gitmissing():
            return self._state[1] != ''
        if self._gitisbare():
            return True
        if not ignoreupdate and self._state[1] != self._gitstate():
            # different version checked out
            return True
        # check for staged changes or modified files; ignore untracked files
        self._gitupdatestat()
        out, code = self._gitdir(['diff-index', '--quiet', 'HEAD'])
        return code == 1

    def basestate(self):
        return self._gitstate()

    @annotatesubrepoerror
    def get(self, state, overwrite=False):
        source, revision, kind = state
        if not revision:
            self.remove()
            return
        self._fetch(source, revision)
        # if the repo was set to be bare, unbare it
        if self._gitisbare():
            self._gitcommand(['config', 'core.bare', 'false'])
            if self._gitstate() == revision:
                self._gitcommand(['reset', '--hard', 'HEAD'])
                return
        elif self._gitstate() == revision:
            if overwrite:
                # first reset the index to unmark new files for commit, because
                # reset --hard will otherwise throw away files added for commit,
                # not just unmark them.
                self._gitcommand(['reset', 'HEAD'])
                self._gitcommand(['reset', '--hard', 'HEAD'])
            return
        branch2rev, rev2branch = self._gitbranchmap()

        def checkout(args):
            cmd = ['checkout']
            if overwrite:
                # first reset the index to unmark new files for commit, because
                # the -f option will otherwise throw away files added for
                # commit, not just unmark them.
                self._gitcommand(['reset', 'HEAD'])
                cmd.append('-f')
            self._gitcommand(cmd + args)
            _sanitize(self.ui, self.wvfs, '.git')

        def rawcheckout():
            # no branch to checkout, check it out with no branch
            self.ui.warn(_('checking out detached HEAD in '
                           'subrepository "%s"\n') % self._relpath)
            self.ui.warn(_('check out a git branch if you intend '
                           'to make changes\n'))
            checkout(['-q', revision])

        if revision not in rev2branch:
            rawcheckout()
            return
        branches = rev2branch[revision]
        firstlocalbranch = None
        for b in branches:
            if b == 'refs/heads/master':
                # master trumps all other branches
                checkout(['refs/heads/master'])
                return
            if not firstlocalbranch and not b.startswith('refs/remotes/'):
                firstlocalbranch = b
        if firstlocalbranch:
            checkout([firstlocalbranch])
            return

        tracking = self._gittracking(branch2rev.keys())
        # choose a remote branch already tracked if possible
        remote = branches[0]
        if remote not in tracking:
            for b in branches:
                if b in tracking:
                    remote = b
                    break

        if remote not in tracking:
            # create a new local tracking branch
            local = remote.split('/', 3)[3]
            checkout(['-b', local, remote])
        elif self._gitisancestor(branch2rev[tracking[remote]], remote):
            # When updating to a tracked remote branch,
            # if the local tracking branch is downstream of it,
            # a normal `git pull` would have performed a "fast-forward merge"
            # which is equivalent to updating the local branch to the remote.
            # Since we are only looking at branching at update, we need to
            # detect this situation and perform this action lazily.
            if tracking[remote] != self._gitcurrentbranch():
                checkout([tracking[remote]])
            self._gitcommand(['merge', '--ff', remote])
            _sanitize(self.ui, self.wvfs, '.git')
        else:
            # a real merge would be required, just checkout the revision
            rawcheckout()

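    # Summary of the checkout strategy implemented above, in order of
    # preference: detached HEAD when no branch points at the revision;
    # refs/heads/master when one does; otherwise the first local branch;
    # otherwise a tracked remote branch (creating or fast-forwarding the
    # local tracking branch when it is an ancestor); and finally a
    # detached checkout when a real merge would be needed.
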
    @annotatesubrepoerror
    def commit(self, text, user, date):
        if self._gitmissing():
            raise error.Abort(_("subrepo %s is missing") % self._relpath)
        cmd = ['commit', '-a', '-m', text]
        env = encoding.environ.copy()
        if user:
            cmd += ['--author', user]
        if date:
            # git's date parser silently ignores when seconds < 1e9
            # convert to ISO8601
            env['GIT_AUTHOR_DATE'] = dateutil.datestr(date,
                                                      '%Y-%m-%dT%H:%M:%S %1%2')
        self._gitcommand(cmd, env=env)
        # make sure commit works otherwise HEAD might not exist under certain
        # circumstances
        return self._gitstate()

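    # Dates are passed to git via GIT_AUTHOR_DATE in ISO 8601 because, as
    # the comment above notes, git's parser silently mishandles raw epoch
    # values below 1e9 (i.e. pre-September-2001 timestamps). For example,
    # a Mercurial date tuple of (980000000, 0) would be rendered roughly
    # as '2001-01-20T14:13:20 +0000' instead of the ambiguous bare epoch
    # number (exact value shown here computed by hand, so treat it as
    # illustrative).
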
    @annotatesubrepoerror
    def merge(self, state):
        source, revision, kind = state
        self._fetch(source, revision)
        base = self._gitcommand(['merge-base', revision, self._state[1]])
        self._gitupdatestat()
        out, code = self._gitdir(['diff-index', '--quiet', 'HEAD'])

        def mergefunc():
            if base == revision:
                self.get(state) # fast forward merge
            elif base != self._state[1]:
                self._gitcommand(['merge', '--no-commit', revision])
            _sanitize(self.ui, self.wvfs, '.git')

        if self.dirty():
            if self._gitstate() != revision:
                dirty = self._gitstate() == self._state[1] or code != 0
                if _updateprompt(self.ui, self, dirty,
                                 self._state[1][:7], revision[:7]):
                    mergefunc()
        else:
            mergefunc()

1553 @annotatesubrepoerror
1552 @annotatesubrepoerror
1554 def push(self, opts):
1553 def push(self, opts):
1555 force = opts.get('force')
1554 force = opts.get('force')
1556
1555
1557 if not self._state[1]:
1556 if not self._state[1]:
1558 return True
1557 return True
1559 if self._gitmissing():
1558 if self._gitmissing():
1560 raise error.Abort(_("subrepo %s is missing") % self._relpath)
1559 raise error.Abort(_("subrepo %s is missing") % self._relpath)
1561 # if a branch in origin contains the revision, nothing to do
1560 # if a branch in origin contains the revision, nothing to do
1562 branch2rev, rev2branch = self._gitbranchmap()
1561 branch2rev, rev2branch = self._gitbranchmap()
1563 if self._state[1] in rev2branch:
1562 if self._state[1] in rev2branch:
1564 for b in rev2branch[self._state[1]]:
1563 for b in rev2branch[self._state[1]]:
1565 if b.startswith('refs/remotes/origin/'):
1564 if b.startswith('refs/remotes/origin/'):
1566 return True
1565 return True
1567 for b, revision in branch2rev.iteritems():
1566 for b, revision in branch2rev.iteritems():
1568 if b.startswith('refs/remotes/origin/'):
1567 if b.startswith('refs/remotes/origin/'):
1569 if self._gitisancestor(self._state[1], revision):
1568 if self._gitisancestor(self._state[1], revision):
1570 return True
1569 return True
1571 # otherwise, try to push the currently checked out branch
1570 # otherwise, try to push the currently checked out branch
1572 cmd = ['push']
1571 cmd = ['push']
1573 if force:
1572 if force:
1574 cmd.append('--force')
1573 cmd.append('--force')
1575
1574
1576 current = self._gitcurrentbranch()
1575 current = self._gitcurrentbranch()
1577 if current:
1576 if current:
1578 # determine if the current branch is even useful
1577 # determine if the current branch is even useful
1579 if not self._gitisancestor(self._state[1], current):
1578 if not self._gitisancestor(self._state[1], current):
1580 self.ui.warn(_('unrelated git branch checked out '
1579 self.ui.warn(_('unrelated git branch checked out '
1581 'in subrepository "%s"\n') % self._relpath)
1580 'in subrepository "%s"\n') % self._relpath)
1582 return False
1581 return False
1583 self.ui.status(_('pushing branch %s of subrepository "%s"\n') %
1582 self.ui.status(_('pushing branch %s of subrepository "%s"\n') %
1584 (current.split('/', 2)[2], self._relpath))
1583 (current.split('/', 2)[2], self._relpath))
1585 ret = self._gitdir(cmd + ['origin', current])
1584 ret = self._gitdir(cmd + ['origin', current])
1586 return ret[1] == 0
1585 return ret[1] == 0
1587 else:
1586 else:
1588 self.ui.warn(_('no branch checked out in subrepository "%s"\n'
1587 self.ui.warn(_('no branch checked out in subrepository "%s"\n'
1589 'cannot push revision %s\n') %
1588 'cannot push revision %s\n') %
1590 (self._relpath, self._state[1]))
1589 (self._relpath, self._state[1]))
1591 return False
1590 return False
1592
1591
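The "nothing to push" check above amounts to asking whether any refs/remotes/origin/* branch already contains the recorded revision. A standalone sketch with plain git plumbing (the helper name is illustrative):

import subprocess

def contained_in_origin(repo, rev):
    refs = subprocess.check_output(
        ['git', '-C', repo, 'for-each-ref', '--format=%(refname)',
         'refs/remotes/origin']).decode().split()
    for ref in refs:
        # exit status 0 <=> rev is an ancestor of (or equal to) ref
        if subprocess.call(['git', '-C', repo, 'merge-base',
                            '--is-ancestor', rev, ref]) == 0:
            return True
    return False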
1593 @annotatesubrepoerror
1592 @annotatesubrepoerror
1594 def add(self, ui, match, prefix, uipathfn, explicitonly, **opts):
1593 def add(self, ui, match, prefix, uipathfn, explicitonly, **opts):
1595 if self._gitmissing():
1594 if self._gitmissing():
1596 return []
1595 return []
1597
1596
1598 s = self.status(None, unknown=True, clean=True)
1597 s = self.status(None, unknown=True, clean=True)
1599
1598
1600 tracked = set()
1599 tracked = set()
1601 # files in dirstate states 'a', 'm' or 'n' warn; 'r' is added again
1600 # files in dirstate states 'a', 'm' or 'n' warn; 'r' is added again
1602 for l in (s.modified, s.added, s.deleted, s.clean):
1601 for l in (s.modified, s.added, s.deleted, s.clean):
1603 tracked.update(l)
1602 tracked.update(l)
1604
1603
1605 # Unknown files not of interest will be rejected by the matcher
1604 # Unknown files not of interest will be rejected by the matcher
1606 files = s.unknown
1605 files = s.unknown
1607 files.extend(match.files())
1606 files.extend(match.files())
1608
1607
1609 rejected = []
1608 rejected = []
1610
1609
1611 files = [f for f in sorted(set(files)) if match(f)]
1610 files = [f for f in sorted(set(files)) if match(f)]
1612 for f in files:
1611 for f in files:
1613 exact = match.exact(f)
1612 exact = match.exact(f)
1614 command = ["add"]
1613 command = ["add"]
1615 if exact:
1614 if exact:
1616 command.append("-f") # should be added, even if ignored
1615 command.append("-f") # should be added, even if ignored
1617 if ui.verbose or not exact:
1616 if ui.verbose or not exact:
1618 ui.status(_('adding %s\n') % uipathfn(f))
1617 ui.status(_('adding %s\n') % uipathfn(f))
1619
1618
1620 if f in tracked: # hg prints 'adding' even if already tracked
1619 if f in tracked: # hg prints 'adding' even if already tracked
1621 if exact:
1620 if exact:
1622 rejected.append(f)
1621 rejected.append(f)
1623 continue
1622 continue
1624 if not opts.get(r'dry_run'):
1623 if not opts.get(r'dry_run'):
1625 self._gitcommand(command + [f])
1624 self._gitcommand(command + [f])
1626
1625
1627 for f in rejected:
1626 for f in rejected:
1628 ui.warn(_("%s already tracked!\n") % uipathfn(f))
1627 ui.warn(_("%s already tracked!\n") % uipathfn(f))
1629
1628
1630 return rejected
1629 return rejected
1631
1630
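A reduced sketch of the add loop above (plain subprocess, not the subrepo API; names are illustrative): only files the user named exactly get -f, so ignored files are force-added only when explicitly requested.

import subprocess

def git_add(repo, files, exact=()):
    for f in sorted(set(files)):
        cmd = ['git', '-C', repo, 'add']
        if f in exact:
            cmd.append('-f')  # named explicitly: add even if .gitignore'd
        subprocess.check_call(cmd + ['--', f])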
1632 @annotatesubrepoerror
1631 @annotatesubrepoerror
1633 def remove(self):
1632 def remove(self):
1634 if self._gitmissing():
1633 if self._gitmissing():
1635 return
1634 return
1636 if self.dirty():
1635 if self.dirty():
1637 self.ui.warn(_('not removing repo %s because '
1636 self.ui.warn(_('not removing repo %s because '
1638 'it has changes.\n') % self._relpath)
1637 'it has changes.\n') % self._relpath)
1639 return
1638 return
1640 # we can't fully delete the repository as it may contain
1639 # we can't fully delete the repository as it may contain
1641 # local-only history
1640 # local-only history
1642 self.ui.note(_('removing subrepo %s\n') % self._relpath)
1641 self.ui.note(_('removing subrepo %s\n') % self._relpath)
1643 self._gitcommand(['config', 'core.bare', 'true'])
1642 self._gitcommand(['config', 'core.bare', 'true'])
1644 for f, kind in self.wvfs.readdir():
1643 for f, kind in self.wvfs.readdir():
1645 if f == '.git':
1644 if f == '.git':
1646 continue
1645 continue
1647 if kind == stat.S_IFDIR:
1646 if kind == stat.S_IFDIR:
1648 self.wvfs.rmtree(f)
1647 self.wvfs.rmtree(f)
1649 else:
1648 else:
1650 self.wvfs.unlink(f)
1649 self.wvfs.unlink(f)
1651
1650
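The same cleanup with the standard library (a hedged sketch; remove() above goes through Mercurial's vfs layer): flip the clone to bare, then empty the working tree while keeping .git so local-only history survives.

import os
import shutil
import subprocess

def neuter_subrepo(path):
    subprocess.check_call(['git', '-C', path, 'config', 'core.bare', 'true'])
    for name in os.listdir(path):
        if name == '.git':
            continue
        full = os.path.join(path, name)
        if os.path.isdir(full):
            shutil.rmtree(full)
        else:
            os.unlink(full)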
1652 def archive(self, archiver, prefix, match=None, decode=True):
1651 def archive(self, archiver, prefix, match=None, decode=True):
1653 total = 0
1652 total = 0
1654 source, revision = self._state
1653 source, revision = self._state
1655 if not revision:
1654 if not revision:
1656 return total
1655 return total
1657 self._fetch(source, revision)
1656 self._fetch(source, revision)
1658
1657
1659 # Stream git's native archive command and parse its tar output.
1658 # Stream git's native archive command and parse its tar output.
1660 # This should be much faster than manually traversing the trees
1659 # This should be much faster than manually traversing the trees
1661 # and objects with many subprocess calls.
1660 # and objects with many subprocess calls.
1662 tarstream = self._gitcommand(['archive', revision], stream=True)
1661 tarstream = self._gitcommand(['archive', revision], stream=True)
1663 tar = tarfile.open(fileobj=tarstream, mode=r'r|')
1662 tar = tarfile.open(fileobj=tarstream, mode=r'r|')
1664 relpath = subrelpath(self)
1663 relpath = subrelpath(self)
1665 progress = self.ui.makeprogress(_('archiving (%s)') % relpath,
1664 progress = self.ui.makeprogress(_('archiving (%s)') % relpath,
1666 unit=_('files'))
1665 unit=_('files'))
1667 progress.update(0)
1666 progress.update(0)
1668 for info in tar:
1667 for info in tar:
1669 if info.isdir():
1668 if info.isdir():
1670 continue
1669 continue
1671 bname = pycompat.fsencode(info.name)
1670 bname = pycompat.fsencode(info.name)
1672 if match and not match(bname):
1671 if match and not match(bname):
1673 continue
1672 continue
1674 if info.issym():
1673 if info.issym():
1675 data = info.linkname
1674 data = info.linkname
1676 else:
1675 else:
1677 data = tar.extractfile(info).read()
1676 data = tar.extractfile(info).read()
1678 archiver.addfile(prefix + bname, info.mode, info.issym(), data)
1677 archiver.addfile(prefix + bname, info.mode, info.issym(), data)
1679 total += 1
1678 total += 1
1680 progress.increment()
1679 progress.increment()
1681 progress.complete()
1680 progress.complete()
1682 return total
1681 return total
1683
1682
1684
1683
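A standalone version of the streaming pattern above (subprocess instead of _gitcommand; the generator name is illustrative): pipe git archive into tarfile's stream mode ('r|'), which consumes members sequentially without seeking.

import subprocess
import tarfile

def iter_git_archive(repo, rev):
    proc = subprocess.Popen(['git', '-C', repo, 'archive', rev],
                            stdout=subprocess.PIPE)
    tar = tarfile.open(fileobj=proc.stdout, mode='r|')
    for info in tar:
        if info.isdir():
            continue
        data = (info.linkname if info.issym()
                else tar.extractfile(info).read())
        yield info.name, info.mode, info.issym(), data
    tar.close()
    proc.wait()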
1685 @annotatesubrepoerror
1684 @annotatesubrepoerror
1686 def cat(self, match, fm, fntemplate, prefix, **opts):
1685 def cat(self, match, fm, fntemplate, prefix, **opts):
1687 rev = self._state[1]
1686 rev = self._state[1]
1688 if match.anypats():
1687 if match.anypats():
1689 return 1 # no support for include/exclude yet
1688 return 1 # no support for include/exclude yet
1690
1689
1691 if not match.files():
1690 if not match.files():
1692 return 1
1691 return 1
1693
1692
1694 # TODO: add support for non-plain formatter (see cmdutil.cat())
1693 # TODO: add support for non-plain formatter (see cmdutil.cat())
1695 for f in match.files():
1694 for f in match.files():
1696 output = self._gitcommand(["show", "%s:%s" % (rev, f)])
1695 output = self._gitcommand(["show", "%s:%s" % (rev, f)])
1697 fp = cmdutil.makefileobj(self._ctx, fntemplate,
1696 fp = cmdutil.makefileobj(self._ctx, fntemplate,
1698 pathname=self.wvfs.reljoin(prefix, f))
1697 pathname=self.wvfs.reljoin(prefix, f))
1699 fp.write(output)
1698 fp.write(output)
1700 fp.close()
1699 fp.close()
1701 return 0
1700 return 0
1702
1701
1703
1702
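cat() above reduces to a single plumbing call; a minimal sketch (helper name illustrative):

import subprocess

def git_cat(repo, rev, path):
    # 'git show <rev>:<path>' prints the blob exactly as stored
    return subprocess.check_output(['git', '-C', repo,
                                    'show', '%s:%s' % (rev, path)])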
1704 @annotatesubrepoerror
1703 @annotatesubrepoerror
1705 def status(self, rev2, **opts):
1704 def status(self, rev2, **opts):
1706 rev1 = self._state[1]
1705 rev1 = self._state[1]
1707 if self._gitmissing() or not rev1:
1706 if self._gitmissing() or not rev1:
1708 # if the repo is missing, return no results
1707 # if the repo is missing, return no results
1709 return scmutil.status([], [], [], [], [], [], [])
1708 return scmutil.status([], [], [], [], [], [], [])
1710 modified, added, removed = [], [], []
1709 modified, added, removed = [], [], []
1711 self._gitupdatestat()
1710 self._gitupdatestat()
1712 if rev2:
1711 if rev2:
1713 command = ['diff-tree', '--no-renames', '-r', rev1, rev2]
1712 command = ['diff-tree', '--no-renames', '-r', rev1, rev2]
1714 else:
1713 else:
1715 command = ['diff-index', '--no-renames', rev1]
1714 command = ['diff-index', '--no-renames', rev1]
1716 out = self._gitcommand(command)
1715 out = self._gitcommand(command)
1717 for line in out.split('\n'):
1716 for line in out.split('\n'):
1718 tab = line.find('\t')
1717 tab = line.find('\t')
1719 if tab == -1:
1718 if tab == -1:
1720 continue
1719 continue
1721 status, f = line[tab - 1:tab], line[tab + 1:]
1720 status, f = line[tab - 1:tab], line[tab + 1:]
1722 if status == 'M':
1721 if status == 'M':
1723 modified.append(f)
1722 modified.append(f)
1724 elif status == 'A':
1723 elif status == 'A':
1725 added.append(f)
1724 added.append(f)
1726 elif status == 'D':
1725 elif status == 'D':
1727 removed.append(f)
1726 removed.append(f)
1728
1727
1729 deleted, unknown, ignored, clean = [], [], [], []
1728 deleted, unknown, ignored, clean = [], [], [], []
1730
1729
1731 command = ['status', '--porcelain', '-z']
1730 command = ['status', '--porcelain', '-z']
1732 if opts.get(r'unknown'):
1731 if opts.get(r'unknown'):
1733 command += ['--untracked-files=all']
1732 command += ['--untracked-files=all']
1734 if opts.get(r'ignored'):
1733 if opts.get(r'ignored'):
1735 command += ['--ignored']
1734 command += ['--ignored']
1736 out = self._gitcommand(command)
1735 out = self._gitcommand(command)
1737
1736
1738 changedfiles = set()
1737 changedfiles = set()
1739 changedfiles.update(modified)
1738 changedfiles.update(modified)
1740 changedfiles.update(added)
1739 changedfiles.update(added)
1741 changedfiles.update(removed)
1740 changedfiles.update(removed)
1742 for line in out.split('\0'):
1741 for line in out.split('\0'):
1743 if not line:
1742 if not line:
1744 continue
1743 continue
1745 st = line[0:2]
1744 st = line[0:2]
1746 # moves and copies show two files on one line
1745 # moves and copies show two files on one line
1747 if line.find('\0') >= 0:
1746 if line.find('\0') >= 0:
1748 filename1, filename2 = line[3:].split('\0')
1747 filename1, filename2 = line[3:].split('\0')
1749 else:
1748 else:
1750 filename1 = line[3:]
1749 filename1 = line[3:]
1751 filename2 = None
1750 filename2 = None
1752
1751
1753 changedfiles.add(filename1)
1752 changedfiles.add(filename1)
1754 if filename2:
1753 if filename2:
1755 changedfiles.add(filename2)
1754 changedfiles.add(filename2)
1756
1755
1757 if st == '??':
1756 if st == '??':
1758 unknown.append(filename1)
1757 unknown.append(filename1)
1759 elif st == '!!':
1758 elif st == '!!':
1760 ignored.append(filename1)
1759 ignored.append(filename1)
1761
1760
1762 if opts.get(r'clean'):
1761 if opts.get(r'clean'):
1763 out = self._gitcommand(['ls-files'])
1762 out = self._gitcommand(['ls-files'])
1764 for f in out.split('\n'):
1763 for f in out.split('\n'):
1765 if f not in changedfiles:
1764 if f not in changedfiles:
1766 clean.append(f)
1765 clean.append(f)
1767
1766
1768 return scmutil.status(modified, added, removed, deleted,
1767 return scmutil.status(modified, added, removed, deleted,
1769 unknown, ignored, clean)
1768 unknown, ignored, clean)
1770
1769
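For reference, a standalone sketch of the '--porcelain -z' parsing above. One detail worth flagging as an assumption: with -z, a rename/copy entry is followed by a second NUL-terminated record carrying the source path, so a blanket split('\0') has to consume records in pairs for 'R'/'C' statuses.

import subprocess

def porcelain_status(repo):
    out = subprocess.check_output(
        ['git', '-C', repo, 'status', '--porcelain', '-z',
         '--untracked-files=all'])
    records = iter(out.decode('utf-8', 'replace').split('\0'))
    entries = []
    for rec in records:
        if not rec:
            continue
        st, name = rec[:2], rec[3:]
        # rename/copy: the next record is the source filename
        src = next(records, None) if st[0] in 'RC' else None
        entries.append((st, name, src))
    return entries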
1771 @annotatesubrepoerror
1770 @annotatesubrepoerror
1772 def diff(self, ui, diffopts, node2, match, prefix, **opts):
1771 def diff(self, ui, diffopts, node2, match, prefix, **opts):
1773 node1 = self._state[1]
1772 node1 = self._state[1]
1774 cmd = ['diff', '--no-renames']
1773 cmd = ['diff', '--no-renames']
1775 if opts[r'stat']:
1774 if opts[r'stat']:
1776 cmd.append('--stat')
1775 cmd.append('--stat')
1777 else:
1776 else:
1778 # for Git, this also implies '-p'
1777 # for Git, this also implies '-p'
1779 cmd.append('-U%d' % diffopts.context)
1778 cmd.append('-U%d' % diffopts.context)
1780
1779
1781 if diffopts.noprefix:
1780 if diffopts.noprefix:
1782 cmd.extend(['--src-prefix=%s/' % prefix,
1781 cmd.extend(['--src-prefix=%s/' % prefix,
1783 '--dst-prefix=%s/' % prefix])
1782 '--dst-prefix=%s/' % prefix])
1784 else:
1783 else:
1785 cmd.extend(['--src-prefix=a/%s/' % prefix,
1784 cmd.extend(['--src-prefix=a/%s/' % prefix,
1786 '--dst-prefix=b/%s/' % prefix])
1785 '--dst-prefix=b/%s/' % prefix])
1787
1786
1788 if diffopts.ignorews:
1787 if diffopts.ignorews:
1789 cmd.append('--ignore-all-space')
1788 cmd.append('--ignore-all-space')
1790 if diffopts.ignorewsamount:
1789 if diffopts.ignorewsamount:
1791 cmd.append('--ignore-space-change')
1790 cmd.append('--ignore-space-change')
1792 if self._gitversion(self._gitcommand(['--version'])) >= (1, 8, 4) \
1791 if self._gitversion(self._gitcommand(['--version'])) >= (1, 8, 4) \
1793 and diffopts.ignoreblanklines:
1792 and diffopts.ignoreblanklines:
1794 cmd.append('--ignore-blank-lines')
1793 cmd.append('--ignore-blank-lines')
1795
1794
1796 cmd.append(node1)
1795 cmd.append(node1)
1797 if node2:
1796 if node2:
1798 cmd.append(node2)
1797 cmd.append(node2)
1799
1798
1800 output = ""
1799 output = ""
1801 if match.always():
1800 if match.always():
1802 output += self._gitcommand(cmd) + '\n'
1801 output += self._gitcommand(cmd) + '\n'
1803 else:
1802 else:
1804 st = self.status(node2)[:3]
1803 st = self.status(node2)[:3]
1805 files = [f for sublist in st for f in sublist]
1804 files = [f for sublist in st for f in sublist]
1806 for f in files:
1805 for f in files:
1807 if match(f):
1806 if match(f):
1808 output += self._gitcommand(cmd + ['--', f]) + '\n'
1807 output += self._gitcommand(cmd + ['--', f]) + '\n'
1809
1808
1810 if output.strip():
1809 if output.strip():
1811 ui.write(output)
1810 ui.write(output)
1812
1811
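The flag assembly above, minus Mercurial's diffopts plumbing (argument names here are illustrative); the one version-sensitive flag is --ignore-blank-lines, which needs git >= 1.8.4:

def build_git_diff_cmd(prefix, context=3, stat=False, noprefix=False,
                       ignoreblanklines=False, gitversion=(2, 0, 0)):
    cmd = ['diff', '--no-renames']
    # --stat and -U<n> are alternatives; for git, -U also implies -p
    cmd.append('--stat' if stat else '-U%d' % context)
    if noprefix:
        cmd += ['--src-prefix=%s/' % prefix, '--dst-prefix=%s/' % prefix]
    else:
        cmd += ['--src-prefix=a/%s/' % prefix, '--dst-prefix=b/%s/' % prefix]
    if ignoreblanklines and gitversion >= (1, 8, 4):
        cmd.append('--ignore-blank-lines')
    return cmd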
1813 @annotatesubrepoerror
1812 @annotatesubrepoerror
1814 def revert(self, substate, *pats, **opts):
1813 def revert(self, substate, *pats, **opts):
1815 self.ui.status(_('reverting subrepo %s\n') % substate[0])
1814 self.ui.status(_('reverting subrepo %s\n') % substate[0])
1816 if not opts.get(r'no_backup'):
1815 if not opts.get(r'no_backup'):
1817 status = self.status(None)
1816 status = self.status(None)
1818 names = status.modified
1817 names = status.modified
1819 for name in names:
1818 for name in names:
1820 # backuppath() expects a path relative to the parent repo (the
1819 # backuppath() expects a path relative to the parent repo (the
1821 # repo that ui.origbackuppath is relative to)
1820 # repo that ui.origbackuppath is relative to)
1822 parentname = os.path.join(self._path, name)
1821 parentname = os.path.join(self._path, name)
1823 bakname = scmutil.backuppath(self.ui, self._subparent,
1822 bakname = scmutil.backuppath(self.ui, self._subparent,
1824 parentname)
1823 parentname)
1825 self.ui.note(_('saving current version of %s as %s\n') %
1824 self.ui.note(_('saving current version of %s as %s\n') %
1826 (name, os.path.relpath(bakname)))
1825 (name, os.path.relpath(bakname)))
1827 util.rename(self.wvfs.join(name), bakname)
1826 util.rename(self.wvfs.join(name), bakname)
1828
1827
1829 if not opts.get(r'dry_run'):
1828 if not opts.get(r'dry_run'):
1830 self.get(substate, overwrite=True)
1829 self.get(substate, overwrite=True)
1831 return []
1830 return []
1832
1831
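The backup-then-overwrite dance in revert() above, reduced to the standard library (the helper and suffix are illustrative, not Mercurial's backuppath() policy):

import os
import shutil

def backup_modified(root, modified, suffix='.orig'):
    for name in modified:
        src = os.path.join(root, name)
        shutil.move(src, src + suffix)  # counterpart of util.rename() above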
1833 def shortid(self, revid):
1832 def shortid(self, revid):
1834 return revid[:7]
1833 return revid[:7]
1835
1834
1836 types = {
1835 types = {
1837 'hg': hgsubrepo,
1836 'hg': hgsubrepo,
1838 'svn': svnsubrepo,
1837 'svn': svnsubrepo,
1839 'git': gitsubrepo,
1838 'git': gitsubrepo,
1840 }
1839 }
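The types table is a plain kind-to-class dispatch; a hedged sketch of a lookup (the real construction in subrepo.py passes more context than shown here):

def subrepo_class(kind):
    try:
        return types[kind]
    except KeyError:
        raise ValueError('unknown subrepo type: %r' % kind)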
@@ -1,832 +1,832 b''
1 from __future__ import absolute_import
1 from __future__ import absolute_import
2
2
3 import unittest
3 import unittest
4
4
5 import silenttestrunner
5 import silenttestrunner
6
6
7 from mercurial import (
7 from mercurial import (
8 match as matchmod,
8 match as matchmod,
9 util,
9 util,
10 )
10 )
11
11
12 class BaseMatcherTests(unittest.TestCase):
12 class BaseMatcherTests(unittest.TestCase):
13
13
14 def testVisitdir(self):
14 def testVisitdir(self):
15 m = matchmod.basematcher(b'', b'')
15 m = matchmod.basematcher()
16 self.assertTrue(m.visitdir(b'.'))
16 self.assertTrue(m.visitdir(b'.'))
17 self.assertTrue(m.visitdir(b'dir'))
17 self.assertTrue(m.visitdir(b'dir'))
18
18
19 def testVisitchildrenset(self):
19 def testVisitchildrenset(self):
20 m = matchmod.basematcher(b'', b'')
20 m = matchmod.basematcher()
21 self.assertEqual(m.visitchildrenset(b'.'), b'this')
21 self.assertEqual(m.visitchildrenset(b'.'), b'this')
22 self.assertEqual(m.visitchildrenset(b'dir'), b'this')
22 self.assertEqual(m.visitchildrenset(b'dir'), b'this')
23
23
24 class AlwaysMatcherTests(unittest.TestCase):
24 class AlwaysMatcherTests(unittest.TestCase):
25
25
26 def testVisitdir(self):
26 def testVisitdir(self):
27 m = matchmod.alwaysmatcher(b'', b'')
27 m = matchmod.alwaysmatcher()
28 self.assertEqual(m.visitdir(b'.'), b'all')
28 self.assertEqual(m.visitdir(b'.'), b'all')
29 self.assertEqual(m.visitdir(b'dir'), b'all')
29 self.assertEqual(m.visitdir(b'dir'), b'all')
30
30
31 def testVisitchildrenset(self):
31 def testVisitchildrenset(self):
32 m = matchmod.alwaysmatcher(b'', b'')
32 m = matchmod.alwaysmatcher()
33 self.assertEqual(m.visitchildrenset(b'.'), b'all')
33 self.assertEqual(m.visitchildrenset(b'.'), b'all')
34 self.assertEqual(m.visitchildrenset(b'dir'), b'all')
34 self.assertEqual(m.visitchildrenset(b'dir'), b'all')
35
35
36 class NeverMatcherTests(unittest.TestCase):
36 class NeverMatcherTests(unittest.TestCase):
37
37
38 def testVisitdir(self):
38 def testVisitdir(self):
39 m = matchmod.nevermatcher(b'', b'')
39 m = matchmod.nevermatcher()
40 self.assertFalse(m.visitdir(b'.'))
40 self.assertFalse(m.visitdir(b'.'))
41 self.assertFalse(m.visitdir(b'dir'))
41 self.assertFalse(m.visitdir(b'dir'))
42
42
43 def testVisitchildrenset(self):
43 def testVisitchildrenset(self):
44 m = matchmod.nevermatcher(b'', b'')
44 m = matchmod.nevermatcher()
45 self.assertEqual(m.visitchildrenset(b'.'), set())
45 self.assertEqual(m.visitchildrenset(b'.'), set())
46 self.assertEqual(m.visitchildrenset(b'dir'), set())
46 self.assertEqual(m.visitchildrenset(b'dir'), set())
47
47
48 class PredicateMatcherTests(unittest.TestCase):
48 class PredicateMatcherTests(unittest.TestCase):
49 # predicatematcher does not currently define either of these methods, so
49 # predicatematcher does not currently define either of these methods, so
50 # this is equivalent to BaseMatcherTests.
50 # this is equivalent to BaseMatcherTests.
51
51
52 def testVisitdir(self):
52 def testVisitdir(self):
53 m = matchmod.predicatematcher(b'', b'', lambda *a: False)
53 m = matchmod.predicatematcher(lambda *a: False)
54 self.assertTrue(m.visitdir(b'.'))
54 self.assertTrue(m.visitdir(b'.'))
55 self.assertTrue(m.visitdir(b'dir'))
55 self.assertTrue(m.visitdir(b'dir'))
56
56
57 def testVisitchildrenset(self):
57 def testVisitchildrenset(self):
58 m = matchmod.predicatematcher(b'', b'', lambda *a: False)
58 m = matchmod.predicatematcher(lambda *a: False)
59 self.assertEqual(m.visitchildrenset(b'.'), b'this')
59 self.assertEqual(m.visitchildrenset(b'.'), b'this')
60 self.assertEqual(m.visitchildrenset(b'dir'), b'this')
60 self.assertEqual(m.visitchildrenset(b'dir'), b'this')
61
61
62 class PatternMatcherTests(unittest.TestCase):
62 class PatternMatcherTests(unittest.TestCase):
63
63
64 def testVisitdirPrefix(self):
64 def testVisitdirPrefix(self):
65 m = matchmod.match(b'x', b'', patterns=[b'path:dir/subdir'])
65 m = matchmod.match(b'x', b'', patterns=[b'path:dir/subdir'])
66 assert isinstance(m, matchmod.patternmatcher)
66 assert isinstance(m, matchmod.patternmatcher)
67 self.assertTrue(m.visitdir(b'.'))
67 self.assertTrue(m.visitdir(b'.'))
68 self.assertTrue(m.visitdir(b'dir'))
68 self.assertTrue(m.visitdir(b'dir'))
69 self.assertEqual(m.visitdir(b'dir/subdir'), b'all')
69 self.assertEqual(m.visitdir(b'dir/subdir'), b'all')
70 # OPT: This should probably be 'all' if its parent is?
70 # OPT: This should probably be 'all' if its parent is?
71 self.assertTrue(m.visitdir(b'dir/subdir/x'))
71 self.assertTrue(m.visitdir(b'dir/subdir/x'))
72 self.assertFalse(m.visitdir(b'folder'))
72 self.assertFalse(m.visitdir(b'folder'))
73
73
74 def testVisitchildrensetPrefix(self):
74 def testVisitchildrensetPrefix(self):
75 m = matchmod.match(b'x', b'', patterns=[b'path:dir/subdir'])
75 m = matchmod.match(b'x', b'', patterns=[b'path:dir/subdir'])
76 assert isinstance(m, matchmod.patternmatcher)
76 assert isinstance(m, matchmod.patternmatcher)
77 self.assertEqual(m.visitchildrenset(b'.'), b'this')
77 self.assertEqual(m.visitchildrenset(b'.'), b'this')
78 self.assertEqual(m.visitchildrenset(b'dir'), b'this')
78 self.assertEqual(m.visitchildrenset(b'dir'), b'this')
79 self.assertEqual(m.visitchildrenset(b'dir/subdir'), b'all')
79 self.assertEqual(m.visitchildrenset(b'dir/subdir'), b'all')
80 # OPT: This should probably be 'all' if its parent is?
80 # OPT: This should probably be 'all' if its parent is?
81 self.assertEqual(m.visitchildrenset(b'dir/subdir/x'), b'this')
81 self.assertEqual(m.visitchildrenset(b'dir/subdir/x'), b'this')
82 self.assertEqual(m.visitchildrenset(b'folder'), set())
82 self.assertEqual(m.visitchildrenset(b'folder'), set())
83
83
84 def testVisitdirRootfilesin(self):
84 def testVisitdirRootfilesin(self):
85 m = matchmod.match(b'x', b'', patterns=[b'rootfilesin:dir/subdir'])
85 m = matchmod.match(b'x', b'', patterns=[b'rootfilesin:dir/subdir'])
86 assert isinstance(m, matchmod.patternmatcher)
86 assert isinstance(m, matchmod.patternmatcher)
87 self.assertTrue(m.visitdir(b'.'))
87 self.assertTrue(m.visitdir(b'.'))
88 self.assertFalse(m.visitdir(b'dir/subdir/x'))
88 self.assertFalse(m.visitdir(b'dir/subdir/x'))
89 self.assertFalse(m.visitdir(b'folder'))
89 self.assertFalse(m.visitdir(b'folder'))
90 # FIXME: These should probably be True.
90 # FIXME: These should probably be True.
91 self.assertFalse(m.visitdir(b'dir'))
91 self.assertFalse(m.visitdir(b'dir'))
92 self.assertFalse(m.visitdir(b'dir/subdir'))
92 self.assertFalse(m.visitdir(b'dir/subdir'))
93
93
94 def testVisitchildrensetRootfilesin(self):
94 def testVisitchildrensetRootfilesin(self):
95 m = matchmod.match(b'x', b'', patterns=[b'rootfilesin:dir/subdir'])
95 m = matchmod.match(b'x', b'', patterns=[b'rootfilesin:dir/subdir'])
96 assert isinstance(m, matchmod.patternmatcher)
96 assert isinstance(m, matchmod.patternmatcher)
97 self.assertEqual(m.visitchildrenset(b'.'), b'this')
97 self.assertEqual(m.visitchildrenset(b'.'), b'this')
98 self.assertEqual(m.visitchildrenset(b'dir/subdir/x'), set())
98 self.assertEqual(m.visitchildrenset(b'dir/subdir/x'), set())
99 self.assertEqual(m.visitchildrenset(b'folder'), set())
99 self.assertEqual(m.visitchildrenset(b'folder'), set())
100 # FIXME: These should probably be {'subdir'} and 'this', respectively,
100 # FIXME: These should probably be {'subdir'} and 'this', respectively,
101 # or at least 'this' and 'this'.
101 # or at least 'this' and 'this'.
102 self.assertEqual(m.visitchildrenset(b'dir'), set())
102 self.assertEqual(m.visitchildrenset(b'dir'), set())
103 self.assertEqual(m.visitchildrenset(b'dir/subdir'), set())
103 self.assertEqual(m.visitchildrenset(b'dir/subdir'), set())
104
104
105 def testVisitdirGlob(self):
105 def testVisitdirGlob(self):
106 m = matchmod.match(b'x', b'', patterns=[b'glob:dir/z*'])
106 m = matchmod.match(b'x', b'', patterns=[b'glob:dir/z*'])
107 assert isinstance(m, matchmod.patternmatcher)
107 assert isinstance(m, matchmod.patternmatcher)
108 self.assertTrue(m.visitdir(b'.'))
108 self.assertTrue(m.visitdir(b'.'))
109 self.assertTrue(m.visitdir(b'dir'))
109 self.assertTrue(m.visitdir(b'dir'))
110 self.assertFalse(m.visitdir(b'folder'))
110 self.assertFalse(m.visitdir(b'folder'))
111 # OPT: these should probably be False.
111 # OPT: these should probably be False.
112 self.assertTrue(m.visitdir(b'dir/subdir'))
112 self.assertTrue(m.visitdir(b'dir/subdir'))
113 self.assertTrue(m.visitdir(b'dir/subdir/x'))
113 self.assertTrue(m.visitdir(b'dir/subdir/x'))
114
114
115 def testVisitchildrensetGlob(self):
115 def testVisitchildrensetGlob(self):
116 m = matchmod.match(b'x', b'', patterns=[b'glob:dir/z*'])
116 m = matchmod.match(b'x', b'', patterns=[b'glob:dir/z*'])
117 assert isinstance(m, matchmod.patternmatcher)
117 assert isinstance(m, matchmod.patternmatcher)
118 self.assertEqual(m.visitchildrenset(b'.'), b'this')
118 self.assertEqual(m.visitchildrenset(b'.'), b'this')
119 self.assertEqual(m.visitchildrenset(b'folder'), set())
119 self.assertEqual(m.visitchildrenset(b'folder'), set())
120 self.assertEqual(m.visitchildrenset(b'dir'), b'this')
120 self.assertEqual(m.visitchildrenset(b'dir'), b'this')
121 # OPT: these should probably be set().
121 # OPT: these should probably be set().
122 self.assertEqual(m.visitchildrenset(b'dir/subdir'), b'this')
122 self.assertEqual(m.visitchildrenset(b'dir/subdir'), b'this')
123 self.assertEqual(m.visitchildrenset(b'dir/subdir/x'), b'this')
123 self.assertEqual(m.visitchildrenset(b'dir/subdir/x'), b'this')
124
124
125 class IncludeMatcherTests(unittest.TestCase):
125 class IncludeMatcherTests(unittest.TestCase):
126
126
127 def testVisitdirPrefix(self):
127 def testVisitdirPrefix(self):
128 m = matchmod.match(b'x', b'', include=[b'path:dir/subdir'])
128 m = matchmod.match(b'x', b'', include=[b'path:dir/subdir'])
129 assert isinstance(m, matchmod.includematcher)
129 assert isinstance(m, matchmod.includematcher)
130 self.assertTrue(m.visitdir(b'.'))
130 self.assertTrue(m.visitdir(b'.'))
131 self.assertTrue(m.visitdir(b'dir'))
131 self.assertTrue(m.visitdir(b'dir'))
132 self.assertEqual(m.visitdir(b'dir/subdir'), b'all')
132 self.assertEqual(m.visitdir(b'dir/subdir'), b'all')
133 # OPT: This should probably be 'all' if its parent is?
133 # OPT: This should probably be 'all' if its parent is?
134 self.assertTrue(m.visitdir(b'dir/subdir/x'))
134 self.assertTrue(m.visitdir(b'dir/subdir/x'))
135 self.assertFalse(m.visitdir(b'folder'))
135 self.assertFalse(m.visitdir(b'folder'))
136
136
137 def testVisitchildrensetPrefix(self):
137 def testVisitchildrensetPrefix(self):
138 m = matchmod.match(b'x', b'', include=[b'path:dir/subdir'])
138 m = matchmod.match(b'x', b'', include=[b'path:dir/subdir'])
139 assert isinstance(m, matchmod.includematcher)
139 assert isinstance(m, matchmod.includematcher)
140 self.assertEqual(m.visitchildrenset(b'.'), {b'dir'})
140 self.assertEqual(m.visitchildrenset(b'.'), {b'dir'})
141 self.assertEqual(m.visitchildrenset(b'dir'), {b'subdir'})
141 self.assertEqual(m.visitchildrenset(b'dir'), {b'subdir'})
142 self.assertEqual(m.visitchildrenset(b'dir/subdir'), b'all')
142 self.assertEqual(m.visitchildrenset(b'dir/subdir'), b'all')
143 # OPT: This should probably be 'all' if its parent is?
143 # OPT: This should probably be 'all' if its parent is?
144 self.assertEqual(m.visitchildrenset(b'dir/subdir/x'), b'this')
144 self.assertEqual(m.visitchildrenset(b'dir/subdir/x'), b'this')
145 self.assertEqual(m.visitchildrenset(b'folder'), set())
145 self.assertEqual(m.visitchildrenset(b'folder'), set())
146
146
147 def testVisitdirRootfilesin(self):
147 def testVisitdirRootfilesin(self):
148 m = matchmod.match(b'x', b'', include=[b'rootfilesin:dir/subdir'])
148 m = matchmod.match(b'x', b'', include=[b'rootfilesin:dir/subdir'])
149 assert isinstance(m, matchmod.includematcher)
149 assert isinstance(m, matchmod.includematcher)
150 self.assertTrue(m.visitdir(b'.'))
150 self.assertTrue(m.visitdir(b'.'))
151 self.assertTrue(m.visitdir(b'dir'))
151 self.assertTrue(m.visitdir(b'dir'))
152 self.assertTrue(m.visitdir(b'dir/subdir'))
152 self.assertTrue(m.visitdir(b'dir/subdir'))
153 self.assertFalse(m.visitdir(b'dir/subdir/x'))
153 self.assertFalse(m.visitdir(b'dir/subdir/x'))
154 self.assertFalse(m.visitdir(b'folder'))
154 self.assertFalse(m.visitdir(b'folder'))
155
155
156 def testVisitchildrensetRootfilesin(self):
156 def testVisitchildrensetRootfilesin(self):
157 m = matchmod.match(b'x', b'', include=[b'rootfilesin:dir/subdir'])
157 m = matchmod.match(b'x', b'', include=[b'rootfilesin:dir/subdir'])
158 assert isinstance(m, matchmod.includematcher)
158 assert isinstance(m, matchmod.includematcher)
159 self.assertEqual(m.visitchildrenset(b'.'), {b'dir'})
159 self.assertEqual(m.visitchildrenset(b'.'), {b'dir'})
160 self.assertEqual(m.visitchildrenset(b'dir'), {b'subdir'})
160 self.assertEqual(m.visitchildrenset(b'dir'), {b'subdir'})
161 self.assertEqual(m.visitchildrenset(b'dir/subdir'), b'this')
161 self.assertEqual(m.visitchildrenset(b'dir/subdir'), b'this')
162 self.assertEqual(m.visitchildrenset(b'dir/subdir/x'), set())
162 self.assertEqual(m.visitchildrenset(b'dir/subdir/x'), set())
163 self.assertEqual(m.visitchildrenset(b'folder'), set())
163 self.assertEqual(m.visitchildrenset(b'folder'), set())
164
164
165 def testVisitdirGlob(self):
165 def testVisitdirGlob(self):
166 m = matchmod.match(b'x', b'', include=[b'glob:dir/z*'])
166 m = matchmod.match(b'x', b'', include=[b'glob:dir/z*'])
167 assert isinstance(m, matchmod.includematcher)
167 assert isinstance(m, matchmod.includematcher)
168 self.assertTrue(m.visitdir(b'.'))
168 self.assertTrue(m.visitdir(b'.'))
169 self.assertTrue(m.visitdir(b'dir'))
169 self.assertTrue(m.visitdir(b'dir'))
170 self.assertFalse(m.visitdir(b'folder'))
170 self.assertFalse(m.visitdir(b'folder'))
171 # OPT: these should probably be False.
171 # OPT: these should probably be False.
172 self.assertTrue(m.visitdir(b'dir/subdir'))
172 self.assertTrue(m.visitdir(b'dir/subdir'))
173 self.assertTrue(m.visitdir(b'dir/subdir/x'))
173 self.assertTrue(m.visitdir(b'dir/subdir/x'))
174
174
175 def testVisitchildrensetGlob(self):
175 def testVisitchildrensetGlob(self):
176 m = matchmod.match(b'x', b'', include=[b'glob:dir/z*'])
176 m = matchmod.match(b'x', b'', include=[b'glob:dir/z*'])
177 assert isinstance(m, matchmod.includematcher)
177 assert isinstance(m, matchmod.includematcher)
178 self.assertEqual(m.visitchildrenset(b'.'), {b'dir'})
178 self.assertEqual(m.visitchildrenset(b'.'), {b'dir'})
179 self.assertEqual(m.visitchildrenset(b'folder'), set())
179 self.assertEqual(m.visitchildrenset(b'folder'), set())
180 self.assertEqual(m.visitchildrenset(b'dir'), b'this')
180 self.assertEqual(m.visitchildrenset(b'dir'), b'this')
181 # OPT: these should probably be set().
181 # OPT: these should probably be set().
182 self.assertEqual(m.visitchildrenset(b'dir/subdir'), b'this')
182 self.assertEqual(m.visitchildrenset(b'dir/subdir'), b'this')
183 self.assertEqual(m.visitchildrenset(b'dir/subdir/x'), b'this')
183 self.assertEqual(m.visitchildrenset(b'dir/subdir/x'), b'this')
184
184
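The include-matcher assertions above pin down the visitchildrenset() contract; restated as a tiny runnable snippet (same values as the tests, using the post-API-change constructors):

from mercurial import match as matchmod

m = matchmod.match(b'x', b'', include=[b'path:dir/subdir'])
assert m.visitchildrenset(b'.') == {b'dir'}         # descend only into dir/
assert m.visitchildrenset(b'dir') == {b'subdir'}
assert m.visitchildrenset(b'dir/subdir') == b'all'  # take the whole subtree
assert m.visitchildrenset(b'folder') == set()       # prune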
185 class ExactMatcherTests(unittest.TestCase):
185 class ExactMatcherTests(unittest.TestCase):
186
186
187 def testVisitdir(self):
187 def testVisitdir(self):
188 m = matchmod.exact(b'x', b'', files=[b'dir/subdir/foo.txt'])
188 m = matchmod.exact(b'x', b'', files=[b'dir/subdir/foo.txt'])
189 assert isinstance(m, matchmod.exactmatcher)
189 assert isinstance(m, matchmod.exactmatcher)
190 self.assertTrue(m.visitdir(b'.'))
190 self.assertTrue(m.visitdir(b'.'))
191 self.assertTrue(m.visitdir(b'dir'))
191 self.assertTrue(m.visitdir(b'dir'))
192 self.assertTrue(m.visitdir(b'dir/subdir'))
192 self.assertTrue(m.visitdir(b'dir/subdir'))
193 self.assertFalse(m.visitdir(b'dir/subdir/foo.txt'))
193 self.assertFalse(m.visitdir(b'dir/subdir/foo.txt'))
194 self.assertFalse(m.visitdir(b'dir/foo'))
194 self.assertFalse(m.visitdir(b'dir/foo'))
195 self.assertFalse(m.visitdir(b'dir/subdir/x'))
195 self.assertFalse(m.visitdir(b'dir/subdir/x'))
196 self.assertFalse(m.visitdir(b'folder'))
196 self.assertFalse(m.visitdir(b'folder'))
197
197
198 def testVisitchildrenset(self):
198 def testVisitchildrenset(self):
199 m = matchmod.exact(b'x', b'', files=[b'dir/subdir/foo.txt'])
199 m = matchmod.exact(b'x', b'', files=[b'dir/subdir/foo.txt'])
200 assert isinstance(m, matchmod.exactmatcher)
200 assert isinstance(m, matchmod.exactmatcher)
201 self.assertEqual(m.visitchildrenset(b'.'), {b'dir'})
201 self.assertEqual(m.visitchildrenset(b'.'), {b'dir'})
202 self.assertEqual(m.visitchildrenset(b'dir'), {b'subdir'})
202 self.assertEqual(m.visitchildrenset(b'dir'), {b'subdir'})
203 self.assertEqual(m.visitchildrenset(b'dir/subdir'), {b'foo.txt'})
203 self.assertEqual(m.visitchildrenset(b'dir/subdir'), {b'foo.txt'})
204 self.assertEqual(m.visitchildrenset(b'dir/subdir/x'), set())
204 self.assertEqual(m.visitchildrenset(b'dir/subdir/x'), set())
205 self.assertEqual(m.visitchildrenset(b'dir/subdir/foo.txt'), set())
205 self.assertEqual(m.visitchildrenset(b'dir/subdir/foo.txt'), set())
206 self.assertEqual(m.visitchildrenset(b'folder'), set())
206 self.assertEqual(m.visitchildrenset(b'folder'), set())
207
207
208 def testVisitchildrensetFilesAndDirs(self):
208 def testVisitchildrensetFilesAndDirs(self):
209 m = matchmod.exact(b'x', b'', files=[b'rootfile.txt',
209 m = matchmod.exact(b'x', b'', files=[b'rootfile.txt',
210 b'a/file1.txt',
210 b'a/file1.txt',
211 b'a/b/file2.txt',
211 b'a/b/file2.txt',
212 # no file in a/b/c
212 # no file in a/b/c
213 b'a/b/c/d/file4.txt'])
213 b'a/b/c/d/file4.txt'])
214 assert isinstance(m, matchmod.exactmatcher)
214 assert isinstance(m, matchmod.exactmatcher)
215 self.assertEqual(m.visitchildrenset(b'.'), {b'a', b'rootfile.txt'})
215 self.assertEqual(m.visitchildrenset(b'.'), {b'a', b'rootfile.txt'})
216 self.assertEqual(m.visitchildrenset(b'a'), {b'b', b'file1.txt'})
216 self.assertEqual(m.visitchildrenset(b'a'), {b'b', b'file1.txt'})
217 self.assertEqual(m.visitchildrenset(b'a/b'), {b'c', b'file2.txt'})
217 self.assertEqual(m.visitchildrenset(b'a/b'), {b'c', b'file2.txt'})
218 self.assertEqual(m.visitchildrenset(b'a/b/c'), {b'd'})
218 self.assertEqual(m.visitchildrenset(b'a/b/c'), {b'd'})
219 self.assertEqual(m.visitchildrenset(b'a/b/c/d'), {b'file4.txt'})
219 self.assertEqual(m.visitchildrenset(b'a/b/c/d'), {b'file4.txt'})
220 self.assertEqual(m.visitchildrenset(b'a/b/c/d/e'), set())
220 self.assertEqual(m.visitchildrenset(b'a/b/c/d/e'), set())
221 self.assertEqual(m.visitchildrenset(b'folder'), set())
221 self.assertEqual(m.visitchildrenset(b'folder'), set())
222
222
223 class DifferenceMatcherTests(unittest.TestCase):
223 class DifferenceMatcherTests(unittest.TestCase):
224
224
225 def testVisitdirM2always(self):
225 def testVisitdirM2always(self):
226 m1 = matchmod.alwaysmatcher(b'', b'')
226 m1 = matchmod.alwaysmatcher()
227 m2 = matchmod.alwaysmatcher(b'', b'')
227 m2 = matchmod.alwaysmatcher()
228 dm = matchmod.differencematcher(m1, m2)
228 dm = matchmod.differencematcher(m1, m2)
229 # dm should be equivalent to a nevermatcher.
229 # dm should be equivalent to a nevermatcher.
230 self.assertFalse(dm.visitdir(b'.'))
230 self.assertFalse(dm.visitdir(b'.'))
231 self.assertFalse(dm.visitdir(b'dir'))
231 self.assertFalse(dm.visitdir(b'dir'))
232 self.assertFalse(dm.visitdir(b'dir/subdir'))
232 self.assertFalse(dm.visitdir(b'dir/subdir'))
233 self.assertFalse(dm.visitdir(b'dir/subdir/z'))
233 self.assertFalse(dm.visitdir(b'dir/subdir/z'))
234 self.assertFalse(dm.visitdir(b'dir/foo'))
234 self.assertFalse(dm.visitdir(b'dir/foo'))
235 self.assertFalse(dm.visitdir(b'dir/subdir/x'))
235 self.assertFalse(dm.visitdir(b'dir/subdir/x'))
236 self.assertFalse(dm.visitdir(b'folder'))
236 self.assertFalse(dm.visitdir(b'folder'))
237
237
238 def testVisitchildrensetM2always(self):
238 def testVisitchildrensetM2always(self):
239 m1 = matchmod.alwaysmatcher(b'', b'')
239 m1 = matchmod.alwaysmatcher()
240 m2 = matchmod.alwaysmatcher(b'', b'')
240 m2 = matchmod.alwaysmatcher()
241 dm = matchmod.differencematcher(m1, m2)
241 dm = matchmod.differencematcher(m1, m2)
242 # dm should be equivalent to a nevermatcher.
242 # dm should be equivalent to a nevermatcher.
243 self.assertEqual(dm.visitchildrenset(b'.'), set())
243 self.assertEqual(dm.visitchildrenset(b'.'), set())
244 self.assertEqual(dm.visitchildrenset(b'dir'), set())
244 self.assertEqual(dm.visitchildrenset(b'dir'), set())
245 self.assertEqual(dm.visitchildrenset(b'dir/subdir'), set())
245 self.assertEqual(dm.visitchildrenset(b'dir/subdir'), set())
246 self.assertEqual(dm.visitchildrenset(b'dir/subdir/z'), set())
246 self.assertEqual(dm.visitchildrenset(b'dir/subdir/z'), set())
247 self.assertEqual(dm.visitchildrenset(b'dir/foo'), set())
247 self.assertEqual(dm.visitchildrenset(b'dir/foo'), set())
248 self.assertEqual(dm.visitchildrenset(b'dir/subdir/x'), set())
248 self.assertEqual(dm.visitchildrenset(b'dir/subdir/x'), set())
249 self.assertEqual(dm.visitchildrenset(b'folder'), set())
249 self.assertEqual(dm.visitchildrenset(b'folder'), set())
250
250
251 def testVisitdirM2never(self):
251 def testVisitdirM2never(self):
252 m1 = matchmod.alwaysmatcher(b'', b'')
252 m1 = matchmod.alwaysmatcher()
253 m2 = matchmod.nevermatcher(b'', b'')
253 m2 = matchmod.nevermatcher()
254 dm = matchmod.differencematcher(m1, m2)
254 dm = matchmod.differencematcher(m1, m2)
255 # dm should be equivalent to an alwaysmatcher.
255 # dm should be equivalent to an alwaysmatcher.
256 #
256 #
257 # We're testing Equal-to-True instead of just 'assertTrue' since
257 # We're testing Equal-to-True instead of just 'assertTrue' since
258 # assertTrue does NOT verify that it's a bool, just that it's truthy.
258 # assertTrue does NOT verify that it's a bool, just that it's truthy.
259 # While we may want to eventually make these return 'all', they should
259 # While we may want to eventually make these return 'all', they should
260 # not currently do so.
260 # not currently do so.
261 self.assertEqual(dm.visitdir(b'.'), b'all')
261 self.assertEqual(dm.visitdir(b'.'), b'all')
262 self.assertEqual(dm.visitdir(b'dir'), b'all')
262 self.assertEqual(dm.visitdir(b'dir'), b'all')
263 self.assertEqual(dm.visitdir(b'dir/subdir'), b'all')
263 self.assertEqual(dm.visitdir(b'dir/subdir'), b'all')
264 self.assertEqual(dm.visitdir(b'dir/subdir/z'), b'all')
264 self.assertEqual(dm.visitdir(b'dir/subdir/z'), b'all')
265 self.assertEqual(dm.visitdir(b'dir/foo'), b'all')
265 self.assertEqual(dm.visitdir(b'dir/foo'), b'all')
266 self.assertEqual(dm.visitdir(b'dir/subdir/x'), b'all')
266 self.assertEqual(dm.visitdir(b'dir/subdir/x'), b'all')
267 self.assertEqual(dm.visitdir(b'folder'), b'all')
267 self.assertEqual(dm.visitdir(b'folder'), b'all')
268
268
269 def testVisitchildrensetM2never(self):
269 def testVisitchildrensetM2never(self):
270 m1 = matchmod.alwaysmatcher(b'', b'')
270 m1 = matchmod.alwaysmatcher()
271 m2 = matchmod.nevermatcher(b'', b'')
271 m2 = matchmod.nevermatcher()
272 dm = matchmod.differencematcher(m1, m2)
272 dm = matchmod.differencematcher(m1, m2)
273 # dm should be equivalent to an alwaysmatcher.
273 # dm should be equivalent to an alwaysmatcher.
274 self.assertEqual(dm.visitchildrenset(b'.'), b'all')
274 self.assertEqual(dm.visitchildrenset(b'.'), b'all')
275 self.assertEqual(dm.visitchildrenset(b'dir'), b'all')
275 self.assertEqual(dm.visitchildrenset(b'dir'), b'all')
276 self.assertEqual(dm.visitchildrenset(b'dir/subdir'), b'all')
276 self.assertEqual(dm.visitchildrenset(b'dir/subdir'), b'all')
277 self.assertEqual(dm.visitchildrenset(b'dir/subdir/z'), b'all')
277 self.assertEqual(dm.visitchildrenset(b'dir/subdir/z'), b'all')
278 self.assertEqual(dm.visitchildrenset(b'dir/foo'), b'all')
278 self.assertEqual(dm.visitchildrenset(b'dir/foo'), b'all')
279 self.assertEqual(dm.visitchildrenset(b'dir/subdir/x'), b'all')
279 self.assertEqual(dm.visitchildrenset(b'dir/subdir/x'), b'all')
280 self.assertEqual(dm.visitchildrenset(b'folder'), b'all')
280 self.assertEqual(dm.visitchildrenset(b'folder'), b'all')
281
281
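In short: differencematcher(m1, m2) matches what m1 matches minus what m2 matches, so the two degenerate cases above collapse as the tests assert:

from mercurial import match as matchmod

# m2 = always: nothing survives the subtraction (acts like nevermatcher)
assert not matchmod.differencematcher(
    matchmod.alwaysmatcher(), matchmod.alwaysmatcher()).visitdir(b'dir')
# m2 = never: the subtraction is a no-op (acts like m1)
assert matchmod.differencematcher(
    matchmod.alwaysmatcher(),
    matchmod.nevermatcher()).visitdir(b'dir') == b'all'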
282 def testVisitdirM2SubdirPrefix(self):
282 def testVisitdirM2SubdirPrefix(self):
283 m1 = matchmod.alwaysmatcher(b'', b'')
283 m1 = matchmod.alwaysmatcher()
284 m2 = matchmod.match(b'', b'', patterns=[b'path:dir/subdir'])
284 m2 = matchmod.match(b'', b'', patterns=[b'path:dir/subdir'])
285 dm = matchmod.differencematcher(m1, m2)
285 dm = matchmod.differencematcher(m1, m2)
286 self.assertEqual(dm.visitdir(b'.'), True)
286 self.assertEqual(dm.visitdir(b'.'), True)
287 self.assertEqual(dm.visitdir(b'dir'), True)
287 self.assertEqual(dm.visitdir(b'dir'), True)
288 self.assertFalse(dm.visitdir(b'dir/subdir'))
288 self.assertFalse(dm.visitdir(b'dir/subdir'))
289 # OPT: We should probably return False for these; we don't because
289 # OPT: We should probably return False for these; we don't because
290 # patternmatcher.visitdir() (our m2) doesn't return 'all' for subdirs of
290 # patternmatcher.visitdir() (our m2) doesn't return 'all' for subdirs of
291 # an 'all' pattern, just True.
291 # an 'all' pattern, just True.
292 self.assertEqual(dm.visitdir(b'dir/subdir/z'), True)
292 self.assertEqual(dm.visitdir(b'dir/subdir/z'), True)
293 self.assertEqual(dm.visitdir(b'dir/subdir/x'), True)
293 self.assertEqual(dm.visitdir(b'dir/subdir/x'), True)
294 self.assertEqual(dm.visitdir(b'dir/foo'), b'all')
294 self.assertEqual(dm.visitdir(b'dir/foo'), b'all')
295 self.assertEqual(dm.visitdir(b'folder'), b'all')
295 self.assertEqual(dm.visitdir(b'folder'), b'all')
296
296
297 def testVisitchildrensetM2SubdirPrefix(self):
297 def testVisitchildrensetM2SubdirPrefix(self):
298 m1 = matchmod.alwaysmatcher(b'', b'')
298 m1 = matchmod.alwaysmatcher()
299 m2 = matchmod.match(b'', b'', patterns=[b'path:dir/subdir'])
299 m2 = matchmod.match(b'', b'', patterns=[b'path:dir/subdir'])
300 dm = matchmod.differencematcher(m1, m2)
300 dm = matchmod.differencematcher(m1, m2)
301 self.assertEqual(dm.visitchildrenset(b'.'), b'this')
301 self.assertEqual(dm.visitchildrenset(b'.'), b'this')
302 self.assertEqual(dm.visitchildrenset(b'dir'), b'this')
302 self.assertEqual(dm.visitchildrenset(b'dir'), b'this')
303 self.assertEqual(dm.visitchildrenset(b'dir/subdir'), set())
303 self.assertEqual(dm.visitchildrenset(b'dir/subdir'), set())
304 self.assertEqual(dm.visitchildrenset(b'dir/foo'), b'all')
304 self.assertEqual(dm.visitchildrenset(b'dir/foo'), b'all')
305 self.assertEqual(dm.visitchildrenset(b'folder'), b'all')
305 self.assertEqual(dm.visitchildrenset(b'folder'), b'all')
306 # OPT: We should probably return set() for these; we don't because
306 # OPT: We should probably return set() for these; we don't because
307 # patternmatcher.visitdir() (our m2) doesn't return 'all' for subdirs of
307 # patternmatcher.visitdir() (our m2) doesn't return 'all' for subdirs of
308 # an 'all' pattern, just 'this'.
308 # an 'all' pattern, just 'this'.
309 self.assertEqual(dm.visitchildrenset(b'dir/subdir/z'), b'this')
309 self.assertEqual(dm.visitchildrenset(b'dir/subdir/z'), b'this')
310 self.assertEqual(dm.visitchildrenset(b'dir/subdir/x'), b'this')
310 self.assertEqual(dm.visitchildrenset(b'dir/subdir/x'), b'this')
311
311
312 # We're using includematcher instead of patterns because it behaves slightly
312 # We're using includematcher instead of patterns because it behaves slightly
313 # better (giving narrower results) than patternmatcher.
313 # better (giving narrower results) than patternmatcher.
314 def testVisitdirIncludeInclude(self):
314 def testVisitdirIncludeInclude(self):
315 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
315 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
316 m2 = matchmod.match(b'', b'', include=[b'rootfilesin:dir'])
316 m2 = matchmod.match(b'', b'', include=[b'rootfilesin:dir'])
317 dm = matchmod.differencematcher(m1, m2)
317 dm = matchmod.differencematcher(m1, m2)
318 self.assertEqual(dm.visitdir(b'.'), True)
318 self.assertEqual(dm.visitdir(b'.'), True)
319 self.assertEqual(dm.visitdir(b'dir'), True)
319 self.assertEqual(dm.visitdir(b'dir'), True)
320 self.assertEqual(dm.visitdir(b'dir/subdir'), b'all')
320 self.assertEqual(dm.visitdir(b'dir/subdir'), b'all')
321 self.assertFalse(dm.visitdir(b'dir/foo'))
321 self.assertFalse(dm.visitdir(b'dir/foo'))
322 self.assertFalse(dm.visitdir(b'folder'))
322 self.assertFalse(dm.visitdir(b'folder'))
323 # OPT: We should probably return False for these; we don't because
323 # OPT: We should probably return False for these; we don't because
324 # patternmatcher.visitdir() (our m2) doesn't return 'all' for subdirs of
324 # patternmatcher.visitdir() (our m2) doesn't return 'all' for subdirs of
325 # an 'all' pattern, just True.
325 # an 'all' pattern, just True.
326 self.assertEqual(dm.visitdir(b'dir/subdir/z'), True)
326 self.assertEqual(dm.visitdir(b'dir/subdir/z'), True)
327 self.assertEqual(dm.visitdir(b'dir/subdir/x'), True)
327 self.assertEqual(dm.visitdir(b'dir/subdir/x'), True)
328
328
329 def testVisitchildrensetIncludeInclude(self):
329 def testVisitchildrensetIncludeInclude(self):
330 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
330 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
331 m2 = matchmod.match(b'', b'', include=[b'rootfilesin:dir'])
331 m2 = matchmod.match(b'', b'', include=[b'rootfilesin:dir'])
332 dm = matchmod.differencematcher(m1, m2)
332 dm = matchmod.differencematcher(m1, m2)
333 self.assertEqual(dm.visitchildrenset(b'.'), {b'dir'})
333 self.assertEqual(dm.visitchildrenset(b'.'), {b'dir'})
334 self.assertEqual(dm.visitchildrenset(b'dir'), {b'subdir'})
334 self.assertEqual(dm.visitchildrenset(b'dir'), {b'subdir'})
335 self.assertEqual(dm.visitchildrenset(b'dir/subdir'), b'all')
335 self.assertEqual(dm.visitchildrenset(b'dir/subdir'), b'all')
336 self.assertEqual(dm.visitchildrenset(b'dir/foo'), set())
336 self.assertEqual(dm.visitchildrenset(b'dir/foo'), set())
337 self.assertEqual(dm.visitchildrenset(b'folder'), set())
337 self.assertEqual(dm.visitchildrenset(b'folder'), set())
338 # OPT: We should probably return set() for these; we don't because
338 # OPT: We should probably return set() for these; we don't because
339 # patternmatcher.visitdir() (our m2) doesn't return 'all' for subdirs of
339 # patternmatcher.visitdir() (our m2) doesn't return 'all' for subdirs of
340 # an 'all' pattern, just 'this'.
340 # an 'all' pattern, just 'this'.
341 self.assertEqual(dm.visitchildrenset(b'dir/subdir/z'), b'this')
341 self.assertEqual(dm.visitchildrenset(b'dir/subdir/z'), b'this')
342 self.assertEqual(dm.visitchildrenset(b'dir/subdir/x'), b'this')
342 self.assertEqual(dm.visitchildrenset(b'dir/subdir/x'), b'this')
343
343
344 class IntersectionMatcherTests(unittest.TestCase):
344 class IntersectionMatcherTests(unittest.TestCase):
345
345
346 def testVisitdirM2always(self):
346 def testVisitdirM2always(self):
347 m1 = matchmod.alwaysmatcher(b'', b'')
347 m1 = matchmod.alwaysmatcher()
348 m2 = matchmod.alwaysmatcher(b'', b'')
348 m2 = matchmod.alwaysmatcher()
349 im = matchmod.intersectmatchers(m1, m2)
349 im = matchmod.intersectmatchers(m1, m2)
350 # im should be equivalent to an alwaysmatcher.
350 # im should be equivalent to an alwaysmatcher.
351 self.assertEqual(im.visitdir(b'.'), b'all')
351 self.assertEqual(im.visitdir(b'.'), b'all')
352 self.assertEqual(im.visitdir(b'dir'), b'all')
352 self.assertEqual(im.visitdir(b'dir'), b'all')
353 self.assertEqual(im.visitdir(b'dir/subdir'), b'all')
353 self.assertEqual(im.visitdir(b'dir/subdir'), b'all')
354 self.assertEqual(im.visitdir(b'dir/subdir/z'), b'all')
354 self.assertEqual(im.visitdir(b'dir/subdir/z'), b'all')
355 self.assertEqual(im.visitdir(b'dir/foo'), b'all')
355 self.assertEqual(im.visitdir(b'dir/foo'), b'all')
356 self.assertEqual(im.visitdir(b'dir/subdir/x'), b'all')
356 self.assertEqual(im.visitdir(b'dir/subdir/x'), b'all')
357 self.assertEqual(im.visitdir(b'folder'), b'all')
357 self.assertEqual(im.visitdir(b'folder'), b'all')
358
358
359 def testVisitchildrensetM2always(self):
359 def testVisitchildrensetM2always(self):
360 m1 = matchmod.alwaysmatcher(b'', b'')
360 m1 = matchmod.alwaysmatcher()
361 m2 = matchmod.alwaysmatcher(b'', b'')
361 m2 = matchmod.alwaysmatcher()
362 im = matchmod.intersectmatchers(m1, m2)
362 im = matchmod.intersectmatchers(m1, m2)
363 # im should be equivalent to an alwaysmatcher.
363 # im should be equivalent to an alwaysmatcher.
364 self.assertEqual(im.visitchildrenset(b'.'), b'all')
364 self.assertEqual(im.visitchildrenset(b'.'), b'all')
365 self.assertEqual(im.visitchildrenset(b'dir'), b'all')
365 self.assertEqual(im.visitchildrenset(b'dir'), b'all')
366 self.assertEqual(im.visitchildrenset(b'dir/subdir'), b'all')
366 self.assertEqual(im.visitchildrenset(b'dir/subdir'), b'all')
367 self.assertEqual(im.visitchildrenset(b'dir/subdir/z'), b'all')
367 self.assertEqual(im.visitchildrenset(b'dir/subdir/z'), b'all')
368 self.assertEqual(im.visitchildrenset(b'dir/foo'), b'all')
368 self.assertEqual(im.visitchildrenset(b'dir/foo'), b'all')
369 self.assertEqual(im.visitchildrenset(b'dir/subdir/x'), b'all')
369 self.assertEqual(im.visitchildrenset(b'dir/subdir/x'), b'all')
370 self.assertEqual(im.visitchildrenset(b'folder'), b'all')
370 self.assertEqual(im.visitchildrenset(b'folder'), b'all')
371
371
372 def testVisitdirM2never(self):
372 def testVisitdirM2never(self):
373 m1 = matchmod.alwaysmatcher(b'', b'')
373 m1 = matchmod.alwaysmatcher()
374 m2 = matchmod.nevermatcher(b'', b'')
374 m2 = matchmod.nevermatcher()
375 im = matchmod.intersectmatchers(m1, m2)
375 im = matchmod.intersectmatchers(m1, m2)
376 # im should be equivalent to a nevermatcher.
376 # im should be equivalent to a nevermatcher.
377 self.assertFalse(im.visitdir(b'.'))
377 self.assertFalse(im.visitdir(b'.'))
378 self.assertFalse(im.visitdir(b'dir'))
378 self.assertFalse(im.visitdir(b'dir'))
379 self.assertFalse(im.visitdir(b'dir/subdir'))
379 self.assertFalse(im.visitdir(b'dir/subdir'))
380 self.assertFalse(im.visitdir(b'dir/subdir/z'))
380 self.assertFalse(im.visitdir(b'dir/subdir/z'))
381 self.assertFalse(im.visitdir(b'dir/foo'))
381 self.assertFalse(im.visitdir(b'dir/foo'))
382 self.assertFalse(im.visitdir(b'dir/subdir/x'))
382 self.assertFalse(im.visitdir(b'dir/subdir/x'))
383 self.assertFalse(im.visitdir(b'folder'))
383 self.assertFalse(im.visitdir(b'folder'))
384
384
385 def testVisitchildrensetM2never(self):
385 def testVisitchildrensetM2never(self):
386 m1 = matchmod.alwaysmatcher(b'', b'')
386 m1 = matchmod.alwaysmatcher()
387 m2 = matchmod.nevermatcher(b'', b'')
387 m2 = matchmod.nevermatcher()
388 im = matchmod.intersectmatchers(m1, m2)
388 im = matchmod.intersectmatchers(m1, m2)
389 # im should be equivalent to a nevermatcher.
389 # im should be equivalent to a nevermatcher.
390 self.assertEqual(im.visitchildrenset(b'.'), set())
390 self.assertEqual(im.visitchildrenset(b'.'), set())
391 self.assertEqual(im.visitchildrenset(b'dir'), set())
391 self.assertEqual(im.visitchildrenset(b'dir'), set())
392 self.assertEqual(im.visitchildrenset(b'dir/subdir'), set())
392 self.assertEqual(im.visitchildrenset(b'dir/subdir'), set())
393 self.assertEqual(im.visitchildrenset(b'dir/subdir/z'), set())
393 self.assertEqual(im.visitchildrenset(b'dir/subdir/z'), set())
394 self.assertEqual(im.visitchildrenset(b'dir/foo'), set())
394 self.assertEqual(im.visitchildrenset(b'dir/foo'), set())
395 self.assertEqual(im.visitchildrenset(b'dir/subdir/x'), set())
395 self.assertEqual(im.visitchildrenset(b'dir/subdir/x'), set())
396 self.assertEqual(im.visitchildrenset(b'folder'), set())
396 self.assertEqual(im.visitchildrenset(b'folder'), set())
397
397
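And the mirror-image identities for intersection, straight from the assertions above: intersecting with always leaves the other matcher unchanged, while intersecting with never is empty.

from mercurial import match as matchmod

m1 = matchmod.alwaysmatcher()
assert matchmod.intersectmatchers(
    m1, matchmod.alwaysmatcher()).visitdir(b'.') == b'all'
assert matchmod.intersectmatchers(
    m1, matchmod.nevermatcher()).visitchildrenset(b'.') == set()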
398 def testVisitdirM2SubdirPrefix(self):
398 def testVisitdirM2SubdirPrefix(self):
399 m1 = matchmod.alwaysmatcher(b'', b'')
399 m1 = matchmod.alwaysmatcher()
400 m2 = matchmod.match(b'', b'', patterns=[b'path:dir/subdir'])
400 m2 = matchmod.match(b'', b'', patterns=[b'path:dir/subdir'])
401 im = matchmod.intersectmatchers(m1, m2)
401 im = matchmod.intersectmatchers(m1, m2)
402 self.assertEqual(im.visitdir(b'.'), True)
402 self.assertEqual(im.visitdir(b'.'), True)
403 self.assertEqual(im.visitdir(b'dir'), True)
403 self.assertEqual(im.visitdir(b'dir'), True)
404 self.assertEqual(im.visitdir(b'dir/subdir'), b'all')
404 self.assertEqual(im.visitdir(b'dir/subdir'), b'all')
405 self.assertFalse(im.visitdir(b'dir/foo'))
405 self.assertFalse(im.visitdir(b'dir/foo'))
406 self.assertFalse(im.visitdir(b'folder'))
406 self.assertFalse(im.visitdir(b'folder'))
407 # OPT: We should probably return 'all' for these; we don't because
407 # OPT: We should probably return 'all' for these; we don't because
408 # patternmatcher.visitdir() (our m2) doesn't return 'all' for subdirs of
408 # patternmatcher.visitdir() (our m2) doesn't return 'all' for subdirs of
409 # an 'all' pattern, just True.
409 # an 'all' pattern, just True.
410 self.assertEqual(im.visitdir(b'dir/subdir/z'), True)
410 self.assertEqual(im.visitdir(b'dir/subdir/z'), True)
411 self.assertEqual(im.visitdir(b'dir/subdir/x'), True)
411 self.assertEqual(im.visitdir(b'dir/subdir/x'), True)
412
412
413 def testVisitchildrensetM2SubdirPrefix(self):
413 def testVisitchildrensetM2SubdirPrefix(self):
414 m1 = matchmod.alwaysmatcher(b'', b'')
414 m1 = matchmod.alwaysmatcher()
415 m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
415 m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
416 im = matchmod.intersectmatchers(m1, m2)
416 im = matchmod.intersectmatchers(m1, m2)
417 self.assertEqual(im.visitchildrenset(b'.'), {b'dir'})
417 self.assertEqual(im.visitchildrenset(b'.'), {b'dir'})
418 self.assertEqual(im.visitchildrenset(b'dir'), {b'subdir'})
418 self.assertEqual(im.visitchildrenset(b'dir'), {b'subdir'})
419 self.assertEqual(im.visitchildrenset(b'dir/subdir'), b'all')
419 self.assertEqual(im.visitchildrenset(b'dir/subdir'), b'all')
420 self.assertEqual(im.visitchildrenset(b'dir/foo'), set())
420 self.assertEqual(im.visitchildrenset(b'dir/foo'), set())
421 self.assertEqual(im.visitchildrenset(b'folder'), set())
421 self.assertEqual(im.visitchildrenset(b'folder'), set())
422 # OPT: We should probably return 'all' for these
422 # OPT: We should probably return 'all' for these
423 self.assertEqual(im.visitchildrenset(b'dir/subdir/z'), b'this')
423 self.assertEqual(im.visitchildrenset(b'dir/subdir/z'), b'this')
424 self.assertEqual(im.visitchildrenset(b'dir/subdir/x'), b'this')
424 self.assertEqual(im.visitchildrenset(b'dir/subdir/x'), b'this')
425
425
426 # We're using includematcher instead of patterns because it behaves slightly
426 # We're using includematcher instead of patterns because it behaves slightly
427 # better (giving narrower results) than patternmatcher.
427 # better (giving narrower results) than patternmatcher.
428 def testVisitdirIncludeInclude(self):
428 def testVisitdirIncludeInclude(self):
429 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
429 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
430 m2 = matchmod.match(b'', b'', include=[b'rootfilesin:dir'])
430 m2 = matchmod.match(b'', b'', include=[b'rootfilesin:dir'])
431 im = matchmod.intersectmatchers(m1, m2)
431 im = matchmod.intersectmatchers(m1, m2)
432 self.assertEqual(im.visitdir(b'.'), True)
432 self.assertEqual(im.visitdir(b'.'), True)
433 self.assertEqual(im.visitdir(b'dir'), True)
433 self.assertEqual(im.visitdir(b'dir'), True)
434 self.assertFalse(im.visitdir(b'dir/subdir'))
434 self.assertFalse(im.visitdir(b'dir/subdir'))
435 self.assertFalse(im.visitdir(b'dir/foo'))
435 self.assertFalse(im.visitdir(b'dir/foo'))
436 self.assertFalse(im.visitdir(b'folder'))
436 self.assertFalse(im.visitdir(b'folder'))
437 self.assertFalse(im.visitdir(b'dir/subdir/z'))
437 self.assertFalse(im.visitdir(b'dir/subdir/z'))
438 self.assertFalse(im.visitdir(b'dir/subdir/x'))
438 self.assertFalse(im.visitdir(b'dir/subdir/x'))
439
439
440 def testVisitchildrensetIncludeInclude(self):
440 def testVisitchildrensetIncludeInclude(self):
441 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
441 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
442 m2 = matchmod.match(b'', b'', include=[b'rootfilesin:dir'])
442 m2 = matchmod.match(b'', b'', include=[b'rootfilesin:dir'])
443 im = matchmod.intersectmatchers(m1, m2)
443 im = matchmod.intersectmatchers(m1, m2)
444 self.assertEqual(im.visitchildrenset(b'.'), {b'dir'})
444 self.assertEqual(im.visitchildrenset(b'.'), {b'dir'})
445 self.assertEqual(im.visitchildrenset(b'dir'), b'this')
445 self.assertEqual(im.visitchildrenset(b'dir'), b'this')
446 self.assertEqual(im.visitchildrenset(b'dir/subdir'), set())
446 self.assertEqual(im.visitchildrenset(b'dir/subdir'), set())
447 self.assertEqual(im.visitchildrenset(b'dir/foo'), set())
447 self.assertEqual(im.visitchildrenset(b'dir/foo'), set())
448 self.assertEqual(im.visitchildrenset(b'folder'), set())
448 self.assertEqual(im.visitchildrenset(b'folder'), set())
449 self.assertEqual(im.visitchildrenset(b'dir/subdir/z'), set())
449 self.assertEqual(im.visitchildrenset(b'dir/subdir/z'), set())
450 self.assertEqual(im.visitchildrenset(b'dir/subdir/x'), set())
450 self.assertEqual(im.visitchildrenset(b'dir/subdir/x'), set())
451
451
452 # We're using includematcher instead of patterns because it behaves slightly
452 # We're using includematcher instead of patterns because it behaves slightly
453 # better (giving narrower results) than patternmatcher.
453 # better (giving narrower results) than patternmatcher.
454 def testVisitdirIncludeInclude2(self):
454 def testVisitdirIncludeInclude2(self):
455 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
455 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
456 m2 = matchmod.match(b'', b'', include=[b'path:folder'])
456 m2 = matchmod.match(b'', b'', include=[b'path:folder'])
457 im = matchmod.intersectmatchers(m1, m2)
457 im = matchmod.intersectmatchers(m1, m2)
458 # FIXME: is True correct here?
458 # FIXME: is True correct here?
459 self.assertEqual(im.visitdir(b'.'), True)
459 self.assertEqual(im.visitdir(b'.'), True)
460 self.assertFalse(im.visitdir(b'dir'))
460 self.assertFalse(im.visitdir(b'dir'))
461 self.assertFalse(im.visitdir(b'dir/subdir'))
461 self.assertFalse(im.visitdir(b'dir/subdir'))
462 self.assertFalse(im.visitdir(b'dir/foo'))
462 self.assertFalse(im.visitdir(b'dir/foo'))
463 self.assertFalse(im.visitdir(b'folder'))
463 self.assertFalse(im.visitdir(b'folder'))
464 self.assertFalse(im.visitdir(b'dir/subdir/z'))
464 self.assertFalse(im.visitdir(b'dir/subdir/z'))
465 self.assertFalse(im.visitdir(b'dir/subdir/x'))
465 self.assertFalse(im.visitdir(b'dir/subdir/x'))
466
466
467 def testVisitchildrensetIncludeInclude2(self):
467 def testVisitchildrensetIncludeInclude2(self):
468 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
468 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
469 m2 = matchmod.match(b'', b'', include=[b'path:folder'])
469 m2 = matchmod.match(b'', b'', include=[b'path:folder'])
470 im = matchmod.intersectmatchers(m1, m2)
470 im = matchmod.intersectmatchers(m1, m2)
471 # FIXME: is set() correct here?
471 # FIXME: is set() correct here?
472 self.assertEqual(im.visitchildrenset(b'.'), set())
472 self.assertEqual(im.visitchildrenset(b'.'), set())
473 self.assertEqual(im.visitchildrenset(b'dir'), set())
473 self.assertEqual(im.visitchildrenset(b'dir'), set())
474 self.assertEqual(im.visitchildrenset(b'dir/subdir'), set())
474 self.assertEqual(im.visitchildrenset(b'dir/subdir'), set())
475 self.assertEqual(im.visitchildrenset(b'dir/foo'), set())
475 self.assertEqual(im.visitchildrenset(b'dir/foo'), set())
476 self.assertEqual(im.visitchildrenset(b'folder'), set())
476 self.assertEqual(im.visitchildrenset(b'folder'), set())
477 self.assertEqual(im.visitchildrenset(b'dir/subdir/z'), set())
477 self.assertEqual(im.visitchildrenset(b'dir/subdir/z'), set())
478 self.assertEqual(im.visitchildrenset(b'dir/subdir/x'), set())
478 self.assertEqual(im.visitchildrenset(b'dir/subdir/x'), set())
479
479
480 # We're using includematcher instead of patterns because it behaves slightly
480 # We're using includematcher instead of patterns because it behaves slightly
481 # better (giving narrower results) than patternmatcher.
481 # better (giving narrower results) than patternmatcher.
482 def testVisitdirIncludeInclude3(self):
482 def testVisitdirIncludeInclude3(self):
483 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir/x'])
483 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir/x'])
484 m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
484 m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
485 im = matchmod.intersectmatchers(m1, m2)
485 im = matchmod.intersectmatchers(m1, m2)
486 self.assertEqual(im.visitdir(b'.'), True)
486 self.assertEqual(im.visitdir(b'.'), True)
487 self.assertEqual(im.visitdir(b'dir'), True)
487 self.assertEqual(im.visitdir(b'dir'), True)
488 self.assertEqual(im.visitdir(b'dir/subdir'), True)
488 self.assertEqual(im.visitdir(b'dir/subdir'), True)
489 self.assertFalse(im.visitdir(b'dir/foo'))
489 self.assertFalse(im.visitdir(b'dir/foo'))
490 self.assertFalse(im.visitdir(b'folder'))
490 self.assertFalse(im.visitdir(b'folder'))
491 self.assertFalse(im.visitdir(b'dir/subdir/z'))
491 self.assertFalse(im.visitdir(b'dir/subdir/z'))
492 # OPT: this should probably be 'all' not True.
492 # OPT: this should probably be 'all' not True.
493 self.assertEqual(im.visitdir(b'dir/subdir/x'), True)
493 self.assertEqual(im.visitdir(b'dir/subdir/x'), True)
494
494
495 def testVisitchildrensetIncludeInclude3(self):
495 def testVisitchildrensetIncludeInclude3(self):
496 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir/x'])
496 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir/x'])
497 m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
497 m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
498 im = matchmod.intersectmatchers(m1, m2)
498 im = matchmod.intersectmatchers(m1, m2)
499 self.assertEqual(im.visitchildrenset(b'.'), {b'dir'})
499 self.assertEqual(im.visitchildrenset(b'.'), {b'dir'})
500 self.assertEqual(im.visitchildrenset(b'dir'), {b'subdir'})
500 self.assertEqual(im.visitchildrenset(b'dir'), {b'subdir'})
501 self.assertEqual(im.visitchildrenset(b'dir/subdir'), {b'x'})
501 self.assertEqual(im.visitchildrenset(b'dir/subdir'), {b'x'})
502 self.assertEqual(im.visitchildrenset(b'dir/foo'), set())
502 self.assertEqual(im.visitchildrenset(b'dir/foo'), set())
503 self.assertEqual(im.visitchildrenset(b'folder'), set())
503 self.assertEqual(im.visitchildrenset(b'folder'), set())
504 self.assertEqual(im.visitchildrenset(b'dir/subdir/z'), set())
504 self.assertEqual(im.visitchildrenset(b'dir/subdir/z'), set())
505 # OPT: this should probably be 'all' not 'this'.
505 # OPT: this should probably be 'all' not 'this'.
506 self.assertEqual(im.visitchildrenset(b'dir/subdir/x'), b'this')
506 self.assertEqual(im.visitchildrenset(b'dir/subdir/x'), b'this')
507
507
508 # We're using includematcher instead of patterns because it behaves slightly
508 # We're using includematcher instead of patterns because it behaves slightly
509 # better (giving narrower results) than patternmatcher.
509 # better (giving narrower results) than patternmatcher.
510 def testVisitdirIncludeInclude4(self):
510 def testVisitdirIncludeInclude4(self):
511 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir/x'])
511 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir/x'])
512 m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir/z'])
512 m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir/z'])
513 im = matchmod.intersectmatchers(m1, m2)
513 im = matchmod.intersectmatchers(m1, m2)
514 # OPT: these next three could probably be False as well.
514 # OPT: these next three could probably be False as well.
515 self.assertEqual(im.visitdir(b'.'), True)
515 self.assertEqual(im.visitdir(b'.'), True)
516 self.assertEqual(im.visitdir(b'dir'), True)
516 self.assertEqual(im.visitdir(b'dir'), True)
517 self.assertEqual(im.visitdir(b'dir/subdir'), True)
517 self.assertEqual(im.visitdir(b'dir/subdir'), True)
518 self.assertFalse(im.visitdir(b'dir/foo'))
518 self.assertFalse(im.visitdir(b'dir/foo'))
519 self.assertFalse(im.visitdir(b'folder'))
519 self.assertFalse(im.visitdir(b'folder'))
520 self.assertFalse(im.visitdir(b'dir/subdir/z'))
520 self.assertFalse(im.visitdir(b'dir/subdir/z'))
521 self.assertFalse(im.visitdir(b'dir/subdir/x'))
521 self.assertFalse(im.visitdir(b'dir/subdir/x'))
522
522
523 def testVisitchildrensetIncludeInclude4(self):
523 def testVisitchildrensetIncludeInclude4(self):
524 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir/x'])
524 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir/x'])
525 m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir/z'])
525 m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir/z'])
526 im = matchmod.intersectmatchers(m1, m2)
526 im = matchmod.intersectmatchers(m1, m2)
527 # OPT: these next two could probably be set() as well.
527 # OPT: these next two could probably be set() as well.
528 self.assertEqual(im.visitchildrenset(b'.'), {b'dir'})
528 self.assertEqual(im.visitchildrenset(b'.'), {b'dir'})
529 self.assertEqual(im.visitchildrenset(b'dir'), {b'subdir'})
529 self.assertEqual(im.visitchildrenset(b'dir'), {b'subdir'})
530 self.assertEqual(im.visitchildrenset(b'dir/subdir'), set())
530 self.assertEqual(im.visitchildrenset(b'dir/subdir'), set())
531 self.assertEqual(im.visitchildrenset(b'dir/foo'), set())
531 self.assertEqual(im.visitchildrenset(b'dir/foo'), set())
532 self.assertEqual(im.visitchildrenset(b'folder'), set())
532 self.assertEqual(im.visitchildrenset(b'folder'), set())
533 self.assertEqual(im.visitchildrenset(b'dir/subdir/z'), set())
533 self.assertEqual(im.visitchildrenset(b'dir/subdir/z'), set())
534 self.assertEqual(im.visitchildrenset(b'dir/subdir/x'), set())
534 self.assertEqual(im.visitchildrenset(b'dir/subdir/x'), set())
535
535
536 class UnionMatcherTests(unittest.TestCase):
536 class UnionMatcherTests(unittest.TestCase):
537
537
538 def testVisitdirM2always(self):
538 def testVisitdirM2always(self):
539 m1 = matchmod.alwaysmatcher(b'', b'')
539 m1 = matchmod.alwaysmatcher()
540 m2 = matchmod.alwaysmatcher(b'', b'')
540 m2 = matchmod.alwaysmatcher()
541 um = matchmod.unionmatcher([m1, m2])
541 um = matchmod.unionmatcher([m1, m2])
542 # um should be equivalent to an alwaysmatcher.
542 # um should be equivalent to an alwaysmatcher.
543 self.assertEqual(um.visitdir(b'.'), b'all')
543 self.assertEqual(um.visitdir(b'.'), b'all')
544 self.assertEqual(um.visitdir(b'dir'), b'all')
544 self.assertEqual(um.visitdir(b'dir'), b'all')
545 self.assertEqual(um.visitdir(b'dir/subdir'), b'all')
545 self.assertEqual(um.visitdir(b'dir/subdir'), b'all')
546 self.assertEqual(um.visitdir(b'dir/subdir/z'), b'all')
546 self.assertEqual(um.visitdir(b'dir/subdir/z'), b'all')
547 self.assertEqual(um.visitdir(b'dir/foo'), b'all')
547 self.assertEqual(um.visitdir(b'dir/foo'), b'all')
548 self.assertEqual(um.visitdir(b'dir/subdir/x'), b'all')
548 self.assertEqual(um.visitdir(b'dir/subdir/x'), b'all')
549 self.assertEqual(um.visitdir(b'folder'), b'all')
549 self.assertEqual(um.visitdir(b'folder'), b'all')
550
550
551 def testVisitchildrensetM2always(self):
551 def testVisitchildrensetM2always(self):
552 m1 = matchmod.alwaysmatcher(b'', b'')
552 m1 = matchmod.alwaysmatcher()
553 m2 = matchmod.alwaysmatcher(b'', b'')
553 m2 = matchmod.alwaysmatcher()
554 um = matchmod.unionmatcher([m1, m2])
554 um = matchmod.unionmatcher([m1, m2])
555 # um should be equivalent to an alwaysmatcher.
555 # um should be equivalent to an alwaysmatcher.
556 self.assertEqual(um.visitchildrenset(b'.'), b'all')
556 self.assertEqual(um.visitchildrenset(b'.'), b'all')
557 self.assertEqual(um.visitchildrenset(b'dir'), b'all')
557 self.assertEqual(um.visitchildrenset(b'dir'), b'all')
558 self.assertEqual(um.visitchildrenset(b'dir/subdir'), b'all')
558 self.assertEqual(um.visitchildrenset(b'dir/subdir'), b'all')
559 self.assertEqual(um.visitchildrenset(b'dir/subdir/z'), b'all')
559 self.assertEqual(um.visitchildrenset(b'dir/subdir/z'), b'all')
560 self.assertEqual(um.visitchildrenset(b'dir/foo'), b'all')
560 self.assertEqual(um.visitchildrenset(b'dir/foo'), b'all')
561 self.assertEqual(um.visitchildrenset(b'dir/subdir/x'), b'all')
561 self.assertEqual(um.visitchildrenset(b'dir/subdir/x'), b'all')
562 self.assertEqual(um.visitchildrenset(b'folder'), b'all')
562 self.assertEqual(um.visitchildrenset(b'folder'), b'all')
563
563
564 def testVisitdirM1never(self):
564 def testVisitdirM1never(self):
565 m1 = matchmod.nevermatcher(b'', b'')
565 m1 = matchmod.nevermatcher()
566 m2 = matchmod.alwaysmatcher(b'', b'')
566 m2 = matchmod.alwaysmatcher()
567 um = matchmod.unionmatcher([m1, m2])
567 um = matchmod.unionmatcher([m1, m2])
568 # um should be equivalent to an alwaysmatcher.
568 # um should be equivalent to an alwaysmatcher.
569 self.assertEqual(um.visitdir(b'.'), b'all')
569 self.assertEqual(um.visitdir(b'.'), b'all')
570 self.assertEqual(um.visitdir(b'dir'), b'all')
570 self.assertEqual(um.visitdir(b'dir'), b'all')
571 self.assertEqual(um.visitdir(b'dir/subdir'), b'all')
571 self.assertEqual(um.visitdir(b'dir/subdir'), b'all')
572 self.assertEqual(um.visitdir(b'dir/subdir/z'), b'all')
572 self.assertEqual(um.visitdir(b'dir/subdir/z'), b'all')
573 self.assertEqual(um.visitdir(b'dir/foo'), b'all')
573 self.assertEqual(um.visitdir(b'dir/foo'), b'all')
574 self.assertEqual(um.visitdir(b'dir/subdir/x'), b'all')
574 self.assertEqual(um.visitdir(b'dir/subdir/x'), b'all')
575 self.assertEqual(um.visitdir(b'folder'), b'all')
575 self.assertEqual(um.visitdir(b'folder'), b'all')
576
576
577 def testVisitchildrensetM1never(self):
577 def testVisitchildrensetM1never(self):
578 m1 = matchmod.nevermatcher(b'', b'')
578 m1 = matchmod.nevermatcher()
579 m2 = matchmod.alwaysmatcher(b'', b'')
579 m2 = matchmod.alwaysmatcher()
580 um = matchmod.unionmatcher([m1, m2])
580 um = matchmod.unionmatcher([m1, m2])
581 # um should be equivalent to an alwaysmatcher.
581 # um should be equivalent to an alwaysmatcher.
582 self.assertEqual(um.visitchildrenset(b'.'), b'all')
582 self.assertEqual(um.visitchildrenset(b'.'), b'all')
583 self.assertEqual(um.visitchildrenset(b'dir'), b'all')
583 self.assertEqual(um.visitchildrenset(b'dir'), b'all')
584 self.assertEqual(um.visitchildrenset(b'dir/subdir'), b'all')
584 self.assertEqual(um.visitchildrenset(b'dir/subdir'), b'all')
585 self.assertEqual(um.visitchildrenset(b'dir/subdir/z'), b'all')
585 self.assertEqual(um.visitchildrenset(b'dir/subdir/z'), b'all')
586 self.assertEqual(um.visitchildrenset(b'dir/foo'), b'all')
586 self.assertEqual(um.visitchildrenset(b'dir/foo'), b'all')
587 self.assertEqual(um.visitchildrenset(b'dir/subdir/x'), b'all')
587 self.assertEqual(um.visitchildrenset(b'dir/subdir/x'), b'all')
588 self.assertEqual(um.visitchildrenset(b'folder'), b'all')
588 self.assertEqual(um.visitchildrenset(b'folder'), b'all')
589
589
590 def testVisitdirM2never(self):
590 def testVisitdirM2never(self):
591 m1 = matchmod.alwaysmatcher(b'', b'')
591 m1 = matchmod.alwaysmatcher()
592 m2 = matchmod.nevermatcher(b'', b'')
592 m2 = matchmod.nevermatcher()
593 um = matchmod.unionmatcher([m1, m2])
593 um = matchmod.unionmatcher([m1, m2])
594 # um should be equivalent to an alwaysmatcher.
594 # um should be equivalent to an alwaysmatcher.
595 self.assertEqual(um.visitdir(b'.'), b'all')
595 self.assertEqual(um.visitdir(b'.'), b'all')
596 self.assertEqual(um.visitdir(b'dir'), b'all')
596 self.assertEqual(um.visitdir(b'dir'), b'all')
597 self.assertEqual(um.visitdir(b'dir/subdir'), b'all')
597 self.assertEqual(um.visitdir(b'dir/subdir'), b'all')
598 self.assertEqual(um.visitdir(b'dir/subdir/z'), b'all')
598 self.assertEqual(um.visitdir(b'dir/subdir/z'), b'all')
599 self.assertEqual(um.visitdir(b'dir/foo'), b'all')
599 self.assertEqual(um.visitdir(b'dir/foo'), b'all')
600 self.assertEqual(um.visitdir(b'dir/subdir/x'), b'all')
600 self.assertEqual(um.visitdir(b'dir/subdir/x'), b'all')
601 self.assertEqual(um.visitdir(b'folder'), b'all')
601 self.assertEqual(um.visitdir(b'folder'), b'all')
602
602
603 def testVisitchildrensetM2never(self):
603 def testVisitchildrensetM2never(self):
604 m1 = matchmod.alwaysmatcher(b'', b'')
604 m1 = matchmod.alwaysmatcher()
605 m2 = matchmod.nevermatcher(b'', b'')
605 m2 = matchmod.nevermatcher()
606 um = matchmod.unionmatcher([m1, m2])
606 um = matchmod.unionmatcher([m1, m2])
607 # um should be equivalent to an alwaysmatcher.
607 # um should be equivalent to an alwaysmatcher.
608 self.assertEqual(um.visitchildrenset(b'.'), b'all')
608 self.assertEqual(um.visitchildrenset(b'.'), b'all')
609 self.assertEqual(um.visitchildrenset(b'dir'), b'all')
609 self.assertEqual(um.visitchildrenset(b'dir'), b'all')
610 self.assertEqual(um.visitchildrenset(b'dir/subdir'), b'all')
610 self.assertEqual(um.visitchildrenset(b'dir/subdir'), b'all')
611 self.assertEqual(um.visitchildrenset(b'dir/subdir/z'), b'all')
611 self.assertEqual(um.visitchildrenset(b'dir/subdir/z'), b'all')
612 self.assertEqual(um.visitchildrenset(b'dir/foo'), b'all')
612 self.assertEqual(um.visitchildrenset(b'dir/foo'), b'all')
613 self.assertEqual(um.visitchildrenset(b'dir/subdir/x'), b'all')
613 self.assertEqual(um.visitchildrenset(b'dir/subdir/x'), b'all')
614 self.assertEqual(um.visitchildrenset(b'folder'), b'all')
614 self.assertEqual(um.visitchildrenset(b'folder'), b'all')
615
615
616 def testVisitdirM2SubdirPrefix(self):
616 def testVisitdirM2SubdirPrefix(self):
617 m1 = matchmod.alwaysmatcher(b'', b'')
617 m1 = matchmod.alwaysmatcher()
618 m2 = matchmod.match(b'', b'', patterns=[b'path:dir/subdir'])
618 m2 = matchmod.match(b'', b'', patterns=[b'path:dir/subdir'])
619 um = matchmod.unionmatcher([m1, m2])
619 um = matchmod.unionmatcher([m1, m2])
620 self.assertEqual(um.visitdir(b'.'), b'all')
620 self.assertEqual(um.visitdir(b'.'), b'all')
621 self.assertEqual(um.visitdir(b'dir'), b'all')
621 self.assertEqual(um.visitdir(b'dir'), b'all')
622 self.assertEqual(um.visitdir(b'dir/subdir'), b'all')
622 self.assertEqual(um.visitdir(b'dir/subdir'), b'all')
623 self.assertEqual(um.visitdir(b'dir/foo'), b'all')
623 self.assertEqual(um.visitdir(b'dir/foo'), b'all')
624 self.assertEqual(um.visitdir(b'folder'), b'all')
624 self.assertEqual(um.visitdir(b'folder'), b'all')
625 self.assertEqual(um.visitdir(b'dir/subdir/z'), b'all')
625 self.assertEqual(um.visitdir(b'dir/subdir/z'), b'all')
626 self.assertEqual(um.visitdir(b'dir/subdir/x'), b'all')
626 self.assertEqual(um.visitdir(b'dir/subdir/x'), b'all')
627
627
628 def testVisitchildrensetM2SubdirPrefix(self):
628 def testVisitchildrensetM2SubdirPrefix(self):
629 m1 = matchmod.alwaysmatcher(b'', b'')
629 m1 = matchmod.alwaysmatcher()
630 m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
630 m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
631 um = matchmod.unionmatcher([m1, m2])
631 um = matchmod.unionmatcher([m1, m2])
632 self.assertEqual(um.visitchildrenset(b'.'), b'all')
632 self.assertEqual(um.visitchildrenset(b'.'), b'all')
633 self.assertEqual(um.visitchildrenset(b'dir'), b'all')
633 self.assertEqual(um.visitchildrenset(b'dir'), b'all')
634 self.assertEqual(um.visitchildrenset(b'dir/subdir'), b'all')
634 self.assertEqual(um.visitchildrenset(b'dir/subdir'), b'all')
635 self.assertEqual(um.visitchildrenset(b'dir/foo'), b'all')
635 self.assertEqual(um.visitchildrenset(b'dir/foo'), b'all')
636 self.assertEqual(um.visitchildrenset(b'folder'), b'all')
636 self.assertEqual(um.visitchildrenset(b'folder'), b'all')
637 self.assertEqual(um.visitchildrenset(b'dir/subdir/z'), b'all')
637 self.assertEqual(um.visitchildrenset(b'dir/subdir/z'), b'all')
638 self.assertEqual(um.visitchildrenset(b'dir/subdir/x'), b'all')
638 self.assertEqual(um.visitchildrenset(b'dir/subdir/x'), b'all')
639
639
640 # We're using includematcher instead of patterns because it behaves slightly
640 # We're using includematcher instead of patterns because it behaves slightly
641 # better (giving narrower results) than patternmatcher.
641 # better (giving narrower results) than patternmatcher.
642 def testVisitdirIncludeInclude(self):
642 def testVisitdirIncludeInclude(self):
643 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
643 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
644 m2 = matchmod.match(b'', b'', include=[b'rootfilesin:dir'])
644 m2 = matchmod.match(b'', b'', include=[b'rootfilesin:dir'])
645 um = matchmod.unionmatcher([m1, m2])
645 um = matchmod.unionmatcher([m1, m2])
646 self.assertEqual(um.visitdir(b'.'), True)
646 self.assertEqual(um.visitdir(b'.'), True)
647 self.assertEqual(um.visitdir(b'dir'), True)
647 self.assertEqual(um.visitdir(b'dir'), True)
648 self.assertEqual(um.visitdir(b'dir/subdir'), b'all')
648 self.assertEqual(um.visitdir(b'dir/subdir'), b'all')
649 self.assertFalse(um.visitdir(b'dir/foo'))
649 self.assertFalse(um.visitdir(b'dir/foo'))
650 self.assertFalse(um.visitdir(b'folder'))
650 self.assertFalse(um.visitdir(b'folder'))
651 # OPT: These two should probably be 'all' not True.
651 # OPT: These two should probably be 'all' not True.
652 self.assertEqual(um.visitdir(b'dir/subdir/z'), True)
652 self.assertEqual(um.visitdir(b'dir/subdir/z'), True)
653 self.assertEqual(um.visitdir(b'dir/subdir/x'), True)
653 self.assertEqual(um.visitdir(b'dir/subdir/x'), True)
654
654
655 def testVisitchildrensetIncludeInclude(self):
655 def testVisitchildrensetIncludeInclude(self):
656 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
656 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
657 m2 = matchmod.match(b'', b'', include=[b'rootfilesin:dir'])
657 m2 = matchmod.match(b'', b'', include=[b'rootfilesin:dir'])
658 um = matchmod.unionmatcher([m1, m2])
658 um = matchmod.unionmatcher([m1, m2])
659 self.assertEqual(um.visitchildrenset(b'.'), {b'dir'})
659 self.assertEqual(um.visitchildrenset(b'.'), {b'dir'})
660 self.assertEqual(um.visitchildrenset(b'dir'), b'this')
660 self.assertEqual(um.visitchildrenset(b'dir'), b'this')
661 self.assertEqual(um.visitchildrenset(b'dir/subdir'), b'all')
661 self.assertEqual(um.visitchildrenset(b'dir/subdir'), b'all')
662 self.assertEqual(um.visitchildrenset(b'dir/foo'), set())
662 self.assertEqual(um.visitchildrenset(b'dir/foo'), set())
663 self.assertEqual(um.visitchildrenset(b'folder'), set())
663 self.assertEqual(um.visitchildrenset(b'folder'), set())
664 # OPT: These next two could be 'all' instead of 'this'.
664 # OPT: These next two could be 'all' instead of 'this'.
665 self.assertEqual(um.visitchildrenset(b'dir/subdir/z'), b'this')
665 self.assertEqual(um.visitchildrenset(b'dir/subdir/z'), b'this')
666 self.assertEqual(um.visitchildrenset(b'dir/subdir/x'), b'this')
666 self.assertEqual(um.visitchildrenset(b'dir/subdir/x'), b'this')
667
667
668 # We're using includematcher instead of patterns because it behaves slightly
668 # We're using includematcher instead of patterns because it behaves slightly
669 # better (giving narrower results) than patternmatcher.
669 # better (giving narrower results) than patternmatcher.
670 def testVisitdirIncludeInclude2(self):
670 def testVisitdirIncludeInclude2(self):
671 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
671 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
672 m2 = matchmod.match(b'', b'', include=[b'path:folder'])
672 m2 = matchmod.match(b'', b'', include=[b'path:folder'])
673 um = matchmod.unionmatcher([m1, m2])
673 um = matchmod.unionmatcher([m1, m2])
674 self.assertEqual(um.visitdir(b'.'), True)
674 self.assertEqual(um.visitdir(b'.'), True)
675 self.assertEqual(um.visitdir(b'dir'), True)
675 self.assertEqual(um.visitdir(b'dir'), True)
676 self.assertEqual(um.visitdir(b'dir/subdir'), b'all')
676 self.assertEqual(um.visitdir(b'dir/subdir'), b'all')
677 self.assertFalse(um.visitdir(b'dir/foo'))
677 self.assertFalse(um.visitdir(b'dir/foo'))
678 self.assertEqual(um.visitdir(b'folder'), b'all')
678 self.assertEqual(um.visitdir(b'folder'), b'all')
679 # OPT: These should probably be 'all' not True.
679 # OPT: These should probably be 'all' not True.
680 self.assertEqual(um.visitdir(b'dir/subdir/z'), True)
680 self.assertEqual(um.visitdir(b'dir/subdir/z'), True)
681 self.assertEqual(um.visitdir(b'dir/subdir/x'), True)
681 self.assertEqual(um.visitdir(b'dir/subdir/x'), True)
682
682
683 def testVisitchildrensetIncludeInclude2(self):
683 def testVisitchildrensetIncludeInclude2(self):
684 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
684 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
685 m2 = matchmod.match(b'', b'', include=[b'path:folder'])
685 m2 = matchmod.match(b'', b'', include=[b'path:folder'])
686 um = matchmod.unionmatcher([m1, m2])
686 um = matchmod.unionmatcher([m1, m2])
687 self.assertEqual(um.visitchildrenset(b'.'), {b'folder', b'dir'})
687 self.assertEqual(um.visitchildrenset(b'.'), {b'folder', b'dir'})
688 self.assertEqual(um.visitchildrenset(b'dir'), {b'subdir'})
688 self.assertEqual(um.visitchildrenset(b'dir'), {b'subdir'})
689 self.assertEqual(um.visitchildrenset(b'dir/subdir'), b'all')
689 self.assertEqual(um.visitchildrenset(b'dir/subdir'), b'all')
690 self.assertEqual(um.visitchildrenset(b'dir/foo'), set())
690 self.assertEqual(um.visitchildrenset(b'dir/foo'), set())
691 self.assertEqual(um.visitchildrenset(b'folder'), b'all')
691 self.assertEqual(um.visitchildrenset(b'folder'), b'all')
692 # OPT: These next two could be 'all' instead of 'this'.
692 # OPT: These next two could be 'all' instead of 'this'.
693 self.assertEqual(um.visitchildrenset(b'dir/subdir/z'), b'this')
693 self.assertEqual(um.visitchildrenset(b'dir/subdir/z'), b'this')
694 self.assertEqual(um.visitchildrenset(b'dir/subdir/x'), b'this')
694 self.assertEqual(um.visitchildrenset(b'dir/subdir/x'), b'this')
695
695
696 # We're using includematcher instead of patterns because it behaves slightly
696 # We're using includematcher instead of patterns because it behaves slightly
697 # better (giving narrower results) than patternmatcher.
697 # better (giving narrower results) than patternmatcher.
698 def testVisitdirIncludeInclude3(self):
698 def testVisitdirIncludeInclude3(self):
699 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir/x'])
699 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir/x'])
700 m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
700 m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
701 um = matchmod.unionmatcher([m1, m2])
701 um = matchmod.unionmatcher([m1, m2])
702 self.assertEqual(um.visitdir(b'.'), True)
702 self.assertEqual(um.visitdir(b'.'), True)
703 self.assertEqual(um.visitdir(b'dir'), True)
703 self.assertEqual(um.visitdir(b'dir'), True)
704 self.assertEqual(um.visitdir(b'dir/subdir'), b'all')
704 self.assertEqual(um.visitdir(b'dir/subdir'), b'all')
705 self.assertFalse(um.visitdir(b'dir/foo'))
705 self.assertFalse(um.visitdir(b'dir/foo'))
706 self.assertFalse(um.visitdir(b'folder'))
706 self.assertFalse(um.visitdir(b'folder'))
707 self.assertEqual(um.visitdir(b'dir/subdir/x'), b'all')
707 self.assertEqual(um.visitdir(b'dir/subdir/x'), b'all')
708 # OPT: this should probably be 'all' not True.
708 # OPT: this should probably be 'all' not True.
709 self.assertEqual(um.visitdir(b'dir/subdir/z'), True)
709 self.assertEqual(um.visitdir(b'dir/subdir/z'), True)
710
710
711 def testVisitchildrensetIncludeInclude3(self):
711 def testVisitchildrensetIncludeInclude3(self):
712 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir/x'])
712 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir/x'])
713 m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
713 m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
714 um = matchmod.unionmatcher([m1, m2])
714 um = matchmod.unionmatcher([m1, m2])
715 self.assertEqual(um.visitchildrenset(b'.'), {b'dir'})
715 self.assertEqual(um.visitchildrenset(b'.'), {b'dir'})
716 self.assertEqual(um.visitchildrenset(b'dir'), {b'subdir'})
716 self.assertEqual(um.visitchildrenset(b'dir'), {b'subdir'})
717 self.assertEqual(um.visitchildrenset(b'dir/subdir'), b'all')
717 self.assertEqual(um.visitchildrenset(b'dir/subdir'), b'all')
718 self.assertEqual(um.visitchildrenset(b'dir/foo'), set())
718 self.assertEqual(um.visitchildrenset(b'dir/foo'), set())
719 self.assertEqual(um.visitchildrenset(b'folder'), set())
719 self.assertEqual(um.visitchildrenset(b'folder'), set())
720 self.assertEqual(um.visitchildrenset(b'dir/subdir/x'), b'all')
720 self.assertEqual(um.visitchildrenset(b'dir/subdir/x'), b'all')
721 # OPT: this should probably be 'all' not 'this'.
721 # OPT: this should probably be 'all' not 'this'.
722 self.assertEqual(um.visitchildrenset(b'dir/subdir/z'), b'this')
722 self.assertEqual(um.visitchildrenset(b'dir/subdir/z'), b'this')
723
723
724 # We're using includematcher instead of patterns because it behaves slightly
724 # We're using includematcher instead of patterns because it behaves slightly
725 # better (giving narrower results) than patternmatcher.
725 # better (giving narrower results) than patternmatcher.
726 def testVisitdirIncludeInclude4(self):
726 def testVisitdirIncludeInclude4(self):
727 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir/x'])
727 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir/x'])
728 m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir/z'])
728 m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir/z'])
729 um = matchmod.unionmatcher([m1, m2])
729 um = matchmod.unionmatcher([m1, m2])
730 # OPT: these next three could probably be False as well.
730 # OPT: these next three could probably be False as well.
731 self.assertEqual(um.visitdir(b'.'), True)
731 self.assertEqual(um.visitdir(b'.'), True)
732 self.assertEqual(um.visitdir(b'dir'), True)
732 self.assertEqual(um.visitdir(b'dir'), True)
733 self.assertEqual(um.visitdir(b'dir/subdir'), True)
733 self.assertEqual(um.visitdir(b'dir/subdir'), True)
734 self.assertFalse(um.visitdir(b'dir/foo'))
734 self.assertFalse(um.visitdir(b'dir/foo'))
735 self.assertFalse(um.visitdir(b'folder'))
735 self.assertFalse(um.visitdir(b'folder'))
736 self.assertEqual(um.visitdir(b'dir/subdir/z'), b'all')
736 self.assertEqual(um.visitdir(b'dir/subdir/z'), b'all')
737 self.assertEqual(um.visitdir(b'dir/subdir/x'), b'all')
737 self.assertEqual(um.visitdir(b'dir/subdir/x'), b'all')
738
738
739 def testVisitchildrensetIncludeInclude4(self):
739 def testVisitchildrensetIncludeInclude4(self):
740 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir/x'])
740 m1 = matchmod.match(b'', b'', include=[b'path:dir/subdir/x'])
741 m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir/z'])
741 m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir/z'])
742 um = matchmod.unionmatcher([m1, m2])
742 um = matchmod.unionmatcher([m1, m2])
743 self.assertEqual(um.visitchildrenset(b'.'), {b'dir'})
743 self.assertEqual(um.visitchildrenset(b'.'), {b'dir'})
744 self.assertEqual(um.visitchildrenset(b'dir'), {b'subdir'})
744 self.assertEqual(um.visitchildrenset(b'dir'), {b'subdir'})
745 self.assertEqual(um.visitchildrenset(b'dir/subdir'), {b'x', b'z'})
745 self.assertEqual(um.visitchildrenset(b'dir/subdir'), {b'x', b'z'})
746 self.assertEqual(um.visitchildrenset(b'dir/foo'), set())
746 self.assertEqual(um.visitchildrenset(b'dir/foo'), set())
747 self.assertEqual(um.visitchildrenset(b'folder'), set())
747 self.assertEqual(um.visitchildrenset(b'folder'), set())
748 self.assertEqual(um.visitchildrenset(b'dir/subdir/z'), b'all')
748 self.assertEqual(um.visitchildrenset(b'dir/subdir/z'), b'all')
749 self.assertEqual(um.visitchildrenset(b'dir/subdir/x'), b'all')
749 self.assertEqual(um.visitchildrenset(b'dir/subdir/x'), b'all')
750
750
751 class SubdirMatcherTests(unittest.TestCase):
751 class SubdirMatcherTests(unittest.TestCase):
752
752
753 def testVisitdir(self):
753 def testVisitdir(self):
754 m = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
754 m = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
755 sm = matchmod.subdirmatcher(b'dir', m)
755 sm = matchmod.subdirmatcher(b'dir', m)
756
756
757 self.assertEqual(sm.visitdir(b'.'), True)
757 self.assertEqual(sm.visitdir(b'.'), True)
758 self.assertEqual(sm.visitdir(b'subdir'), b'all')
758 self.assertEqual(sm.visitdir(b'subdir'), b'all')
759 # OPT: These next two should probably be 'all' not True.
759 # OPT: These next two should probably be 'all' not True.
760 self.assertEqual(sm.visitdir(b'subdir/x'), True)
760 self.assertEqual(sm.visitdir(b'subdir/x'), True)
761 self.assertEqual(sm.visitdir(b'subdir/z'), True)
761 self.assertEqual(sm.visitdir(b'subdir/z'), True)
762 self.assertFalse(sm.visitdir(b'foo'))
762 self.assertFalse(sm.visitdir(b'foo'))
763
763
764 def testVisitchildrenset(self):
764 def testVisitchildrenset(self):
765 m = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
765 m = matchmod.match(b'', b'', include=[b'path:dir/subdir'])
766 sm = matchmod.subdirmatcher(b'dir', m)
766 sm = matchmod.subdirmatcher(b'dir', m)
767
767
768 self.assertEqual(sm.visitchildrenset(b'.'), {b'subdir'})
768 self.assertEqual(sm.visitchildrenset(b'.'), {b'subdir'})
769 self.assertEqual(sm.visitchildrenset(b'subdir'), b'all')
769 self.assertEqual(sm.visitchildrenset(b'subdir'), b'all')
770 # OPT: These next two should probably be 'all' not 'this'.
770 # OPT: These next two should probably be 'all' not 'this'.
771 self.assertEqual(sm.visitchildrenset(b'subdir/x'), b'this')
771 self.assertEqual(sm.visitchildrenset(b'subdir/x'), b'this')
772 self.assertEqual(sm.visitchildrenset(b'subdir/z'), b'this')
772 self.assertEqual(sm.visitchildrenset(b'subdir/z'), b'this')
773 self.assertEqual(sm.visitchildrenset(b'foo'), set())
773 self.assertEqual(sm.visitchildrenset(b'foo'), set())
774
774
775 class PrefixdirMatcherTests(unittest.TestCase):
775 class PrefixdirMatcherTests(unittest.TestCase):
776
776
777 def testVisitdir(self):
777 def testVisitdir(self):
778 m = matchmod.match(util.localpath(b'root/d'), b'e/f',
778 m = matchmod.match(util.localpath(b'root/d'), b'e/f',
779 [b'../a.txt', b'b.txt'])
779 [b'../a.txt', b'b.txt'])
780 pm = matchmod.prefixdirmatcher(b'root', b'd/e/f', b'd', m)
780 pm = matchmod.prefixdirmatcher(b'd', m)
781
781
782 # `m` elides 'd' because it's part of the root, and the rest of the
782 # `m` elides 'd' because it's part of the root, and the rest of the
783 # patterns are relative.
783 # patterns are relative.
784 self.assertEqual(bool(m(b'a.txt')), False)
784 self.assertEqual(bool(m(b'a.txt')), False)
785 self.assertEqual(bool(m(b'b.txt')), False)
785 self.assertEqual(bool(m(b'b.txt')), False)
786 self.assertEqual(bool(m(b'e/a.txt')), True)
786 self.assertEqual(bool(m(b'e/a.txt')), True)
787 self.assertEqual(bool(m(b'e/b.txt')), False)
787 self.assertEqual(bool(m(b'e/b.txt')), False)
788 self.assertEqual(bool(m(b'e/f/b.txt')), True)
788 self.assertEqual(bool(m(b'e/f/b.txt')), True)
789
789
790 # The prefix matcher re-adds 'd' to the paths, so they need to be
790 # The prefix matcher re-adds 'd' to the paths, so they need to be
791 # specified when using the prefixdirmatcher.
791 # specified when using the prefixdirmatcher.
792 self.assertEqual(bool(pm(b'a.txt')), False)
792 self.assertEqual(bool(pm(b'a.txt')), False)
793 self.assertEqual(bool(pm(b'b.txt')), False)
793 self.assertEqual(bool(pm(b'b.txt')), False)
794 self.assertEqual(bool(pm(b'd/e/a.txt')), True)
794 self.assertEqual(bool(pm(b'd/e/a.txt')), True)
795 self.assertEqual(bool(pm(b'd/e/b.txt')), False)
795 self.assertEqual(bool(pm(b'd/e/b.txt')), False)
796 self.assertEqual(bool(pm(b'd/e/f/b.txt')), True)
796 self.assertEqual(bool(pm(b'd/e/f/b.txt')), True)
797
797
798 self.assertEqual(m.visitdir(b'.'), True)
798 self.assertEqual(m.visitdir(b'.'), True)
799 self.assertEqual(m.visitdir(b'e'), True)
799 self.assertEqual(m.visitdir(b'e'), True)
800 self.assertEqual(m.visitdir(b'e/f'), True)
800 self.assertEqual(m.visitdir(b'e/f'), True)
801 self.assertEqual(m.visitdir(b'e/f/g'), False)
801 self.assertEqual(m.visitdir(b'e/f/g'), False)
802
802
803 self.assertEqual(pm.visitdir(b'.'), True)
803 self.assertEqual(pm.visitdir(b'.'), True)
804 self.assertEqual(pm.visitdir(b'd'), True)
804 self.assertEqual(pm.visitdir(b'd'), True)
805 self.assertEqual(pm.visitdir(b'd/e'), True)
805 self.assertEqual(pm.visitdir(b'd/e'), True)
806 self.assertEqual(pm.visitdir(b'd/e/f'), True)
806 self.assertEqual(pm.visitdir(b'd/e/f'), True)
807 self.assertEqual(pm.visitdir(b'd/e/f/g'), False)
807 self.assertEqual(pm.visitdir(b'd/e/f/g'), False)
808
808
809 def testVisitchildrenset(self):
809 def testVisitchildrenset(self):
810 m = matchmod.match(util.localpath(b'root/d'), b'e/f',
810 m = matchmod.match(util.localpath(b'root/d'), b'e/f',
811 [b'../a.txt', b'b.txt'])
811 [b'../a.txt', b'b.txt'])
812 pm = matchmod.prefixdirmatcher(b'root', b'd/e/f', b'd', m)
812 pm = matchmod.prefixdirmatcher(b'd', m)
813
813
814 # OPT: visitchildrenset could possibly return {'e'} and {'f'} for these
814 # OPT: visitchildrenset could possibly return {'e'} and {'f'} for these
815 # next two, respectively; patternmatcher does not have this
815 # next two, respectively; patternmatcher does not have this
816 # optimization.
816 # optimization.
817 self.assertEqual(m.visitchildrenset(b'.'), b'this')
817 self.assertEqual(m.visitchildrenset(b'.'), b'this')
818 self.assertEqual(m.visitchildrenset(b'e'), b'this')
818 self.assertEqual(m.visitchildrenset(b'e'), b'this')
819 self.assertEqual(m.visitchildrenset(b'e/f'), b'this')
819 self.assertEqual(m.visitchildrenset(b'e/f'), b'this')
820 self.assertEqual(m.visitchildrenset(b'e/f/g'), set())
820 self.assertEqual(m.visitchildrenset(b'e/f/g'), set())
821
821
822 # OPT: visitchildrenset could possibly return {'d'}, {'e'}, and {'f'}
822 # OPT: visitchildrenset could possibly return {'d'}, {'e'}, and {'f'}
823 # for these next three, respectively; patternmatcher does not have this
823 # for these next three, respectively; patternmatcher does not have this
824 # optimization.
824 # optimization.
825 self.assertEqual(pm.visitchildrenset(b'.'), b'this')
825 self.assertEqual(pm.visitchildrenset(b'.'), b'this')
826 self.assertEqual(pm.visitchildrenset(b'd'), b'this')
826 self.assertEqual(pm.visitchildrenset(b'd'), b'this')
827 self.assertEqual(pm.visitchildrenset(b'd/e'), b'this')
827 self.assertEqual(pm.visitchildrenset(b'd/e'), b'this')
828 self.assertEqual(pm.visitchildrenset(b'd/e/f'), b'this')
828 self.assertEqual(pm.visitchildrenset(b'd/e/f'), b'this')
829 self.assertEqual(pm.visitchildrenset(b'd/e/f/g'), set())
829 self.assertEqual(pm.visitchildrenset(b'd/e/f/g'), set())
830
830
831 if __name__ == '__main__':
831 if __name__ == '__main__':
832 silenttestrunner.main(__name__)
832 silenttestrunner.main(__name__)
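
# For reference, a minimal sketch of exercising the matcher API as changed
# above: the constructors now take no root/cwd arguments. This assumes
# Mercurial's internal `mercurial.match` module is importable; the directory
# layout is illustrative and the expected values mirror the tests above.

from mercurial import match as matchmod

# Constructors without the removed root/cwd arguments.
m1 = matchmod.alwaysmatcher()
m2 = matchmod.match(b'', b'', include=[b'path:dir/subdir'])

# Intersecting with an alwaysmatcher leaves m2's behavior intact.
im = matchmod.intersectmatchers(m1, m2)
assert im.visitchildrenset(b'.') == {b'dir'}
assert im.visitchildrenset(b'dir/subdir') == b'all'

# A union with an alwaysmatcher matches everything.
um = matchmod.unionmatcher([m1, m2])
assert um.visitdir(b'folder') == b'all'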