match: implement __repr__() and update users (API)...
Martin von Zweigbergk - r32406:95201747 default
# __init__.py - fsmonitor initialization and overrides
#
# Copyright 2013-2016 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''Faster status operations with the Watchman file monitor (EXPERIMENTAL)

Integrates the file-watching program Watchman with Mercurial to produce faster
status results.

On a particular Linux system, for a real-world repository with over 400,000
files hosted on ext4, vanilla `hg status` takes 1.3 seconds. On the same
system, with fsmonitor it takes about 0.3 seconds.

fsmonitor requires no configuration -- it will tell Watchman about your
repository as necessary. You'll need to install Watchman from
https://facebook.github.io/watchman/ and make sure it is in your PATH.

The following configuration options exist:

::

    [fsmonitor]
    mode = {off, on, paranoid}

When `mode = off`, fsmonitor will disable itself (similar to not loading the
extension at all). When `mode = on`, fsmonitor will be enabled (the default).
When `mode = paranoid`, fsmonitor will query both Watchman and the filesystem,
and ensure that the results are consistent.

::

    [fsmonitor]
    timeout = (float)

A value, in seconds, that determines how long fsmonitor will wait for Watchman
to return results. Defaults to `2.0`.

::

    [fsmonitor]
    blacklistusers = (list of userids)

A list of usernames for which fsmonitor will disable itself altogether.

::

    [fsmonitor]
    walk_on_invalidate = (boolean)

Whether or not to walk the whole repo ourselves when our cached state has been
invalidated, for example when Watchman has been restarted or .hgignore rules
have been changed. Walking the repo in that case can result in competing for
I/O with Watchman. For large repos it is recommended to set this value to
false. You may wish to set this to true if you have a very fast filesystem
that can outpace the IPC overhead of getting the result data for the full repo
from Watchman. Defaults to false.

fsmonitor is incompatible with the largefiles and eol extensions, and
will disable itself if any of those are active.

'''
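The option lookups described above are done through Mercurial's `ui` config API inside the extension. As a minimal standalone sketch of the same lookup logic, the following parses an hgrc-style fragment with the stdlib `configparser` and applies the documented defaults (`readfsmonitorconfig` and the `HGRC` fragment are illustrative stand-ins, not part of this extension):

```python
from configparser import ConfigParser

# Hypothetical hgrc fragment exercising the options documented above.
HGRC = """
[fsmonitor]
mode = paranoid
timeout = 2.0
walk_on_invalidate = false
"""

def readfsmonitorconfig(text):
    """Parse the [fsmonitor] section into plain Python values.

    A stand-in for Mercurial's ui.config()/ui.configbool() lookups; the
    defaults mirror the docstring: mode=on, timeout=2.0,
    walk_on_invalidate=False.
    """
    cp = ConfigParser()
    cp.read_string(text)
    section = 'fsmonitor'
    mode = cp.get(section, 'mode', fallback='on')
    if mode not in ('off', 'on', 'paranoid'):
        raise ValueError('unknown fsmonitor.mode: %s' % mode)
    return {
        'mode': mode,
        'timeout': cp.getfloat(section, 'timeout', fallback=2.0),
        'walk_on_invalidate': cp.getboolean(
            section, 'walk_on_invalidate', fallback=False),
    }

cfg = readfsmonitorconfig(HGRC)
```

An empty config yields the documented defaults, since every lookup carries a `fallback`.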

# Platforms Supported
# ===================
#
# **Linux:** *Stable*. Watchman and fsmonitor are both known to work reliably,
#   even under severe loads.
#
# **Mac OS X:** *Stable*. The Mercurial test suite passes with fsmonitor
#   turned on, on case-insensitive HFS+. There has been a reasonable amount of
#   user testing under normal loads.
#
# **Solaris, BSD:** *Alpha*. watchman and fsmonitor are believed to work, but
#   very little testing has been done.
#
# **Windows:** *Alpha*. Not in a release version of watchman or fsmonitor yet.
#
# Known Issues
# ============
#
# * fsmonitor will disable itself if any of the following extensions are
#   enabled: largefiles, inotify, eol; or if the repository has subrepos.
# * fsmonitor will produce incorrect results if nested repos that are not
#   subrepos exist. *Workaround*: add nested repo paths to your `.hgignore`.
#
# The issues related to nested repos and subrepos are probably not fundamental
# ones. Patches to fix them are welcome.
from __future__ import absolute_import

import codecs
import hashlib
import os
import stat
import sys

from mercurial.i18n import _
from mercurial import (
    context,
    encoding,
    error,
    extensions,
    localrepo,
    merge,
    pathutil,
    pycompat,
    scmutil,
    util,
)
from mercurial import match as matchmod

from . import (
    pywatchman,
    state,
    watchmanclient,
)

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

# This extension is incompatible with the following blacklisted extensions
# and will disable itself when encountering one of these:
_blacklist = ['largefiles', 'eol']

def _handleunavailable(ui, state, ex):
    """Exception handler for Watchman interaction exceptions"""
    if isinstance(ex, watchmanclient.Unavailable):
        if ex.warn:
            ui.warn(str(ex) + '\n')
        if ex.invalidate:
            state.invalidate()
        ui.log('fsmonitor', 'Watchman unavailable: %s\n', ex.msg)
    else:
        ui.log('fsmonitor', 'Watchman exception: %s\n', ex)

def _hashignore(ignore):
    """Calculate hash for ignore patterns and filenames

    If this information changes between Mercurial invocations, we can't
    rely on Watchman information anymore and have to re-scan the working
    copy.

    """
    sha1 = hashlib.sha1()
    sha1.update(repr(ignore))
    return sha1.hexdigest()
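Per the commit message, `_hashignore` now hashes the matcher's `repr()` instead of its individual pattern attributes: any change to the ignore configuration that alters the repr invalidates the saved Watchman clock. A minimal sketch of that idea, using a stand-in matcher class (`FakeIgnoreMatcher` is illustrative, not Mercurial's matcher; the `.encode()` is needed on Python 3, where `sha1.update` wants bytes):

```python
import hashlib

class FakeIgnoreMatcher(object):
    """Stand-in for Mercurial's ignore matcher; only __repr__ matters here."""
    def __init__(self, patterns):
        self.patterns = patterns

    def __repr__(self):
        return '<ignorematcher patterns=%r>' % (self.patterns,)

def hashignore(ignore):
    # Mirrors _hashignore(): hash the repr, so any visible change in the
    # matcher's configuration yields a different digest.
    sha1 = hashlib.sha1()
    sha1.update(repr(ignore).encode('utf-8'))
    return sha1.hexdigest()

a = hashignore(FakeIgnoreMatcher(['*.pyc']))
b = hashignore(FakeIgnoreMatcher(['*.pyc']))
c = hashignore(FakeIgnoreMatcher(['*.pyc', 'build/']))
```

Identical configurations hash identically (`a == b`); adding a pattern changes the digest (`a != c`), which is exactly the signal `overridewalk` uses to fall back to a full scan.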

_watchmanencoding = pywatchman.encoding.get_local_encoding()
_fsencoding = sys.getfilesystemencoding() or sys.getdefaultencoding()
_fixencoding = codecs.lookup(_watchmanencoding) != codecs.lookup(_fsencoding)

def _watchmantofsencoding(path):
    """Fix path to match watchman and local filesystem encoding

    Watchman's path encoding can differ from the filesystem encoding.
    For example, on Windows it is always utf-8.
    """
    try:
        decoded = path.decode(_watchmanencoding)
    except UnicodeDecodeError as e:
        raise error.Abort(str(e), hint='watchman encoding error')

    try:
        encoded = decoded.encode(_fsencoding, 'strict')
    except UnicodeEncodeError as e:
        raise error.Abort(str(e))

    return encoded
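The decode/re-encode round trip above can be exercised standalone. This sketch hard-codes utf-8 as the Watchman-side encoding and ascii as the filesystem encoding to show both the success and failure paths (both encodings are illustrative; the real values come from pywatchman and `sys.getfilesystemencoding()`, and `ValueError` stands in for Mercurial's `error.Abort`):

```python
def watchmantofsencoding(path, watchmanencoding='utf-8', fsencoding='ascii'):
    """Re-encode a path from Watchman's encoding to the filesystem's.

    Mirrors _watchmantofsencoding() above, with plain ValueError in
    place of Mercurial's error.Abort.
    """
    try:
        decoded = path.decode(watchmanencoding)
    except UnicodeDecodeError as e:
        raise ValueError('watchman encoding error: %s' % e)
    try:
        return decoded.encode(fsencoding, 'strict')
    except UnicodeEncodeError as e:
        raise ValueError(str(e))

# ASCII-only paths survive the round trip unchanged.
ok = watchmantofsencoding(b'src/main.py')
```

A path that decodes cleanly but cannot be represented in the filesystem encoding (e.g. a utf-8 name pushed through ascii) takes the second `except` branch and aborts.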

def overridewalk(orig, self, match, subrepos, unknown, ignored, full=True):
    '''Replacement for dirstate.walk, hooking into Watchman.

    Whenever full is False, ignored is False, and the Watchman client is
    available, use Watchman combined with saved state to possibly return only a
    subset of files.'''
    def bail():
        return orig(match, subrepos, unknown, ignored, full=True)

    if full or ignored or not self._watchmanclient.available():
        return bail()
    state = self._fsmonitorstate
    clock, ignorehash, notefiles = state.get()
    if not clock:
        if state.walk_on_invalidate:
            return bail()
        # Initial NULL clock value, see
        # https://facebook.github.io/watchman/docs/clockspec.html
        clock = 'c:0:0'
        notefiles = []

    def fwarn(f, msg):
        self._ui.warn('%s: %s\n' % (self.pathto(f), msg))
        return False

    def badtype(mode):
        kind = _('unknown')
        if stat.S_ISCHR(mode):
            kind = _('character device')
        elif stat.S_ISBLK(mode):
            kind = _('block device')
        elif stat.S_ISFIFO(mode):
            kind = _('fifo')
        elif stat.S_ISSOCK(mode):
            kind = _('socket')
        elif stat.S_ISDIR(mode):
            kind = _('directory')
        return _('unsupported file type (type is %s)') % kind
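`badtype` maps a stat mode to a human-readable kind via the `stat.S_IS*` predicates. The same dispatch can be checked directly against synthetic modes built from the stdlib `S_IF*` constants (`kindname` is an untranslated stand-in; `badtype` routes the names through `_()` for i18n):

```python
import stat

def kindname(mode):
    # Same dispatch order as badtype() above, returning plain English
    # kind names instead of translated ones.
    if stat.S_ISCHR(mode):
        return 'character device'
    elif stat.S_ISBLK(mode):
        return 'block device'
    elif stat.S_ISFIFO(mode):
        return 'fifo'
    elif stat.S_ISSOCK(mode):
        return 'socket'
    elif stat.S_ISDIR(mode):
        return 'directory'
    # Regular files and symlinks are handled elsewhere; anything that
    # falls through here is reported as unknown.
    return 'unknown'
```

Note that regular files (`S_IFREG`) deliberately fall through to 'unknown' here, matching `badtype`, which is only ever called for types the walk cannot track.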

    ignore = self._ignore
    dirignore = self._dirignore
    if unknown:
        if _hashignore(ignore) != ignorehash and clock != 'c:0:0':
            # ignore list changed -- can't rely on Watchman state any more
            if state.walk_on_invalidate:
                return bail()
            notefiles = []
            clock = 'c:0:0'
    else:
        # always ignore
        ignore = util.always
        dirignore = util.always

    matchfn = match.matchfn
    matchalways = match.always()
    dmap = self._map
    nonnormalset = getattr(self, '_nonnormalset', None)

    copymap = self._copymap
    getkind = stat.S_IFMT
    dirkind = stat.S_IFDIR
    regkind = stat.S_IFREG
    lnkkind = stat.S_IFLNK
    join = self._join
    normcase = util.normcase
    fresh_instance = False

    exact = skipstep3 = False
    if match.isexact(): # match.exact
        exact = True
        dirignore = util.always # skip step 2
    elif match.prefix(): # match.match, no patterns
        skipstep3 = True

    if not exact and self._checkcase:
        # note that even though we could receive directory entries, we're only
        # interested in checking if a file with the same name exists. So only
        # normalize files if possible.
        normalize = self._normalizefile
        skipstep3 = False
    else:
        normalize = None

    # step 1: find all explicit files
    results, work, dirsnotfound = self._walkexplicit(match, subrepos)

    skipstep3 = skipstep3 and not (work or dirsnotfound)
    work = [d for d in work if not dirignore(d[0])]

    if not work and (exact or skipstep3):
        for s in subrepos:
            del results[s]
        del results['.hg']
        return results
    # step 2: query Watchman
    try:
        # Use the user-configured timeout for the query.
        # Add a little slack over the top of the user query to allow for
        # overheads while transferring the data
        self._watchmanclient.settimeout(state.timeout + 0.1)
        result = self._watchmanclient.command('query', {
            'fields': ['mode', 'mtime', 'size', 'exists', 'name'],
            'since': clock,
            'expression': [
                'not', [
                    'anyof', ['dirname', '.hg'],
                    ['name', '.hg', 'wholename']
                ]
            ],
            'sync_timeout': int(state.timeout * 1000),
            'empty_on_fresh_instance': state.walk_on_invalidate,
        })
    except Exception as ex:
        _handleunavailable(self._ui, state, ex)
        self._watchmanclient.clearconnection()
        return bail()
    else:
        # We need to propagate the last observed clock up so that we
        # can use it for our next query
        state.setlastclock(result['clock'])
        if result['is_fresh_instance']:
            if state.walk_on_invalidate:
                state.invalidate()
                return bail()
            fresh_instance = True
            # Ignore any prior notable files from the state info
            notefiles = []

    # for file paths which require normalization and we encounter a case
    # collision, we store our own foldmap
    if normalize:
        foldmap = dict((normcase(k), k) for k in results)

    switch_slashes = pycompat.ossep == '\\'
    # The order of the results is, strictly speaking, undefined.
    # For case changes on a case insensitive filesystem we may receive
    # two entries, one with exists=True and another with exists=False.
    # The exists=True entries in the same response should be interpreted
    # as being happens-after the exists=False entries due to the way that
    # Watchman tracks files. We use this property to reconcile deletes
    # for name case changes.
    for entry in result['files']:
        fname = entry['name']
        if _fixencoding:
            fname = _watchmantofsencoding(fname)
        if switch_slashes:
            fname = fname.replace('\\', '/')
        if normalize:
            normed = normcase(fname)
            fname = normalize(fname, True, True)
            foldmap[normed] = fname
        fmode = entry['mode']
        fexists = entry['exists']
        kind = getkind(fmode)

        if not fexists:
            # if marked as deleted and we don't already have a change
            # record, mark it as deleted. If we already have an entry
            # for fname then it was either part of walkexplicit or was
            # an earlier result that was a case change
            if fname not in results and fname in dmap and (
                    matchalways or matchfn(fname)):
                results[fname] = None
        elif kind == dirkind:
            if fname in dmap and (matchalways or matchfn(fname)):
                results[fname] = None
        elif kind == regkind or kind == lnkkind:
            if fname in dmap:
                if matchalways or matchfn(fname):
                    results[fname] = entry
            elif (matchalways or matchfn(fname)) and not ignore(fname):
                results[fname] = entry
        elif fname in dmap and (matchalways or matchfn(fname)):
            results[fname] = None

    # step 3: query notable files we don't already know about
    # XXX try not to iterate over the entire dmap
    if normalize:
        # any notable files that have changed case will already be handled
        # above, so just check membership in the foldmap
        notefiles = set((normalize(f, True, True) for f in notefiles
                         if normcase(f) not in foldmap))
    visit = set((f for f in notefiles if (f not in results and matchfn(f)
                                          and (f in dmap or not ignore(f)))))

    if nonnormalset is not None and not fresh_instance:
        if matchalways:
            visit.update(f for f in nonnormalset if f not in results)
            visit.update(f for f in copymap if f not in results)
        else:
            visit.update(f for f in nonnormalset
                         if f not in results and matchfn(f))
            visit.update(f for f in copymap
                         if f not in results and matchfn(f))
    else:
        if matchalways:
            visit.update(f for f, st in dmap.iteritems()
                         if (f not in results and
                             (st[2] < 0 or st[0] != 'n' or fresh_instance)))
            visit.update(f for f in copymap if f not in results)
        else:
            visit.update(f for f, st in dmap.iteritems()
                         if (f not in results and
                             (st[2] < 0 or st[0] != 'n' or fresh_instance)
                             and matchfn(f)))
            visit.update(f for f in copymap
                         if f not in results and matchfn(f))

    audit = pathutil.pathauditor(self._root).check
    auditpass = [f for f in visit if audit(f)]
    auditpass.sort()
    auditfail = visit.difference(auditpass)
    for f in auditfail:
        results[f] = None

    nf = iter(auditpass).next
    for st in util.statfiles([join(f) for f in auditpass]):
        f = nf()
        if st or f in dmap:
            results[f] = st

    for s in subrepos:
        del results[s]
    del results['.hg']
    return results
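The query issued in step 2 of `overridewalk` can be summarized as a plain dict: every file changed since the saved clock, excluding `.hg` and everything under it. This sketch builds the same request body for a given clock and timeout without talking to Watchman (`buildquery` is a hypothetical helper, not part of the extension):

```python
def buildquery(clock, timeout, walk_on_invalidate):
    # Mirrors the 'query' command body in overridewalk(): everything
    # changed since `clock`, excluding the .hg directory and its contents.
    return {
        'fields': ['mode', 'mtime', 'size', 'exists', 'name'],
        'since': clock,
        'expression': [
            'not', [
                'anyof', ['dirname', '.hg'],
                ['name', '.hg', 'wholename'],
            ]
        ],
        # sync_timeout is in milliseconds; the settimeout() call above
        # adds 0.1s of slack on top for IPC overhead.
        'sync_timeout': int(timeout * 1000),
        'empty_on_fresh_instance': walk_on_invalidate,
    }

# 'c:0:0' is the initial NULL clockspec used when no saved state exists.
q = buildquery('c:0:0', 2.0, False)
```

With `empty_on_fresh_instance` set, a restarted Watchman returns no files and the caller falls back to a full walk instead of treating everything as changed.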

def overridestatus(
        orig, self, node1='.', node2=None, match=None, ignored=False,
        clean=False, unknown=False, listsubrepos=False):
    listignored = ignored
    listclean = clean
    listunknown = unknown

    def _cmpsets(l1, l2):
        try:
            if 'FSMONITOR_LOG_FILE' in encoding.environ:
                fn = encoding.environ['FSMONITOR_LOG_FILE']
                f = open(fn, 'wb')
            else:
                fn = 'fsmonitorfail.log'
                f = self.opener(fn, 'wb')
        except (IOError, OSError):
            self.ui.warn(_('warning: unable to write to %s\n') % fn)
            return

        try:
            for i, (s1, s2) in enumerate(zip(l1, l2)):
                if set(s1) != set(s2):
                    f.write('sets at position %d are unequal\n' % i)
                    f.write('watchman returned: %s\n' % s1)
                    f.write('stat returned: %s\n' % s2)
        finally:
            f.close()

    if isinstance(node1, context.changectx):
        ctx1 = node1
    else:
        ctx1 = self[node1]
    if isinstance(node2, context.changectx):
        ctx2 = node2
    else:
        ctx2 = self[node2]

    working = ctx2.rev() is None
    parentworking = working and ctx1 == self['.']
    match = match or matchmod.always(self.root, self.getcwd())

    # Maybe we can use this opportunity to update Watchman's state.
    # Mercurial uses workingcommitctx and/or memctx to represent the part of
    # the workingctx that is to be committed. So don't update the state in
    # that case.
    # HG_PENDING is set in the environment when the dirstate is being updated
    # in the middle of a transaction; we must not update our state in that
    # case, or we risk forgetting about changes in the working copy.
    updatestate = (parentworking and match.always() and
                   not isinstance(ctx2, (context.workingcommitctx,
                                         context.memctx)) and
                   'HG_PENDING' not in encoding.environ)

    try:
        if self._fsmonitorstate.walk_on_invalidate:
            # Use a short timeout to query the current clock. If that
            # takes too long then we assume that the service will be slow
            # to answer our query.
            # walk_on_invalidate indicates that we prefer to walk the
            # tree ourselves because we can ignore portions that Watchman
            # cannot and we tend to be faster in the warmer buffer cache
            # cases.
            self._watchmanclient.settimeout(0.1)
        else:
            # Give Watchman more time to potentially complete its walk
            # and return the initial clock. In this mode we assume that
            # the filesystem will be slower than parsing a potentially
            # very large Watchman result set.
            self._watchmanclient.settimeout(
                self._fsmonitorstate.timeout + 0.1)
        startclock = self._watchmanclient.getcurrentclock()
    except Exception as ex:
        self._watchmanclient.clearconnection()
        _handleunavailable(self.ui, self._fsmonitorstate, ex)
        # boo, Watchman failed. bail
        return orig(node1, node2, match, listignored, listclean,
                    listunknown, listsubrepos)

    if updatestate:
        # We need info about unknown files. This may make things slower the
        # first time, but whatever.
        stateunknown = True
    else:
        stateunknown = listunknown

    r = orig(node1, node2, match, listignored, listclean, stateunknown,
             listsubrepos)
    modified, added, removed, deleted, unknown, ignored, clean = r

    if updatestate:
        notefiles = modified + added + removed + deleted + unknown
        self._fsmonitorstate.set(
            self._fsmonitorstate.getlastclock() or startclock,
495 self._fsmonitorstate.getlastclock() or startclock,
508 _hashignore(self.dirstate._ignore),
496 _hashignore(self.dirstate._ignore),
509 notefiles)
497 notefiles)
510
498
511 if not listunknown:
499 if not listunknown:
512 unknown = []
500 unknown = []
513
501
514 # don't do paranoid checks if we're not going to query Watchman anyway
502 # don't do paranoid checks if we're not going to query Watchman anyway
515 full = listclean or match.traversedir is not None
503 full = listclean or match.traversedir is not None
516 if self._fsmonitorstate.mode == 'paranoid' and not full:
504 if self._fsmonitorstate.mode == 'paranoid' and not full:
517 # run status again and fall back to the old walk this time
505 # run status again and fall back to the old walk this time
518 self.dirstate._fsmonitordisable = True
506 self.dirstate._fsmonitordisable = True
519
507
520 # shut the UI up
508 # shut the UI up
521 quiet = self.ui.quiet
509 quiet = self.ui.quiet
522 self.ui.quiet = True
510 self.ui.quiet = True
523 fout, ferr = self.ui.fout, self.ui.ferr
511 fout, ferr = self.ui.fout, self.ui.ferr
524 self.ui.fout = self.ui.ferr = open(os.devnull, 'wb')
512 self.ui.fout = self.ui.ferr = open(os.devnull, 'wb')
525
513
526 try:
514 try:
527 rv2 = orig(
515 rv2 = orig(
528 node1, node2, match, listignored, listclean, listunknown,
516 node1, node2, match, listignored, listclean, listunknown,
529 listsubrepos)
517 listsubrepos)
530 finally:
518 finally:
531 self.dirstate._fsmonitordisable = False
519 self.dirstate._fsmonitordisable = False
532 self.ui.quiet = quiet
520 self.ui.quiet = quiet
533 self.ui.fout, self.ui.ferr = fout, ferr
521 self.ui.fout, self.ui.ferr = fout, ferr
534
522
535 # clean isn't tested since it's set to True above
523 # clean isn't tested since it's set to True above
536 _cmpsets([modified, added, removed, deleted, unknown, ignored, clean],
524 _cmpsets([modified, added, removed, deleted, unknown, ignored, clean],
537 rv2)
525 rv2)
538 modified, added, removed, deleted, unknown, ignored, clean = rv2
526 modified, added, removed, deleted, unknown, ignored, clean = rv2
539
527
540 return scmutil.status(
528 return scmutil.status(
541 modified, added, removed, deleted, unknown, ignored, clean)
529 modified, added, removed, deleted, unknown, ignored, clean)
542
530
543 def makedirstate(cls):
531 def makedirstate(cls):
544 class fsmonitordirstate(cls):
532 class fsmonitordirstate(cls):
545 def _fsmonitorinit(self, fsmonitorstate, watchmanclient):
533 def _fsmonitorinit(self, fsmonitorstate, watchmanclient):
546 # _fsmonitordisable is used in paranoid mode
534 # _fsmonitordisable is used in paranoid mode
547 self._fsmonitordisable = False
535 self._fsmonitordisable = False
548 self._fsmonitorstate = fsmonitorstate
536 self._fsmonitorstate = fsmonitorstate
549 self._watchmanclient = watchmanclient
537 self._watchmanclient = watchmanclient
550
538
551 def walk(self, *args, **kwargs):
539 def walk(self, *args, **kwargs):
552 orig = super(fsmonitordirstate, self).walk
540 orig = super(fsmonitordirstate, self).walk
553 if self._fsmonitordisable:
541 if self._fsmonitordisable:
554 return orig(*args, **kwargs)
542 return orig(*args, **kwargs)
555 return overridewalk(orig, self, *args, **kwargs)
543 return overridewalk(orig, self, *args, **kwargs)
556
544
557 def rebuild(self, *args, **kwargs):
545 def rebuild(self, *args, **kwargs):
558 self._fsmonitorstate.invalidate()
546 self._fsmonitorstate.invalidate()
559 return super(fsmonitordirstate, self).rebuild(*args, **kwargs)
547 return super(fsmonitordirstate, self).rebuild(*args, **kwargs)
560
548
561 def invalidate(self, *args, **kwargs):
549 def invalidate(self, *args, **kwargs):
562 self._fsmonitorstate.invalidate()
550 self._fsmonitorstate.invalidate()
563 return super(fsmonitordirstate, self).invalidate(*args, **kwargs)
551 return super(fsmonitordirstate, self).invalidate(*args, **kwargs)
564
552
565 return fsmonitordirstate
553 return fsmonitordirstate
566
554
567 def wrapdirstate(orig, self):
555 def wrapdirstate(orig, self):
568 ds = orig(self)
556 ds = orig(self)
569 # only override the dirstate when Watchman is available for the repo
557 # only override the dirstate when Watchman is available for the repo
570 if util.safehasattr(self, '_fsmonitorstate'):
558 if util.safehasattr(self, '_fsmonitorstate'):
571 ds.__class__ = makedirstate(ds.__class__)
559 ds.__class__ = makedirstate(ds.__class__)
572 ds._fsmonitorinit(self._fsmonitorstate, self._watchmanclient)
560 ds._fsmonitorinit(self._fsmonitorstate, self._watchmanclient)
573 return ds
561 return ds
574
562
575 def extsetup(ui):
563 def extsetup(ui):
576 wrapfilecache(localrepo.localrepository, 'dirstate', wrapdirstate)
564 wrapfilecache(localrepo.localrepository, 'dirstate', wrapdirstate)
577 if pycompat.sysplatform == 'darwin':
565 if pycompat.sysplatform == 'darwin':
578 # An assist for avoiding the dangling-symlink fsevents bug
566 # An assist for avoiding the dangling-symlink fsevents bug
579 extensions.wrapfunction(os, 'symlink', wrapsymlink)
567 extensions.wrapfunction(os, 'symlink', wrapsymlink)
580
568
581 extensions.wrapfunction(merge, 'update', wrapupdate)
569 extensions.wrapfunction(merge, 'update', wrapupdate)
582
570
583 def wrapsymlink(orig, source, link_name):
571 def wrapsymlink(orig, source, link_name):
584 ''' if we create a dangling symlink, also touch the parent dir
572 ''' if we create a dangling symlink, also touch the parent dir
585 to encourage fsevents notifications to work more correctly '''
573 to encourage fsevents notifications to work more correctly '''
586 try:
574 try:
587 return orig(source, link_name)
575 return orig(source, link_name)
588 finally:
576 finally:
589 try:
577 try:
590 os.utime(os.path.dirname(link_name), None)
578 os.utime(os.path.dirname(link_name), None)
591 except OSError:
579 except OSError:
592 pass
580 pass
593
581
594 class state_update(object):
582 class state_update(object):
595 ''' This context manager is responsible for dispatching the state-enter
583 ''' This context manager is responsible for dispatching the state-enter
596 and state-leave signals to the watchman service '''
584 and state-leave signals to the watchman service '''
597
585
598 def __init__(self, repo, node, distance, partial):
586 def __init__(self, repo, node, distance, partial):
599 self.repo = repo
587 self.repo = repo
600 self.node = node
588 self.node = node
601 self.distance = distance
589 self.distance = distance
602 self.partial = partial
590 self.partial = partial
603 self._lock = None
591 self._lock = None
604 self.need_leave = False
592 self.need_leave = False
605
593
606 def __enter__(self):
594 def __enter__(self):
607 # We explicitly need to take a lock here, before we proceed to update
595 # We explicitly need to take a lock here, before we proceed to update
608 # watchman about the update operation, so that we don't race with
596 # watchman about the update operation, so that we don't race with
609 # some other actor. merge.update is going to take the wlock almost
597 # some other actor. merge.update is going to take the wlock almost
610 # immediately anyway, so this is effectively extending the lock
598 # immediately anyway, so this is effectively extending the lock
611 # around a couple of short sanity checks.
599 # around a couple of short sanity checks.
612 self._lock = self.repo.wlock()
600 self._lock = self.repo.wlock()
613 self.need_leave = self._state('state-enter')
601 self.need_leave = self._state('state-enter')
614 return self
602 return self
615
603
616 def __exit__(self, type_, value, tb):
604 def __exit__(self, type_, value, tb):
617 try:
605 try:
618 if self.need_leave:
606 if self.need_leave:
619 status = 'ok' if type_ is None else 'failed'
607 status = 'ok' if type_ is None else 'failed'
620 self._state('state-leave', status=status)
608 self._state('state-leave', status=status)
621 finally:
609 finally:
622 if self._lock:
610 if self._lock:
623 self._lock.release()
611 self._lock.release()
624
612
625 def _state(self, cmd, status='ok'):
613 def _state(self, cmd, status='ok'):
626 if not util.safehasattr(self.repo, '_watchmanclient'):
614 if not util.safehasattr(self.repo, '_watchmanclient'):
627 return False
615 return False
628 try:
616 try:
629 commithash = self.repo[self.node].hex()
617 commithash = self.repo[self.node].hex()
630 self.repo._watchmanclient.command(cmd, {
618 self.repo._watchmanclient.command(cmd, {
631 'name': 'hg.update',
619 'name': 'hg.update',
632 'metadata': {
620 'metadata': {
633 # the target revision
621 # the target revision
634 'rev': commithash,
622 'rev': commithash,
635 # approximate number of commits between current and target
623 # approximate number of commits between current and target
636 'distance': self.distance,
624 'distance': self.distance,
637 # success/failure (only really meaningful for state-leave)
625 # success/failure (only really meaningful for state-leave)
638 'status': status,
626 'status': status,
639 # whether the working copy parent is changing
627 # whether the working copy parent is changing
640 'partial': self.partial,
628 'partial': self.partial,
641 }})
629 }})
642 return True
630 return True
643 except Exception as e:
631 except Exception as e:
644 # Swallow any errors; fire and forget
632 # Swallow any errors; fire and forget
645 self.repo.ui.log(
633 self.repo.ui.log(
646 'watchman', 'Exception %s while running %s\n', e, cmd)
634 'watchman', 'Exception %s while running %s\n', e, cmd)
647 return False
635 return False
648
636
649 # Bracket working copy updates with calls to the watchman state-enter
637 # Bracket working copy updates with calls to the watchman state-enter
650 # and state-leave commands. This allows clients to perform more intelligent
638 # and state-leave commands. This allows clients to perform more intelligent
651 # settling during bulk file change scenarios
639 # settling during bulk file change scenarios
652 # https://facebook.github.io/watchman/docs/cmd/subscribe.html#advanced-settling
640 # https://facebook.github.io/watchman/docs/cmd/subscribe.html#advanced-settling
653 def wrapupdate(orig, repo, node, branchmerge, force, ancestor=None,
641 def wrapupdate(orig, repo, node, branchmerge, force, ancestor=None,
654 mergeancestor=False, labels=None, matcher=None, **kwargs):
642 mergeancestor=False, labels=None, matcher=None, **kwargs):
655
643
656 distance = 0
644 distance = 0
657 partial = True
645 partial = True
658 if matcher is None or matcher.always():
646 if matcher is None or matcher.always():
659 partial = False
647 partial = False
660 wc = repo[None]
648 wc = repo[None]
661 parents = wc.parents()
649 parents = wc.parents()
662 if len(parents) == 2:
650 if len(parents) == 2:
663 anc = repo.changelog.ancestor(parents[0].node(), parents[1].node())
651 anc = repo.changelog.ancestor(parents[0].node(), parents[1].node())
664 ancrev = repo[anc].rev()
652 ancrev = repo[anc].rev()
665 distance = abs(repo[node].rev() - ancrev)
653 distance = abs(repo[node].rev() - ancrev)
666 elif len(parents) == 1:
654 elif len(parents) == 1:
667 distance = abs(repo[node].rev() - parents[0].rev())
655 distance = abs(repo[node].rev() - parents[0].rev())
668
656
669 with state_update(repo, node, distance, partial):
657 with state_update(repo, node, distance, partial):
670 return orig(
658 return orig(
671 repo, node, branchmerge, force, ancestor, mergeancestor,
659 repo, node, branchmerge, force, ancestor, mergeancestor,
672 labels, matcher, **kwargs)
660 labels, matcher, **kwargs)
673
661
674 def reposetup(ui, repo):
662 def reposetup(ui, repo):
675 # We don't work with largefiles or inotify
663 # We don't work with largefiles or inotify
676 exts = extensions.enabled()
664 exts = extensions.enabled()
677 for ext in _blacklist:
665 for ext in _blacklist:
678 if ext in exts:
666 if ext in exts:
679 ui.warn(_('The fsmonitor extension is incompatible with the %s '
667 ui.warn(_('The fsmonitor extension is incompatible with the %s '
680 'extension and has been disabled.\n') % ext)
668 'extension and has been disabled.\n') % ext)
681 return
669 return
682
670
683 if util.safehasattr(repo, 'dirstate'):
671 if util.safehasattr(repo, 'dirstate'):
684 # We don't work with subrepos either. Note that we can get passed in
672 # We don't work with subrepos either. Note that we can get passed in
685 # e.g. a statichttprepo, which throws on trying to access the substate.
673 # e.g. a statichttprepo, which throws on trying to access the substate.
686 # XXX This sucks.
674 # XXX This sucks.
687 try:
675 try:
688 # if repo[None].substate can cause a dirstate parse, which is too
676 # if repo[None].substate can cause a dirstate parse, which is too
689 # slow. Instead, look for a file called hgsubstate,
677 # slow. Instead, look for a file called hgsubstate,
690 if repo.wvfs.exists('.hgsubstate') or repo.wvfs.exists('.hgsub'):
678 if repo.wvfs.exists('.hgsubstate') or repo.wvfs.exists('.hgsub'):
691 return
679 return
692 except AttributeError:
680 except AttributeError:
693 return
681 return
694
682
695 fsmonitorstate = state.state(repo)
683 fsmonitorstate = state.state(repo)
696 if fsmonitorstate.mode == 'off':
684 if fsmonitorstate.mode == 'off':
697 return
685 return
698
686
699 try:
687 try:
700 client = watchmanclient.client(repo)
688 client = watchmanclient.client(repo)
701 except Exception as ex:
689 except Exception as ex:
702 _handleunavailable(ui, fsmonitorstate, ex)
690 _handleunavailable(ui, fsmonitorstate, ex)
703 return
691 return
704
692
705 repo._fsmonitorstate = fsmonitorstate
693 repo._fsmonitorstate = fsmonitorstate
706 repo._watchmanclient = client
694 repo._watchmanclient = client
707
695
708 # at this point since fsmonitorstate wasn't present, repo.dirstate is
696 # at this point since fsmonitorstate wasn't present, repo.dirstate is
709 # not a fsmonitordirstate
697 # not a fsmonitordirstate
710 dirstate = repo.dirstate
698 dirstate = repo.dirstate
711 dirstate.__class__ = makedirstate(dirstate.__class__)
699 dirstate.__class__ = makedirstate(dirstate.__class__)
712 dirstate._fsmonitorinit(fsmonitorstate, client)
700 dirstate._fsmonitorinit(fsmonitorstate, client)
713 # invalidate property cache, but keep filecache which contains the
701 # invalidate property cache, but keep filecache which contains the
714 # wrapped dirstate object
702 # wrapped dirstate object
715 del repo.unfiltered().__dict__['dirstate']
703 del repo.unfiltered().__dict__['dirstate']
716 assert dirstate is repo._filecache['dirstate'].obj
704 assert dirstate is repo._filecache['dirstate'].obj
717
705
718 class fsmonitorrepo(repo.__class__):
706 class fsmonitorrepo(repo.__class__):
719 def status(self, *args, **kwargs):
707 def status(self, *args, **kwargs):
720 orig = super(fsmonitorrepo, self).status
708 orig = super(fsmonitorrepo, self).status
721 return overridestatus(orig, self, *args, **kwargs)
709 return overridestatus(orig, self, *args, **kwargs)
722
710
723 repo.__class__ = fsmonitorrepo
711 repo.__class__ = fsmonitorrepo
724
712
725 def wrapfilecache(cls, propname, wrapper):
713 def wrapfilecache(cls, propname, wrapper):
726 """Wraps a filecache property. These can't be wrapped using the normal
714 """Wraps a filecache property. These can't be wrapped using the normal
727 wrapfunction. This should eventually go into upstream Mercurial.
715 wrapfunction. This should eventually go into upstream Mercurial.
728 """
716 """
729 assert callable(wrapper)
717 assert callable(wrapper)
730 for currcls in cls.__mro__:
718 for currcls in cls.__mro__:
731 if propname in currcls.__dict__:
719 if propname in currcls.__dict__:
732 origfn = currcls.__dict__[propname].func
720 origfn = currcls.__dict__[propname].func
733 assert callable(origfn)
721 assert callable(origfn)
734 def wrap(*args, **kwargs):
722 def wrap(*args, **kwargs):
735 return wrapper(origfn, *args, **kwargs)
723 return wrapper(origfn, *args, **kwargs)
736 currcls.__dict__[propname].func = wrap
724 currcls.__dict__[propname].func = wrap
737 break
725 break
738
726
739 if currcls is object:
727 if currcls is object:
740 raise AttributeError(
728 raise AttributeError(
741 _("type '%s' has no property '%s'") % (cls, propname))
729 _("type '%s' has no property '%s'") % (cls, propname))
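The wrapfilecache() helper above works by mutating the descriptor's func attribute in place rather than replacing the method, since a filecache property can't be wrapped with the normal wrapfunction. A minimal, self-contained sketch of that pattern, where cachedprop, repoish, and wrapcachedprop are hypothetical stand-ins (not Mercurial APIs):

```python
class cachedprop(object):
    """Toy cache-on-first-access descriptor with a replaceable .func,
    loosely modeled on Mercurial's filecache (simplified sketch)."""
    def __init__(self, func):
        self.func = func
        self.name = func.__name__

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        if self.name not in obj.__dict__:
            # compute through whatever .func currently is, then cache
            obj.__dict__[self.name] = self.func(obj)
        return obj.__dict__[self.name]

class repoish(object):
    @cachedprop
    def dirstate(self):
        return ['base-dirstate']

def wrapcachedprop(cls, propname, wrapper):
    # same shape as wrapfilecache(): find the descriptor on the MRO and
    # swap its .func for a closure that routes through the wrapper
    for currcls in cls.__mro__:
        if propname in currcls.__dict__:
            origfn = currcls.__dict__[propname].func
            def wrap(*args, **kwargs):
                return wrapper(origfn, *args, **kwargs)
            currcls.__dict__[propname].func = wrap
            break

def tagging_wrapper(orig, self):
    value = orig(self)
    value.append('wrapped')
    return value

wrapcachedprop(repoish, 'dirstate', tagging_wrapper)
print(repoish().dirstate)  # ['base-dirstate', 'wrapped']
```

Mutating the existing descriptor (rather than installing a new class attribute) matters because other code may already hold a reference to the descriptor object, as the repo's _filecache does for 'dirstate'.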
@@ -1,2165 +1,2161 @@
1 # debugcommands.py - command processing for debug* commands
1 # debugcommands.py - command processing for debug* commands
2 #
2 #
3 # Copyright 2005-2016 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2016 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import difflib
10 import difflib
11 import errno
11 import errno
12 import operator
12 import operator
13 import os
13 import os
14 import random
14 import random
15 import socket
15 import socket
16 import string
16 import string
17 import sys
17 import sys
18 import tempfile
18 import tempfile
19 import time
19 import time
20
20
21 from .i18n import _
21 from .i18n import _
22 from .node import (
22 from .node import (
23 bin,
23 bin,
24 hex,
24 hex,
25 nullhex,
25 nullhex,
26 nullid,
26 nullid,
27 nullrev,
27 nullrev,
28 short,
28 short,
29 )
29 )
30 from . import (
30 from . import (
31 bundle2,
31 bundle2,
32 changegroup,
32 changegroup,
33 cmdutil,
33 cmdutil,
34 color,
34 color,
35 context,
35 context,
36 dagparser,
36 dagparser,
37 dagutil,
37 dagutil,
38 encoding,
38 encoding,
39 error,
39 error,
40 exchange,
40 exchange,
41 extensions,
41 extensions,
42 filemerge,
42 filemerge,
43 fileset,
43 fileset,
44 formatter,
44 formatter,
45 hg,
45 hg,
46 localrepo,
46 localrepo,
47 lock as lockmod,
47 lock as lockmod,
48 merge as mergemod,
48 merge as mergemod,
49 obsolete,
49 obsolete,
50 policy,
50 policy,
51 pvec,
51 pvec,
52 pycompat,
52 pycompat,
53 registrar,
53 registrar,
54 repair,
54 repair,
55 revlog,
55 revlog,
56 revset,
56 revset,
57 revsetlang,
57 revsetlang,
58 scmutil,
58 scmutil,
59 setdiscovery,
59 setdiscovery,
60 simplemerge,
60 simplemerge,
61 smartset,
61 smartset,
62 sslutil,
62 sslutil,
63 streamclone,
63 streamclone,
64 templater,
64 templater,
65 treediscovery,
65 treediscovery,
66 upgrade,
66 upgrade,
67 util,
67 util,
68 vfs as vfsmod,
68 vfs as vfsmod,
69 )
69 )
70
70
71 release = lockmod.release
71 release = lockmod.release
72
72
73 command = registrar.command()
73 command = registrar.command()
74
74
75 @command('debugancestor', [], _('[INDEX] REV1 REV2'), optionalrepo=True)
75 @command('debugancestor', [], _('[INDEX] REV1 REV2'), optionalrepo=True)
76 def debugancestor(ui, repo, *args):
76 def debugancestor(ui, repo, *args):
77 """find the ancestor revision of two revisions in a given index"""
77 """find the ancestor revision of two revisions in a given index"""
78 if len(args) == 3:
78 if len(args) == 3:
79 index, rev1, rev2 = args
79 index, rev1, rev2 = args
80 r = revlog.revlog(vfsmod.vfs(pycompat.getcwd(), audit=False), index)
80 r = revlog.revlog(vfsmod.vfs(pycompat.getcwd(), audit=False), index)
81 lookup = r.lookup
81 lookup = r.lookup
82 elif len(args) == 2:
82 elif len(args) == 2:
83 if not repo:
83 if not repo:
84 raise error.Abort(_('there is no Mercurial repository here '
84 raise error.Abort(_('there is no Mercurial repository here '
85 '(.hg not found)'))
85 '(.hg not found)'))
86 rev1, rev2 = args
86 rev1, rev2 = args
87 r = repo.changelog
87 r = repo.changelog
88 lookup = repo.lookup
88 lookup = repo.lookup
89 else:
89 else:
90 raise error.Abort(_('either two or three arguments required'))
90 raise error.Abort(_('either two or three arguments required'))
91 a = r.ancestor(lookup(rev1), lookup(rev2))
91 a = r.ancestor(lookup(rev1), lookup(rev2))
92 ui.write('%d:%s\n' % (r.rev(a), hex(a)))
92 ui.write('%d:%s\n' % (r.rev(a), hex(a)))
93
93
94 @command('debugapplystreamclonebundle', [], 'FILE')
94 @command('debugapplystreamclonebundle', [], 'FILE')
95 def debugapplystreamclonebundle(ui, repo, fname):
95 def debugapplystreamclonebundle(ui, repo, fname):
96 """apply a stream clone bundle file"""
96 """apply a stream clone bundle file"""
97 f = hg.openpath(ui, fname)
97 f = hg.openpath(ui, fname)
98 gen = exchange.readbundle(ui, f, fname)
98 gen = exchange.readbundle(ui, f, fname)
99 gen.apply(repo)
99 gen.apply(repo)
100
100
101 @command('debugbuilddag',
101 @command('debugbuilddag',
102 [('m', 'mergeable-file', None, _('add single file mergeable changes')),
102 [('m', 'mergeable-file', None, _('add single file mergeable changes')),
103 ('o', 'overwritten-file', None, _('add single file all revs overwrite')),
103 ('o', 'overwritten-file', None, _('add single file all revs overwrite')),
104 ('n', 'new-file', None, _('add new file at each rev'))],
104 ('n', 'new-file', None, _('add new file at each rev'))],
105 _('[OPTION]... [TEXT]'))
105 _('[OPTION]... [TEXT]'))
106 def debugbuilddag(ui, repo, text=None,
106 def debugbuilddag(ui, repo, text=None,
107 mergeable_file=False,
107 mergeable_file=False,
108 overwritten_file=False,
108 overwritten_file=False,
109 new_file=False):
109 new_file=False):
110 """builds a repo with a given DAG from scratch in the current empty repo
110 """builds a repo with a given DAG from scratch in the current empty repo
111
111
112 The description of the DAG is read from stdin if not given on the
112 The description of the DAG is read from stdin if not given on the
113 command line.
113 command line.
114
114
115 Elements:
115 Elements:
116
116
117 - "+n" is a linear run of n nodes based on the current default parent
117 - "+n" is a linear run of n nodes based on the current default parent
118 - "." is a single node based on the current default parent
118 - "." is a single node based on the current default parent
119 - "$" resets the default parent to null (implied at the start);
119 - "$" resets the default parent to null (implied at the start);
120 otherwise the default parent is always the last node created
120 otherwise the default parent is always the last node created
121 - "<p" sets the default parent to the backref p
121 - "<p" sets the default parent to the backref p
122 - "*p" is a fork at parent p, which is a backref
122 - "*p" is a fork at parent p, which is a backref
123 - "*p1/p2" is a merge of parents p1 and p2, which are backrefs
123 - "*p1/p2" is a merge of parents p1 and p2, which are backrefs
124 - "/p2" is a merge of the preceding node and p2
124 - "/p2" is a merge of the preceding node and p2
125 - ":tag" defines a local tag for the preceding node
125 - ":tag" defines a local tag for the preceding node
126 - "@branch" sets the named branch for subsequent nodes
126 - "@branch" sets the named branch for subsequent nodes
127 - "#...\\n" is a comment up to the end of the line
127 - "#...\\n" is a comment up to the end of the line
128
128
129 Whitespace between the above elements is ignored.
129 Whitespace between the above elements is ignored.
130
130
131 A backref is either
131 A backref is either
132
132
133 - a number n, which references the node curr-n, where curr is the current
133 - a number n, which references the node curr-n, where curr is the current
134 node, or
134 node, or
135 - the name of a local tag you placed earlier using ":tag", or
135 - the name of a local tag you placed earlier using ":tag", or
136 - empty to denote the default parent.
136 - empty to denote the default parent.
137
137
138 All string valued-elements are either strictly alphanumeric, or must
138 All string valued-elements are either strictly alphanumeric, or must
139 be enclosed in double quotes ("..."), with "\\" as escape character.
139 be enclosed in double quotes ("..."), with "\\" as escape character.
140 """
140 """
141
141
142 if text is None:
142 if text is None:
143 ui.status(_("reading DAG from stdin\n"))
143 ui.status(_("reading DAG from stdin\n"))
144 text = ui.fin.read()
144 text = ui.fin.read()
145
145
146 cl = repo.changelog
146 cl = repo.changelog
147 if len(cl) > 0:
147 if len(cl) > 0:
148 raise error.Abort(_('repository is not empty'))
148 raise error.Abort(_('repository is not empty'))
149
149
150 # determine number of revs in DAG
150 # determine number of revs in DAG
151 total = 0
151 total = 0
152 for type, data in dagparser.parsedag(text):
152 for type, data in dagparser.parsedag(text):
153 if type == 'n':
153 if type == 'n':
154 total += 1
154 total += 1
155
155
156 if mergeable_file:
156 if mergeable_file:
157 linesperrev = 2
157 linesperrev = 2
158 # make a file with k lines per rev
158 # make a file with k lines per rev
159 initialmergedlines = [str(i) for i in xrange(0, total * linesperrev)]
159 initialmergedlines = [str(i) for i in xrange(0, total * linesperrev)]
160 initialmergedlines.append("")
160 initialmergedlines.append("")
161
161
162 tags = []
162 tags = []
163
163
164 wlock = lock = tr = None
164 wlock = lock = tr = None
165 try:
165 try:
166 wlock = repo.wlock()
166 wlock = repo.wlock()
167 lock = repo.lock()
167 lock = repo.lock()
168 tr = repo.transaction("builddag")
168 tr = repo.transaction("builddag")
169
169
170 at = -1
170 at = -1
171 atbranch = 'default'
171 atbranch = 'default'
172 nodeids = []
172 nodeids = []
173 id = 0
173 id = 0
174 ui.progress(_('building'), id, unit=_('revisions'), total=total)
174 ui.progress(_('building'), id, unit=_('revisions'), total=total)
175 for type, data in dagparser.parsedag(text):
175 for type, data in dagparser.parsedag(text):
176 if type == 'n':
176 if type == 'n':
177 ui.note(('node %s\n' % str(data)))
177 ui.note(('node %s\n' % str(data)))
178 id, ps = data
178 id, ps = data
179
179
180 files = []
180 files = []
181 fctxs = {}
181 fctxs = {}
182
182
183 p2 = None
183 p2 = None
184 if mergeable_file:
184 if mergeable_file:
185 fn = "mf"
185 fn = "mf"
186 p1 = repo[ps[0]]
186 p1 = repo[ps[0]]
187 if len(ps) > 1:
187 if len(ps) > 1:
188 p2 = repo[ps[1]]
188 p2 = repo[ps[1]]
189 pa = p1.ancestor(p2)
189 pa = p1.ancestor(p2)
190 base, local, other = [x[fn].data() for x in (pa, p1,
190 base, local, other = [x[fn].data() for x in (pa, p1,
191 p2)]
191 p2)]
192 m3 = simplemerge.Merge3Text(base, local, other)
192 m3 = simplemerge.Merge3Text(base, local, other)
193 ml = [l.strip() for l in m3.merge_lines()]
193 ml = [l.strip() for l in m3.merge_lines()]
194 ml.append("")
194 ml.append("")
195 elif at > 0:
195 elif at > 0:
196 ml = p1[fn].data().split("\n")
196 ml = p1[fn].data().split("\n")
197 else:
197 else:
198 ml = initialmergedlines
198 ml = initialmergedlines
199 ml[id * linesperrev] += " r%i" % id
199 ml[id * linesperrev] += " r%i" % id
200 mergedtext = "\n".join(ml)
200 mergedtext = "\n".join(ml)
201 files.append(fn)
201 files.append(fn)
202 fctxs[fn] = context.memfilectx(repo, fn, mergedtext)
202 fctxs[fn] = context.memfilectx(repo, fn, mergedtext)
203
203
204 if overwritten_file:
204 if overwritten_file:
205 fn = "of"
205 fn = "of"
206 files.append(fn)
206 files.append(fn)
207 fctxs[fn] = context.memfilectx(repo, fn, "r%i\n" % id)
207 fctxs[fn] = context.memfilectx(repo, fn, "r%i\n" % id)
208
208
209 if new_file:
209 if new_file:
210 fn = "nf%i" % id
210 fn = "nf%i" % id
211 files.append(fn)
211 files.append(fn)
212 fctxs[fn] = context.memfilectx(repo, fn, "r%i\n" % id)
212 fctxs[fn] = context.memfilectx(repo, fn, "r%i\n" % id)
213 if len(ps) > 1:
213 if len(ps) > 1:
214 if not p2:
214 if not p2:
215 p2 = repo[ps[1]]
215 p2 = repo[ps[1]]
216 for fn in p2:
216 for fn in p2:
217 if fn.startswith("nf"):
217 if fn.startswith("nf"):
218 files.append(fn)
218 files.append(fn)
219 fctxs[fn] = p2[fn]
219 fctxs[fn] = p2[fn]
220
220
221 def fctxfn(repo, cx, path):
221 def fctxfn(repo, cx, path):
222 return fctxs.get(path)
222 return fctxs.get(path)
223
223
224 if len(ps) == 0 or ps[0] < 0:
224 if len(ps) == 0 or ps[0] < 0:
225 pars = [None, None]
225 pars = [None, None]
226 elif len(ps) == 1:
226 elif len(ps) == 1:
227 pars = [nodeids[ps[0]], None]
227 pars = [nodeids[ps[0]], None]
228 else:
228 else:
229 pars = [nodeids[p] for p in ps]
229 pars = [nodeids[p] for p in ps]
230 cx = context.memctx(repo, pars, "r%i" % id, files, fctxfn,
230 cx = context.memctx(repo, pars, "r%i" % id, files, fctxfn,
231 date=(id, 0),
231 date=(id, 0),
232 user="debugbuilddag",
232 user="debugbuilddag",
233 extra={'branch': atbranch})
233 extra={'branch': atbranch})
234 nodeid = repo.commitctx(cx)
234 nodeid = repo.commitctx(cx)
235 nodeids.append(nodeid)
235 nodeids.append(nodeid)
236 at = id
236 at = id
237 elif type == 'l':
237 elif type == 'l':
238 id, name = data
238 id, name = data
239 ui.note(('tag %s\n' % name))
239 ui.note(('tag %s\n' % name))
240 tags.append("%s %s\n" % (hex(repo.changelog.node(id)), name))
240 tags.append("%s %s\n" % (hex(repo.changelog.node(id)), name))
241 elif type == 'a':
241 elif type == 'a':
242 ui.note(('branch %s\n' % data))
242 ui.note(('branch %s\n' % data))
243 atbranch = data
243 atbranch = data
244 ui.progress(_('building'), id, unit=_('revisions'), total=total)
244 ui.progress(_('building'), id, unit=_('revisions'), total=total)
245 tr.close()
245 tr.close()
246
246
247 if tags:
247 if tags:
248 repo.vfs.write("localtags", "".join(tags))
248 repo.vfs.write("localtags", "".join(tags))
249 finally:
249 finally:
250 ui.progress(_('building'), None)
250 ui.progress(_('building'), None)
251 release(tr, lock, wlock)
251 release(tr, lock, wlock)
252
252
def _debugchangegroup(ui, gen, all=None, indent=0, **opts):
    indent_string = ' ' * indent
    if all:
        ui.write(("%sformat: id, p1, p2, cset, delta base, len(delta)\n")
                 % indent_string)

        def showchunks(named):
            ui.write("\n%s%s\n" % (indent_string, named))
            chain = None
            for chunkdata in iter(lambda: gen.deltachunk(chain), {}):
                node = chunkdata['node']
                p1 = chunkdata['p1']
                p2 = chunkdata['p2']
                cs = chunkdata['cs']
                deltabase = chunkdata['deltabase']
                delta = chunkdata['delta']
                ui.write("%s%s %s %s %s %s %s\n" %
                         (indent_string, hex(node), hex(p1), hex(p2),
                          hex(cs), hex(deltabase), len(delta)))
                chain = node

        chunkdata = gen.changelogheader()
        showchunks("changelog")
        chunkdata = gen.manifestheader()
        showchunks("manifest")
        for chunkdata in iter(gen.filelogheader, {}):
            fname = chunkdata['filename']
            showchunks(fname)
    else:
        if isinstance(gen, bundle2.unbundle20):
            raise error.Abort(_('use debugbundle2 for this file'))
        chunkdata = gen.changelogheader()
        chain = None
        for chunkdata in iter(lambda: gen.deltachunk(chain), {}):
            node = chunkdata['node']
            ui.write("%s%s\n" % (indent_string, hex(node)))
            chain = node

def _debugbundle2(ui, gen, all=None, **opts):
    """lists the contents of a bundle2"""
    if not isinstance(gen, bundle2.unbundle20):
        raise error.Abort(_('not a bundle2 file'))
    ui.write(('Stream params: %s\n' % repr(gen.params)))
    for part in gen.iterparts():
        ui.write('%s -- %r\n' % (part.type, repr(part.params)))
        if part.type == 'changegroup':
            version = part.params.get('version', '01')
            cg = changegroup.getunbundler(version, part, 'UN')
            _debugchangegroup(ui, cg, all=all, indent=4, **opts)

@command('debugbundle',
    [('a', 'all', None, _('show all details')),
     ('', 'spec', None, _('print the bundlespec of the bundle'))],
    _('FILE'),
    norepo=True)
def debugbundle(ui, bundlepath, all=None, spec=None, **opts):
    """lists the contents of a bundle"""
    with hg.openpath(ui, bundlepath) as f:
        if spec:
            spec = exchange.getbundlespec(ui, f)
            ui.write('%s\n' % spec)
            return

        gen = exchange.readbundle(ui, f, bundlepath)
        if isinstance(gen, bundle2.unbundle20):
            return _debugbundle2(ui, gen, all=all, **opts)
        _debugchangegroup(ui, gen, all=all, **opts)

@command('debugcheckstate', [], '')
def debugcheckstate(ui, repo):
    """validate the correctness of the current dirstate"""
    parent1, parent2 = repo.dirstate.parents()
    m1 = repo[parent1].manifest()
    m2 = repo[parent2].manifest()
    errors = 0
    for f in repo.dirstate:
        state = repo.dirstate[f]
        if state in "nr" and f not in m1:
            ui.warn(_("%s in state %s, but not in manifest1\n") % (f, state))
            errors += 1
        if state in "a" and f in m1:
            ui.warn(_("%s in state %s, but also in manifest1\n") % (f, state))
            errors += 1
        if state in "m" and f not in m1 and f not in m2:
            ui.warn(_("%s in state %s, but not in either manifest\n") %
                    (f, state))
            errors += 1
    for f in m1:
        state = repo.dirstate[f]
        if state not in "nrm":
            ui.warn(_("%s in manifest1, but listed as state %s\n") % (f, state))
            errors += 1
    if errors:
        # use a name that does not shadow the 'error' module, which is
        # needed for error.Abort on the next line
        errstr = _(".hg/dirstate inconsistent with current parent's manifest")
        raise error.Abort(errstr)

@command('debugcolor',
    [('', 'style', None, _('show all configured styles'))],
    'hg debugcolor')
def debugcolor(ui, repo, **opts):
    """show available color, effects or style"""
    ui.write(('color mode: %s\n') % ui._colormode)
    if opts.get('style'):
        return _debugdisplaystyle(ui)
    else:
        return _debugdisplaycolor(ui)

def _debugdisplaycolor(ui):
    ui = ui.copy()
    ui._styles.clear()
    for effect in color._activeeffects(ui).keys():
        ui._styles[effect] = effect
    if ui._terminfoparams:
        for k, v in ui.configitems('color'):
            if k.startswith('color.'):
                ui._styles[k] = k[6:]
            elif k.startswith('terminfo.'):
                ui._styles[k] = k[9:]
    ui.write(_('available colors:\n'))
    # sort label with a '_' after the other to group '_background' entry.
    items = sorted(ui._styles.items(),
                   key=lambda i: ('_' in i[0], i[0], i[1]))
    for colorname, label in items:
        ui.write(('%s\n') % colorname, label=label)

def _debugdisplaystyle(ui):
    ui.write(_('available style:\n'))
    width = max(len(s) for s in ui._styles)
    for label, effects in sorted(ui._styles.items()):
        ui.write('%s' % label, label=label)
        if effects:
            # 50
            ui.write(': ')
            ui.write(' ' * (max(0, width - len(label))))
            ui.write(', '.join(ui.label(e, e) for e in effects.split()))
        ui.write('\n')

@command('debugcreatestreamclonebundle', [], 'FILE')
def debugcreatestreamclonebundle(ui, repo, fname):
    """create a stream clone bundle file

    Stream bundles are special bundles that are essentially archives of
    revlog files. They are commonly used for cloning very quickly.
    """
    requirements, gen = streamclone.generatebundlev1(repo)
    changegroup.writechunks(ui, gen, fname)

    ui.write(_('bundle requirements: %s\n') % ', '.join(sorted(requirements)))

@command('debugdag',
    [('t', 'tags', None, _('use tags as labels')),
     ('b', 'branches', None, _('annotate with branch names')),
     ('', 'dots', None, _('use dots for runs')),
     ('s', 'spaces', None, _('separate elements by spaces'))],
    _('[OPTION]... [FILE [REV]...]'),
    optionalrepo=True)
def debugdag(ui, repo, file_=None, *revs, **opts):
    """format the changelog or an index DAG as a concise textual description

    If you pass a revlog index, the revlog's DAG is emitted. If you list
    revision numbers, they get labeled in the output as rN.

    Otherwise, the changelog DAG of the current repo is emitted.
    """
    spaces = opts.get('spaces')
    dots = opts.get('dots')
    if file_:
        rlog = revlog.revlog(vfsmod.vfs(pycompat.getcwd(), audit=False),
                             file_)
        revs = set((int(r) for r in revs))
        def events():
            for r in rlog:
                yield 'n', (r, list(p for p in rlog.parentrevs(r)
                                    if p != -1))
                if r in revs:
                    yield 'l', (r, "r%i" % r)
    elif repo:
        cl = repo.changelog
        tags = opts.get('tags')
        branches = opts.get('branches')
        if tags:
            labels = {}
            for l, n in repo.tags().items():
                labels.setdefault(cl.rev(n), []).append(l)
        def events():
            b = "default"
            for r in cl:
                if branches:
                    newb = cl.read(cl.node(r))[5]['branch']
                    if newb != b:
                        yield 'a', newb
                        b = newb
                yield 'n', (r, list(p for p in cl.parentrevs(r)
                                    if p != -1))
                if tags:
                    ls = labels.get(r)
                    if ls:
                        for l in ls:
                            yield 'l', (r, l)
    else:
        raise error.Abort(_('need repo for changelog dag'))

    for line in dagparser.dagtextlines(events(),
                                       addspaces=spaces,
                                       wraplabels=True,
                                       wrapannotations=True,
                                       wrapnonlinear=dots,
                                       usedots=dots,
                                       maxlinewidth=70):
        ui.write(line)
        ui.write("\n")

@command('debugdata', cmdutil.debugrevlogopts, _('-c|-m|FILE REV'))
def debugdata(ui, repo, file_, rev=None, **opts):
    """dump the contents of a data file revision"""
    if opts.get('changelog') or opts.get('manifest') or opts.get('dir'):
        if rev is not None:
            raise error.CommandError('debugdata', _('invalid arguments'))
        file_, rev = None, file_
    elif rev is None:
        raise error.CommandError('debugdata', _('invalid arguments'))
    r = cmdutil.openrevlog(repo, 'debugdata', file_, opts)
    try:
        ui.write(r.revision(r.lookup(rev), raw=True))
    except KeyError:
        raise error.Abort(_('invalid revision identifier %s') % rev)

@command('debugdate',
    [('e', 'extended', None, _('try extended date formats'))],
    _('[-e] DATE [RANGE]'),
    norepo=True, optionalrepo=True)
def debugdate(ui, date, range=None, **opts):
    """parse and display a date"""
    if opts["extended"]:
        d = util.parsedate(date, util.extendeddateformats)
    else:
        d = util.parsedate(date)
    ui.write(("internal: %s %s\n") % d)
    ui.write(("standard: %s\n") % util.datestr(d))
    if range:
        m = util.matchdate(range)
        ui.write(("match: %s\n") % m(d[0]))

@command('debugdeltachain',
    cmdutil.debugrevlogopts + cmdutil.formatteropts,
    _('-c|-m|FILE'),
    optionalrepo=True)
def debugdeltachain(ui, repo, file_=None, **opts):
    """dump information about delta chains in a revlog

    Output can be templatized. Available template keywords are:

    :``rev``:       revision number
    :``chainid``:   delta chain identifier (numbered by unique base)
    :``chainlen``:  delta chain length to this revision
    :``prevrev``:   previous revision in delta chain
    :``deltatype``: role of delta / how it was computed
    :``compsize``:  compressed size of revision
    :``uncompsize``: uncompressed size of revision
    :``chainsize``: total size of compressed revisions in chain
    :``chainratio``: total chain size divided by uncompressed revision size
                     (new delta chains typically start at ratio 2.00)
    :``lindist``:   linear distance from base revision in delta chain to end
                    of this revision
    :``extradist``: total size of revisions not part of this delta chain from
                    base of delta chain to end of this revision; a measurement
                    of how much extra data we need to read/seek across to read
                    the delta chain for this revision
    :``extraratio``: extradist divided by chainsize; another representation of
                     how much unrelated data is needed to load this delta chain
    """
    r = cmdutil.openrevlog(repo, 'debugdeltachain', file_, opts)
    index = r.index
    generaldelta = r.version & revlog.FLAG_GENERALDELTA

    def revinfo(rev):
        e = index[rev]
        compsize = e[1]
        uncompsize = e[2]
        chainsize = 0

        if generaldelta:
            if e[3] == e[5]:
                deltatype = 'p1'
            elif e[3] == e[6]:
                deltatype = 'p2'
            elif e[3] == rev - 1:
                deltatype = 'prev'
            elif e[3] == rev:
                deltatype = 'base'
            else:
                deltatype = 'other'
        else:
            if e[3] == rev:
                deltatype = 'base'
            else:
                deltatype = 'prev'

        chain = r._deltachain(rev)[0]
        for iterrev in chain:
            e = index[iterrev]
            chainsize += e[1]

        return compsize, uncompsize, deltatype, chain, chainsize

    fm = ui.formatter('debugdeltachain', opts)

    fm.plain('    rev  chain# chainlen     prev   delta       '
             'size    rawsize  chainsize     ratio   lindist extradist '
             'extraratio\n')

    chainbases = {}
    for rev in r:
        comp, uncomp, deltatype, chain, chainsize = revinfo(rev)
        chainbase = chain[0]
        chainid = chainbases.setdefault(chainbase, len(chainbases) + 1)
        basestart = r.start(chainbase)
        revstart = r.start(rev)
        lineardist = revstart + comp - basestart
        extradist = lineardist - chainsize
        try:
            prevrev = chain[-2]
        except IndexError:
            prevrev = -1

        chainratio = float(chainsize) / float(uncomp)
        extraratio = float(extradist) / float(chainsize)

        fm.startitem()
        fm.write('rev chainid chainlen prevrev deltatype compsize '
                 'uncompsize chainsize chainratio lindist extradist '
                 'extraratio',
                 '%7d %7d %8d %8d %7s %10d %10d %10d %9.5f %9d %9d %10.5f\n',
                 rev, chainid, len(chain), prevrev, deltatype, comp,
                 uncomp, chainsize, chainratio, lineardist, extradist,
                 extraratio,
                 rev=rev, chainid=chainid, chainlen=len(chain),
                 prevrev=prevrev, deltatype=deltatype, compsize=comp,
                 uncompsize=uncomp, chainsize=chainsize,
                 chainratio=chainratio, lindist=lineardist,
                 extradist=extradist, extraratio=extraratio)

    fm.end()

@command('debugdirstate|debugstate',
    [('', 'nodates', None, _('do not display the saved mtime')),
     ('', 'datesort', None, _('sort by saved mtime'))],
    _('[OPTION]...'))
def debugstate(ui, repo, **opts):
    """show the contents of the current dirstate"""

    nodates = opts.get('nodates')
    datesort = opts.get('datesort')

    timestr = ""
    if datesort:
        keyfunc = lambda x: (x[1][3], x[0]) # sort by mtime, then by filename
    else:
        keyfunc = None # sort by filename
    for file_, ent in sorted(repo.dirstate._map.iteritems(), key=keyfunc):
        if ent[3] == -1:
            timestr = 'unset               '
        elif nodates:
            timestr = 'set                 '
        else:
            timestr = time.strftime("%Y-%m-%d %H:%M:%S ",
                                    time.localtime(ent[3]))
        if ent[1] & 0o20000:
            mode = 'lnk'
        else:
            mode = '%3o' % (ent[1] & 0o777 & ~util.umask)
        ui.write("%c %s %10d %s%s\n" % (ent[0], mode, ent[2], timestr, file_))
    for f in repo.dirstate.copies():
        ui.write(_("copy: %s -> %s\n") % (repo.dirstate.copied(f), f))

@command('debugdiscovery',
    [('', 'old', None, _('use old-style discovery')),
     ('', 'nonheads', None,
      _('use old-style discovery with non-heads included')),
    ] + cmdutil.remoteopts,
    _('[-l REV] [-r REV] [-b BRANCH]... [OTHER]'))
def debugdiscovery(ui, repo, remoteurl="default", **opts):
    """runs the changeset discovery protocol in isolation"""
    remoteurl, branches = hg.parseurl(ui.expandpath(remoteurl),
                                      opts.get('branch'))
    remote = hg.peer(repo, opts, remoteurl)
    ui.status(_('comparing with %s\n') % util.hidepassword(remoteurl))

    # make sure tests are repeatable
    random.seed(12323)

    def doit(localheads, remoteheads, remote=remote):
        if opts.get('old'):
            if localheads:
                raise error.Abort('cannot use localheads with old style '
                                  'discovery')
            if not util.safehasattr(remote, 'branches'):
                # enable in-client legacy support
                remote = localrepo.locallegacypeer(remote.local())
            common, _in, hds = treediscovery.findcommonincoming(repo, remote,
                                                                force=True)
            common = set(common)
            if not opts.get('nonheads'):
                ui.write(("unpruned common: %s\n") %
                         " ".join(sorted(short(n) for n in common)))
                dag = dagutil.revlogdag(repo.changelog)
                all = dag.ancestorset(dag.internalizeall(common))
                common = dag.externalizeall(dag.headsetofconnecteds(all))
        else:
            common, any, hds = setdiscovery.findcommonheads(ui, repo, remote)
        common = set(common)
        rheads = set(hds)
        lheads = set(repo.heads())
        ui.write(("common heads: %s\n") %
                 " ".join(sorted(short(n) for n in common)))
        if lheads <= common:
            ui.write(("local is subset\n"))
        elif rheads <= common:
            ui.write(("remote is subset\n"))

    serverlogs = opts.get('serverlog')
    if serverlogs:
        for filename in serverlogs:
            with open(filename, 'r') as logfile:
                line = logfile.readline()
                while line:
                    parts = line.strip().split(';')
                    op = parts[1]
                    if op == 'cg':
                        pass
                    elif op == 'cgss':
                        doit(parts[2].split(' '), parts[3].split(' '))
                    elif op == 'unb':
                        doit(parts[3].split(' '), parts[2].split(' '))
                    line = logfile.readline()
    else:
        remoterevs, _checkout = hg.addbranchrevs(repo, remote, branches,
                                                 opts.get('remote_head'))
        localrevs = opts.get('local_head')
        doit(localrevs, remoterevs)

@command('debugextensions', cmdutil.formatteropts, [], norepo=True)
def debugextensions(ui, **opts):
    '''show information about active extensions'''
    exts = extensions.extensions(ui)
    hgver = util.version()
    fm = ui.formatter('debugextensions', opts)
    for extname, extmod in sorted(exts, key=operator.itemgetter(0)):
        isinternal = extensions.ismoduleinternal(extmod)
        extsource = pycompat.fsencode(extmod.__file__)
        if isinternal:
            exttestedwith = []  # never expose magic string to users
        else:
            exttestedwith = getattr(extmod, 'testedwith', '').split()
        extbuglink = getattr(extmod, 'buglink', None)

        fm.startitem()

        if ui.quiet or ui.verbose:
            fm.write('name', '%s\n', extname)
        else:
            fm.write('name', '%s', extname)
            if isinternal or hgver in exttestedwith:
                fm.plain('\n')
            elif not exttestedwith:
                fm.plain(_(' (untested!)\n'))
            else:
                lasttestedversion = exttestedwith[-1]
                fm.plain(' (%s!)\n' % lasttestedversion)

        fm.condwrite(ui.verbose and extsource, 'source',
                     _('  location: %s\n'), extsource or "")

        if ui.verbose:
            fm.plain(_('  bundled: %s\n') % ['no', 'yes'][isinternal])
        fm.data(bundled=isinternal)

        fm.condwrite(ui.verbose and exttestedwith, 'testedwith',
                     _('  tested with: %s\n'),
                     fm.formatlist(exttestedwith, name='ver'))

        fm.condwrite(ui.verbose and extbuglink, 'buglink',
                     _('  bug reporting: %s\n'), extbuglink or "")

    fm.end()

@command('debugfileset',
    [('r', 'rev', '', _('apply the filespec on this revision'), _('REV'))],
    _('[-r REV] FILESPEC'))
def debugfileset(ui, repo, expr, **opts):
    '''parse and apply a fileset specification'''
    ctx = scmutil.revsingle(repo, opts.get('rev'), None)
    if ui.verbose:
        tree = fileset.parse(expr)
        ui.note(fileset.prettyformat(tree), "\n")

    for f in ctx.getfileset(expr):
        ui.write("%s\n" % f)

@command('debugfsinfo', [], _('[PATH]'), norepo=True)
def debugfsinfo(ui, path="."):
    """show information detected about current filesystem"""
    ui.write(('exec: %s\n') % (util.checkexec(path) and 'yes' or 'no'))
    ui.write(('fstype: %s\n') % (util.getfstype(path) or '(unknown)'))
    ui.write(('symlink: %s\n') % (util.checklink(path) and 'yes' or 'no'))
    ui.write(('hardlink: %s\n') % (util.checknlink(path) and 'yes' or 'no'))
    casesensitive = '(unknown)'
    try:
        with tempfile.NamedTemporaryFile(prefix='.debugfsinfo', dir=path) as f:
            casesensitive = util.fscasesensitive(f.name) and 'yes' or 'no'
    except OSError:
        pass
    ui.write(('case-sensitive: %s\n') % casesensitive)

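# The case-sensitivity probe above (create a temporary file, then see whether
# a differently-cased name still resolves to something) can be sketched in
# standalone form. `fs_case_sensitive` and its probe logic below are a
# hypothetical illustration, not Mercurial's util.fscasesensitive:

```python
import os
import tempfile

def fs_case_sensitive(path="."):
    """Probe whether the filesystem at ``path`` is case-sensitive.

    Returns True/False, or None when the probe cannot be performed
    (mirroring debugfsinfo's '(unknown)' fallback)."""
    try:
        with tempfile.NamedTemporaryFile(prefix=".casetest", dir=path) as f:
            name = os.path.basename(f.name)
            # On a case-insensitive filesystem the swapped-case name
            # resolves to the file we just created.
            swapped = name.swapcase()
            return not os.path.exists(os.path.join(path, swapped))
    except OSError:
        return None
```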
@command('debuggetbundle',
    [('H', 'head', [], _('id of head node'), _('ID')),
    ('C', 'common', [], _('id of common node'), _('ID')),
    ('t', 'type', 'bzip2', _('bundle compression type to use'), _('TYPE'))],
    _('REPO FILE [-H|-C ID]...'),
    norepo=True)
def debuggetbundle(ui, repopath, bundlepath, head=None, common=None, **opts):
    """retrieves a bundle from a repo

    Every ID must be a full-length hex node id string. Saves the bundle to the
    given file.
    """
    repo = hg.peer(ui, opts, repopath)
    if not repo.capable('getbundle'):
        raise error.Abort("getbundle() not supported by target repository")
    args = {}
    if common:
        args['common'] = [bin(s) for s in common]
    if head:
        args['heads'] = [bin(s) for s in head]
    # TODO: get desired bundlecaps from command line.
    args['bundlecaps'] = None
    bundle = repo.getbundle('debug', **args)

    bundletype = opts.get('type', 'bzip2').lower()
    btypes = {'none': 'HG10UN',
              'bzip2': 'HG10BZ',
              'gzip': 'HG10GZ',
              'bundle2': 'HG20'}
    bundletype = btypes.get(bundletype)
    if bundletype not in bundle2.bundletypes:
        raise error.Abort(_('unknown bundle type specified with --type'))
    bundle2.writebundle(ui, bundle, bundlepath, bundletype)

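# The --type resolution above is a dict lookup from a user-facing name to an
# internal format tag, rejecting anything unknown. A standalone sketch of
# that step (the names `BUNDLE_TYPES` / `resolve_bundle_type` are hypothetical;
# the tag values are the ones from the btypes mapping above):

```python
BUNDLE_TYPES = {'none': 'HG10UN',
                'bzip2': 'HG10BZ',
                'gzip': 'HG10GZ',
                'bundle2': 'HG20'}

def resolve_bundle_type(name, default='bzip2'):
    """Map a case-insensitive user-supplied compression name to an
    internal bundle format tag, raising on unknown names."""
    tag = BUNDLE_TYPES.get((name or default).lower())
    if tag is None:
        raise ValueError('unknown bundle type: %r' % name)
    return tag
```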
@command('debugignore', [], '[FILE]')
def debugignore(ui, repo, *files, **opts):
    """display the combined ignore pattern and information about ignored files

    With no argument display the combined ignore pattern.

    Given space separated file names, shows if the given file is ignored and
    if so, show the ignore rule (file and line number) that matched it.
    """
    ignore = repo.dirstate._ignore
    if not files:
        # Show all the patterns
        ui.write("%s\n" % repr(ignore))
    else:
        for f in files:
            nf = util.normpath(f)
            ignored = None
            ignoredata = None
            if nf != '.':
                if ignore(nf):
                    ignored = nf
                    ignoredata = repo.dirstate._ignorefileandline(nf)
                else:
                    for p in util.finddirs(nf):
                        if ignore(p):
                            ignored = p
                            ignoredata = repo.dirstate._ignorefileandline(p)
                            break
            if ignored:
                if ignored == nf:
                    ui.write(_("%s is ignored\n") % f)
                else:
                    ui.write(_("%s is ignored because of "
                               "containing folder %s\n")
                             % (f, ignored))
                ignorefile, lineno, line = ignoredata
                ui.write(_("(ignore rule in %s, line %d: '%s')\n")
                         % (ignorefile, lineno, line))
            else:
                ui.write(_("%s is not ignored\n") % f)

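# The containing-folder walk above checks each parent directory of the path
# against the ignore matcher, deepest first. A minimal sketch of that parent
# iteration (a hypothetical helper in the spirit of util.finddirs, not the
# Mercurial API itself):

```python
def finddirs(path):
    """Yield the parent directories of a '/'-separated path, deepest
    first: 'a/b/c' -> 'a/b', then 'a'."""
    pos = path.rfind('/')
    while pos != -1:
        yield path[:pos]
        pos = path.rfind('/', 0, pos)
```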
@command('debugindex', cmdutil.debugrevlogopts +
    [('f', 'format', 0, _('revlog format'), _('FORMAT'))],
    _('[-f FORMAT] -c|-m|FILE'),
    optionalrepo=True)
def debugindex(ui, repo, file_=None, **opts):
    """dump the contents of an index file"""
    r = cmdutil.openrevlog(repo, 'debugindex', file_, opts)
    format = opts.get('format', 0)
    if format not in (0, 1):
        raise error.Abort(_("unknown format %d") % format)

    generaldelta = r.version & revlog.FLAG_GENERALDELTA
    if generaldelta:
        basehdr = ' delta'
    else:
        basehdr = '  base'

    if ui.debugflag:
        shortfn = hex
    else:
        shortfn = short

    # There might not be anything in r, so have a sane default
    idlen = 12
    for i in r:
        idlen = len(shortfn(r.node(i)))
        break

    if format == 0:
        ui.write(("   rev    offset  length " + basehdr + " linkrev"
                  " %s %s p2\n") % ("nodeid".ljust(idlen), "p1".ljust(idlen)))
    elif format == 1:
        ui.write(("   rev flag   offset   length"
                  "     size " + basehdr + "   link     p1     p2"
                  " %s\n") % "nodeid".rjust(idlen))

    for i in r:
        node = r.node(i)
        if generaldelta:
            base = r.deltaparent(i)
        else:
            base = r.chainbase(i)
        if format == 0:
            try:
                pp = r.parents(node)
            except Exception:
                pp = [nullid, nullid]
            ui.write("% 6d % 9d % 7d % 6d % 7d %s %s %s\n" % (
                i, r.start(i), r.length(i), base, r.linkrev(i),
                shortfn(node), shortfn(pp[0]), shortfn(pp[1])))
        elif format == 1:
            pr = r.parentrevs(i)
            ui.write("% 6d %04x % 8d % 8d % 8d % 6d % 6d % 6d % 6d %s\n" % (
                i, r.flags(i), r.start(i), r.length(i), r.rawsize(i),
                base, r.linkrev(i), pr[0], pr[1], shortfn(node)))

@command('debugindexdot', cmdutil.debugrevlogopts,
    _('-c|-m|FILE'), optionalrepo=True)
def debugindexdot(ui, repo, file_=None, **opts):
    """dump an index DAG as a graphviz dot file"""
    r = cmdutil.openrevlog(repo, 'debugindexdot', file_, opts)
    ui.write(("digraph G {\n"))
    for i in r:
        node = r.node(i)
        pp = r.parents(node)
        ui.write("\t%d -> %d\n" % (r.rev(pp[0]), i))
        if pp[1] != nullid:
            ui.write("\t%d -> %d\n" % (r.rev(pp[1]), i))
    ui.write("}\n")

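# debugindexdot emits one "parent -> child" dot edge per parent link, skipping
# the null second parent. The same rendering can be sketched over a plain
# {rev: (p1, p2)} mapping (a hypothetical standalone helper, with None
# standing in for nullid):

```python
def dag_to_dot(parents):
    """Render a revision DAG given as {rev: (p1, p2)} (p2 may be None)
    as Graphviz dot edges, parent -> child."""
    lines = ["digraph G {"]
    for rev in sorted(parents):
        p1, p2 = parents[rev]
        if p1 is not None:
            lines.append("\t%d -> %d" % (p1, rev))
        if p2 is not None:  # skip the null parent, as debugindexdot does
            lines.append("\t%d -> %d" % (p2, rev))
    lines.append("}")
    return "\n".join(lines)
```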
@command('debuginstall', [] + cmdutil.formatteropts, '', norepo=True)
def debuginstall(ui, **opts):
    '''test Mercurial installation

    Returns 0 on success.
    '''

    def writetemp(contents):
        (fd, name) = tempfile.mkstemp(prefix="hg-debuginstall-")
        f = os.fdopen(fd, pycompat.sysstr("wb"))
        f.write(contents)
        f.close()
        return name

    problems = 0

    fm = ui.formatter('debuginstall', opts)
    fm.startitem()

    # encoding
    fm.write('encoding', _("checking encoding (%s)...\n"), encoding.encoding)
    err = None
    try:
        encoding.fromlocal("test")
    except error.Abort as inst:
        err = inst
        problems += 1
    fm.condwrite(err, 'encodingerror', _(" %s\n"
                 " (check that your locale is properly set)\n"), err)

    # Python
    fm.write('pythonexe', _("checking Python executable (%s)\n"),
             pycompat.sysexecutable)
    fm.write('pythonver', _("checking Python version (%s)\n"),
             ("%d.%d.%d" % sys.version_info[:3]))
    fm.write('pythonlib', _("checking Python lib (%s)...\n"),
             os.path.dirname(pycompat.fsencode(os.__file__)))

    security = set(sslutil.supportedprotocols)
    if sslutil.hassni:
        security.add('sni')

    fm.write('pythonsecurity', _("checking Python security support (%s)\n"),
             fm.formatlist(sorted(security), name='protocol',
                           fmt='%s', sep=','))

    # These are warnings, not errors. So don't increment problem count. This
    # may change in the future.
    if 'tls1.2' not in security:
        fm.plain(_('  TLS 1.2 not supported by Python install; '
                   'network connections lack modern security\n'))
    if 'sni' not in security:
        fm.plain(_('  SNI not supported by Python install; may have '
                   'connectivity issues with some servers\n'))

    # TODO print CA cert info

    # hg version
    hgver = util.version()
    fm.write('hgver', _("checking Mercurial version (%s)\n"),
             hgver.split('+')[0])
    fm.write('hgverextra', _("checking Mercurial custom build (%s)\n"),
             '+'.join(hgver.split('+')[1:]))

    # compiled modules
    fm.write('hgmodulepolicy', _("checking module policy (%s)\n"),
             policy.policy)
    fm.write('hgmodules', _("checking installed modules (%s)...\n"),
             os.path.dirname(pycompat.fsencode(__file__)))

    if policy.policy in ('c', 'allow'):
        err = None
        try:
            from .cext import (
                base85,
                bdiff,
                mpatch,
                osutil,
            )
            dir(bdiff), dir(mpatch), dir(base85), dir(osutil)  # quiet pyflakes
        except Exception as inst:
            err = inst
            problems += 1
        fm.condwrite(err, 'extensionserror', " %s\n", err)

    compengines = util.compengines._engines.values()
    fm.write('compengines', _('checking registered compression engines (%s)\n'),
             fm.formatlist(sorted(e.name() for e in compengines),
                           name='compengine', fmt='%s', sep=', '))
    fm.write('compenginesavail', _('checking available compression engines '
                                   '(%s)\n'),
             fm.formatlist(sorted(e.name() for e in compengines
                                  if e.available()),
                           name='compengine', fmt='%s', sep=', '))
    wirecompengines = util.compengines.supportedwireengines(util.SERVERROLE)
    fm.write('compenginesserver', _('checking available compression engines '
                                    'for wire protocol (%s)\n'),
             fm.formatlist([e.name() for e in wirecompengines
                            if e.wireprotosupport()],
                           name='compengine', fmt='%s', sep=', '))

    # templates
    p = templater.templatepaths()
    fm.write('templatedirs', 'checking templates (%s)...\n', ' '.join(p))
    fm.condwrite(not p, '', _(" no template directories found\n"))
    if p:
        m = templater.templatepath("map-cmdline.default")
        if m:
            # template found, check if it is working
            err = None
            try:
                templater.templater.frommapfile(m)
            except Exception as inst:
                err = inst
                p = None
            fm.condwrite(err, 'defaulttemplateerror', " %s\n", err)
        else:
            p = None
        fm.condwrite(p, 'defaulttemplate',
                     _("checking default template (%s)\n"), m)
        fm.condwrite(not m, 'defaulttemplatenotfound',
                     _(" template '%s' not found\n"), "default")
    if not p:
        problems += 1
    fm.condwrite(not p, '',
                 _(" (templates seem to have been installed incorrectly)\n"))

    # editor
    editor = ui.geteditor()
    editor = util.expandpath(editor)
    fm.write('editor', _("checking commit editor... (%s)\n"), editor)
    cmdpath = util.findexe(pycompat.shlexsplit(editor)[0])
    fm.condwrite(not cmdpath and editor == 'vi', 'vinotfound',
                 _(" No commit editor set and can't find %s in PATH\n"
                   " (specify a commit editor in your configuration"
                   " file)\n"), not cmdpath and editor == 'vi' and editor)
    fm.condwrite(not cmdpath and editor != 'vi', 'editornotfound',
                 _(" Can't find editor '%s' in PATH\n"
                   " (specify a commit editor in your configuration"
                   " file)\n"), not cmdpath and editor)
    if not cmdpath and editor != 'vi':
        problems += 1

    # check username
    username = None
    err = None
    try:
        username = ui.username()
    except error.Abort as e:
        err = e
        problems += 1

    fm.condwrite(username, 'username', _("checking username (%s)\n"), username)
    fm.condwrite(err, 'usernameerror', _("checking username...\n %s\n"
                 " (specify a username in your configuration file)\n"), err)

    fm.condwrite(not problems, '',
                 _("no problems detected\n"))
    if not problems:
        fm.data(problems=problems)
    fm.condwrite(problems, 'problems',
                 _("%d problems detected,"
                   " please check your install!\n"), problems)
    fm.end()

    return problems

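# The editor check above relies on util.findexe to decide whether the
# configured command exists on PATH. A minimal sketch of such a PATH search
# (the name `find_exe` is hypothetical; Mercurial's real util.findexe also
# handles Windows PATHEXT and absolute paths):

```python
import os

def find_exe(command):
    """Return the full path of ``command`` if an executable file with
    that name is found on PATH, else None."""
    for d in os.environ.get("PATH", "").split(os.pathsep):
        candidate = os.path.join(d, command)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None
```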
@command('debugknown', [], _('REPO ID...'), norepo=True)
def debugknown(ui, repopath, *ids, **opts):
    """test whether node ids are known to a repo

    Every ID must be a full-length hex node id string. Returns a list of 0s
    and 1s indicating unknown/known.
    """
    repo = hg.peer(ui, opts, repopath)
    if not repo.capable('known'):
        raise error.Abort("known() not supported by target repository")
    flags = repo.known([bin(s) for s in ids])
    ui.write("%s\n" % ("".join([f and "1" or "0" for f in flags])))

@command('debuglabelcomplete', [], _('LABEL...'))
def debuglabelcomplete(ui, repo, *args):
    '''backwards compatibility with old bash completion scripts (DEPRECATED)'''
    debugnamecomplete(ui, repo, *args)

@command('debuglocks',
    [('L', 'force-lock', None, _('free the store lock (DANGEROUS)')),
    ('W', 'force-wlock', None,
    _('free the working state lock (DANGEROUS)'))],
    _('[OPTION]...'))
def debuglocks(ui, repo, **opts):
    """show or modify state of locks

    By default, this command will show which locks are held. This
    includes the user and process holding the lock, the amount of time
    the lock has been held, and the machine name where the process is
    running if it's not local.

    Locks protect the integrity of Mercurial's data, so should be
    treated with care. System crashes or other interruptions may cause
    locks to not be properly released, though Mercurial will usually
    detect and remove such stale locks automatically.

    However, detecting stale locks may not always be possible (for
    instance, on a shared filesystem). Removing locks may also be
    blocked by filesystem permissions.

    Returns 0 if no locks are held.

    """

    if opts.get('force_lock'):
        repo.svfs.unlink('lock')
    if opts.get('force_wlock'):
        repo.vfs.unlink('wlock')
    if opts.get('force_lock') or opts.get('force_wlock'):
        return 0

    now = time.time()
    held = 0

    def report(vfs, name, method):
        # this causes stale locks to get reaped for more accurate reporting
        try:
            l = method(False)
        except error.LockHeld:
            l = None

        if l:
            l.release()
        else:
            try:
                stat = vfs.lstat(name)
                age = now - stat.st_mtime
                user = util.username(stat.st_uid)
                locker = vfs.readlock(name)
                if ":" in locker:
                    host, pid = locker.split(':')
                    if host == socket.gethostname():
                        locker = 'user %s, process %s' % (user, pid)
                    else:
                        locker = 'user %s, process %s, host %s' \
                                 % (user, pid, host)
                ui.write(("%-6s %s (%ds)\n") % (name + ":", locker, age))
                return 1
            except OSError as e:
                if e.errno != errno.ENOENT:
                    raise

        ui.write(("%-6s free\n") % (name + ":"))
        return 0

    held += report(repo.svfs, "lock", repo.lock)
    held += report(repo.vfs, "wlock", repo.wlock)

    return held

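# The report() helper above turns a lock file's "host:pid" payload into a
# human-readable description, dropping the host when it matches the local
# machine. That formatting step can be sketched standalone (`describe_locker`
# is a hypothetical helper, not part of Mercurial's API):

```python
import socket

def describe_locker(locker, user):
    """Format a lock payload of the form 'host:pid', omitting the host
    when it is the local machine. Payloads without a colon are returned
    unchanged."""
    if ":" not in locker:
        return locker
    host, pid = locker.split(":", 1)
    if host == socket.gethostname():
        return "user %s, process %s" % (user, pid)
    return "user %s, process %s, host %s" % (user, pid, host)
```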
@command('debugmergestate', [], '')
def debugmergestate(ui, repo, *args):
    """print merge state

    Use --verbose to print out information about whether v1 or v2 merge state
    was chosen."""
    def _hashornull(h):
        if h == nullhex:
            return 'null'
        else:
            return h

    def printrecords(version):
        ui.write(('* version %s records\n') % version)
        if version == 1:
            records = v1records
        else:
            records = v2records

        for rtype, record in records:
            # pretty print some record types
            if rtype == 'L':
                ui.write(('local: %s\n') % record)
            elif rtype == 'O':
                ui.write(('other: %s\n') % record)
            elif rtype == 'm':
                driver, mdstate = record.split('\0', 1)
                ui.write(('merge driver: %s (state "%s")\n')
                         % (driver, mdstate))
            elif rtype in 'FDC':
                r = record.split('\0')
                f, state, hash, lfile, afile, anode, ofile = r[0:7]
                if version == 1:
                    onode = 'not stored in v1 format'
                    flags = r[7]
                else:
                    onode, flags = r[7:9]
                ui.write(('file: %s (record type "%s", state "%s", hash %s)\n')
                         % (f, rtype, state, _hashornull(hash)))
                ui.write(('  local path: %s (flags "%s")\n') % (lfile, flags))
                ui.write(('  ancestor path: %s (node %s)\n')
                         % (afile, _hashornull(anode)))
                ui.write(('  other path: %s (node %s)\n')
                         % (ofile, _hashornull(onode)))
            elif rtype == 'f':
                filename, rawextras = record.split('\0', 1)
                extras = rawextras.split('\0')
                i = 0
                extrastrings = []
                while i < len(extras):
                    extrastrings.append('%s = %s' % (extras[i], extras[i + 1]))
                    i += 2

                ui.write(('file extras: %s (%s)\n')
                         % (filename, ', '.join(extrastrings)))
            elif rtype == 'l':
                labels = record.split('\0', 2)
                labels = [l for l in labels if len(l) > 0]
                ui.write(('labels:\n'))
                ui.write(('  local: %s\n' % labels[0]))
                ui.write(('  other: %s\n' % labels[1]))
                if len(labels) > 2:
                    ui.write(('  base: %s\n' % labels[2]))
1231 ui.write((' base: %s\n' % labels[2]))
1236 else:
1232 else:
1237 ui.write(('unrecognized entry: %s\t%s\n')
1233 ui.write(('unrecognized entry: %s\t%s\n')
1238 % (rtype, record.replace('\0', '\t')))
1234 % (rtype, record.replace('\0', '\t')))
1239
1235
1240 # Avoid mergestate.read() since it may raise an exception for unsupported
1236 # Avoid mergestate.read() since it may raise an exception for unsupported
1241 # merge state records. We shouldn't be doing this, but this is OK since this
1237 # merge state records. We shouldn't be doing this, but this is OK since this
1242 # command is pretty low-level.
1238 # command is pretty low-level.
1243 ms = mergemod.mergestate(repo)
1239 ms = mergemod.mergestate(repo)
1244
1240
1245 # sort so that reasonable information is on top
1241 # sort so that reasonable information is on top
1246 v1records = ms._readrecordsv1()
1242 v1records = ms._readrecordsv1()
1247 v2records = ms._readrecordsv2()
1243 v2records = ms._readrecordsv2()
1248 order = 'LOml'
1244 order = 'LOml'
1249 def key(r):
1245 def key(r):
1250 idx = order.find(r[0])
1246 idx = order.find(r[0])
1251 if idx == -1:
1247 if idx == -1:
1252 return (1, r[1])
1248 return (1, r[1])
1253 else:
1249 else:
1254 return (0, idx)
1250 return (0, idx)
1255 v1records.sort(key=key)
1251 v1records.sort(key=key)
1256 v2records.sort(key=key)
1252 v2records.sort(key=key)
1257
1253
1258 if not v1records and not v2records:
1254 if not v1records and not v2records:
1259 ui.write(('no merge state found\n'))
1255 ui.write(('no merge state found\n'))
1260 elif not v2records:
1256 elif not v2records:
1261 ui.note(('no version 2 merge state\n'))
1257 ui.note(('no version 2 merge state\n'))
1262 printrecords(1)
1258 printrecords(1)
1263 elif ms._v1v2match(v1records, v2records):
1259 elif ms._v1v2match(v1records, v2records):
1264 ui.note(('v1 and v2 states match: using v2\n'))
1260 ui.note(('v1 and v2 states match: using v2\n'))
1265 printrecords(2)
1261 printrecords(2)
1266 else:
1262 else:
1267 ui.note(('v1 and v2 states mismatch: using v1\n'))
1263 ui.note(('v1 and v2 states mismatch: using v1\n'))
1268 printrecords(1)
1264 printrecords(1)
1269 if ui.verbose:
1265 if ui.verbose:
1270 printrecords(2)
1266 printrecords(2)
1271
1267
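# The 'LOml' ordering used by debugmergestate above can be sketched in
# isolation. This is a hedged, standalone illustration; the name below is
# invented for the sketch and is not part of Mercurial's API. Record types
# listed in the order string sort first, in that listed order, and
# unrecognized types sort after them, by record payload.
def _demo_mergerecordkey(r, order='LOml'):
    # r is a (rtype, payload) pair, mirroring the records iterated above
    idx = order.find(r[0])
    if idx == -1:
        return (1, r[1])
    return (0, idx)
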
@command('debugnamecomplete', [], _('NAME...'))
def debugnamecomplete(ui, repo, *args):
    '''complete "names" - tags, open branch names, bookmark names'''

    names = set()
    # since we previously only listed open branches, we will handle that
    # specially (after this for loop)
    for name, ns in repo.names.iteritems():
        if name != 'branches':
            names.update(ns.listnames(repo))
    names.update(tag for (tag, heads, tip, closed)
                 in repo.branchmap().iterbranches() if not closed)
    completions = set()
    if not args:
        args = ['']
    for a in args:
        completions.update(n for n in names if n.startswith(a))
    ui.write('\n'.join(sorted(completions)))
    ui.write('\n')

@command('debugobsolete',
         [('', 'flags', 0, _('markers flag')),
          ('', 'record-parents', False,
           _('record parent information for the precursor')),
          ('r', 'rev', [], _('display markers relevant to REV')),
          ('', 'index', False, _('display index of the marker')),
          ('', 'delete', [], _('delete markers specified by indices')),
         ] + cmdutil.commitopts2 + cmdutil.formatteropts,
         _('[OBSOLETED [REPLACEMENT ...]]'))
def debugobsolete(ui, repo, precursor=None, *successors, **opts):
    """create arbitrary obsolete marker

    With no arguments, displays the list of obsolescence markers."""

    def parsenodeid(s):
        try:
            # We do not use revsingle/revrange functions here to accept
            # arbitrary node identifiers, possibly not present in the
            # local repository.
            n = bin(s)
            if len(n) != len(nullid):
                raise TypeError()
            return n
        except TypeError:
            raise error.Abort('changeset references must be full hexadecimal '
                              'node identifiers')

    if opts.get('delete'):
        indices = []
        for v in opts.get('delete'):
            try:
                indices.append(int(v))
            except ValueError:
                raise error.Abort(_('invalid index value: %r') % v,
                                  hint=_('use integers for indices'))

        if repo.currenttransaction():
            raise error.Abort(_('cannot delete obsmarkers in the middle '
                                'of transaction.'))

        with repo.lock():
            n = repair.deleteobsmarkers(repo.obsstore, indices)
            ui.write(_('deleted %i obsolescence markers\n') % n)

        return

    if precursor is not None:
        if opts['rev']:
            raise error.Abort('cannot select revision when creating marker')
        metadata = {}
        metadata['user'] = opts['user'] or ui.username()
        succs = tuple(parsenodeid(succ) for succ in successors)
        l = repo.lock()
        try:
            tr = repo.transaction('debugobsolete')
            try:
                date = opts.get('date')
                if date:
                    date = util.parsedate(date)
                else:
                    date = None
                prec = parsenodeid(precursor)
                parents = None
                if opts['record_parents']:
                    if prec not in repo.unfiltered():
                        raise error.Abort('cannot use --record-parents on '
                                          'unknown changesets')
                    parents = repo.unfiltered()[prec].parents()
                    parents = tuple(p.node() for p in parents)
                repo.obsstore.create(tr, prec, succs, opts['flags'],
                                     parents=parents, date=date,
                                     metadata=metadata)
                tr.close()
            except ValueError as exc:
                raise error.Abort(_('bad obsmarker input: %s') % exc)
            finally:
                tr.release()
        finally:
            l.release()
    else:
        if opts['rev']:
            revs = scmutil.revrange(repo, opts['rev'])
            nodes = [repo[r].node() for r in revs]
            markers = list(obsolete.getmarkers(repo, nodes=nodes))
            markers.sort(key=lambda x: x._data)
        else:
            markers = obsolete.getmarkers(repo)

        markerstoiter = markers
        isrelevant = lambda m: True
        if opts.get('rev') and opts.get('index'):
            markerstoiter = obsolete.getmarkers(repo)
            markerset = set(markers)
            isrelevant = lambda m: m in markerset

        fm = ui.formatter('debugobsolete', opts)
        for i, m in enumerate(markerstoiter):
            if not isrelevant(m):
                # marker can be irrelevant when we're iterating over a set
                # of markers (markerstoiter) which is bigger than the set
                # of markers we want to display (markers)
                # this can happen if both --index and --rev options are
                # provided and thus we need to iterate over all of the markers
                # to get the correct indices, but only display the ones that
                # are relevant to --rev value
                continue
            fm.startitem()
            ind = i if opts.get('index') else None
            cmdutil.showmarker(fm, m, index=ind)
        fm.end()

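# parsenodeid above accepts only full 40-character hexadecimal node ids.
# The length check can be sketched standalone; this is a hedged
# illustration using the stdlib (the helper name is invented and is not
# part of Mercurial's API).
import binascii

def _demo_parsenodeid(s):
    # a full binary node id is 20 bytes (40 hex digits)
    try:
        n = binascii.unhexlify(s)
    except (binascii.Error, ValueError):
        raise ValueError('changeset references must be full '
                         'hexadecimal node identifiers')
    if len(n) != 20:
        raise ValueError('changeset references must be full '
                         'hexadecimal node identifiers')
    return n
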
@command('debugpathcomplete',
         [('f', 'full', None, _('complete an entire path')),
          ('n', 'normal', None, _('show only normal files')),
          ('a', 'added', None, _('show only added files')),
          ('r', 'removed', None, _('show only removed files'))],
         _('FILESPEC...'))
def debugpathcomplete(ui, repo, *specs, **opts):
    '''complete part or all of a tracked path

    This command supports shells that offer path name completion. It
    currently completes only files already known to the dirstate.

    Completion extends only to the next path segment unless
    --full is specified, in which case entire paths are used.'''

    def complete(path, acceptable):
        dirstate = repo.dirstate
        spec = os.path.normpath(os.path.join(pycompat.getcwd(), path))
        rootdir = repo.root + pycompat.ossep
        if spec != repo.root and not spec.startswith(rootdir):
            return [], []
        if os.path.isdir(spec):
            spec += '/'
        spec = spec[len(rootdir):]
        fixpaths = pycompat.ossep != '/'
        if fixpaths:
            spec = spec.replace(pycompat.ossep, '/')
        speclen = len(spec)
        fullpaths = opts['full']
        files, dirs = set(), set()
        adddir, addfile = dirs.add, files.add
        for f, st in dirstate.iteritems():
            if f.startswith(spec) and st[0] in acceptable:
                if fixpaths:
                    f = f.replace('/', pycompat.ossep)
                if fullpaths:
                    addfile(f)
                    continue
                s = f.find(pycompat.ossep, speclen)
                if s >= 0:
                    adddir(f[:s])
                else:
                    addfile(f)
        return files, dirs

    acceptable = ''
    if opts['normal']:
        acceptable += 'nm'
    if opts['added']:
        acceptable += 'a'
    if opts['removed']:
        acceptable += 'r'
    cwd = repo.getcwd()
    if not specs:
        specs = ['.']

    files, dirs = set(), set()
    for spec in specs:
        f, d = complete(spec, acceptable or 'nmar')
        files.update(f)
        dirs.update(d)
    files.update(dirs)
    ui.write('\n'.join(repo.pathto(p, cwd) for p in sorted(files)))
    ui.write('\n')

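# The segment-at-a-time matching in complete() above can be sketched
# standalone. This is a hedged illustration with invented names, not
# Mercurial API: paths matching the prefix contribute either the next
# directory segment or, when no separator follows, the full file name.
def _demo_segmentcomplete(paths, spec, sep='/'):
    files, dirs = set(), set()
    speclen = len(spec)
    for f in paths:
        if not f.startswith(spec):
            continue
        s = f.find(sep, speclen)
        if s >= 0:
            # truncate at the next separator, yielding a directory
            dirs.add(f[:s])
        else:
            files.add(f)
    return files, dirs
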
@command('debugpickmergetool',
         [('r', 'rev', '', _('check for files in this revision'), _('REV')),
          ('', 'changedelete', None, _('emulate merging change and delete')),
         ] + cmdutil.walkopts + cmdutil.mergetoolopts,
         _('[PATTERN]...'),
         inferrepo=True)
def debugpickmergetool(ui, repo, *pats, **opts):
    """examine which merge tool is chosen for the specified file

    As described in :hg:`help merge-tools`, Mercurial examines the
    configurations below in this order to decide which merge tool is
    chosen for the specified file.

    1. ``--tool`` option
    2. ``HGMERGE`` environment variable
    3. configurations in ``merge-patterns`` section
    4. configuration of ``ui.merge``
    5. configurations in ``merge-tools`` section
    6. ``hgmerge`` tool (for historical reasons only)
    7. default tool for fallback (``:merge`` or ``:prompt``)

    This command writes out the examination result in the style below::

        FILE = MERGETOOL

    By default, all files known in the first parent context of the
    working directory are examined. Use file patterns and/or -I/-X
    options to limit target files. -r/--rev is also useful to examine
    files in another context without actually updating to it.

    With --debug, this command also shows warning messages while matching
    against ``merge-patterns`` and so on. It is recommended to
    use this option with explicit file patterns and/or -I/-X options,
    because this option increases the amount of output per file according
    to configurations in hgrc.

    With -v/--verbose, this command shows the configurations below
    first (only if specified).

    - ``--tool`` option
    - ``HGMERGE`` environment variable
    - configuration of ``ui.merge``

    If the merge tool is chosen before matching against
    ``merge-patterns``, this command can't show any helpful
    information, even with --debug. In such a case, the information
    above is useful for knowing why a merge tool is chosen.
    """
    overrides = {}
    if opts['tool']:
        overrides[('ui', 'forcemerge')] = opts['tool']
        ui.note(('with --tool %r\n') % (opts['tool']))

    with ui.configoverride(overrides, 'debugmergepatterns'):
        hgmerge = encoding.environ.get("HGMERGE")
        if hgmerge is not None:
            ui.note(('with HGMERGE=%r\n') % (hgmerge))
        uimerge = ui.config("ui", "merge")
        if uimerge:
            ui.note(('with ui.merge=%r\n') % (uimerge))

        ctx = scmutil.revsingle(repo, opts.get('rev'))
        m = scmutil.match(ctx, pats, opts)
        changedelete = opts['changedelete']
        for path in ctx.walk(m):
            fctx = ctx[path]
            try:
                if not ui.debugflag:
                    ui.pushbuffer(error=True)
                tool, toolpath = filemerge._picktool(repo, ui, path,
                                                     fctx.isbinary(),
                                                     'l' in fctx.flags(),
                                                     changedelete)
            finally:
                if not ui.debugflag:
                    ui.popbuffer()
            ui.write(('%s = %s\n') % (path, tool))

@command('debugpushkey', [], _('REPO NAMESPACE [KEY OLD NEW]'), norepo=True)
def debugpushkey(ui, repopath, namespace, *keyinfo, **opts):
    '''access the pushkey key/value protocol

    With two args, list the keys in the given namespace.

    With five args, set a key to new if it currently is set to old.
    Reports success or failure.
    '''

    target = hg.peer(ui, {}, repopath)
    if keyinfo:
        key, old, new = keyinfo
        r = target.pushkey(namespace, key, old, new)
        ui.status(str(r) + '\n')
        return not r
    else:
        for k, v in sorted(target.listkeys(namespace).iteritems()):
            ui.write("%s\t%s\n" % (util.escapestr(k),
                                   util.escapestr(v)))

@command('debugpvec', [], _('A B'))
def debugpvec(ui, repo, a, b=None):
    ca = scmutil.revsingle(repo, a)
    cb = scmutil.revsingle(repo, b)
    pa = pvec.ctxpvec(ca)
    pb = pvec.ctxpvec(cb)
    if pa == pb:
        rel = "="
    elif pa > pb:
        rel = ">"
    elif pa < pb:
        rel = "<"
    elif pa | pb:
        rel = "|"
    ui.write(_("a: %s\n") % pa)
    ui.write(_("b: %s\n") % pb)
    ui.write(_("depth(a): %d depth(b): %d\n") % (pa._depth, pb._depth))
    ui.write(_("delta: %d hdist: %d distance: %d relation: %s\n") %
             (abs(pa._depth - pb._depth), pvec._hamming(pa._vec, pb._vec),
              pa.distance(pb), rel))

@command('debugrebuilddirstate|debugrebuildstate',
         [('r', 'rev', '', _('revision to rebuild to'), _('REV')),
          ('', 'minimal', None, _('only rebuild files that are inconsistent '
                                  'with the working copy parent')),
         ],
         _('[-r REV]'))
def debugrebuilddirstate(ui, repo, rev, **opts):
    """rebuild the dirstate as it would look for the given revision

    If no revision is specified the first current parent will be used.

    The dirstate will be set to the files of the given revision.
    The actual working directory content or existing dirstate
    information such as adds or removes is not considered.

    ``minimal`` will only rebuild the dirstate status for files that claim to
    be tracked but are not in the parent manifest, or that exist in the parent
    manifest but are not in the dirstate. It will not change adds, removes, or
    modified files that are in the working copy parent.

    One use of this command is to make the next :hg:`status` invocation
    check the actual file content.
    """
    ctx = scmutil.revsingle(repo, rev)
    with repo.wlock():
        dirstate = repo.dirstate
        changedfiles = None
        # See command doc for what minimal does.
        if opts.get('minimal'):
            manifestfiles = set(ctx.manifest().keys())
            dirstatefiles = set(dirstate)
            manifestonly = manifestfiles - dirstatefiles
            dsonly = dirstatefiles - manifestfiles
            dsnotadded = set(f for f in dsonly if dirstate[f] != 'a')
            changedfiles = manifestonly | dsnotadded

        dirstate.rebuild(ctx.node(), ctx.manifest(), changedfiles)

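# The ``minimal`` set arithmetic above can be sketched standalone. This is
# a hedged illustration with invented names, not Mercurial API: rebuild
# only files that are in the manifest but not the dirstate, or in the
# dirstate but neither in the manifest nor marked as added.
def _demo_minimalchanged(manifestfiles, dirstatefiles, added):
    manifestonly = manifestfiles - dirstatefiles
    dsonly = dirstatefiles - manifestfiles
    dsnotadded = set(f for f in dsonly if f not in added)
    return manifestonly | dsnotadded
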
@command('debugrebuildfncache', [], '')
def debugrebuildfncache(ui, repo):
    """rebuild the fncache file"""
    repair.rebuildfncache(ui, repo)

@command('debugrename',
         [('r', 'rev', '', _('revision to debug'), _('REV'))],
         _('[-r REV] FILE'))
def debugrename(ui, repo, file1, *pats, **opts):
    """dump rename information"""

    ctx = scmutil.revsingle(repo, opts.get('rev'))
    m = scmutil.match(ctx, (file1,) + pats, opts)
    for abs in ctx.walk(m):
        fctx = ctx[abs]
        o = fctx.filelog().renamed(fctx.filenode())
        rel = m.rel(abs)
        if o:
            ui.write(_("%s renamed from %s:%s\n") % (rel, o[0], hex(o[1])))
        else:
            ui.write(_("%s not renamed\n") % rel)

@command('debugrevlog', cmdutil.debugrevlogopts +
         [('d', 'dump', False, _('dump index data'))],
         _('-c|-m|FILE'),
         optionalrepo=True)
def debugrevlog(ui, repo, file_=None, **opts):
    """show data and statistics about a revlog"""
    r = cmdutil.openrevlog(repo, 'debugrevlog', file_, opts)

    if opts.get("dump"):
        numrevs = len(r)
        ui.write(("# rev p1rev p2rev start end deltastart base p1 p2"
                  " rawsize totalsize compression heads chainlen\n"))
        ts = 0
        heads = set()

        for rev in xrange(numrevs):
            dbase = r.deltaparent(rev)
            if dbase == -1:
                dbase = rev
            cbase = r.chainbase(rev)
            clen = r.chainlen(rev)
            p1, p2 = r.parentrevs(rev)
            rs = r.rawsize(rev)
            ts = ts + rs
            heads -= set(r.parentrevs(rev))
            heads.add(rev)
            try:
                compression = ts / r.end(rev)
            except ZeroDivisionError:
                compression = 0
            ui.write("%5d %5d %5d %5d %5d %10d %4d %4d %4d %7d %9d "
                     "%11d %5d %8d\n" %
                     (rev, p1, p2, r.start(rev), r.end(rev),
                      r.start(dbase), r.start(cbase),
                      r.start(p1), r.start(p2),
                      rs, ts, compression, len(heads), clen))
        return 0

    v = r.version
    format = v & 0xFFFF
    flags = []
    gdelta = False
    if v & revlog.FLAG_INLINE_DATA:
        flags.append('inline')
    if v & revlog.FLAG_GENERALDELTA:
        gdelta = True
        flags.append('generaldelta')
    if not flags:
        flags = ['(none)']

    nummerges = 0
    numfull = 0
    numprev = 0
    nump1 = 0
    nump2 = 0
    numother = 0
    nump1prev = 0
1705 nump2prev = 0
1701 nump2prev = 0
1706 chainlengths = []
1702 chainlengths = []
1707
1703
1708 datasize = [None, 0, 0]
1704 datasize = [None, 0, 0]
1709 fullsize = [None, 0, 0]
1705 fullsize = [None, 0, 0]
1710 deltasize = [None, 0, 0]
1706 deltasize = [None, 0, 0]
1711 chunktypecounts = {}
1707 chunktypecounts = {}
1712 chunktypesizes = {}
1708 chunktypesizes = {}
1713
1709
1714 def addsize(size, l):
1710 def addsize(size, l):
1715 if l[0] is None or size < l[0]:
1711 if l[0] is None or size < l[0]:
1716 l[0] = size
1712 l[0] = size
1717 if size > l[1]:
1713 if size > l[1]:
1718 l[1] = size
1714 l[1] = size
1719 l[2] += size
1715 l[2] += size
1720
1716
1721 numrevs = len(r)
1717 numrevs = len(r)
1722 for rev in xrange(numrevs):
1718 for rev in xrange(numrevs):
1723 p1, p2 = r.parentrevs(rev)
1719 p1, p2 = r.parentrevs(rev)
1724 delta = r.deltaparent(rev)
1720 delta = r.deltaparent(rev)
1725 if format > 0:
1721 if format > 0:
1726 addsize(r.rawsize(rev), datasize)
1722 addsize(r.rawsize(rev), datasize)
1727 if p2 != nullrev:
1723 if p2 != nullrev:
1728 nummerges += 1
1724 nummerges += 1
1729 size = r.length(rev)
1725 size = r.length(rev)
1730 if delta == nullrev:
1726 if delta == nullrev:
1731 chainlengths.append(0)
1727 chainlengths.append(0)
1732 numfull += 1
1728 numfull += 1
1733 addsize(size, fullsize)
1729 addsize(size, fullsize)
1734 else:
1730 else:
1735 chainlengths.append(chainlengths[delta] + 1)
1731 chainlengths.append(chainlengths[delta] + 1)
1736 addsize(size, deltasize)
1732 addsize(size, deltasize)
1737 if delta == rev - 1:
1733 if delta == rev - 1:
1738 numprev += 1
1734 numprev += 1
1739 if delta == p1:
1735 if delta == p1:
1740 nump1prev += 1
1736 nump1prev += 1
1741 elif delta == p2:
1737 elif delta == p2:
1742 nump2prev += 1
1738 nump2prev += 1
1743 elif delta == p1:
1739 elif delta == p1:
1744 nump1 += 1
1740 nump1 += 1
1745 elif delta == p2:
1741 elif delta == p2:
1746 nump2 += 1
1742 nump2 += 1
1747 elif delta != nullrev:
1743 elif delta != nullrev:
1748 numother += 1
1744 numother += 1
1749
1745
1750 # Obtain data on the raw chunks in the revlog.
1746 # Obtain data on the raw chunks in the revlog.
1751 segment = r._getsegmentforrevs(rev, rev)[1]
1747 segment = r._getsegmentforrevs(rev, rev)[1]
1752 if segment:
1748 if segment:
1753 chunktype = segment[0]
1749 chunktype = segment[0]
1754 else:
1750 else:
1755 chunktype = 'empty'
1751 chunktype = 'empty'
1756
1752
1757 if chunktype not in chunktypecounts:
1753 if chunktype not in chunktypecounts:
1758 chunktypecounts[chunktype] = 0
1754 chunktypecounts[chunktype] = 0
1759 chunktypesizes[chunktype] = 0
1755 chunktypesizes[chunktype] = 0
1760
1756
1761 chunktypecounts[chunktype] += 1
1757 chunktypecounts[chunktype] += 1
1762 chunktypesizes[chunktype] += size
1758 chunktypesizes[chunktype] += size
1763
1759
1764 # Adjust size min value for empty cases
1760 # Adjust size min value for empty cases
1765 for size in (datasize, fullsize, deltasize):
1761 for size in (datasize, fullsize, deltasize):
1766 if size[0] is None:
1762 if size[0] is None:
1767 size[0] = 0
1763 size[0] = 0
1768
1764
1769 numdeltas = numrevs - numfull
1765 numdeltas = numrevs - numfull
1770 numoprev = numprev - nump1prev - nump2prev
1766 numoprev = numprev - nump1prev - nump2prev
1771 totalrawsize = datasize[2]
1767 totalrawsize = datasize[2]
1772 datasize[2] /= numrevs
1768 datasize[2] /= numrevs
1773 fulltotal = fullsize[2]
1769 fulltotal = fullsize[2]
1774 fullsize[2] /= numfull
1770 fullsize[2] /= numfull
1775 deltatotal = deltasize[2]
1771 deltatotal = deltasize[2]
1776 if numrevs - numfull > 0:
1772 if numrevs - numfull > 0:
1777 deltasize[2] /= numrevs - numfull
1773 deltasize[2] /= numrevs - numfull
1778 totalsize = fulltotal + deltatotal
1774 totalsize = fulltotal + deltatotal
1779 avgchainlen = sum(chainlengths) / numrevs
1775 avgchainlen = sum(chainlengths) / numrevs
1780 maxchainlen = max(chainlengths)
1776 maxchainlen = max(chainlengths)
1781 compratio = 1
1777 compratio = 1
1782 if totalsize:
1778 if totalsize:
1783 compratio = totalrawsize / totalsize
1779 compratio = totalrawsize / totalsize
1784
1780
1785 basedfmtstr = '%%%dd\n'
1781 basedfmtstr = '%%%dd\n'
1786 basepcfmtstr = '%%%dd %s(%%5.2f%%%%)\n'
1782 basepcfmtstr = '%%%dd %s(%%5.2f%%%%)\n'
1787
1783
1788 def dfmtstr(max):
1784 def dfmtstr(max):
1789 return basedfmtstr % len(str(max))
1785 return basedfmtstr % len(str(max))
1790 def pcfmtstr(max, padding=0):
1786 def pcfmtstr(max, padding=0):
1791 return basepcfmtstr % (len(str(max)), ' ' * padding)
1787 return basepcfmtstr % (len(str(max)), ' ' * padding)
1792
1788
1793 def pcfmt(value, total):
1789 def pcfmt(value, total):
1794 if total:
1790 if total:
1795 return (value, 100 * float(value) / total)
1791 return (value, 100 * float(value) / total)
1796 else:
1792 else:
1797 return value, 100.0
1793 return value, 100.0
1798
1794
1799 ui.write(('format : %d\n') % format)
1795 ui.write(('format : %d\n') % format)
1800 ui.write(('flags : %s\n') % ', '.join(flags))
1796 ui.write(('flags : %s\n') % ', '.join(flags))
1801
1797
1802 ui.write('\n')
1798 ui.write('\n')
1803 fmt = pcfmtstr(totalsize)
1799 fmt = pcfmtstr(totalsize)
1804 fmt2 = dfmtstr(totalsize)
1800 fmt2 = dfmtstr(totalsize)
1805 ui.write(('revisions : ') + fmt2 % numrevs)
1801 ui.write(('revisions : ') + fmt2 % numrevs)
1806 ui.write((' merges : ') + fmt % pcfmt(nummerges, numrevs))
1802 ui.write((' merges : ') + fmt % pcfmt(nummerges, numrevs))
1807 ui.write((' normal : ') + fmt % pcfmt(numrevs - nummerges, numrevs))
1803 ui.write((' normal : ') + fmt % pcfmt(numrevs - nummerges, numrevs))
1808 ui.write(('revisions : ') + fmt2 % numrevs)
1804 ui.write(('revisions : ') + fmt2 % numrevs)
1809 ui.write((' full : ') + fmt % pcfmt(numfull, numrevs))
1805 ui.write((' full : ') + fmt % pcfmt(numfull, numrevs))
1810 ui.write((' deltas : ') + fmt % pcfmt(numdeltas, numrevs))
1806 ui.write((' deltas : ') + fmt % pcfmt(numdeltas, numrevs))
1811 ui.write(('revision size : ') + fmt2 % totalsize)
1807 ui.write(('revision size : ') + fmt2 % totalsize)
1812 ui.write((' full : ') + fmt % pcfmt(fulltotal, totalsize))
1808 ui.write((' full : ') + fmt % pcfmt(fulltotal, totalsize))
1813 ui.write((' deltas : ') + fmt % pcfmt(deltatotal, totalsize))
1809 ui.write((' deltas : ') + fmt % pcfmt(deltatotal, totalsize))
1814
1810
1815 def fmtchunktype(chunktype):
1811 def fmtchunktype(chunktype):
1816 if chunktype == 'empty':
1812 if chunktype == 'empty':
1817 return ' %s : ' % chunktype
1813 return ' %s : ' % chunktype
1818 elif chunktype in string.ascii_letters:
1814 elif chunktype in string.ascii_letters:
1819 return ' 0x%s (%s) : ' % (hex(chunktype), chunktype)
1815 return ' 0x%s (%s) : ' % (hex(chunktype), chunktype)
1820 else:
1816 else:
1821 return ' 0x%s : ' % hex(chunktype)
1817 return ' 0x%s : ' % hex(chunktype)
1822
1818
1823 ui.write('\n')
1819 ui.write('\n')
1824 ui.write(('chunks : ') + fmt2 % numrevs)
1820 ui.write(('chunks : ') + fmt2 % numrevs)
1825 for chunktype in sorted(chunktypecounts):
1821 for chunktype in sorted(chunktypecounts):
1826 ui.write(fmtchunktype(chunktype))
1822 ui.write(fmtchunktype(chunktype))
1827 ui.write(fmt % pcfmt(chunktypecounts[chunktype], numrevs))
1823 ui.write(fmt % pcfmt(chunktypecounts[chunktype], numrevs))
1828 ui.write(('chunks size : ') + fmt2 % totalsize)
1824 ui.write(('chunks size : ') + fmt2 % totalsize)
1829 for chunktype in sorted(chunktypecounts):
1825 for chunktype in sorted(chunktypecounts):
1830 ui.write(fmtchunktype(chunktype))
1826 ui.write(fmtchunktype(chunktype))
1831 ui.write(fmt % pcfmt(chunktypesizes[chunktype], totalsize))
1827 ui.write(fmt % pcfmt(chunktypesizes[chunktype], totalsize))
1832
1828
1833 ui.write('\n')
1829 ui.write('\n')
1834 fmt = dfmtstr(max(avgchainlen, compratio))
1830 fmt = dfmtstr(max(avgchainlen, compratio))
1835 ui.write(('avg chain length : ') + fmt % avgchainlen)
1831 ui.write(('avg chain length : ') + fmt % avgchainlen)
1836 ui.write(('max chain length : ') + fmt % maxchainlen)
1832 ui.write(('max chain length : ') + fmt % maxchainlen)
1837 ui.write(('compression ratio : ') + fmt % compratio)
1833 ui.write(('compression ratio : ') + fmt % compratio)
1838
1834
1839 if format > 0:
1835 if format > 0:
1840 ui.write('\n')
1836 ui.write('\n')
1841 ui.write(('uncompressed data size (min/max/avg) : %d / %d / %d\n')
1837 ui.write(('uncompressed data size (min/max/avg) : %d / %d / %d\n')
1842 % tuple(datasize))
1838 % tuple(datasize))
1843 ui.write(('full revision size (min/max/avg) : %d / %d / %d\n')
1839 ui.write(('full revision size (min/max/avg) : %d / %d / %d\n')
1844 % tuple(fullsize))
1840 % tuple(fullsize))
1845 ui.write(('delta size (min/max/avg) : %d / %d / %d\n')
1841 ui.write(('delta size (min/max/avg) : %d / %d / %d\n')
1846 % tuple(deltasize))
1842 % tuple(deltasize))
1847
1843
1848 if numdeltas > 0:
1844 if numdeltas > 0:
1849 ui.write('\n')
1845 ui.write('\n')
1850 fmt = pcfmtstr(numdeltas)
1846 fmt = pcfmtstr(numdeltas)
1851 fmt2 = pcfmtstr(numdeltas, 4)
1847 fmt2 = pcfmtstr(numdeltas, 4)
1852 ui.write(('deltas against prev : ') + fmt % pcfmt(numprev, numdeltas))
1848 ui.write(('deltas against prev : ') + fmt % pcfmt(numprev, numdeltas))
1853 if numprev > 0:
1849 if numprev > 0:
1854 ui.write((' where prev = p1 : ') + fmt2 % pcfmt(nump1prev,
1850 ui.write((' where prev = p1 : ') + fmt2 % pcfmt(nump1prev,
1855 numprev))
1851 numprev))
1856 ui.write((' where prev = p2 : ') + fmt2 % pcfmt(nump2prev,
1852 ui.write((' where prev = p2 : ') + fmt2 % pcfmt(nump2prev,
1857 numprev))
1853 numprev))
1858 ui.write((' other : ') + fmt2 % pcfmt(numoprev,
1854 ui.write((' other : ') + fmt2 % pcfmt(numoprev,
1859 numprev))
1855 numprev))
1860 if gdelta:
1856 if gdelta:
1861 ui.write(('deltas against p1 : ')
1857 ui.write(('deltas against p1 : ')
1862 + fmt % pcfmt(nump1, numdeltas))
1858 + fmt % pcfmt(nump1, numdeltas))
1863 ui.write(('deltas against p2 : ')
1859 ui.write(('deltas against p2 : ')
1864 + fmt % pcfmt(nump2, numdeltas))
1860 + fmt % pcfmt(nump2, numdeltas))
1865 ui.write(('deltas against other : ') + fmt % pcfmt(numother,
1861 ui.write(('deltas against other : ') + fmt % pcfmt(numother,
1866 numdeltas))
1862 numdeltas))
1867
1863
1868 @command('debugrevspec',
1864 @command('debugrevspec',
1869 [('', 'optimize', None,
1865 [('', 'optimize', None,
1870 _('print parsed tree after optimizing (DEPRECATED)')),
1866 _('print parsed tree after optimizing (DEPRECATED)')),
1871 ('p', 'show-stage', [],
1867 ('p', 'show-stage', [],
1872 _('print parsed tree at the given stage'), _('NAME')),
1868 _('print parsed tree at the given stage'), _('NAME')),
1873 ('', 'no-optimized', False, _('evaluate tree without optimization')),
1869 ('', 'no-optimized', False, _('evaluate tree without optimization')),
1874 ('', 'verify-optimized', False, _('verify optimized result')),
1870 ('', 'verify-optimized', False, _('verify optimized result')),
1875 ],
1871 ],
1876 ('REVSPEC'))
1872 ('REVSPEC'))
1877 def debugrevspec(ui, repo, expr, **opts):
1873 def debugrevspec(ui, repo, expr, **opts):
1878 """parse and apply a revision specification
1874 """parse and apply a revision specification
1879
1875
1880 Use -p/--show-stage option to print the parsed tree at the given stages.
1876 Use -p/--show-stage option to print the parsed tree at the given stages.
1881 Use -p all to print tree at every stage.
1877 Use -p all to print tree at every stage.
1882
1878
1883 Use --verify-optimized to compare the optimized result with the unoptimized
1879 Use --verify-optimized to compare the optimized result with the unoptimized
1884 one. Returns 1 if the optimized result differs.
1880 one. Returns 1 if the optimized result differs.
1885 """
1881 """
1886 stages = [
1882 stages = [
1887 ('parsed', lambda tree: tree),
1883 ('parsed', lambda tree: tree),
1888 ('expanded', lambda tree: revsetlang.expandaliases(ui, tree)),
1884 ('expanded', lambda tree: revsetlang.expandaliases(ui, tree)),
1889 ('concatenated', revsetlang.foldconcat),
1885 ('concatenated', revsetlang.foldconcat),
1890 ('analyzed', revsetlang.analyze),
1886 ('analyzed', revsetlang.analyze),
1891 ('optimized', revsetlang.optimize),
1887 ('optimized', revsetlang.optimize),
1892 ]
1888 ]
1893 if opts['no_optimized']:
1889 if opts['no_optimized']:
1894 stages = stages[:-1]
1890 stages = stages[:-1]
1895 if opts['verify_optimized'] and opts['no_optimized']:
1891 if opts['verify_optimized'] and opts['no_optimized']:
1896 raise error.Abort(_('cannot use --verify-optimized with '
1892 raise error.Abort(_('cannot use --verify-optimized with '
1897 '--no-optimized'))
1893 '--no-optimized'))
1898 stagenames = set(n for n, f in stages)
1894 stagenames = set(n for n, f in stages)
1899
1895
1900 showalways = set()
1896 showalways = set()
1901 showchanged = set()
1897 showchanged = set()
1902 if ui.verbose and not opts['show_stage']:
1898 if ui.verbose and not opts['show_stage']:
1903 # show parsed tree by --verbose (deprecated)
1899 # show parsed tree by --verbose (deprecated)
1904 showalways.add('parsed')
1900 showalways.add('parsed')
1905 showchanged.update(['expanded', 'concatenated'])
1901 showchanged.update(['expanded', 'concatenated'])
1906 if opts['optimize']:
1902 if opts['optimize']:
1907 showalways.add('optimized')
1903 showalways.add('optimized')
1908 if opts['show_stage'] and opts['optimize']:
1904 if opts['show_stage'] and opts['optimize']:
1909 raise error.Abort(_('cannot use --optimize with --show-stage'))
1905 raise error.Abort(_('cannot use --optimize with --show-stage'))
1910 if opts['show_stage'] == ['all']:
1906 if opts['show_stage'] == ['all']:
1911 showalways.update(stagenames)
1907 showalways.update(stagenames)
1912 else:
1908 else:
1913 for n in opts['show_stage']:
1909 for n in opts['show_stage']:
1914 if n not in stagenames:
1910 if n not in stagenames:
1915 raise error.Abort(_('invalid stage name: %s') % n)
1911 raise error.Abort(_('invalid stage name: %s') % n)
1916 showalways.update(opts['show_stage'])
1912 showalways.update(opts['show_stage'])
1917
1913
1918 treebystage = {}
1914 treebystage = {}
1919 printedtree = None
1915 printedtree = None
1920 tree = revsetlang.parse(expr, lookup=repo.__contains__)
1916 tree = revsetlang.parse(expr, lookup=repo.__contains__)
1921 for n, f in stages:
1917 for n, f in stages:
1922 treebystage[n] = tree = f(tree)
1918 treebystage[n] = tree = f(tree)
1923 if n in showalways or (n in showchanged and tree != printedtree):
1919 if n in showalways or (n in showchanged and tree != printedtree):
1924 if opts['show_stage'] or n != 'parsed':
1920 if opts['show_stage'] or n != 'parsed':
1925 ui.write(("* %s:\n") % n)
1921 ui.write(("* %s:\n") % n)
1926 ui.write(revsetlang.prettyformat(tree), "\n")
1922 ui.write(revsetlang.prettyformat(tree), "\n")
1927 printedtree = tree
1923 printedtree = tree
1928
1924
1929 if opts['verify_optimized']:
1925 if opts['verify_optimized']:
1930 arevs = revset.makematcher(treebystage['analyzed'])(repo)
1926 arevs = revset.makematcher(treebystage['analyzed'])(repo)
1931 brevs = revset.makematcher(treebystage['optimized'])(repo)
1927 brevs = revset.makematcher(treebystage['optimized'])(repo)
1932 if ui.verbose:
1928 if ui.verbose:
1933 ui.note(("* analyzed set:\n"), smartset.prettyformat(arevs), "\n")
1929 ui.note(("* analyzed set:\n"), smartset.prettyformat(arevs), "\n")
1934 ui.note(("* optimized set:\n"), smartset.prettyformat(brevs), "\n")
1930 ui.note(("* optimized set:\n"), smartset.prettyformat(brevs), "\n")
1935 arevs = list(arevs)
1931 arevs = list(arevs)
1936 brevs = list(brevs)
1932 brevs = list(brevs)
1937 if arevs == brevs:
1933 if arevs == brevs:
1938 return 0
1934 return 0
1939 ui.write(('--- analyzed\n'), label='diff.file_a')
1935 ui.write(('--- analyzed\n'), label='diff.file_a')
1940 ui.write(('+++ optimized\n'), label='diff.file_b')
1936 ui.write(('+++ optimized\n'), label='diff.file_b')
1941 sm = difflib.SequenceMatcher(None, arevs, brevs)
1937 sm = difflib.SequenceMatcher(None, arevs, brevs)
1942 for tag, alo, ahi, blo, bhi in sm.get_opcodes():
1938 for tag, alo, ahi, blo, bhi in sm.get_opcodes():
1943 if tag in ('delete', 'replace'):
1939 if tag in ('delete', 'replace'):
1944 for c in arevs[alo:ahi]:
1940 for c in arevs[alo:ahi]:
1945 ui.write('-%s\n' % c, label='diff.deleted')
1941 ui.write('-%s\n' % c, label='diff.deleted')
1946 if tag in ('insert', 'replace'):
1942 if tag in ('insert', 'replace'):
1947 for c in brevs[blo:bhi]:
1943 for c in brevs[blo:bhi]:
1948 ui.write('+%s\n' % c, label='diff.inserted')
1944 ui.write('+%s\n' % c, label='diff.inserted')
1949 if tag == 'equal':
1945 if tag == 'equal':
1950 for c in arevs[alo:ahi]:
1946 for c in arevs[alo:ahi]:
1951 ui.write(' %s\n' % c)
1947 ui.write(' %s\n' % c)
1952 return 1
1948 return 1
1953
1949
1954 func = revset.makematcher(tree)
1950 func = revset.makematcher(tree)
1955 revs = func(repo)
1951 revs = func(repo)
1956 if ui.verbose:
1952 if ui.verbose:
1957 ui.note(("* set:\n"), smartset.prettyformat(revs), "\n")
1953 ui.note(("* set:\n"), smartset.prettyformat(revs), "\n")
1958 for c in revs:
1954 for c in revs:
1959 ui.write("%s\n" % c)
1955 ui.write("%s\n" % c)
1960
1956
1961 @command('debugsetparents', [], _('REV1 [REV2]'))
1957 @command('debugsetparents', [], _('REV1 [REV2]'))
1962 def debugsetparents(ui, repo, rev1, rev2=None):
1958 def debugsetparents(ui, repo, rev1, rev2=None):
1963 """manually set the parents of the current working directory
1959 """manually set the parents of the current working directory
1964
1960
1965 This is useful for writing repository conversion tools, but should
1961 This is useful for writing repository conversion tools, but should
1966 be used with care. For example, neither the working directory nor the
1962 be used with care. For example, neither the working directory nor the
1967 dirstate is updated, so file status may be incorrect after running this
1963 dirstate is updated, so file status may be incorrect after running this
1968 command.
1964 command.
1969
1965
1970 Returns 0 on success.
1966 Returns 0 on success.
1971 """
1967 """
1972
1968
1973 r1 = scmutil.revsingle(repo, rev1).node()
1969 r1 = scmutil.revsingle(repo, rev1).node()
1974 r2 = scmutil.revsingle(repo, rev2, 'null').node()
1970 r2 = scmutil.revsingle(repo, rev2, 'null').node()
1975
1971
1976 with repo.wlock():
1972 with repo.wlock():
1977 repo.setparents(r1, r2)
1973 repo.setparents(r1, r2)
1978
1974
1979 @command('debugsub',
1975 @command('debugsub',
1980 [('r', 'rev', '',
1976 [('r', 'rev', '',
1981 _('revision to check'), _('REV'))],
1977 _('revision to check'), _('REV'))],
1982 _('[-r REV] [REV]'))
1978 _('[-r REV] [REV]'))
1983 def debugsub(ui, repo, rev=None):
1979 def debugsub(ui, repo, rev=None):
1984 ctx = scmutil.revsingle(repo, rev, None)
1980 ctx = scmutil.revsingle(repo, rev, None)
1985 for k, v in sorted(ctx.substate.items()):
1981 for k, v in sorted(ctx.substate.items()):
1986 ui.write(('path %s\n') % k)
1982 ui.write(('path %s\n') % k)
1987 ui.write((' source %s\n') % v[0])
1983 ui.write((' source %s\n') % v[0])
1988 ui.write((' revision %s\n') % v[1])
1984 ui.write((' revision %s\n') % v[1])
1989
1985
1990 @command('debugsuccessorssets',
1986 @command('debugsuccessorssets',
1991 [],
1987 [],
1992 _('[REV]'))
1988 _('[REV]'))
1993 def debugsuccessorssets(ui, repo, *revs):
1989 def debugsuccessorssets(ui, repo, *revs):
1994 """show set of successors for revision
1990 """show set of successors for revision
1995
1991
1996 A successors set of changeset A is a consistent group of revisions that
1992 A successors set of changeset A is a consistent group of revisions that
1997 succeed A. It contains non-obsolete changesets only.
1993 succeed A. It contains non-obsolete changesets only.
1998
1994
1999 In most cases a changeset A has a single successors set containing a single
1995 In most cases a changeset A has a single successors set containing a single
2000 successor (changeset A replaced by A').
1996 successor (changeset A replaced by A').
2001
1997
2002 A changeset that is made obsolete with no successors are called "pruned".
1998 A changeset that is made obsolete with no successors are called "pruned".
2003 Such changesets have no successors sets at all.
1999 Such changesets have no successors sets at all.
2004
2000
2005 A changeset that has been "split" will have a successors set containing
2001 A changeset that has been "split" will have a successors set containing
2006 more than one successor.
2002 more than one successor.
2007
2003
2008 A changeset that has been rewritten in multiple different ways is called
2004 A changeset that has been rewritten in multiple different ways is called
2009 "divergent". Such changesets have multiple successor sets (each of which
2005 "divergent". Such changesets have multiple successor sets (each of which
2010 may also be split, i.e. have multiple successors).
2006 may also be split, i.e. have multiple successors).
2011
2007
2012 Results are displayed as follows::
2008 Results are displayed as follows::
2013
2009
2014 <rev1>
2010 <rev1>
2015 <successors-1A>
2011 <successors-1A>
2016 <rev2>
2012 <rev2>
2017 <successors-2A>
2013 <successors-2A>
2018 <successors-2B1> <successors-2B2> <successors-2B3>
2014 <successors-2B1> <successors-2B2> <successors-2B3>
2019
2015
2020 Here rev2 has two possible (i.e. divergent) successors sets. The first
2016 Here rev2 has two possible (i.e. divergent) successors sets. The first
2021 holds one element, whereas the second holds three (i.e. the changeset has
2017 holds one element, whereas the second holds three (i.e. the changeset has
2022 been split).
2018 been split).
2023 """
2019 """
2024 # passed to successorssets caching computation from one call to another
2020 # passed to successorssets caching computation from one call to another
2025 cache = {}
2021 cache = {}
2026 ctx2str = str
2022 ctx2str = str
2027 node2str = short
2023 node2str = short
2028 if ui.debug():
2024 if ui.debug():
2029 def ctx2str(ctx):
2025 def ctx2str(ctx):
2030 return ctx.hex()
2026 return ctx.hex()
2031 node2str = hex
2027 node2str = hex
2032 for rev in scmutil.revrange(repo, revs):
2028 for rev in scmutil.revrange(repo, revs):
2033 ctx = repo[rev]
2029 ctx = repo[rev]
2034 ui.write('%s\n'% ctx2str(ctx))
2030 ui.write('%s\n'% ctx2str(ctx))
2035 for succsset in obsolete.successorssets(repo, ctx.node(), cache):
2031 for succsset in obsolete.successorssets(repo, ctx.node(), cache):
2036 if succsset:
2032 if succsset:
2037 ui.write(' ')
2033 ui.write(' ')
2038 ui.write(node2str(succsset[0]))
2034 ui.write(node2str(succsset[0]))
2039 for node in succsset[1:]:
2035 for node in succsset[1:]:
2040 ui.write(' ')
2036 ui.write(' ')
2041 ui.write(node2str(node))
2037 ui.write(node2str(node))
2042 ui.write('\n')
2038 ui.write('\n')
2043
2039
2044 @command('debugtemplate',
2040 @command('debugtemplate',
2045 [('r', 'rev', [], _('apply template on changesets'), _('REV')),
2041 [('r', 'rev', [], _('apply template on changesets'), _('REV')),
2046 ('D', 'define', [], _('define template keyword'), _('KEY=VALUE'))],
2042 ('D', 'define', [], _('define template keyword'), _('KEY=VALUE'))],
2047 _('[-r REV]... [-D KEY=VALUE]... TEMPLATE'),
2043 _('[-r REV]... [-D KEY=VALUE]... TEMPLATE'),
2048 optionalrepo=True)
2044 optionalrepo=True)
2049 def debugtemplate(ui, repo, tmpl, **opts):
2045 def debugtemplate(ui, repo, tmpl, **opts):
2050 """parse and apply a template
2046 """parse and apply a template
2051
2047
2052 If -r/--rev is given, the template is processed as a log template and
2048 If -r/--rev is given, the template is processed as a log template and
2053 applied to the given changesets. Otherwise, it is processed as a generic
2049 applied to the given changesets. Otherwise, it is processed as a generic
2054 template.
2050 template.
2055
2051
2056 Use --verbose to print the parsed tree.
2052 Use --verbose to print the parsed tree.
2057 """
2053 """
2058 revs = None
2054 revs = None
2059 if opts['rev']:
2055 if opts['rev']:
2060 if repo is None:
2056 if repo is None:
2061 raise error.RepoError(_('there is no Mercurial repository here '
2057 raise error.RepoError(_('there is no Mercurial repository here '
2062 '(.hg not found)'))
2058 '(.hg not found)'))
2063 revs = scmutil.revrange(repo, opts['rev'])
2059 revs = scmutil.revrange(repo, opts['rev'])
2064
2060
2065 props = {}
2061 props = {}
2066 for d in opts['define']:
2062 for d in opts['define']:
2067 try:
2063 try:
2068 k, v = (e.strip() for e in d.split('=', 1))
2064 k, v = (e.strip() for e in d.split('=', 1))
2069 if not k or k == 'ui':
2065 if not k or k == 'ui':
2070 raise ValueError
2066 raise ValueError
2071 props[k] = v
2067 props[k] = v
2072 except ValueError:
2068 except ValueError:
2073 raise error.Abort(_('malformed keyword definition: %s') % d)
2069 raise error.Abort(_('malformed keyword definition: %s') % d)
2074
2070
2075 if ui.verbose:
2071 if ui.verbose:
2076 aliases = ui.configitems('templatealias')
2072 aliases = ui.configitems('templatealias')
2077 tree = templater.parse(tmpl)
2073 tree = templater.parse(tmpl)
2078 ui.note(templater.prettyformat(tree), '\n')
2074 ui.note(templater.prettyformat(tree), '\n')
2079 newtree = templater.expandaliases(tree, aliases)
2075 newtree = templater.expandaliases(tree, aliases)
2080 if newtree != tree:
2076 if newtree != tree:
2081 ui.note(("* expanded:\n"), templater.prettyformat(newtree), '\n')
2077 ui.note(("* expanded:\n"), templater.prettyformat(newtree), '\n')
2082
2078
2083 mapfile = None
2079 mapfile = None
2084 if revs is None:
2080 if revs is None:
2085 k = 'debugtemplate'
2081 k = 'debugtemplate'
2086 t = formatter.maketemplater(ui, k, tmpl)
2082 t = formatter.maketemplater(ui, k, tmpl)
2087 ui.write(templater.stringify(t(k, ui=ui, **props)))
2083 ui.write(templater.stringify(t(k, ui=ui, **props)))
2088 else:
2084 else:
2089 displayer = cmdutil.changeset_templater(ui, repo, None, opts, tmpl,
2085 displayer = cmdutil.changeset_templater(ui, repo, None, opts, tmpl,
2090 mapfile, buffered=False)
2086 mapfile, buffered=False)
2091 for r in revs:
2087 for r in revs:
2092 displayer.show(repo[r], **props)
2088 displayer.show(repo[r], **props)
2093 displayer.close()
2089 displayer.close()
2094
2090
2095 @command('debugupdatecaches', [])
2091 @command('debugupdatecaches', [])
2096 def debugupdatecaches(ui, repo, *pats, **opts):
2092 def debugupdatecaches(ui, repo, *pats, **opts):
2097 """warm all known caches in the repository"""
2093 """warm all known caches in the repository"""
2098 with repo.wlock():
2094 with repo.wlock():
2099 with repo.lock():
2095 with repo.lock():
2100 repo.updatecaches()
2096 repo.updatecaches()
2101
2097
2102 @command('debugupgraderepo', [
2098 @command('debugupgraderepo', [
2103 ('o', 'optimize', [], _('extra optimization to perform'), _('NAME')),
2099 ('o', 'optimize', [], _('extra optimization to perform'), _('NAME')),
2104 ('', 'run', False, _('performs an upgrade')),
2100 ('', 'run', False, _('performs an upgrade')),
2105 ])
2101 ])
2106 def debugupgraderepo(ui, repo, run=False, optimize=None):
2102 def debugupgraderepo(ui, repo, run=False, optimize=None):
2107 """upgrade a repository to use different features
2103 """upgrade a repository to use different features
2108
2104
2109 If no arguments are specified, the repository is evaluated for upgrade
2105 If no arguments are specified, the repository is evaluated for upgrade
2110 and a list of problems and potential optimizations is printed.
2106 and a list of problems and potential optimizations is printed.
2111
2107
2112 With ``--run``, a repository upgrade is performed. Behavior of the upgrade
2108 With ``--run``, a repository upgrade is performed. Behavior of the upgrade
2113 can be influenced via additional arguments. More details will be provided
2109 can be influenced via additional arguments. More details will be provided
2114 by the command output when run without ``--run``.
2110 by the command output when run without ``--run``.
2115
2111
2116 During the upgrade, the repository will be locked and no writes will be
2112 During the upgrade, the repository will be locked and no writes will be
2117 allowed.
2113 allowed.
2118
2114
2119 At the end of the upgrade, the repository may not be readable while new
2115 At the end of the upgrade, the repository may not be readable while new
2120 repository data is swapped in. This window will be as long as it takes to
2116 repository data is swapped in. This window will be as long as it takes to
2121 rename some directories inside the ``.hg`` directory. On most machines, this
2117 rename some directories inside the ``.hg`` directory. On most machines, this
2122 should complete almost instantaneously and the chances of a consumer being
2118 should complete almost instantaneously and the chances of a consumer being
2123 unable to access the repository should be low.
2119 unable to access the repository should be low.
2124 """
2120 """
2125 return upgrade.upgraderepo(ui, repo, run=run, optimize=optimize)
2121 return upgrade.upgraderepo(ui, repo, run=run, optimize=optimize)
2126
2122
2127 @command('debugwalk', cmdutil.walkopts, _('[OPTION]... [FILE]...'),
2123 @command('debugwalk', cmdutil.walkopts, _('[OPTION]... [FILE]...'),
2128 inferrepo=True)
2124 inferrepo=True)
2129 def debugwalk(ui, repo, *pats, **opts):
2125 def debugwalk(ui, repo, *pats, **opts):
2130 """show how files match on given patterns"""
2126 """show how files match on given patterns"""
2131 m = scmutil.match(repo[None], pats, opts)
2127 m = scmutil.match(repo[None], pats, opts)
2132 items = list(repo[None].walk(m))
2128 items = list(repo[None].walk(m))
2133 if not items:
2129 if not items:
2134 return
2130 return
2135 f = lambda fn: fn
2131 f = lambda fn: fn
2136 if ui.configbool('ui', 'slash') and pycompat.ossep != '/':
2132 if ui.configbool('ui', 'slash') and pycompat.ossep != '/':
2137 f = lambda fn: util.normpath(fn)
2133 f = lambda fn: util.normpath(fn)
2138 fmt = 'f %%-%ds %%-%ds %%s' % (
2134 fmt = 'f %%-%ds %%-%ds %%s' % (
2139 max([len(abs) for abs in items]),
2135 max([len(abs) for abs in items]),
2140 max([len(m.rel(abs)) for abs in items]))
2136 max([len(m.rel(abs)) for abs in items]))
2141 for abs in items:
2137 for abs in items:
2142 line = fmt % (abs, f(m.rel(abs)), m.exact(abs) and 'exact' or '')
2138 line = fmt % (abs, f(m.rel(abs)), m.exact(abs) and 'exact' or '')
2143 ui.write("%s\n" % line.rstrip())
2139 ui.write("%s\n" % line.rstrip())
2144
2140
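The column layout in `debugwalk` above is driven by a format string that is itself built with `%`-substitution: `'%%-%ds'` collapses to a literal left-aligned `%-17s` (etc.) once the computed widths are filled in. A minimal standalone sketch, with made-up file names standing in for the walk results:

```python
# Hypothetical data standing in for debugwalk's walk results.
items = ['a.txt', 'sub/long_name.txt', 'b']
rels = ['a.txt', 'long_name.txt', 'b']

# '%%-%ds' escapes to a literal '%-<width>s' once the widths are
# substituted, sizing each column to its longest entry.
fmt = 'f %%-%ds %%-%ds %%s' % (max(len(s) for s in items),
                               max(len(s) for s in rels))
for abspath, rel in zip(items, rels):
    print((fmt % (abspath, rel, '')).rstrip())
```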
2145 @command('debugwireargs',
2141 @command('debugwireargs',
2146 [('', 'three', '', 'three'),
2142 [('', 'three', '', 'three'),
2147 ('', 'four', '', 'four'),
2143 ('', 'four', '', 'four'),
2148 ('', 'five', '', 'five'),
2144 ('', 'five', '', 'five'),
2149 ] + cmdutil.remoteopts,
2145 ] + cmdutil.remoteopts,
2150 _('REPO [OPTIONS]... [ONE [TWO]]'),
2146 _('REPO [OPTIONS]... [ONE [TWO]]'),
2151 norepo=True)
2147 norepo=True)
2152 def debugwireargs(ui, repopath, *vals, **opts):
2148 def debugwireargs(ui, repopath, *vals, **opts):
2153 repo = hg.peer(ui, opts, repopath)
2149 repo = hg.peer(ui, opts, repopath)
2154 for opt in cmdutil.remoteopts:
2150 for opt in cmdutil.remoteopts:
2155 del opts[opt[1]]
2151 del opts[opt[1]]
2156 args = {}
2152 args = {}
2157 for k, v in opts.iteritems():
2153 for k, v in opts.iteritems():
2158 if v:
2154 if v:
2159 args[k] = v
2155 args[k] = v
2160 # run twice to check that we don't mess up the stream for the next command
2156 # run twice to check that we don't mess up the stream for the next command
2161 res1 = repo.debugwireargs(*vals, **args)
2157 res1 = repo.debugwireargs(*vals, **args)
2162 res2 = repo.debugwireargs(*vals, **args)
2158 res2 = repo.debugwireargs(*vals, **args)
2163 ui.write("%s\n" % res1)
2159 ui.write("%s\n" % res1)
2164 if res1 != res2:
2160 if res1 != res2:
2165 ui.warn("%s\n" % res2)
2161 ui.warn("%s\n" % res2)
@@ -1,788 +1,796 b''
1 # match.py - filename matching
1 # match.py - filename matching
2 #
2 #
3 # Copyright 2008, 2009 Matt Mackall <mpm@selenic.com> and others
3 # Copyright 2008, 2009 Matt Mackall <mpm@selenic.com> and others
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import copy
10 import copy
11 import os
11 import os
12 import re
12 import re
13
13
14 from .i18n import _
14 from .i18n import _
15 from . import (
15 from . import (
16 error,
16 error,
17 pathutil,
17 pathutil,
18 util,
18 util,
19 )
19 )
20
20
21 propertycache = util.propertycache
21 propertycache = util.propertycache
22
22
23 def _rematcher(regex):
23 def _rematcher(regex):
24 '''compile the regexp with the best available regexp engine and return a
24 '''compile the regexp with the best available regexp engine and return a
25 matcher function'''
25 matcher function'''
26 m = util.re.compile(regex)
26 m = util.re.compile(regex)
27 try:
27 try:
28 # slightly faster, provided by facebook's re2 bindings
28 # slightly faster, provided by facebook's re2 bindings
29 return m.test_match
29 return m.test_match
30 except AttributeError:
30 except AttributeError:
31 return m.match
31 return m.match
32
32
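The graceful degradation in `_rematcher` can be exercised with the stdlib `re` module alone: a standard compiled pattern has no `test_match` attribute, so the `AttributeError` branch always selects plain `m.match`. A sketch of the same shape (no `util.re` or re2 bindings needed):

```python
import re

def rematcher(regex):
    # Same shape as _rematcher above, stdlib engine only: re.compile()
    # objects lack test_match, so the fallback to m.match always fires.
    m = re.compile(regex)
    try:
        return m.test_match  # provided only by re2-style bindings
    except AttributeError:
        return m.match
```

Note that `re.match` (like the matcher functions here) anchors at the start of the string only.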
33 def _expandsets(kindpats, ctx, listsubrepos):
33 def _expandsets(kindpats, ctx, listsubrepos):
34 '''Returns the kindpats list with the 'set' patterns expanded.'''
34 '''Returns the kindpats list with the 'set' patterns expanded.'''
35 fset = set()
35 fset = set()
36 other = []
36 other = []
37
37
38 for kind, pat, source in kindpats:
38 for kind, pat, source in kindpats:
39 if kind == 'set':
39 if kind == 'set':
40 if not ctx:
40 if not ctx:
41 raise error.Abort(_("fileset expression with no context"))
41 raise error.Abort(_("fileset expression with no context"))
42 s = ctx.getfileset(pat)
42 s = ctx.getfileset(pat)
43 fset.update(s)
43 fset.update(s)
44
44
45 if listsubrepos:
45 if listsubrepos:
46 for subpath in ctx.substate:
46 for subpath in ctx.substate:
47 s = ctx.sub(subpath).getfileset(pat)
47 s = ctx.sub(subpath).getfileset(pat)
48 fset.update(subpath + '/' + f for f in s)
48 fset.update(subpath + '/' + f for f in s)
49
49
50 continue
50 continue
51 other.append((kind, pat, source))
51 other.append((kind, pat, source))
52 return fset, other
52 return fset, other
53
53
54 def _expandsubinclude(kindpats, root):
54 def _expandsubinclude(kindpats, root):
55 '''Returns the list of subinclude matcher args and the kindpats without the
55 '''Returns the list of subinclude matcher args and the kindpats without the
56 subincludes in it.'''
56 subincludes in it.'''
57 relmatchers = []
57 relmatchers = []
58 other = []
58 other = []
59
59
60 for kind, pat, source in kindpats:
60 for kind, pat, source in kindpats:
61 if kind == 'subinclude':
61 if kind == 'subinclude':
62 sourceroot = pathutil.dirname(util.normpath(source))
62 sourceroot = pathutil.dirname(util.normpath(source))
63 pat = util.pconvert(pat)
63 pat = util.pconvert(pat)
64 path = pathutil.join(sourceroot, pat)
64 path = pathutil.join(sourceroot, pat)
65
65
66 newroot = pathutil.dirname(path)
66 newroot = pathutil.dirname(path)
67 matcherargs = (newroot, '', [], ['include:%s' % path])
67 matcherargs = (newroot, '', [], ['include:%s' % path])
68
68
69 prefix = pathutil.canonpath(root, root, newroot)
69 prefix = pathutil.canonpath(root, root, newroot)
70 if prefix:
70 if prefix:
71 prefix += '/'
71 prefix += '/'
72 relmatchers.append((prefix, matcherargs))
72 relmatchers.append((prefix, matcherargs))
73 else:
73 else:
74 other.append((kind, pat, source))
74 other.append((kind, pat, source))
75
75
76 return relmatchers, other
76 return relmatchers, other
77
77
78 def _kindpatsalwaysmatch(kindpats):
78 def _kindpatsalwaysmatch(kindpats):
79 """"Checks whether the kindspats match everything, as e.g.
79 """"Checks whether the kindspats match everything, as e.g.
80 'relpath:.' does.
80 'relpath:.' does.
81 """
81 """
82 for kind, pat, source in kindpats:
82 for kind, pat, source in kindpats:
83 if pat != '' or kind not in ['relpath', 'glob']:
83 if pat != '' or kind not in ['relpath', 'glob']:
84 return False
84 return False
85 return True
85 return True
86
86
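A standalone copy of the check above shows which inputs short-circuit matching entirely: only empty `'relpath'` or `'glob'` patterns (as produced by e.g. `hg status .` at the repository root) qualify.

```python
def kindpatsalwaysmatch(kindpats):
    # kindpats are (kind, pattern, source) triples; any non-empty pattern,
    # or any kind other than relpath/glob, means the matcher cannot be
    # treated as match-everything.
    for kind, pat, source in kindpats:
        if pat != '' or kind not in ['relpath', 'glob']:
            return False
    return True
```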
87 def match(root, cwd, patterns, include=None, exclude=None, default='glob',
87 def match(root, cwd, patterns, include=None, exclude=None, default='glob',
88 exact=False, auditor=None, ctx=None, listsubrepos=False, warn=None,
88 exact=False, auditor=None, ctx=None, listsubrepos=False, warn=None,
89 badfn=None, icasefs=False):
89 badfn=None, icasefs=False):
90 """build an object to match a set of file patterns
90 """build an object to match a set of file patterns
91
91
92 arguments:
92 arguments:
93 root - the canonical root of the tree you're matching against
93 root - the canonical root of the tree you're matching against
94 cwd - the current working directory, if relevant
94 cwd - the current working directory, if relevant
95 patterns - patterns to find
95 patterns - patterns to find
96 include - patterns to include (unless they are excluded)
96 include - patterns to include (unless they are excluded)
97 exclude - patterns to exclude (even if they are included)
97 exclude - patterns to exclude (even if they are included)
98 default - if a pattern in patterns has no explicit type, assume this one
98 default - if a pattern in patterns has no explicit type, assume this one
99 exact - patterns are actually filenames (include/exclude still apply)
99 exact - patterns are actually filenames (include/exclude still apply)
100 warn - optional function used for printing warnings
100 warn - optional function used for printing warnings
101 badfn - optional bad() callback for this matcher instead of the default
101 badfn - optional bad() callback for this matcher instead of the default
102 icasefs - make a matcher for wdir on case insensitive filesystems, which
102 icasefs - make a matcher for wdir on case insensitive filesystems, which
103 normalizes the given patterns to the case in the filesystem
103 normalizes the given patterns to the case in the filesystem
104
104
105 a pattern is one of:
105 a pattern is one of:
106 'glob:<glob>' - a glob relative to cwd
106 'glob:<glob>' - a glob relative to cwd
107 're:<regexp>' - a regular expression
107 're:<regexp>' - a regular expression
108 'path:<path>' - a path relative to repository root, which is matched
108 'path:<path>' - a path relative to repository root, which is matched
109 recursively
109 recursively
110 'rootfilesin:<path>' - a path relative to repository root, which is
110 'rootfilesin:<path>' - a path relative to repository root, which is
111 matched non-recursively (will not match subdirectories)
111 matched non-recursively (will not match subdirectories)
112 'relglob:<glob>' - an unrooted glob (*.c matches C files in all dirs)
112 'relglob:<glob>' - an unrooted glob (*.c matches C files in all dirs)
113 'relpath:<path>' - a path relative to cwd
113 'relpath:<path>' - a path relative to cwd
114 'relre:<regexp>' - a regexp that needn't match the start of a name
114 'relre:<regexp>' - a regexp that needn't match the start of a name
115 'set:<fileset>' - a fileset expression
115 'set:<fileset>' - a fileset expression
116 'include:<path>' - a file of patterns to read and include
116 'include:<path>' - a file of patterns to read and include
117 'subinclude:<path>' - a file of patterns to match against files under
117 'subinclude:<path>' - a file of patterns to match against files under
118 the same directory
118 the same directory
119 '<something>' - a pattern of the specified default type
119 '<something>' - a pattern of the specified default type
120 """
120 """
121 normalize = _donormalize
121 normalize = _donormalize
122 if icasefs:
122 if icasefs:
123 dirstate = ctx.repo().dirstate
123 dirstate = ctx.repo().dirstate
124 dsnormalize = dirstate.normalize
124 dsnormalize = dirstate.normalize
125
125
126 def normalize(patterns, default, root, cwd, auditor, warn):
126 def normalize(patterns, default, root, cwd, auditor, warn):
127 kp = _donormalize(patterns, default, root, cwd, auditor, warn)
127 kp = _donormalize(patterns, default, root, cwd, auditor, warn)
128 kindpats = []
128 kindpats = []
129 for kind, pats, source in kp:
129 for kind, pats, source in kp:
130 if kind not in ('re', 'relre'): # regex can't be normalized
130 if kind not in ('re', 'relre'): # regex can't be normalized
131 p = pats
131 p = pats
132 pats = dsnormalize(pats)
132 pats = dsnormalize(pats)
133
133
134 # Preserve the original to handle a case-only rename.
134 # Preserve the original to handle a case-only rename.
135 if p != pats and p in dirstate:
135 if p != pats and p in dirstate:
136 kindpats.append((kind, p, source))
136 kindpats.append((kind, p, source))
137
137
138 kindpats.append((kind, pats, source))
138 kindpats.append((kind, pats, source))
139 return kindpats
139 return kindpats
140
140
141 return matcher(root, cwd, normalize, patterns, include=include,
141 return matcher(root, cwd, normalize, patterns, include=include,
142 exclude=exclude, default=default, exact=exact,
142 exclude=exclude, default=default, exact=exact,
143 auditor=auditor, ctx=ctx, listsubrepos=listsubrepos,
143 auditor=auditor, ctx=ctx, listsubrepos=listsubrepos,
144 warn=warn, badfn=badfn)
144 warn=warn, badfn=badfn)
145
145
146 def exact(root, cwd, files, badfn=None):
146 def exact(root, cwd, files, badfn=None):
147 return match(root, cwd, files, exact=True, badfn=badfn)
147 return match(root, cwd, files, exact=True, badfn=badfn)
148
148
149 def always(root, cwd):
149 def always(root, cwd):
150 return match(root, cwd, [])
150 return match(root, cwd, [])
151
151
152 def badmatch(match, badfn):
152 def badmatch(match, badfn):
153 """Make a copy of the given matcher, replacing its bad method with the given
153 """Make a copy of the given matcher, replacing its bad method with the given
154 one.
154 one.
155 """
155 """
156 m = copy.copy(match)
156 m = copy.copy(match)
157 m.bad = badfn
157 m.bad = badfn
158 return m
158 return m
159
159
160 def _donormalize(patterns, default, root, cwd, auditor, warn):
160 def _donormalize(patterns, default, root, cwd, auditor, warn):
161 '''Convert 'kind:pat' from the patterns list to tuples with kind and
161 '''Convert 'kind:pat' from the patterns list to tuples with kind and
162 normalized and rooted patterns and with listfiles expanded.'''
162 normalized and rooted patterns and with listfiles expanded.'''
163 kindpats = []
163 kindpats = []
164 for kind, pat in [_patsplit(p, default) for p in patterns]:
164 for kind, pat in [_patsplit(p, default) for p in patterns]:
165 if kind in ('glob', 'relpath'):
165 if kind in ('glob', 'relpath'):
166 pat = pathutil.canonpath(root, cwd, pat, auditor)
166 pat = pathutil.canonpath(root, cwd, pat, auditor)
167 elif kind in ('relglob', 'path', 'rootfilesin'):
167 elif kind in ('relglob', 'path', 'rootfilesin'):
168 pat = util.normpath(pat)
168 pat = util.normpath(pat)
169 elif kind in ('listfile', 'listfile0'):
169 elif kind in ('listfile', 'listfile0'):
170 try:
170 try:
171 files = util.readfile(pat)
171 files = util.readfile(pat)
172 if kind == 'listfile0':
172 if kind == 'listfile0':
173 files = files.split('\0')
173 files = files.split('\0')
174 else:
174 else:
175 files = files.splitlines()
175 files = files.splitlines()
176 files = [f for f in files if f]
176 files = [f for f in files if f]
177 except EnvironmentError:
177 except EnvironmentError:
178 raise error.Abort(_("unable to read file list (%s)") % pat)
178 raise error.Abort(_("unable to read file list (%s)") % pat)
179 for k, p, source in _donormalize(files, default, root, cwd,
179 for k, p, source in _donormalize(files, default, root, cwd,
180 auditor, warn):
180 auditor, warn):
181 kindpats.append((k, p, pat))
181 kindpats.append((k, p, pat))
182 continue
182 continue
183 elif kind == 'include':
183 elif kind == 'include':
184 try:
184 try:
185 fullpath = os.path.join(root, util.localpath(pat))
185 fullpath = os.path.join(root, util.localpath(pat))
186 includepats = readpatternfile(fullpath, warn)
186 includepats = readpatternfile(fullpath, warn)
187 for k, p, source in _donormalize(includepats, default,
187 for k, p, source in _donormalize(includepats, default,
188 root, cwd, auditor, warn):
188 root, cwd, auditor, warn):
189 kindpats.append((k, p, source or pat))
189 kindpats.append((k, p, source or pat))
190 except error.Abort as inst:
190 except error.Abort as inst:
191 raise error.Abort('%s: %s' % (pat, inst[0]))
191 raise error.Abort('%s: %s' % (pat, inst[0]))
192 except IOError as inst:
192 except IOError as inst:
193 if warn:
193 if warn:
194 warn(_("skipping unreadable pattern file '%s': %s\n") %
194 warn(_("skipping unreadable pattern file '%s': %s\n") %
195 (pat, inst.strerror))
195 (pat, inst.strerror))
196 continue
196 continue
197 # else: re or relre - which cannot be normalized
197 # else: re or relre - which cannot be normalized
198 kindpats.append((kind, pat, ''))
198 kindpats.append((kind, pat, ''))
199 return kindpats
199 return kindpats
200
200
201 class matcher(object):
201 class matcher(object):
202
202
203 def __init__(self, root, cwd, normalize, patterns, include=None,
203 def __init__(self, root, cwd, normalize, patterns, include=None,
204 exclude=None, default='glob', exact=False, auditor=None,
204 exclude=None, default='glob', exact=False, auditor=None,
205 ctx=None, listsubrepos=False, warn=None, badfn=None):
205 ctx=None, listsubrepos=False, warn=None, badfn=None):
206 if include is None:
206 if include is None:
207 include = []
207 include = []
208 if exclude is None:
208 if exclude is None:
209 exclude = []
209 exclude = []
210
210
211 self._root = root
211 self._root = root
212 self._cwd = cwd
212 self._cwd = cwd
213 self._files = [] # exact files and roots of patterns
213 self._files = [] # exact files and roots of patterns
214 self._anypats = bool(include or exclude)
214 self._anypats = bool(include or exclude)
215 self._always = False
215 self._always = False
216 self._pathrestricted = bool(include or exclude or patterns)
216 self._pathrestricted = bool(include or exclude or patterns)
217 self.patternspat = None
218 self.includepat = None
219 self.excludepat = None
217
220
218 # roots are directories which are recursively included/excluded.
221 # roots are directories which are recursively included/excluded.
219 self._includeroots = set()
222 self._includeroots = set()
220 self._excluderoots = set()
223 self._excluderoots = set()
221 # dirs are directories which are non-recursively included.
224 # dirs are directories which are non-recursively included.
222 self._includedirs = set()
225 self._includedirs = set()
223
226
224 if badfn is not None:
227 if badfn is not None:
225 self.bad = badfn
228 self.bad = badfn
226
229
227 matchfns = []
230 matchfns = []
228 if include:
231 if include:
229 kindpats = normalize(include, 'glob', root, cwd, auditor, warn)
232 kindpats = normalize(include, 'glob', root, cwd, auditor, warn)
230 self.includepat, im = _buildmatch(ctx, kindpats, '(?:/|$)',
233 self.includepat, im = _buildmatch(ctx, kindpats, '(?:/|$)',
231 listsubrepos, root)
234 listsubrepos, root)
232 roots, dirs = _rootsanddirs(kindpats)
235 roots, dirs = _rootsanddirs(kindpats)
233 self._includeroots.update(roots)
236 self._includeroots.update(roots)
234 self._includedirs.update(dirs)
237 self._includedirs.update(dirs)
235 matchfns.append(im)
238 matchfns.append(im)
236 if exclude:
239 if exclude:
237 kindpats = normalize(exclude, 'glob', root, cwd, auditor, warn)
240 kindpats = normalize(exclude, 'glob', root, cwd, auditor, warn)
238 self.excludepat, em = _buildmatch(ctx, kindpats, '(?:/|$)',
241 self.excludepat, em = _buildmatch(ctx, kindpats, '(?:/|$)',
239 listsubrepos, root)
242 listsubrepos, root)
240 if not _anypats(kindpats):
243 if not _anypats(kindpats):
241 # Only consider recursive excludes as such - if a non-recursive
244 # Only consider recursive excludes as such - if a non-recursive
242 # exclude is used, we must still recurse into the excluded
245 # exclude is used, we must still recurse into the excluded
243 # directory, at least to find subdirectories. In such a case,
246 # directory, at least to find subdirectories. In such a case,
244 # the regex still won't match the non-recursively-excluded
247 # the regex still won't match the non-recursively-excluded
245 # files.
248 # files.
246 self._excluderoots.update(_roots(kindpats))
249 self._excluderoots.update(_roots(kindpats))
247 matchfns.append(lambda f: not em(f))
250 matchfns.append(lambda f: not em(f))
248 if exact:
251 if exact:
249 if isinstance(patterns, list):
252 if isinstance(patterns, list):
250 self._files = patterns
253 self._files = patterns
251 else:
254 else:
252 self._files = list(patterns)
255 self._files = list(patterns)
253 matchfns.append(self.exact)
256 matchfns.append(self.exact)
254 elif patterns:
257 elif patterns:
255 kindpats = normalize(patterns, default, root, cwd, auditor, warn)
258 kindpats = normalize(patterns, default, root, cwd, auditor, warn)
256 if not _kindpatsalwaysmatch(kindpats):
259 if not _kindpatsalwaysmatch(kindpats):
257 self._files = _explicitfiles(kindpats)
260 self._files = _explicitfiles(kindpats)
258 self._anypats = self._anypats or _anypats(kindpats)
261 self._anypats = self._anypats or _anypats(kindpats)
259 self.patternspat, pm = _buildmatch(ctx, kindpats, '$',
262 self.patternspat, pm = _buildmatch(ctx, kindpats, '$',
260 listsubrepos, root)
263 listsubrepos, root)
261 matchfns.append(pm)
264 matchfns.append(pm)
262
265
263 if not matchfns:
266 if not matchfns:
264 m = util.always
267 m = util.always
265 self._always = True
268 self._always = True
266 elif len(matchfns) == 1:
269 elif len(matchfns) == 1:
267 m = matchfns[0]
270 m = matchfns[0]
268 else:
271 else:
269 def m(f):
272 def m(f):
270 for matchfn in matchfns:
273 for matchfn in matchfns:
271 if not matchfn(f):
274 if not matchfn(f):
272 return False
275 return False
273 return True
276 return True
274
277
275 self.matchfn = m
278 self.matchfn = m
276
279
277 def __call__(self, fn):
280 def __call__(self, fn):
278 return self.matchfn(fn)
281 return self.matchfn(fn)
279 def __iter__(self):
282 def __iter__(self):
280 for f in self._files:
283 for f in self._files:
281 yield f
284 yield f
282
285
283 # Callbacks related to how the matcher is used by dirstate.walk.
286 # Callbacks related to how the matcher is used by dirstate.walk.
284 # Subscribers to these events must monkeypatch the matcher object.
287 # Subscribers to these events must monkeypatch the matcher object.
285 def bad(self, f, msg):
288 def bad(self, f, msg):
286 '''Callback from dirstate.walk for each explicit file that can't be
289 '''Callback from dirstate.walk for each explicit file that can't be
287 found/accessed, with an error message.'''
290 found/accessed, with an error message.'''
288 pass
291 pass
289
292
290 # If an explicitdir is set, it will be called when an explicitly listed
293 # If an explicitdir is set, it will be called when an explicitly listed
291 # directory is visited.
294 # directory is visited.
292 explicitdir = None
295 explicitdir = None
293
296
294 # If a traversedir is set, it will be called when a directory discovered
297 # If a traversedir is set, it will be called when a directory discovered
295 # by recursive traversal is visited.
298 # by recursive traversal is visited.
296 traversedir = None
299 traversedir = None
297
300
298 def abs(self, f):
301 def abs(self, f):
299 '''Convert a repo path back to path that is relative to the root of the
302 '''Convert a repo path back to path that is relative to the root of the
300 matcher.'''
303 matcher.'''
301 return f
304 return f
302
305
303 def rel(self, f):
306 def rel(self, f):
304 '''Convert repo path back to path that is relative to cwd of matcher.'''
307 '''Convert repo path back to path that is relative to cwd of matcher.'''
305 return util.pathto(self._root, self._cwd, f)
308 return util.pathto(self._root, self._cwd, f)
306
309
307 def uipath(self, f):
310 def uipath(self, f):
308 '''Convert repo path to a display path. If patterns or -I/-X were used
311 '''Convert repo path to a display path. If patterns or -I/-X were used
309 to create this matcher, the display path will be relative to cwd.
312 to create this matcher, the display path will be relative to cwd.
310 Otherwise it is relative to the root of the repo.'''
313 Otherwise it is relative to the root of the repo.'''
311 return (self._pathrestricted and self.rel(f)) or self.abs(f)
314 return (self._pathrestricted and self.rel(f)) or self.abs(f)
312
315
313 def files(self):
316 def files(self):
314 '''Explicitly listed files or patterns or roots:
317 '''Explicitly listed files or patterns or roots:
315 if no patterns or .always(): empty list,
318 if no patterns or .always(): empty list,
316 if exact: list exact files,
319 if exact: list exact files,
317 if not .anypats(): list all files and dirs,
320 if not .anypats(): list all files and dirs,
318 else: optimal roots'''
321 else: optimal roots'''
319 return self._files
322 return self._files
320
323
321 @propertycache
324 @propertycache
322 def _fileset(self):
325 def _fileset(self):
323 return set(self._files)
326 return set(self._files)
324
327
325 @propertycache
328 @propertycache
326 def _dirs(self):
329 def _dirs(self):
327 return set(util.dirs(self._fileset)) | {'.'}
330 return set(util.dirs(self._fileset)) | {'.'}
328
331
329 def visitdir(self, dir):
332 def visitdir(self, dir):
330 '''Decides whether a directory should be visited based on whether it
333 '''Decides whether a directory should be visited based on whether it
331 has potential matches in it or one of its subdirectories. This is
334 has potential matches in it or one of its subdirectories. This is
332 based on the match's primary, included, and excluded patterns.
335 based on the match's primary, included, and excluded patterns.
333
336
334 Returns the string 'all' if the given directory and all subdirectories
337 Returns the string 'all' if the given directory and all subdirectories
335 should be visited. Otherwise returns True or False indicating whether
338 should be visited. Otherwise returns True or False indicating whether
336 the given directory should be visited.
339 the given directory should be visited.
337
340
338 This function's behavior is undefined if it has returned False for
341 This function's behavior is undefined if it has returned False for
339 one of the dir's parent directories.
342 one of the dir's parent directories.
340 '''
343 '''
341 if self.prefix() and dir in self._fileset:
344 if self.prefix() and dir in self._fileset:
342 return 'all'
345 return 'all'
343 if dir in self._excluderoots:
346 if dir in self._excluderoots:
344 return False
347 return False
345 if ((self._includeroots or self._includedirs) and
348 if ((self._includeroots or self._includedirs) and
346 '.' not in self._includeroots and
349 '.' not in self._includeroots and
347 dir not in self._includeroots and
350 dir not in self._includeroots and
348 dir not in self._includedirs and
351 dir not in self._includedirs and
349 not any(parent in self._includeroots
352 not any(parent in self._includeroots
350 for parent in util.finddirs(dir))):
353 for parent in util.finddirs(dir))):
351 return False
354 return False
352 return (not self._fileset or
355 return (not self._fileset or
353 '.' in self._fileset or
356 '.' in self._fileset or
354 dir in self._fileset or
357 dir in self._fileset or
355 dir in self._dirs or
358 dir in self._dirs or
356 any(parentdir in self._fileset
359 any(parentdir in self._fileset
357 for parentdir in util.finddirs(dir)))
360 for parentdir in util.finddirs(dir)))
358
361
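`visitdir` walks a directory's ancestors via `util.finddirs`, which is not shown in this hunk. A sketch consistent with how it is used here yields each `/`-separated ancestor of a repo path, nearest first, so a directory can be tested against progressively shorter roots:

```python
def finddirs(path):
    # Yield each ancestor of a '/'-separated repo path, nearest first,
    # so 'a/b/c.txt' is tested against 'a/b' and then 'a'.
    pos = path.rfind('/')
    while pos != -1:
        yield path[:pos]
        pos = path.rfind('/', 0, pos)
```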
359 def exact(self, f):
362 def exact(self, f):
360 '''Returns True if f is in .files().'''
363 '''Returns True if f is in .files().'''
361 return f in self._fileset
364 return f in self._fileset
362
365
363 def anypats(self):
366 def anypats(self):
364 '''Matcher uses patterns or include/exclude.'''
367 '''Matcher uses patterns or include/exclude.'''
365 return self._anypats
368 return self._anypats
366
369
367 def always(self):
370 def always(self):
368 '''Matcher will match everything and .files() will be empty
371 '''Matcher will match everything and .files() will be empty
369 - optimization might be possible and necessary.'''
372 - optimization might be possible and necessary.'''
370 return self._always
373 return self._always
371
374
372 def isexact(self):
375 def isexact(self):
373 return self.matchfn == self.exact
376 return self.matchfn == self.exact
374
377
375 def prefix(self):
378 def prefix(self):
376 return not self.always() and not self.isexact() and not self.anypats()
379 return not self.always() and not self.isexact() and not self.anypats()
377
380
381 def __repr__(self):
382 return ('<matcher files=%r, patterns=%r, includes=%r, excludes=%r>' %
383 (self._files, self.patternspat, self.includepat,
384 self.excludepat))
385
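The constructor's `matchfns` composition (one predicate per active criterion, all of which must accept a name) can be sketched independently of the class:

```python
def compose(matchfns):
    # Mirrors the constructor above: no predicates means match-all, a
    # single predicate is used directly, several are AND-ed together.
    if not matchfns:
        return lambda f: True
    if len(matchfns) == 1:
        return matchfns[0]
    def m(f):
        return all(fn(f) for fn in matchfns)
    return m

m = compose([lambda f: f.endswith('.c'),
             lambda f: not f.startswith('test')])
```

The exclude predicate is wired in negated (`lambda f: not em(f)`), which is why a single AND over all predicates suffices.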
378 class subdirmatcher(matcher):
386 class subdirmatcher(matcher):
379 """Adapt a matcher to work on a subdirectory only.
387 """Adapt a matcher to work on a subdirectory only.
380
388
381 The paths are remapped to remove/insert the path as needed:
389 The paths are remapped to remove/insert the path as needed:
382
390
383 >>> m1 = match('root', '', ['a.txt', 'sub/b.txt'])
391 >>> m1 = match('root', '', ['a.txt', 'sub/b.txt'])
384 >>> m2 = subdirmatcher('sub', m1)
392 >>> m2 = subdirmatcher('sub', m1)
385 >>> bool(m2('a.txt'))
393 >>> bool(m2('a.txt'))
386 False
394 False
387 >>> bool(m2('b.txt'))
395 >>> bool(m2('b.txt'))
388 True
396 True
389 >>> bool(m2.matchfn('a.txt'))
397 >>> bool(m2.matchfn('a.txt'))
390 False
398 False
391 >>> bool(m2.matchfn('b.txt'))
399 >>> bool(m2.matchfn('b.txt'))
392 True
400 True
393 >>> m2.files()
401 >>> m2.files()
394 ['b.txt']
402 ['b.txt']
395 >>> m2.exact('b.txt')
403 >>> m2.exact('b.txt')
396 True
404 True
397 >>> util.pconvert(m2.rel('b.txt'))
405 >>> util.pconvert(m2.rel('b.txt'))
398 'sub/b.txt'
406 'sub/b.txt'
399 >>> def bad(f, msg):
407 >>> def bad(f, msg):
400 ... print "%s: %s" % (f, msg)
408 ... print "%s: %s" % (f, msg)
401 >>> m1.bad = bad
409 >>> m1.bad = bad
402 >>> m2.bad('x.txt', 'No such file')
410 >>> m2.bad('x.txt', 'No such file')
403 sub/x.txt: No such file
411 sub/x.txt: No such file
404 >>> m2.abs('c.txt')
412 >>> m2.abs('c.txt')
405 'sub/c.txt'
413 'sub/c.txt'
406 """
414 """
407
415
408 def __init__(self, path, matcher):
416 def __init__(self, path, matcher):
409 self._root = matcher._root
417 self._root = matcher._root
410 self._cwd = matcher._cwd
418 self._cwd = matcher._cwd
411 self._path = path
419 self._path = path
412 self._matcher = matcher
420 self._matcher = matcher
413 self._always = matcher._always
421 self._always = matcher._always
414
422
415 self._files = [f[len(path) + 1:] for f in matcher._files
423 self._files = [f[len(path) + 1:] for f in matcher._files
416 if f.startswith(path + "/")]
424 if f.startswith(path + "/")]
417
425
418 # If the parent repo had a path to this subrepo and the matcher is
426 # If the parent repo had a path to this subrepo and the matcher is
419 # a prefix matcher, this submatcher always matches.
427 # a prefix matcher, this submatcher always matches.
420 if matcher.prefix():
428 if matcher.prefix():
421 self._always = any(f == path for f in matcher._files)
429 self._always = any(f == path for f in matcher._files)
422
430
423 self._anypats = matcher._anypats
431 self._anypats = matcher._anypats
424 # Some information is lost in the superclass's constructor, so we
432 # Some information is lost in the superclass's constructor, so we
425 # can not accurately create the matching function for the subdirectory
433 # can not accurately create the matching function for the subdirectory
426 # from the inputs. Instead, we override matchfn() and visitdir() to
434 # from the inputs. Instead, we override matchfn() and visitdir() to
427 # call the original matcher with the subdirectory path prepended.
435 # call the original matcher with the subdirectory path prepended.
428 self.matchfn = lambda fn: matcher.matchfn(self._path + "/" + fn)
436 self.matchfn = lambda fn: matcher.matchfn(self._path + "/" + fn)
429
437
430 def bad(self, f, msg):
438 def bad(self, f, msg):
431 self._matcher.bad(self._path + "/" + f, msg)
439 self._matcher.bad(self._path + "/" + f, msg)
432
440
433 def abs(self, f):
441 def abs(self, f):
434 return self._matcher.abs(self._path + "/" + f)
442 return self._matcher.abs(self._path + "/" + f)
435
443
436 def rel(self, f):
444 def rel(self, f):
437 return self._matcher.rel(self._path + "/" + f)
445 return self._matcher.rel(self._path + "/" + f)
438
446
439 def uipath(self, f):
447 def uipath(self, f):
440 return self._matcher.uipath(self._path + "/" + f)
448 return self._matcher.uipath(self._path + "/" + f)
441
449
442 def visitdir(self, dir):
450 def visitdir(self, dir):
443 if dir == '.':
451 if dir == '.':
444 dir = self._path
452 dir = self._path
445 else:
453 else:
446 dir = self._path + "/" + dir
454 dir = self._path + "/" + dir
447 return self._matcher.visitdir(dir)
455 return self._matcher.visitdir(dir)
448
456
def patkind(pattern, default=None):
    '''If pattern is 'kind:pat' with a known kind, return kind.'''
    return _patsplit(pattern, default)[0]

def _patsplit(pattern, default):
    """Split a string into the optional pattern kind prefix and the actual
    pattern."""
    if ':' in pattern:
        kind, pat = pattern.split(':', 1)
        if kind in ('re', 'glob', 'path', 'relglob', 'relpath', 'relre',
                    'listfile', 'listfile0', 'set', 'include', 'subinclude',
                    'rootfilesin'):
            return kind, pat
    return default, pattern
def _globre(pat):
    r'''Convert an extended glob string to a regexp string.

    >>> print _globre(r'?')
    .
    >>> print _globre(r'*')
    [^/]*
    >>> print _globre(r'**')
    .*
    >>> print _globre(r'**/a')
    (?:.*/)?a
    >>> print _globre(r'a/**/b')
    a\/(?:.*/)?b
    >>> print _globre(r'[a*?!^][^b][!c]')
    [a*?!^][\^b][^c]
    >>> print _globre(r'{a,b}')
    (?:a|b)
    >>> print _globre(r'.\*\?')
    \.\*\?
    '''
    i, n = 0, len(pat)
    res = ''
    group = 0
    escape = util.re.escape
    def peek():
        return i < n and pat[i:i + 1]
    while i < n:
        c = pat[i:i + 1]
        i += 1
        if c not in '*?[{},\\':
            res += escape(c)
        elif c == '*':
            if peek() == '*':
                i += 1
                if peek() == '/':
                    i += 1
                    res += '(?:.*/)?'
                else:
                    res += '.*'
            else:
                res += '[^/]*'
        elif c == '?':
            res += '.'
        elif c == '[':
            j = i
            if j < n and pat[j:j + 1] in '!]':
                j += 1
            while j < n and pat[j:j + 1] != ']':
                j += 1
            if j >= n:
                res += '\\['
            else:
                stuff = pat[i:j].replace('\\','\\\\')
                i = j + 1
                if stuff[0:1] == '!':
                    stuff = '^' + stuff[1:]
                elif stuff[0:1] == '^':
                    stuff = '\\' + stuff
                res = '%s[%s]' % (res, stuff)
        elif c == '{':
            group += 1
            res += '(?:'
        elif c == '}' and group:
            res += ')'
            group -= 1
        elif c == ',' and group:
            res += '|'
        elif c == '\\':
            p = peek()
            if p:
                i += 1
                res += escape(p)
            else:
                res += escape(c)
        else:
            res += escape(c)
    return res
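The regex strings shown in the doctest above can be checked directly with the stdlib `re` module; the `$` anchors here are added only for the demonstration (in real use, `_regex()` appends a suffix such as `(?:/|$)`):

```python
import re

# '**' crosses directory separators; a single '*' does not.
assert re.match(r'(?:.*/)?a$', 'x/y/a')   # output of _globre('**/a')
assert not re.match(r'[^/]*$', 'x/y')     # output of _globre('*')
# '{a,b}' becomes a non-capturing alternation.
assert re.match(r'(?:a|b)$', 'b')         # output of _globre('{a,b}')
# Glob metacharacters are literal inside a character class.
assert re.match(r'[a*?!^]$', '*')
```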
def _regex(kind, pat, globsuffix):
    '''Convert a (normalized) pattern of any kind into a regular expression.
    globsuffix is appended to the regexp of globs.'''
    if not pat:
        return ''
    if kind == 're':
        return pat
    if kind == 'path':
        if pat == '.':
            return ''
        return '^' + util.re.escape(pat) + '(?:/|$)'
    if kind == 'rootfilesin':
        if pat == '.':
            escaped = ''
        else:
            # Pattern is a directory name.
            escaped = util.re.escape(pat) + '/'
        # Anything after the pattern must be a non-directory.
        return '^' + escaped + '[^/]+$'
    if kind == 'relglob':
        return '(?:|.*/)' + _globre(pat) + globsuffix
    if kind == 'relpath':
        return util.re.escape(pat) + '(?:/|$)'
    if kind == 'relre':
        if pat.startswith('^'):
            return pat
        return '.*' + pat
    return _globre(pat) + globsuffix
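As a sketch of the difference between two of the kinds above: `rootfilesin` matches only direct children of a directory, while `path` matches the directory itself and everything beneath it. The regexes below are the ones `_regex()` produces for the pattern `dir`:

```python
import re

# 'rootfilesin:dir' -> '^dir/[^/]+$': direct children only.
rootfilesin = re.compile(r'^dir/[^/]+$')
assert rootfilesin.match('dir/a.o')
assert not rootfilesin.match('dir/sub/a.o')   # nested file: no match
assert not rootfilesin.match('dir')           # the directory itself: no match

# 'path:dir' -> '^dir(?:/|$)': the directory and everything under it.
path = re.compile(r'^dir(?:/|$)')
assert path.match('dir')
assert path.match('dir/sub/a.o')
assert not path.match('dirty')                # '(?:/|$)' prevents prefix hits
```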
def _buildmatch(ctx, kindpats, globsuffix, listsubrepos, root):
    '''Return regexp string and a matcher function for kindpats.
    globsuffix is appended to the regexp of globs.'''
    matchfuncs = []

    subincludes, kindpats = _expandsubinclude(kindpats, root)
    if subincludes:
        submatchers = {}
        def matchsubinclude(f):
            for prefix, matcherargs in subincludes:
                if f.startswith(prefix):
                    mf = submatchers.get(prefix)
                    if mf is None:
                        mf = match(*matcherargs)
                        submatchers[prefix] = mf

                    if mf(f[len(prefix):]):
                        return True
            return False
        matchfuncs.append(matchsubinclude)

    fset, kindpats = _expandsets(kindpats, ctx, listsubrepos)
    if fset:
        matchfuncs.append(fset.__contains__)

    regex = ''
    if kindpats:
        regex, mf = _buildregexmatch(kindpats, globsuffix)
        matchfuncs.append(mf)

    if len(matchfuncs) == 1:
        return regex, matchfuncs[0]
    else:
        return regex, lambda f: any(mf(f) for mf in matchfuncs)
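The composition step at the end of `_buildmatch()` can be sketched on its own: a file matches when any component match function accepts it. The two component functions here are stand-ins for the regex-backed matcher and an expanded fileset:

```python
# Sketch of _buildmatch()'s composition: OR together several match
# functions (here a suffix test standing in for a regex matcher, and a
# set membership test standing in for an expanded fileset).
matchfuncs = [
    lambda f: f.endswith('.o'),
    {'Makefile', 'setup.py'}.__contains__,
]
combined = lambda f: any(mf(f) for mf in matchfuncs)

assert combined('dir/a.o')     # accepted by the first function
assert combined('Makefile')    # accepted by the second
assert not combined('README')  # accepted by neither
```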
def _buildregexmatch(kindpats, globsuffix):
    """Build a match function from a list of kinds and kindpats,
    return regexp string and a matcher function."""
    try:
        regex = '(?:%s)' % '|'.join([_regex(k, p, globsuffix)
                                     for (k, p, s) in kindpats])
        if len(regex) > 20000:
            raise OverflowError
        return regex, _rematcher(regex)
    except OverflowError:
        # We're using a Python with a tiny regex engine and we
        # made it explode, so we'll divide the pattern list in two
        # until it works
        l = len(kindpats)
        if l < 2:
            raise
        regexa, a = _buildregexmatch(kindpats[:l//2], globsuffix)
        regexb, b = _buildregexmatch(kindpats[l//2:], globsuffix)
        return regex, lambda s: a(s) or b(s)
    except re.error:
        for k, p, s in kindpats:
            try:
                _rematcher('(?:%s)' % _regex(k, p, globsuffix))
            except re.error:
                if s:
                    raise error.Abort(_("%s: invalid pattern (%s): %s") %
                                      (s, k, p))
                else:
                    raise error.Abort(_("invalid pattern (%s): %s") % (k, p))
        raise error.Abort(_("invalid pattern"))
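The `OverflowError` fallback is a divide-and-conquer: when the combined regex is too large for the engine, the pattern list is split in two and the halves are compiled (recursively) and ORed. A minimal sketch with plain `re` instead of Mercurial's `_rematcher`, and an artificially tiny size limit to force the split:

```python
import re

def buildmatch(regexes, limit=20000):
    # Try to compile all patterns as one alternation; on "overflow",
    # recursively split the list and OR the compiled halves.
    combined = '(?:%s)' % '|'.join(regexes)
    if len(combined) <= limit:
        return re.compile(combined).match
    if len(regexes) < 2:
        raise OverflowError('a single pattern exceeds the limit')
    mid = len(regexes) // 2
    a = buildmatch(regexes[:mid], limit)
    b = buildmatch(regexes[mid:], limit)
    return lambda s: a(s) or b(s)

m = buildmatch([r'foo.*', r'bar[0-9]+'], limit=15)  # forces one split
assert m('foo/x')
assert m('bar42')
assert not m('baz')
```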
def _patternrootsanddirs(kindpats):
    '''Returns roots and directories corresponding to each pattern.

    This calculates the roots and directories exactly matching the patterns and
    returns a tuple of (roots, dirs) for each. It does not return other
    directories which may also need to be considered, like the parent
    directories.
    '''
    r = []
    d = []
    for kind, pat, source in kindpats:
        if kind == 'glob': # find the non-glob prefix
            root = []
            for p in pat.split('/'):
                if '[' in p or '{' in p or '*' in p or '?' in p:
                    break
                root.append(p)
            r.append('/'.join(root) or '.')
        elif kind in ('relpath', 'path'):
            r.append(pat or '.')
        elif kind in ('rootfilesin',):
            d.append(pat or '.')
        else: # relglob, re, relre
            r.append('.')
    return r, d

def _roots(kindpats):
    '''Returns root directories to match recursively from the given patterns.'''
    roots, dirs = _patternrootsanddirs(kindpats)
    return roots
def _rootsanddirs(kindpats):
    '''Returns roots and exact directories from patterns.

    roots are directories to match recursively, whereas exact directories should
    be matched non-recursively. The returned (roots, dirs) tuple will also
    include directories that need to be implicitly considered as either, such as
    parent directories.

    >>> _rootsanddirs(\
        [('glob', 'g/h/*', ''), ('glob', 'g/h', ''), ('glob', 'g*', '')])
    (['g/h', 'g/h', '.'], ['g', '.'])
    >>> _rootsanddirs(\
        [('rootfilesin', 'g/h', ''), ('rootfilesin', '', '')])
    ([], ['g/h', '.', 'g', '.'])
    >>> _rootsanddirs(\
        [('relpath', 'r', ''), ('path', 'p/p', ''), ('path', '', '')])
    (['r', 'p/p', '.'], ['p', '.'])
    >>> _rootsanddirs(\
        [('relglob', 'rg*', ''), ('re', 're/', ''), ('relre', 'rr', '')])
    (['.', '.', '.'], ['.'])
    '''
    r, d = _patternrootsanddirs(kindpats)

    # Append the parents as non-recursive/exact directories, since they must be
    # scanned to get to either the roots or the other exact directories.
    d.extend(util.dirs(d))
    d.extend(util.dirs(r))
    # util.dirs() does not include the root directory, so add it manually
    d.append('.')

    return r, d
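The parent-directory expansion relies on `util.dirs()`; a hypothetical stand-in for it (the real implementation has an optimized C version) shows what `_rootsanddirs()` appends:

```python
# Hypothetical stand-in for util.dirs(): yield every proper ancestor
# directory of each path, without duplicates and without the root.
def dirs(paths):
    seen = set()
    for p in paths:
        pos = p.rfind('/')
        while pos != -1:
            d = p[:pos]
            if d not in seen:
                seen.add(d)
                yield d
            pos = p.rfind('/', 0, pos)

# These parents are the extra "exact" directories added above:
assert sorted(dirs(['g/h', 'p/p'])) == ['g', 'p']
assert sorted(dirs(['a/b/c'])) == ['a', 'a/b']
```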
def _explicitfiles(kindpats):
    '''Returns the potential explicit filenames from the patterns.

    >>> _explicitfiles([('path', 'foo/bar', '')])
    ['foo/bar']
    >>> _explicitfiles([('rootfilesin', 'foo/bar', '')])
    []
    '''
    # Keep only the pattern kinds where one can specify filenames (vs only
    # directory names).
    filable = [kp for kp in kindpats if kp[0] not in ('rootfilesin',)]
    return _roots(filable)

def _anypats(kindpats):
    for kind, pat, source in kindpats:
        if kind in ('glob', 're', 'relglob', 'relre', 'set', 'rootfilesin'):
            return True
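`_anypats()` just asks whether any pattern kind needs full-tree matching (globs, regexes, filesets) rather than a simple path prefix; a standalone sketch of the same test:

```python
# Sketch of _anypats(): these kinds cannot be reduced to a path prefix,
# so their presence forces wildcard-style matching.
WILDCARD_KINDS = ('glob', 're', 'relglob', 'relre', 'set', 'rootfilesin')

def anypats(kindpats):
    return any(kind in WILDCARD_KINDS for kind, pat, source in kindpats)

assert anypats([('glob', '*.o', '')])
assert not anypats([('path', 'dir/file', '')])
```

Note the original returns `True` or falls through to an implicit `None`; the sketch normalizes that to a boolean.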
_commentre = None
def readpatternfile(filepath, warn, sourceinfo=False):
    '''parse a pattern file, returning a list of
    patterns. These patterns should be given to compile()
    to be validated and converted into a match function.

    trailing white space is dropped.
    the escape character is backslash.
    comments start with #.
    empty lines are skipped.

    lines can be of the following formats:

    syntax: regexp # defaults following lines to non-rooted regexps
    syntax: glob   # defaults following lines to non-rooted globs
    re:pattern     # non-rooted regular expression
    glob:pattern   # non-rooted glob
    pattern        # pattern of the current default type

    if sourceinfo is set, returns a list of tuples:
    (pattern, lineno, originalline). This is useful to debug ignore patterns.
    '''

    syntaxes = {'re': 'relre:', 'regexp': 'relre:', 'glob': 'relglob:',
                'include': 'include', 'subinclude': 'subinclude'}
    syntax = 'relre:'
    patterns = []

    fp = open(filepath, 'rb')
    for lineno, line in enumerate(util.iterfile(fp), start=1):
        if "#" in line:
            global _commentre
            if not _commentre:
                _commentre = util.re.compile(br'((?:^|[^\\])(?:\\\\)*)#.*')
            # remove comments prefixed by an even number of escapes
            m = _commentre.search(line)
            if m:
                line = line[:m.end(1)]
            # fixup properly escaped comments that survived the above
            line = line.replace("\\#", "#")
        line = line.rstrip()
        if not line:
            continue

        if line.startswith('syntax:'):
            s = line[7:].strip()
            try:
                syntax = syntaxes[s]
            except KeyError:
                if warn:
                    warn(_("%s: ignoring invalid syntax '%s'\n") %
                         (filepath, s))
            continue

        linesyntax = syntax
        for s, rels in syntaxes.iteritems():
            if line.startswith(rels):
                linesyntax = rels
                line = line[len(rels):]
                break
            elif line.startswith(s+':'):
                linesyntax = rels
                line = line[len(s) + 1:]
                break
        if sourceinfo:
            patterns.append((linesyntax + line, lineno, line))
        else:
            patterns.append(linesyntax + line)
    fp.close()
    return patterns
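The comment-stripping step above keeps a `#` that is preceded by an even number of backslashes escaped, and drops everything after an unescaped `#`. A standalone demonstration of that regex and the follow-up fixup:

```python
import re

# Same comment regex as readpatternfile() above: the captured group ends
# just before a '#' preceded by an even number of backslashes.
commentre = re.compile(r'((?:^|[^\\])(?:\\\\)*)#.*')

def strip_comment(line):
    m = commentre.search(line)
    if m:
        line = line[:m.end(1)]
    # fixup properly escaped comments that survived the above
    return line.replace('\\#', '#').rstrip()

assert strip_comment('pattern # trailing comment') == 'pattern'
assert strip_comment('foo\\#bar') == 'foo#bar'   # escaped '#' is kept
assert strip_comment('# full-line comment') == ''
```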
@@ -1,301 +1,301 b''
  $ hg init ignorerepo
  $ cd ignorerepo

Issue562: .hgignore requires newline at end:

  $ touch foo
  $ touch bar
  $ touch baz
  $ cat > makeignore.py <<EOF
  > f = open(".hgignore", "w")
  > f.write("ignore\n")
  > f.write("foo\n")
  > # No EOL here
  > f.write("bar")
  > f.close()
  > EOF

  $ python makeignore.py

Should display baz only:

  $ hg status
  ? baz

  $ rm foo bar baz .hgignore makeignore.py

  $ touch a.o
  $ touch a.c
  $ touch syntax
  $ mkdir dir
  $ touch dir/a.o
  $ touch dir/b.o
  $ touch dir/c.o

  $ hg add dir/a.o
  $ hg commit -m 0
  $ hg add dir/b.o

  $ hg status
  A dir/b.o
  ? a.c
  ? a.o
  ? dir/c.o
  ? syntax

  $ echo "*.o" > .hgignore
  $ hg status
  abort: $TESTTMP/ignorerepo/.hgignore: invalid pattern (relre): *.o (glob)
  [255]
  $ echo ".*\.o" > .hgignore
  $ hg status
  A dir/b.o
  ? .hgignore
  ? a.c
  ? syntax

Ensure that comments work:

  $ touch 'foo#bar' 'quux#'
#if no-windows
  $ touch 'baz\#wat'
#endif
  $ cat <<'EOF' >> .hgignore
  > # full-line comment
  > # whitespace-only comment line
  > syntax# pattern, no whitespace, then comment
  > a.c # pattern, then whitespace, then comment
  > baz\\# # escaped comment character
  > foo\#b # escaped comment character
  > quux\## escaped comment character at end of name
  > EOF
  $ hg status
  A dir/b.o
  ? .hgignore
  $ rm 'foo#bar' 'quux#'
#if no-windows
  $ rm 'baz\#wat'
#endif

Check it does not ignore the current directory '.':

  $ echo "^\." > .hgignore
  $ hg status
  A dir/b.o
  ? a.c
  ? a.o
  ? dir/c.o
  ? syntax

Test that patterns from ui.ignore options are read:

  $ echo > .hgignore
  $ cat >> $HGRCPATH << EOF
  > [ui]
  > ignore.other = $TESTTMP/ignorerepo/.hg/testhgignore
  > EOF
  $ echo "glob:**.o" > .hg/testhgignore
  $ hg status
  A dir/b.o
  ? .hgignore
  ? a.c
  ? syntax
empty out testhgignore
  $ echo > .hg/testhgignore

Test relative ignore path (issue4473):

  $ cat >> $HGRCPATH << EOF
  > [ui]
  > ignore.relative = .hg/testhgignorerel
  > EOF
  $ echo "glob:*.o" > .hg/testhgignorerel
  $ cd dir
  $ hg status
  A dir/b.o
  ? .hgignore
  ? a.c
  ? syntax

  $ cd ..
  $ echo > .hg/testhgignorerel
  $ echo "syntax: glob" > .hgignore
  $ echo "re:.*\.o" >> .hgignore
  $ hg status
  A dir/b.o
  ? .hgignore
  ? a.c
  ? syntax

  $ echo "syntax: invalid" > .hgignore
  $ hg status
  $TESTTMP/ignorerepo/.hgignore: ignoring invalid syntax 'invalid' (glob)
  A dir/b.o
  ? .hgignore
  ? a.c
  ? a.o
  ? dir/c.o
  ? syntax

  $ echo "syntax: glob" > .hgignore
  $ echo "*.o" >> .hgignore
  $ hg status
  A dir/b.o
  ? .hgignore
  ? a.c
  ? syntax

  $ echo "relglob:syntax*" > .hgignore
  $ hg status
  A dir/b.o
  ? .hgignore
  ? a.c
  ? a.o
  ? dir/c.o

  $ echo "relglob:*" > .hgignore
  $ hg status
  A dir/b.o

  $ cd dir
  $ hg status .
  A b.o

  $ hg debugignore
  <matcher files=[], patterns=None, includes='(?:(?:|.*/)[^/]*(?:/|$))', excludes=None>

  $ hg debugignore b.o
  b.o is ignored
  (ignore rule in $TESTTMP/ignorerepo/.hgignore, line 1: '*') (glob)
  $ cd ..

Check patterns that match only the directory

  $ echo "^dir\$" > .hgignore
  $ hg status
  A dir/b.o
  ? .hgignore
  ? a.c
  ? a.o
  ? syntax

Check recursive glob pattern matches no directories (dir/**/c.o matches dir/c.o)

  $ echo "syntax: glob" > .hgignore
  $ echo "dir/**/c.o" >> .hgignore
  $ touch dir/c.o
  $ mkdir dir/subdir
  $ touch dir/subdir/c.o
  $ hg status
  A dir/b.o
  ? .hgignore
  ? a.c
  ? a.o
  ? syntax
  $ hg debugignore a.c
  a.c is not ignored
  $ hg debugignore dir/c.o
  dir/c.o is ignored
  (ignore rule in $TESTTMP/ignorerepo/.hgignore, line 2: 'dir/**/c.o') (glob)

Check using 'include:' in ignore file

  $ hg purge --all --config extensions.purge=
  $ touch foo.included

  $ echo ".*.included" > otherignore
  $ hg status -I "include:otherignore"
  ? foo.included

  $ echo "include:otherignore" >> .hgignore
  $ hg status
  A dir/b.o
  ? .hgignore
  ? otherignore

Check recursive uses of 'include:'

  $ echo "include:nested/ignore" >> otherignore
  $ mkdir nested
  $ echo "glob:*ignore" > nested/ignore
  $ hg status
  A dir/b.o

  $ cp otherignore goodignore
  $ echo "include:badignore" >> otherignore
  $ hg status
  skipping unreadable pattern file 'badignore': No such file or directory
  A dir/b.o

  $ mv goodignore otherignore

Check using 'include:' while in a non-root directory

  $ cd ..
  $ hg -R ignorerepo status
  A dir/b.o
  $ cd ignorerepo

Check including subincludes

  $ hg revert -q --all
  $ hg purge --all --config extensions.purge=
  $ echo ".hgignore" > .hgignore
  $ mkdir dir1 dir2
  $ touch dir1/file1 dir1/file2 dir2/file1 dir2/file2
  $ echo "subinclude:dir2/.hgignore" >> .hgignore
  $ echo "glob:file*2" > dir2/.hgignore
  $ hg status
  ? dir1/file1
  ? dir1/file2
  ? dir2/file1

Check including subincludes with regexs

  $ echo "subinclude:dir1/.hgignore" >> .hgignore
  $ echo "regexp:f.le1" > dir1/.hgignore

  $ hg status
  ? dir1/file2
  ? dir2/file1

Check multiple levels of sub-ignores

  $ mkdir dir1/subdir
  $ touch dir1/subdir/subfile1 dir1/subdir/subfile3 dir1/subdir/subfile4
  $ echo "subinclude:subdir/.hgignore" >> dir1/.hgignore
  $ echo "glob:subfil*3" >> dir1/subdir/.hgignore

  $ hg status
  ? dir1/file2
  ? dir1/subdir/subfile4
  ? dir2/file1

Check include subignore at the same level

  $ mv dir1/subdir/.hgignore dir1/.hgignoretwo
  $ echo "regexp:f.le1" > dir1/.hgignore
  $ echo "subinclude:.hgignoretwo" >> dir1/.hgignore
  $ echo "glob:file*2" > dir1/.hgignoretwo

  $ hg status | grep file2
  [1]
  $ hg debugignore dir1/file2
  dir1/file2 is ignored
  (ignore rule in dir2/.hgignore, line 1: 'file*2')

#if windows

Windows paths are accepted on input

  $ rm dir1/.hgignore
  $ echo "dir1/file*" >> .hgignore
  $ hg debugignore "dir1\file2"
  dir1\file2 is ignored
  (ignore rule in $TESTTMP\ignorerepo\.hgignore, line 4: 'dir1/file*')
  $ hg up -qC .

#endif