fsmonitor: don't attempt state-leave if we didn't state-enter...
Wez Furlong
r32335:35432917 default
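The change below makes the state_update context manager remember whether its watchman 'state-enter' notification actually succeeded, and only send the matching 'state-leave' when it did. A minimal sketch of that guarded enter/leave pattern (illustrative only, not the changeset's code; `client` stands in for the repo's watchman client)::

    class guarded_update(object):
        def __init__(self, client):
            self.client = client
            self.need_leave = False

        def __enter__(self):
            # remember whether the enter notification reached watchman
            self.need_leave = self._notify('state-enter')
            return self

        def __exit__(self, exc_type, exc_value, tb):
            # skip state-leave if state-enter never happened, so watchman
            # never sees an unbalanced leave for a state it did not enter
            if self.need_leave:
                self._notify('state-leave')

        def _notify(self, cmd):
            try:
                self.client.command(cmd, {'name': 'hg.update'})
                return True
            except Exception:
                return False
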
@@ -1,737 +1,741 @@
# __init__.py - fsmonitor initialization and overrides
#
# Copyright 2013-2016 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''Faster status operations with the Watchman file monitor (EXPERIMENTAL)

Integrates the file-watching program Watchman with Mercurial to produce faster
status results.

On a particular Linux system, for a real-world repository with over 400,000
files hosted on ext4, vanilla `hg status` takes 1.3 seconds. On the same
system, with fsmonitor it takes about 0.3 seconds.

fsmonitor requires no configuration -- it will tell Watchman about your
repository as necessary. You'll need to install Watchman from
https://facebook.github.io/watchman/ and make sure it is in your PATH.

The following configuration options exist:

::

    [fsmonitor]
    mode = {off, on, paranoid}

When `mode = off`, fsmonitor will disable itself (similar to not loading the
extension at all). When `mode = on`, fsmonitor will be enabled (the default).
When `mode = paranoid`, fsmonitor will query both Watchman and the filesystem,
and ensure that the results are consistent.

::

    [fsmonitor]
    timeout = (float)

A value, in seconds, that determines how long fsmonitor will wait for Watchman
to return results. Defaults to `2.0`.

::

    [fsmonitor]
    blacklistusers = (list of userids)

A list of usernames for which fsmonitor will disable itself altogether.

::

    [fsmonitor]
    walk_on_invalidate = (boolean)

Whether or not to walk the whole repo ourselves when our cached state has been
invalidated, for example when Watchman has been restarted or .hgignore rules
have been changed. Walking the repo in that case can result in competing for
I/O with Watchman. For large repos it is recommended to set this value to
false. You may wish to set this to true if you have a very fast filesystem
that can outpace the IPC overhead of getting the result data for the full repo
from Watchman. Defaults to false.

fsmonitor is incompatible with the largefiles and eol extensions, and
will disable itself if any of those are active.

'''
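
# For illustration only (not part of this file): a typical setup enables the
# extension in an hgrc and optionally tunes the options described above, e.g.
#
#   [extensions]
#   fsmonitor =
#
#   [fsmonitor]
#   mode = on
#   timeout = 2.0
#   walk_on_invalidate = false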

# Platforms Supported
# ===================
#
# **Linux:** *Stable*. Watchman and fsmonitor are both known to work reliably,
# even under severe loads.
#
# **Mac OS X:** *Stable*. The Mercurial test suite passes with fsmonitor
# turned on, on case-insensitive HFS+. There has been a reasonable amount of
# user testing under normal loads.
#
# **Solaris, BSD:** *Alpha*. watchman and fsmonitor are believed to work, but
# very little testing has been done.
#
# **Windows:** *Alpha*. Not in a release version of watchman or fsmonitor yet.
#
# Known Issues
# ============
#
# * fsmonitor will disable itself if any of the following extensions are
#   enabled: largefiles, inotify, eol; or if the repository has subrepos.
# * fsmonitor will produce incorrect results if nested repos that are not
#   subrepos exist. *Workaround*: add nested repo paths to your `.hgignore`.
#
# The issues related to nested repos and subrepos are probably not fundamental
# ones. Patches to fix them are welcome.

from __future__ import absolute_import

import codecs
import hashlib
import os
import stat
import sys

from mercurial.i18n import _
from mercurial import (
    context,
    encoding,
    error,
    extensions,
    localrepo,
    merge,
    pathutil,
    pycompat,
    scmutil,
    util,
)
from mercurial import match as matchmod

from . import (
    pywatchman,
    state,
    watchmanclient,
)

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

# This extension is incompatible with the following blacklisted extensions
# and will disable itself when encountering one of these:
_blacklist = ['largefiles', 'eol']

def _handleunavailable(ui, state, ex):
    """Exception handler for Watchman interaction exceptions"""
    if isinstance(ex, watchmanclient.Unavailable):
        if ex.warn:
            ui.warn(str(ex) + '\n')
        if ex.invalidate:
            state.invalidate()
        ui.log('fsmonitor', 'Watchman unavailable: %s\n', ex.msg)
    else:
        ui.log('fsmonitor', 'Watchman exception: %s\n', ex)

def _hashignore(ignore):
    """Calculate hash for ignore patterns and filenames

    If this information changes between Mercurial invocations, we can't
    rely on Watchman information anymore and have to re-scan the working
    copy.

    """
    sha1 = hashlib.sha1()
    if util.safehasattr(ignore, 'includepat'):
        sha1.update(ignore.includepat)
    sha1.update('\0\0')
    if util.safehasattr(ignore, 'excludepat'):
        sha1.update(ignore.excludepat)
    sha1.update('\0\0')
    if util.safehasattr(ignore, 'patternspat'):
        sha1.update(ignore.patternspat)
    sha1.update('\0\0')
    if util.safehasattr(ignore, '_files'):
        for f in ignore._files:
            sha1.update(f)
    sha1.update('\0')
    return sha1.hexdigest()

_watchmanencoding = pywatchman.encoding.get_local_encoding()
_fsencoding = sys.getfilesystemencoding() or sys.getdefaultencoding()
_fixencoding = codecs.lookup(_watchmanencoding) != codecs.lookup(_fsencoding)

def _watchmantofsencoding(path):
    """Fix path to match watchman and local filesystem encoding

    watchman's paths encoding can differ from filesystem encoding. For example,
    on Windows, it's always utf-8.
    """
    try:
        decoded = path.decode(_watchmanencoding)
    except UnicodeDecodeError as e:
        raise error.Abort(str(e), hint='watchman encoding error')

    try:
        encoded = decoded.encode(_fsencoding, 'strict')
    except UnicodeEncodeError as e:
        raise error.Abort(str(e))

    return encoded

def overridewalk(orig, self, match, subrepos, unknown, ignored, full=True):
    '''Replacement for dirstate.walk, hooking into Watchman.

    Whenever full is False, ignored is False, and the Watchman client is
    available, use Watchman combined with saved state to possibly return only a
    subset of files.'''
    def bail():
        return orig(match, subrepos, unknown, ignored, full=True)

    if full or ignored or not self._watchmanclient.available():
        return bail()
    state = self._fsmonitorstate
    clock, ignorehash, notefiles = state.get()
    if not clock:
        if state.walk_on_invalidate:
            return bail()
        # Initial NULL clock value, see
        # https://facebook.github.io/watchman/docs/clockspec.html
        clock = 'c:0:0'
        notefiles = []

    def fwarn(f, msg):
        self._ui.warn('%s: %s\n' % (self.pathto(f), msg))
        return False

    def badtype(mode):
        kind = _('unknown')
        if stat.S_ISCHR(mode):
            kind = _('character device')
        elif stat.S_ISBLK(mode):
            kind = _('block device')
        elif stat.S_ISFIFO(mode):
            kind = _('fifo')
        elif stat.S_ISSOCK(mode):
            kind = _('socket')
        elif stat.S_ISDIR(mode):
            kind = _('directory')
        return _('unsupported file type (type is %s)') % kind

    ignore = self._ignore
    dirignore = self._dirignore
    if unknown:
        if _hashignore(ignore) != ignorehash and clock != 'c:0:0':
            # ignore list changed -- can't rely on Watchman state any more
            if state.walk_on_invalidate:
                return bail()
            notefiles = []
            clock = 'c:0:0'
    else:
        # always ignore
        ignore = util.always
        dirignore = util.always

    matchfn = match.matchfn
    matchalways = match.always()
    dmap = self._map
    nonnormalset = getattr(self, '_nonnormalset', None)

    copymap = self._copymap
    getkind = stat.S_IFMT
    dirkind = stat.S_IFDIR
    regkind = stat.S_IFREG
    lnkkind = stat.S_IFLNK
    join = self._join
    normcase = util.normcase
    fresh_instance = False

    exact = skipstep3 = False
    if match.isexact(): # match.exact
        exact = True
        dirignore = util.always # skip step 2
    elif match.prefix(): # match.match, no patterns
        skipstep3 = True

    if not exact and self._checkcase:
        # note that even though we could receive directory entries, we're only
        # interested in checking if a file with the same name exists. So only
        # normalize files if possible.
        normalize = self._normalizefile
        skipstep3 = False
    else:
        normalize = None

    # step 1: find all explicit files
    results, work, dirsnotfound = self._walkexplicit(match, subrepos)

    skipstep3 = skipstep3 and not (work or dirsnotfound)
    work = [d for d in work if not dirignore(d[0])]

    if not work and (exact or skipstep3):
        for s in subrepos:
            del results[s]
        del results['.hg']
        return results

    # step 2: query Watchman
    try:
        # Use the user-configured timeout for the query.
        # Add a little slack over the top of the user query to allow for
        # overheads while transferring the data
        self._watchmanclient.settimeout(state.timeout + 0.1)
        result = self._watchmanclient.command('query', {
            'fields': ['mode', 'mtime', 'size', 'exists', 'name'],
            'since': clock,
            'expression': [
                'not', [
                    'anyof', ['dirname', '.hg'],
                    ['name', '.hg', 'wholename']
                ]
            ],
            'sync_timeout': int(state.timeout * 1000),
            'empty_on_fresh_instance': state.walk_on_invalidate,
        })
    except Exception as ex:
        _handleunavailable(self._ui, state, ex)
        self._watchmanclient.clearconnection()
        return bail()
    else:
        # We need to propagate the last observed clock up so that we
        # can use it for our next query
        state.setlastclock(result['clock'])
        if result['is_fresh_instance']:
            if state.walk_on_invalidate:
                state.invalidate()
                return bail()
            fresh_instance = True
            # Ignore any prior noteable files from the state info
            notefiles = []

    # for file paths which require normalization and we encounter a case
    # collision, we store our own foldmap
    if normalize:
        foldmap = dict((normcase(k), k) for k in results)

    switch_slashes = pycompat.ossep == '\\'
    # The order of the results is, strictly speaking, undefined.
    # For case changes on a case insensitive filesystem we may receive
    # two entries, one with exists=True and another with exists=False.
    # The exists=True entries in the same response should be interpreted
    # as being happens-after the exists=False entries due to the way that
    # Watchman tracks files. We use this property to reconcile deletes
    # for name case changes.
    for entry in result['files']:
        fname = entry['name']
        if _fixencoding:
            fname = _watchmantofsencoding(fname)
        if switch_slashes:
            fname = fname.replace('\\', '/')
        if normalize:
            normed = normcase(fname)
            fname = normalize(fname, True, True)
            foldmap[normed] = fname
        fmode = entry['mode']
        fexists = entry['exists']
        kind = getkind(fmode)

        if not fexists:
            # if marked as deleted and we don't already have a change
            # record, mark it as deleted. If we already have an entry
            # for fname then it was either part of walkexplicit or was
            # an earlier result that was a case change
            if fname not in results and fname in dmap and (
                    matchalways or matchfn(fname)):
                results[fname] = None
        elif kind == dirkind:
            if fname in dmap and (matchalways or matchfn(fname)):
                results[fname] = None
        elif kind == regkind or kind == lnkkind:
            if fname in dmap:
                if matchalways or matchfn(fname):
                    results[fname] = entry
            elif (matchalways or matchfn(fname)) and not ignore(fname):
                results[fname] = entry
        elif fname in dmap and (matchalways or matchfn(fname)):
            results[fname] = None

    # step 3: query notable files we don't already know about
    # XXX try not to iterate over the entire dmap
    if normalize:
        # any notable files that have changed case will already be handled
        # above, so just check membership in the foldmap
        notefiles = set((normalize(f, True, True) for f in notefiles
                         if normcase(f) not in foldmap))
    visit = set((f for f in notefiles if (f not in results and matchfn(f)
                 and (f in dmap or not ignore(f)))))

    if nonnormalset is not None and not fresh_instance:
        if matchalways:
            visit.update(f for f in nonnormalset if f not in results)
            visit.update(f for f in copymap if f not in results)
        else:
            visit.update(f for f in nonnormalset
                         if f not in results and matchfn(f))
            visit.update(f for f in copymap
                         if f not in results and matchfn(f))
    else:
        if matchalways:
            visit.update(f for f, st in dmap.iteritems()
                         if (f not in results and
                             (st[2] < 0 or st[0] != 'n' or fresh_instance)))
            visit.update(f for f in copymap if f not in results)
        else:
            visit.update(f for f, st in dmap.iteritems()
                         if (f not in results and
                             (st[2] < 0 or st[0] != 'n' or fresh_instance)
                             and matchfn(f)))
            visit.update(f for f in copymap
                         if f not in results and matchfn(f))

    audit = pathutil.pathauditor(self._root).check
    auditpass = [f for f in visit if audit(f)]
    auditpass.sort()
    auditfail = visit.difference(auditpass)
    for f in auditfail:
        results[f] = None

    nf = iter(auditpass).next
    for st in util.statfiles([join(f) for f in auditpass]):
        f = nf()
        if st or f in dmap:
            results[f] = st

    for s in subrepos:
        del results[s]
    del results['.hg']
    return results

def overridestatus(
        orig, self, node1='.', node2=None, match=None, ignored=False,
        clean=False, unknown=False, listsubrepos=False):
    listignored = ignored
    listclean = clean
    listunknown = unknown

    def _cmpsets(l1, l2):
        try:
            if 'FSMONITOR_LOG_FILE' in encoding.environ:
                fn = encoding.environ['FSMONITOR_LOG_FILE']
                f = open(fn, 'wb')
            else:
                fn = 'fsmonitorfail.log'
                f = self.opener(fn, 'wb')
        except (IOError, OSError):
            self.ui.warn(_('warning: unable to write to %s\n') % fn)
            return

        try:
            for i, (s1, s2) in enumerate(zip(l1, l2)):
                if set(s1) != set(s2):
                    f.write('sets at position %d are unequal\n' % i)
                    f.write('watchman returned: %s\n' % s1)
                    f.write('stat returned: %s\n' % s2)
        finally:
            f.close()

    if isinstance(node1, context.changectx):
        ctx1 = node1
    else:
        ctx1 = self[node1]
    if isinstance(node2, context.changectx):
        ctx2 = node2
    else:
        ctx2 = self[node2]

    working = ctx2.rev() is None
    parentworking = working and ctx1 == self['.']
    match = match or matchmod.always(self.root, self.getcwd())

    # Maybe we can use this opportunity to update Watchman's state.
    # Mercurial uses workingcommitctx and/or memctx to represent the part of
    # the workingctx that is to be committed. So don't update the state in
    # that case.
    # HG_PENDING is set in the environment when the dirstate is being updated
    # in the middle of a transaction; we must not update our state in that
    # case, or we risk forgetting about changes in the working copy.
    updatestate = (parentworking and match.always() and
                   not isinstance(ctx2, (context.workingcommitctx,
                                         context.memctx)) and
                   'HG_PENDING' not in encoding.environ)

    try:
        if self._fsmonitorstate.walk_on_invalidate:
            # Use a short timeout to query the current clock. If that
            # takes too long then we assume that the service will be slow
            # to answer our query.
            # walk_on_invalidate indicates that we prefer to walk the
            # tree ourselves because we can ignore portions that Watchman
            # cannot and we tend to be faster in the warmer buffer cache
            # cases.
            self._watchmanclient.settimeout(0.1)
        else:
            # Give Watchman more time to potentially complete its walk
            # and return the initial clock. In this mode we assume that
            # the filesystem will be slower than parsing a potentially
            # very large Watchman result set.
            self._watchmanclient.settimeout(
                self._fsmonitorstate.timeout + 0.1)
        startclock = self._watchmanclient.getcurrentclock()
    except Exception as ex:
        self._watchmanclient.clearconnection()
        _handleunavailable(self.ui, self._fsmonitorstate, ex)
        # boo, Watchman failed. bail
        return orig(node1, node2, match, listignored, listclean,
                    listunknown, listsubrepos)

    if updatestate:
        # We need info about unknown files. This may make things slower the
        # first time, but whatever.
        stateunknown = True
    else:
        stateunknown = listunknown

    r = orig(node1, node2, match, listignored, listclean, stateunknown,
             listsubrepos)
    modified, added, removed, deleted, unknown, ignored, clean = r

    if updatestate:
        notefiles = modified + added + removed + deleted + unknown
        self._fsmonitorstate.set(
            self._fsmonitorstate.getlastclock() or startclock,
            _hashignore(self.dirstate._ignore),
            notefiles)

    if not listunknown:
        unknown = []

    # don't do paranoid checks if we're not going to query Watchman anyway
    full = listclean or match.traversedir is not None
    if self._fsmonitorstate.mode == 'paranoid' and not full:
        # run status again and fall back to the old walk this time
        self.dirstate._fsmonitordisable = True

        # shut the UI up
        quiet = self.ui.quiet
        self.ui.quiet = True
        fout, ferr = self.ui.fout, self.ui.ferr
        self.ui.fout = self.ui.ferr = open(os.devnull, 'wb')

        try:
            rv2 = orig(
                node1, node2, match, listignored, listclean, listunknown,
                listsubrepos)
        finally:
            self.dirstate._fsmonitordisable = False
            self.ui.quiet = quiet
            self.ui.fout, self.ui.ferr = fout, ferr

        # clean isn't tested since it's set to True above
        _cmpsets([modified, added, removed, deleted, unknown, ignored, clean],
                 rv2)
        modified, added, removed, deleted, unknown, ignored, clean = rv2

    return scmutil.status(
        modified, added, removed, deleted, unknown, ignored, clean)

def makedirstate(cls):
    class fsmonitordirstate(cls):
        def _fsmonitorinit(self, fsmonitorstate, watchmanclient):
            # _fsmonitordisable is used in paranoid mode
            self._fsmonitordisable = False
            self._fsmonitorstate = fsmonitorstate
            self._watchmanclient = watchmanclient

        def walk(self, *args, **kwargs):
            orig = super(fsmonitordirstate, self).walk
            if self._fsmonitordisable:
                return orig(*args, **kwargs)
            return overridewalk(orig, self, *args, **kwargs)

        def rebuild(self, *args, **kwargs):
            self._fsmonitorstate.invalidate()
            return super(fsmonitordirstate, self).rebuild(*args, **kwargs)

        def invalidate(self, *args, **kwargs):
            self._fsmonitorstate.invalidate()
            return super(fsmonitordirstate, self).invalidate(*args, **kwargs)

    return fsmonitordirstate

def wrapdirstate(orig, self):
    ds = orig(self)
    # only override the dirstate when Watchman is available for the repo
    if util.safehasattr(self, '_fsmonitorstate'):
        ds.__class__ = makedirstate(ds.__class__)
        ds._fsmonitorinit(self._fsmonitorstate, self._watchmanclient)
    return ds

def extsetup(ui):
    wrapfilecache(localrepo.localrepository, 'dirstate', wrapdirstate)
    if pycompat.sysplatform == 'darwin':
        # An assist for avoiding the dangling-symlink fsevents bug
        extensions.wrapfunction(os, 'symlink', wrapsymlink)

    extensions.wrapfunction(merge, 'update', wrapupdate)

def wrapsymlink(orig, source, link_name):
    ''' if we create a dangling symlink, also touch the parent dir
    to encourage fsevents notifications to work more correctly '''
    try:
        return orig(source, link_name)
    finally:
        try:
            os.utime(os.path.dirname(link_name), None)
        except OSError:
            pass

class state_update(object):
    ''' This context manager is responsible for dispatching the state-enter
    and state-leave signals to the watchman service '''

    def __init__(self, repo, node, distance, partial):
        self.repo = repo
        self.node = node
        self.distance = distance
        self.partial = partial
        self._lock = None
+        self.need_leave = False

    def __enter__(self):
        # We explicitly need to take a lock here, before we proceed to update
        # watchman about the update operation, so that we don't race with
        # some other actor. merge.update is going to take the wlock almost
        # immediately anyway, so this is effectively extending the lock
        # around a couple of short sanity checks.
        self._lock = self.repo.wlock()
-        self._state('state-enter')
+        self.need_leave = self._state('state-enter')
        return self

    def __exit__(self, type_, value, tb):
        try:
-            status = 'ok' if type_ is None else 'failed'
-            self._state('state-leave', status=status)
+            if self.need_leave:
+                status = 'ok' if type_ is None else 'failed'
+                self._state('state-leave', status=status)
        finally:
            if self._lock:
                self._lock.release()

    def _state(self, cmd, status='ok'):
        if not util.safehasattr(self.repo, '_watchmanclient'):
-            return
+            return False
        try:
            commithash = self.repo[self.node].hex()
            self.repo._watchmanclient.command(cmd, {
                'name': 'hg.update',
                'metadata': {
                    # the target revision
                    'rev': commithash,
                    # approximate number of commits between current and target
                    'distance': self.distance,
                    # success/failure (only really meaningful for state-leave)
                    'status': status,
                    # whether the working copy parent is changing
                    'partial': self.partial,
                }})
+            return True
        except Exception as e:
            # Swallow any errors; fire and forget
            self.repo.ui.log(
                'watchman', 'Exception %s while running %s\n', e, cmd)
+            return False

# Bracket working copy updates with calls to the watchman state-enter
# and state-leave commands. This allows clients to perform more intelligent
# settling during bulk file change scenarios
# https://facebook.github.io/watchman/docs/cmd/subscribe.html#advanced-settling
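# For example, a subscriber can ask watchman to hold back notifications while
# the 'hg.update' state is asserted (a sketch based on the subscribe docs
# linked above; exact field support depends on the watchman version):
#
#   ["subscribe", "/path/to/repo", "hg-aware-subscription",
#    {"fields": ["name"], "defer": ["hg.update"]}]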
def wrapupdate(orig, repo, node, branchmerge, force, ancestor=None,
               mergeancestor=False, labels=None, matcher=None, **kwargs):

    distance = 0
    partial = True
    if matcher is None or matcher.always():
        partial = False
        wc = repo[None]
        parents = wc.parents()
        if len(parents) == 2:
            anc = repo.changelog.ancestor(parents[0].node(), parents[1].node())
            ancrev = repo[anc].rev()
            distance = abs(repo[node].rev() - ancrev)
        elif len(parents) == 1:
            distance = abs(repo[node].rev() - parents[0].rev())

    with state_update(repo, node, distance, partial):
        return orig(
            repo, node, branchmerge, force, ancestor, mergeancestor,
            labels, matcher, *kwargs)

def reposetup(ui, repo):
    # We don't work with largefiles or inotify
    exts = extensions.enabled()
    for ext in _blacklist:
        if ext in exts:
            ui.warn(_('The fsmonitor extension is incompatible with the %s '
                      'extension and has been disabled.\n') % ext)
            return

    if util.safehasattr(repo, 'dirstate'):
        # We don't work with subrepos either. Note that we can get passed in
        # e.g. a statichttprepo, which throws on trying to access the substate.
        # XXX This sucks.
        try:
            # if repo[None].substate can cause a dirstate parse, which is too
            # slow. Instead, look for a file called hgsubstate,
            if repo.wvfs.exists('.hgsubstate') or repo.wvfs.exists('.hgsub'):
                return
        except AttributeError:
            return

        fsmonitorstate = state.state(repo)
        if fsmonitorstate.mode == 'off':
            return

        try:
            client = watchmanclient.client(repo)
        except Exception as ex:
            _handleunavailable(ui, fsmonitorstate, ex)
            return

        repo._fsmonitorstate = fsmonitorstate
        repo._watchmanclient = client

        # at this point since fsmonitorstate wasn't present, repo.dirstate is
        # not a fsmonitordirstate
        dirstate = repo.dirstate
        dirstate.__class__ = makedirstate(dirstate.__class__)
        dirstate._fsmonitorinit(fsmonitorstate, client)
        # invalidate property cache, but keep filecache which contains the
        # wrapped dirstate object
        del repo.unfiltered().__dict__['dirstate']
        assert dirstate is repo._filecache['dirstate'].obj

        class fsmonitorrepo(repo.__class__):
            def status(self, *args, **kwargs):
                orig = super(fsmonitorrepo, self).status
                return overridestatus(orig, self, *args, **kwargs)

        repo.__class__ = fsmonitorrepo

def wrapfilecache(cls, propname, wrapper):
    """Wraps a filecache property. These can't be wrapped using the normal
    wrapfunction. This should eventually go into upstream Mercurial.
    """
    assert callable(wrapper)
    for currcls in cls.__mro__:
        if propname in currcls.__dict__:
            origfn = currcls.__dict__[propname].func
            assert callable(origfn)
            def wrap(*args, **kwargs):
                return wrapper(origfn, *args, **kwargs)
            currcls.__dict__[propname].func = wrap
            break

    if currcls is object:
        raise AttributeError(
            _("type '%s' has no property '%s'") % (cls, propname))