match: delete unused root and cwd arguments from {always,never,exact}() (API)...
Martin von Zweigbergk
r41825:0531dff7 default

The requested changes are too big and content was truncated.
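
For context on the API change named above, here is a minimal before/after sketch of the calling convention. The make_status_matcher() helper and its repo argument are illustrative only and are not part of this changeset:

    from mercurial import match as matchmod

    def make_status_matcher(repo):
        # Before this change, callers passed root/cwd arguments that the
        # matcher factories never actually used:
        #     return matchmod.always(repo.root, repo.getcwd())
        # After this change, the unused arguments are simply dropped;
        # per the commit message, never() and exact() lose them as well.
        return matchmod.always()

The fsmonitor hunk below makes exactly this substitution at line 479 of hgext/fsmonitor/__init__.py.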

@@ -1,828 +1,828 b''
1 # __init__.py - fsmonitor initialization and overrides
1 # __init__.py - fsmonitor initialization and overrides
2 #
2 #
3 # Copyright 2013-2016 Facebook, Inc.
3 # Copyright 2013-2016 Facebook, Inc.
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''Faster status operations with the Watchman file monitor (EXPERIMENTAL)
8 '''Faster status operations with the Watchman file monitor (EXPERIMENTAL)
9
9
10 Integrates the file-watching program Watchman with Mercurial to produce faster
10 Integrates the file-watching program Watchman with Mercurial to produce faster
11 status results.
11 status results.
12
12
13 On a particular Linux system, for a real-world repository with over 400,000
13 On a particular Linux system, for a real-world repository with over 400,000
14 files hosted on ext4, vanilla `hg status` takes 1.3 seconds. On the same
14 files hosted on ext4, vanilla `hg status` takes 1.3 seconds. On the same
15 system, with fsmonitor it takes about 0.3 seconds.
15 system, with fsmonitor it takes about 0.3 seconds.
16
16
17 fsmonitor requires no configuration -- it will tell Watchman about your
17 fsmonitor requires no configuration -- it will tell Watchman about your
18 repository as necessary. You'll need to install Watchman from
18 repository as necessary. You'll need to install Watchman from
19 https://facebook.github.io/watchman/ and make sure it is in your PATH.
19 https://facebook.github.io/watchman/ and make sure it is in your PATH.
20
20
21 fsmonitor is incompatible with the largefiles and eol extensions, and
21 fsmonitor is incompatible with the largefiles and eol extensions, and
22 will disable itself if any of those are active.
22 will disable itself if any of those are active.
23
23
24 The following configuration options exist:
24 The following configuration options exist:
25
25
26 ::
26 ::
27
27
28 [fsmonitor]
28 [fsmonitor]
29 mode = {off, on, paranoid}
29 mode = {off, on, paranoid}
30
30
31 When `mode = off`, fsmonitor will disable itself (similar to not loading the
31 When `mode = off`, fsmonitor will disable itself (similar to not loading the
32 extension at all). When `mode = on`, fsmonitor will be enabled (the default).
32 extension at all). When `mode = on`, fsmonitor will be enabled (the default).
33 When `mode = paranoid`, fsmonitor will query both Watchman and the filesystem,
33 When `mode = paranoid`, fsmonitor will query both Watchman and the filesystem,
34 and ensure that the results are consistent.
34 and ensure that the results are consistent.
35
35
36 ::
36 ::
37
37
38 [fsmonitor]
38 [fsmonitor]
39 timeout = (float)
39 timeout = (float)
40
40
41 A value, in seconds, that determines how long fsmonitor will wait for Watchman
41 A value, in seconds, that determines how long fsmonitor will wait for Watchman
42 to return results. Defaults to `2.0`.
42 to return results. Defaults to `2.0`.
43
43
44 ::
44 ::
45
45
46 [fsmonitor]
46 [fsmonitor]
47 blacklistusers = (list of userids)
47 blacklistusers = (list of userids)
48
48
49 A list of usernames for which fsmonitor will disable itself altogether.
49 A list of usernames for which fsmonitor will disable itself altogether.
50
50
51 ::
51 ::
52
52
53 [fsmonitor]
53 [fsmonitor]
54 walk_on_invalidate = (boolean)
54 walk_on_invalidate = (boolean)
55
55
56 Whether or not to walk the whole repo ourselves when our cached state has been
56 Whether or not to walk the whole repo ourselves when our cached state has been
57 invalidated, for example when Watchman has been restarted or .hgignore rules
57 invalidated, for example when Watchman has been restarted or .hgignore rules
58 have been changed. Walking the repo in that case can result in competing for
58 have been changed. Walking the repo in that case can result in competing for
59 I/O with Watchman. For large repos it is recommended to set this value to
59 I/O with Watchman. For large repos it is recommended to set this value to
60 false. You may wish to set this to true if you have a very fast filesystem
60 false. You may wish to set this to true if you have a very fast filesystem
61 that can outpace the IPC overhead of getting the result data for the full repo
61 that can outpace the IPC overhead of getting the result data for the full repo
62 from Watchman. Defaults to false.
62 from Watchman. Defaults to false.
63
63
64 ::
64 ::
65
65
66 [fsmonitor]
66 [fsmonitor]
67 warn_when_unused = (boolean)
67 warn_when_unused = (boolean)
68
68
69 Whether to print a warning during certain operations when fsmonitor would be
69 Whether to print a warning during certain operations when fsmonitor would be
70 beneficial to performance but isn't enabled.
70 beneficial to performance but isn't enabled.
71
71
72 ::
72 ::
73
73
74 [fsmonitor]
74 [fsmonitor]
75 warn_update_file_count = (integer)
75 warn_update_file_count = (integer)
76
76
77 If ``warn_when_unused`` is set and fsmonitor isn't enabled, a warning will
77 If ``warn_when_unused`` is set and fsmonitor isn't enabled, a warning will
78 be printed during working directory updates if this many files will be
78 be printed during working directory updates if this many files will be
79 created.
79 created.
80 '''
80 '''
81
81
82 # Platforms Supported
82 # Platforms Supported
83 # ===================
83 # ===================
84 #
84 #
85 # **Linux:** *Stable*. Watchman and fsmonitor are both known to work reliably,
85 # **Linux:** *Stable*. Watchman and fsmonitor are both known to work reliably,
86 # even under severe loads.
86 # even under severe loads.
87 #
87 #
88 # **Mac OS X:** *Stable*. The Mercurial test suite passes with fsmonitor
88 # **Mac OS X:** *Stable*. The Mercurial test suite passes with fsmonitor
89 # turned on, on case-insensitive HFS+. There has been a reasonable amount of
89 # turned on, on case-insensitive HFS+. There has been a reasonable amount of
90 # user testing under normal loads.
90 # user testing under normal loads.
91 #
91 #
92 # **Solaris, BSD:** *Alpha*. watchman and fsmonitor are believed to work, but
92 # **Solaris, BSD:** *Alpha*. watchman and fsmonitor are believed to work, but
93 # very little testing has been done.
93 # very little testing has been done.
94 #
94 #
95 # **Windows:** *Alpha*. Not in a release version of watchman or fsmonitor yet.
95 # **Windows:** *Alpha*. Not in a release version of watchman or fsmonitor yet.
96 #
96 #
97 # Known Issues
97 # Known Issues
98 # ============
98 # ============
99 #
99 #
100 # * fsmonitor will disable itself if any of the following extensions are
100 # * fsmonitor will disable itself if any of the following extensions are
101 # enabled: largefiles, inotify, eol; or if the repository has subrepos.
101 # enabled: largefiles, inotify, eol; or if the repository has subrepos.
102 # * fsmonitor will produce incorrect results if nested repos that are not
102 # * fsmonitor will produce incorrect results if nested repos that are not
103 # subrepos exist. *Workaround*: add nested repo paths to your `.hgignore`.
103 # subrepos exist. *Workaround*: add nested repo paths to your `.hgignore`.
104 #
104 #
105 # The issues related to nested repos and subrepos are probably not fundamental
105 # The issues related to nested repos and subrepos are probably not fundamental
106 # ones. Patches to fix them are welcome.
106 # ones. Patches to fix them are welcome.
107
107
108 from __future__ import absolute_import
108 from __future__ import absolute_import
109
109
110 import codecs
110 import codecs
111 import hashlib
111 import hashlib
112 import os
112 import os
113 import stat
113 import stat
114 import sys
114 import sys
115 import weakref
115 import weakref
116
116
117 from mercurial.i18n import _
117 from mercurial.i18n import _
118 from mercurial.node import (
118 from mercurial.node import (
119 hex,
119 hex,
120 )
120 )
121
121
122 from mercurial import (
122 from mercurial import (
123 context,
123 context,
124 encoding,
124 encoding,
125 error,
125 error,
126 extensions,
126 extensions,
127 localrepo,
127 localrepo,
128 merge,
128 merge,
129 pathutil,
129 pathutil,
130 pycompat,
130 pycompat,
131 registrar,
131 registrar,
132 scmutil,
132 scmutil,
133 util,
133 util,
134 )
134 )
135 from mercurial import match as matchmod
135 from mercurial import match as matchmod
136
136
137 from . import (
137 from . import (
138 pywatchman,
138 pywatchman,
139 state,
139 state,
140 watchmanclient,
140 watchmanclient,
141 )
141 )
142
142
143 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
143 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
144 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
144 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
145 # be specifying the version(s) of Mercurial they are tested with, or
145 # be specifying the version(s) of Mercurial they are tested with, or
146 # leave the attribute unspecified.
146 # leave the attribute unspecified.
147 testedwith = 'ships-with-hg-core'
147 testedwith = 'ships-with-hg-core'
148
148
149 configtable = {}
149 configtable = {}
150 configitem = registrar.configitem(configtable)
150 configitem = registrar.configitem(configtable)
151
151
152 configitem('fsmonitor', 'mode',
152 configitem('fsmonitor', 'mode',
153 default='on',
153 default='on',
154 )
154 )
155 configitem('fsmonitor', 'walk_on_invalidate',
155 configitem('fsmonitor', 'walk_on_invalidate',
156 default=False,
156 default=False,
157 )
157 )
158 configitem('fsmonitor', 'timeout',
158 configitem('fsmonitor', 'timeout',
159 default='2',
159 default='2',
160 )
160 )
161 configitem('fsmonitor', 'blacklistusers',
161 configitem('fsmonitor', 'blacklistusers',
162 default=list,
162 default=list,
163 )
163 )
164 configitem('fsmonitor', 'verbose',
164 configitem('fsmonitor', 'verbose',
165 default=True,
165 default=True,
166 )
166 )
167 configitem('experimental', 'fsmonitor.transaction_notify',
167 configitem('experimental', 'fsmonitor.transaction_notify',
168 default=False,
168 default=False,
169 )
169 )
170
170
171 # This extension is incompatible with the following blacklisted extensions
171 # This extension is incompatible with the following blacklisted extensions
172 # and will disable itself when encountering one of these:
172 # and will disable itself when encountering one of these:
173 _blacklist = ['largefiles', 'eol']
173 _blacklist = ['largefiles', 'eol']
174
174
175 def _handleunavailable(ui, state, ex):
175 def _handleunavailable(ui, state, ex):
176 """Exception handler for Watchman interaction exceptions"""
176 """Exception handler for Watchman interaction exceptions"""
177 if isinstance(ex, watchmanclient.Unavailable):
177 if isinstance(ex, watchmanclient.Unavailable):
178 # experimental config: fsmonitor.verbose
178 # experimental config: fsmonitor.verbose
179 if ex.warn and ui.configbool('fsmonitor', 'verbose'):
179 if ex.warn and ui.configbool('fsmonitor', 'verbose'):
180 ui.warn(str(ex) + '\n')
180 ui.warn(str(ex) + '\n')
181 if ex.invalidate:
181 if ex.invalidate:
182 state.invalidate()
182 state.invalidate()
183 # experimental config: fsmonitor.verbose
183 # experimental config: fsmonitor.verbose
184 if ui.configbool('fsmonitor', 'verbose'):
184 if ui.configbool('fsmonitor', 'verbose'):
185 ui.log('fsmonitor', 'Watchman unavailable: %s\n', ex.msg)
185 ui.log('fsmonitor', 'Watchman unavailable: %s\n', ex.msg)
186 else:
186 else:
187 ui.log('fsmonitor', 'Watchman exception: %s\n', ex)
187 ui.log('fsmonitor', 'Watchman exception: %s\n', ex)
188
188
189 def _hashignore(ignore):
189 def _hashignore(ignore):
190 """Calculate hash for ignore patterns and filenames
190 """Calculate hash for ignore patterns and filenames
191
191
192 If this information changes between Mercurial invocations, we can't
192 If this information changes between Mercurial invocations, we can't
193 rely on Watchman information anymore and have to re-scan the working
193 rely on Watchman information anymore and have to re-scan the working
194 copy.
194 copy.
195
195
196 """
196 """
197 sha1 = hashlib.sha1()
197 sha1 = hashlib.sha1()
198 sha1.update(repr(ignore))
198 sha1.update(repr(ignore))
199 return sha1.hexdigest()
199 return sha1.hexdigest()
200
200
201 _watchmanencoding = pywatchman.encoding.get_local_encoding()
201 _watchmanencoding = pywatchman.encoding.get_local_encoding()
202 _fsencoding = sys.getfilesystemencoding() or sys.getdefaultencoding()
202 _fsencoding = sys.getfilesystemencoding() or sys.getdefaultencoding()
203 _fixencoding = codecs.lookup(_watchmanencoding) != codecs.lookup(_fsencoding)
203 _fixencoding = codecs.lookup(_watchmanencoding) != codecs.lookup(_fsencoding)
204
204
205 def _watchmantofsencoding(path):
205 def _watchmantofsencoding(path):
206 """Fix path to match watchman and local filesystem encoding
206 """Fix path to match watchman and local filesystem encoding
207
207
208 watchman's paths encoding can differ from filesystem encoding. For example,
208 watchman's paths encoding can differ from filesystem encoding. For example,
209 on Windows, it's always utf-8.
209 on Windows, it's always utf-8.
210 """
210 """
211 try:
211 try:
212 decoded = path.decode(_watchmanencoding)
212 decoded = path.decode(_watchmanencoding)
213 except UnicodeDecodeError as e:
213 except UnicodeDecodeError as e:
214 raise error.Abort(str(e), hint='watchman encoding error')
214 raise error.Abort(str(e), hint='watchman encoding error')
215
215
216 try:
216 try:
217 encoded = decoded.encode(_fsencoding, 'strict')
217 encoded = decoded.encode(_fsencoding, 'strict')
218 except UnicodeEncodeError as e:
218 except UnicodeEncodeError as e:
219 raise error.Abort(str(e))
219 raise error.Abort(str(e))
220
220
221 return encoded
221 return encoded
222
222
223 def overridewalk(orig, self, match, subrepos, unknown, ignored, full=True):
223 def overridewalk(orig, self, match, subrepos, unknown, ignored, full=True):
224 '''Replacement for dirstate.walk, hooking into Watchman.
224 '''Replacement for dirstate.walk, hooking into Watchman.
225
225
226 Whenever full is False, ignored is False, and the Watchman client is
226 Whenever full is False, ignored is False, and the Watchman client is
227 available, use Watchman combined with saved state to possibly return only a
227 available, use Watchman combined with saved state to possibly return only a
228 subset of files.'''
228 subset of files.'''
229 def bail(reason):
229 def bail(reason):
230 self._ui.debug('fsmonitor: fallback to core status, %s\n' % reason)
230 self._ui.debug('fsmonitor: fallback to core status, %s\n' % reason)
231 return orig(match, subrepos, unknown, ignored, full=True)
231 return orig(match, subrepos, unknown, ignored, full=True)
232
232
233 if full:
233 if full:
234 return bail('full rewalk requested')
234 return bail('full rewalk requested')
235 if ignored:
235 if ignored:
236 return bail('listing ignored files')
236 return bail('listing ignored files')
237 if not self._watchmanclient.available():
237 if not self._watchmanclient.available():
238 return bail('client unavailable')
238 return bail('client unavailable')
239 state = self._fsmonitorstate
239 state = self._fsmonitorstate
240 clock, ignorehash, notefiles = state.get()
240 clock, ignorehash, notefiles = state.get()
241 if not clock:
241 if not clock:
242 if state.walk_on_invalidate:
242 if state.walk_on_invalidate:
243 return bail('no clock')
243 return bail('no clock')
244 # Initial NULL clock value, see
244 # Initial NULL clock value, see
245 # https://facebook.github.io/watchman/docs/clockspec.html
245 # https://facebook.github.io/watchman/docs/clockspec.html
246 clock = 'c:0:0'
246 clock = 'c:0:0'
247 notefiles = []
247 notefiles = []
248
248
249 ignore = self._ignore
249 ignore = self._ignore
250 dirignore = self._dirignore
250 dirignore = self._dirignore
251 if unknown:
251 if unknown:
252 if _hashignore(ignore) != ignorehash and clock != 'c:0:0':
252 if _hashignore(ignore) != ignorehash and clock != 'c:0:0':
253 # ignore list changed -- can't rely on Watchman state any more
253 # ignore list changed -- can't rely on Watchman state any more
254 if state.walk_on_invalidate:
254 if state.walk_on_invalidate:
255 return bail('ignore rules changed')
255 return bail('ignore rules changed')
256 notefiles = []
256 notefiles = []
257 clock = 'c:0:0'
257 clock = 'c:0:0'
258 else:
258 else:
259 # always ignore
259 # always ignore
260 ignore = util.always
260 ignore = util.always
261 dirignore = util.always
261 dirignore = util.always
262
262
263 matchfn = match.matchfn
263 matchfn = match.matchfn
264 matchalways = match.always()
264 matchalways = match.always()
265 dmap = self._map
265 dmap = self._map
266 if util.safehasattr(dmap, '_map'):
266 if util.safehasattr(dmap, '_map'):
267 # for better performance, directly access the inner dirstate map if the
267 # for better performance, directly access the inner dirstate map if the
268 # standard dirstate implementation is in use.
268 # standard dirstate implementation is in use.
269 dmap = dmap._map
269 dmap = dmap._map
270 nonnormalset = self._map.nonnormalset
270 nonnormalset = self._map.nonnormalset
271
271
272 copymap = self._map.copymap
272 copymap = self._map.copymap
273 getkind = stat.S_IFMT
273 getkind = stat.S_IFMT
274 dirkind = stat.S_IFDIR
274 dirkind = stat.S_IFDIR
275 regkind = stat.S_IFREG
275 regkind = stat.S_IFREG
276 lnkkind = stat.S_IFLNK
276 lnkkind = stat.S_IFLNK
277 join = self._join
277 join = self._join
278 normcase = util.normcase
278 normcase = util.normcase
279 fresh_instance = False
279 fresh_instance = False
280
280
281 exact = skipstep3 = False
281 exact = skipstep3 = False
282 if match.isexact(): # match.exact
282 if match.isexact(): # match.exact
283 exact = True
283 exact = True
284 dirignore = util.always # skip step 2
284 dirignore = util.always # skip step 2
285 elif match.prefix(): # match.match, no patterns
285 elif match.prefix(): # match.match, no patterns
286 skipstep3 = True
286 skipstep3 = True
287
287
288 if not exact and self._checkcase:
288 if not exact and self._checkcase:
289 # note that even though we could receive directory entries, we're only
289 # note that even though we could receive directory entries, we're only
290 # interested in checking if a file with the same name exists. So only
290 # interested in checking if a file with the same name exists. So only
291 # normalize files if possible.
291 # normalize files if possible.
292 normalize = self._normalizefile
292 normalize = self._normalizefile
293 skipstep3 = False
293 skipstep3 = False
294 else:
294 else:
295 normalize = None
295 normalize = None
296
296
297 # step 1: find all explicit files
297 # step 1: find all explicit files
298 results, work, dirsnotfound = self._walkexplicit(match, subrepos)
298 results, work, dirsnotfound = self._walkexplicit(match, subrepos)
299
299
300 skipstep3 = skipstep3 and not (work or dirsnotfound)
300 skipstep3 = skipstep3 and not (work or dirsnotfound)
301 work = [d for d in work if not dirignore(d[0])]
301 work = [d for d in work if not dirignore(d[0])]
302
302
303 if not work and (exact or skipstep3):
303 if not work and (exact or skipstep3):
304 for s in subrepos:
304 for s in subrepos:
305 del results[s]
305 del results[s]
306 del results['.hg']
306 del results['.hg']
307 return results
307 return results
308
308
309 # step 2: query Watchman
309 # step 2: query Watchman
310 try:
310 try:
311 # Use the user-configured timeout for the query.
311 # Use the user-configured timeout for the query.
312 # Add a little slack over the top of the user query to allow for
312 # Add a little slack over the top of the user query to allow for
313 # overheads while transferring the data
313 # overheads while transferring the data
314 self._watchmanclient.settimeout(state.timeout + 0.1)
314 self._watchmanclient.settimeout(state.timeout + 0.1)
315 result = self._watchmanclient.command('query', {
315 result = self._watchmanclient.command('query', {
316 'fields': ['mode', 'mtime', 'size', 'exists', 'name'],
316 'fields': ['mode', 'mtime', 'size', 'exists', 'name'],
317 'since': clock,
317 'since': clock,
318 'expression': [
318 'expression': [
319 'not', [
319 'not', [
320 'anyof', ['dirname', '.hg'],
320 'anyof', ['dirname', '.hg'],
321 ['name', '.hg', 'wholename']
321 ['name', '.hg', 'wholename']
322 ]
322 ]
323 ],
323 ],
324 'sync_timeout': int(state.timeout * 1000),
324 'sync_timeout': int(state.timeout * 1000),
325 'empty_on_fresh_instance': state.walk_on_invalidate,
325 'empty_on_fresh_instance': state.walk_on_invalidate,
326 })
326 })
327 except Exception as ex:
327 except Exception as ex:
328 _handleunavailable(self._ui, state, ex)
328 _handleunavailable(self._ui, state, ex)
329 self._watchmanclient.clearconnection()
329 self._watchmanclient.clearconnection()
330 return bail('exception during run')
330 return bail('exception during run')
331 else:
331 else:
332 # We need to propagate the last observed clock up so that we
332 # We need to propagate the last observed clock up so that we
333 # can use it for our next query
333 # can use it for our next query
334 state.setlastclock(result['clock'])
334 state.setlastclock(result['clock'])
335 if result['is_fresh_instance']:
335 if result['is_fresh_instance']:
336 if state.walk_on_invalidate:
336 if state.walk_on_invalidate:
337 state.invalidate()
337 state.invalidate()
338 return bail('fresh instance')
338 return bail('fresh instance')
339 fresh_instance = True
339 fresh_instance = True
340 # Ignore any prior noteable files from the state info
340 # Ignore any prior noteable files from the state info
341 notefiles = []
341 notefiles = []
342
342
343 # for file paths which require normalization and we encounter a case
343 # for file paths which require normalization and we encounter a case
344 # collision, we store our own foldmap
344 # collision, we store our own foldmap
345 if normalize:
345 if normalize:
346 foldmap = dict((normcase(k), k) for k in results)
346 foldmap = dict((normcase(k), k) for k in results)
347
347
348 switch_slashes = pycompat.ossep == '\\'
348 switch_slashes = pycompat.ossep == '\\'
349 # The order of the results is, strictly speaking, undefined.
349 # The order of the results is, strictly speaking, undefined.
350 # For case changes on a case insensitive filesystem we may receive
350 # For case changes on a case insensitive filesystem we may receive
351 # two entries, one with exists=True and another with exists=False.
351 # two entries, one with exists=True and another with exists=False.
352 # The exists=True entries in the same response should be interpreted
352 # The exists=True entries in the same response should be interpreted
353 # as being happens-after the exists=False entries due to the way that
353 # as being happens-after the exists=False entries due to the way that
354 # Watchman tracks files. We use this property to reconcile deletes
354 # Watchman tracks files. We use this property to reconcile deletes
355 # for name case changes.
355 # for name case changes.
356 for entry in result['files']:
356 for entry in result['files']:
357 fname = entry['name']
357 fname = entry['name']
358 if _fixencoding:
358 if _fixencoding:
359 fname = _watchmantofsencoding(fname)
359 fname = _watchmantofsencoding(fname)
360 if switch_slashes:
360 if switch_slashes:
361 fname = fname.replace('\\', '/')
361 fname = fname.replace('\\', '/')
362 if normalize:
362 if normalize:
363 normed = normcase(fname)
363 normed = normcase(fname)
364 fname = normalize(fname, True, True)
364 fname = normalize(fname, True, True)
365 foldmap[normed] = fname
365 foldmap[normed] = fname
366 fmode = entry['mode']
366 fmode = entry['mode']
367 fexists = entry['exists']
367 fexists = entry['exists']
368 kind = getkind(fmode)
368 kind = getkind(fmode)
369
369
370 if '/.hg/' in fname or fname.endswith('/.hg'):
370 if '/.hg/' in fname or fname.endswith('/.hg'):
371 return bail('nested-repo-detected')
371 return bail('nested-repo-detected')
372
372
373 if not fexists:
373 if not fexists:
374 # if marked as deleted and we don't already have a change
374 # if marked as deleted and we don't already have a change
375 # record, mark it as deleted. If we already have an entry
375 # record, mark it as deleted. If we already have an entry
376 # for fname then it was either part of walkexplicit or was
376 # for fname then it was either part of walkexplicit or was
377 # an earlier result that was a case change
377 # an earlier result that was a case change
378 if fname not in results and fname in dmap and (
378 if fname not in results and fname in dmap and (
379 matchalways or matchfn(fname)):
379 matchalways or matchfn(fname)):
380 results[fname] = None
380 results[fname] = None
381 elif kind == dirkind:
381 elif kind == dirkind:
382 if fname in dmap and (matchalways or matchfn(fname)):
382 if fname in dmap and (matchalways or matchfn(fname)):
383 results[fname] = None
383 results[fname] = None
384 elif kind == regkind or kind == lnkkind:
384 elif kind == regkind or kind == lnkkind:
385 if fname in dmap:
385 if fname in dmap:
386 if matchalways or matchfn(fname):
386 if matchalways or matchfn(fname):
387 results[fname] = entry
387 results[fname] = entry
388 elif (matchalways or matchfn(fname)) and not ignore(fname):
388 elif (matchalways or matchfn(fname)) and not ignore(fname):
389 results[fname] = entry
389 results[fname] = entry
390 elif fname in dmap and (matchalways or matchfn(fname)):
390 elif fname in dmap and (matchalways or matchfn(fname)):
391 results[fname] = None
391 results[fname] = None
392
392
393 # step 3: query notable files we don't already know about
393 # step 3: query notable files we don't already know about
394 # XXX try not to iterate over the entire dmap
394 # XXX try not to iterate over the entire dmap
395 if normalize:
395 if normalize:
396 # any notable files that have changed case will already be handled
396 # any notable files that have changed case will already be handled
397 # above, so just check membership in the foldmap
397 # above, so just check membership in the foldmap
398 notefiles = set((normalize(f, True, True) for f in notefiles
398 notefiles = set((normalize(f, True, True) for f in notefiles
399 if normcase(f) not in foldmap))
399 if normcase(f) not in foldmap))
400 visit = set((f for f in notefiles if (f not in results and matchfn(f)
400 visit = set((f for f in notefiles if (f not in results and matchfn(f)
401 and (f in dmap or not ignore(f)))))
401 and (f in dmap or not ignore(f)))))
402
402
403 if not fresh_instance:
403 if not fresh_instance:
404 if matchalways:
404 if matchalways:
405 visit.update(f for f in nonnormalset if f not in results)
405 visit.update(f for f in nonnormalset if f not in results)
406 visit.update(f for f in copymap if f not in results)
406 visit.update(f for f in copymap if f not in results)
407 else:
407 else:
408 visit.update(f for f in nonnormalset
408 visit.update(f for f in nonnormalset
409 if f not in results and matchfn(f))
409 if f not in results and matchfn(f))
410 visit.update(f for f in copymap
410 visit.update(f for f in copymap
411 if f not in results and matchfn(f))
411 if f not in results and matchfn(f))
412 else:
412 else:
413 if matchalways:
413 if matchalways:
414 visit.update(f for f, st in dmap.iteritems() if f not in results)
414 visit.update(f for f, st in dmap.iteritems() if f not in results)
415 visit.update(f for f in copymap if f not in results)
415 visit.update(f for f in copymap if f not in results)
416 else:
416 else:
417 visit.update(f for f, st in dmap.iteritems()
417 visit.update(f for f, st in dmap.iteritems()
418 if f not in results and matchfn(f))
418 if f not in results and matchfn(f))
419 visit.update(f for f in copymap
419 visit.update(f for f in copymap
420 if f not in results and matchfn(f))
420 if f not in results and matchfn(f))
421
421
422 audit = pathutil.pathauditor(self._root, cached=True).check
422 audit = pathutil.pathauditor(self._root, cached=True).check
423 auditpass = [f for f in visit if audit(f)]
423 auditpass = [f for f in visit if audit(f)]
424 auditpass.sort()
424 auditpass.sort()
425 auditfail = visit.difference(auditpass)
425 auditfail = visit.difference(auditpass)
426 for f in auditfail:
426 for f in auditfail:
427 results[f] = None
427 results[f] = None
428
428
429 nf = iter(auditpass).next
429 nf = iter(auditpass).next
430 for st in util.statfiles([join(f) for f in auditpass]):
430 for st in util.statfiles([join(f) for f in auditpass]):
431 f = nf()
431 f = nf()
432 if st or f in dmap:
432 if st or f in dmap:
433 results[f] = st
433 results[f] = st
434
434
435 for s in subrepos:
435 for s in subrepos:
436 del results[s]
436 del results[s]
437 del results['.hg']
437 del results['.hg']
438 return results
438 return results
439
439
440 def overridestatus(
440 def overridestatus(
441 orig, self, node1='.', node2=None, match=None, ignored=False,
441 orig, self, node1='.', node2=None, match=None, ignored=False,
442 clean=False, unknown=False, listsubrepos=False):
442 clean=False, unknown=False, listsubrepos=False):
443 listignored = ignored
443 listignored = ignored
444 listclean = clean
444 listclean = clean
445 listunknown = unknown
445 listunknown = unknown
446
446
447 def _cmpsets(l1, l2):
447 def _cmpsets(l1, l2):
448 try:
448 try:
449 if 'FSMONITOR_LOG_FILE' in encoding.environ:
449 if 'FSMONITOR_LOG_FILE' in encoding.environ:
450 fn = encoding.environ['FSMONITOR_LOG_FILE']
450 fn = encoding.environ['FSMONITOR_LOG_FILE']
451 f = open(fn, 'wb')
451 f = open(fn, 'wb')
452 else:
452 else:
453 fn = 'fsmonitorfail.log'
453 fn = 'fsmonitorfail.log'
454 f = self.vfs.open(fn, 'wb')
454 f = self.vfs.open(fn, 'wb')
455 except (IOError, OSError):
455 except (IOError, OSError):
456 self.ui.warn(_('warning: unable to write to %s\n') % fn)
456 self.ui.warn(_('warning: unable to write to %s\n') % fn)
457 return
457 return
458
458
459 try:
459 try:
460 for i, (s1, s2) in enumerate(zip(l1, l2)):
460 for i, (s1, s2) in enumerate(zip(l1, l2)):
461 if set(s1) != set(s2):
461 if set(s1) != set(s2):
462 f.write('sets at position %d are unequal\n' % i)
462 f.write('sets at position %d are unequal\n' % i)
463 f.write('watchman returned: %s\n' % s1)
463 f.write('watchman returned: %s\n' % s1)
464 f.write('stat returned: %s\n' % s2)
464 f.write('stat returned: %s\n' % s2)
465 finally:
465 finally:
466 f.close()
466 f.close()
467
467
468 if isinstance(node1, context.changectx):
468 if isinstance(node1, context.changectx):
469 ctx1 = node1
469 ctx1 = node1
470 else:
470 else:
471 ctx1 = self[node1]
471 ctx1 = self[node1]
472 if isinstance(node2, context.changectx):
472 if isinstance(node2, context.changectx):
473 ctx2 = node2
473 ctx2 = node2
474 else:
474 else:
475 ctx2 = self[node2]
475 ctx2 = self[node2]
476
476
477 working = ctx2.rev() is None
477 working = ctx2.rev() is None
478 parentworking = working and ctx1 == self['.']
478 parentworking = working and ctx1 == self['.']
479 match = match or matchmod.always(self.root, self.getcwd())
479 match = match or matchmod.always()
480
480
481 # Maybe we can use this opportunity to update Watchman's state.
481 # Maybe we can use this opportunity to update Watchman's state.
482 # Mercurial uses workingcommitctx and/or memctx to represent the part of
482 # Mercurial uses workingcommitctx and/or memctx to represent the part of
483 # the workingctx that is to be committed. So don't update the state in
483 # the workingctx that is to be committed. So don't update the state in
484 # that case.
484 # that case.
485 # HG_PENDING is set in the environment when the dirstate is being updated
485 # HG_PENDING is set in the environment when the dirstate is being updated
486 # in the middle of a transaction; we must not update our state in that
486 # in the middle of a transaction; we must not update our state in that
487 # case, or we risk forgetting about changes in the working copy.
487 # case, or we risk forgetting about changes in the working copy.
488 updatestate = (parentworking and match.always() and
488 updatestate = (parentworking and match.always() and
489 not isinstance(ctx2, (context.workingcommitctx,
489 not isinstance(ctx2, (context.workingcommitctx,
490 context.memctx)) and
490 context.memctx)) and
491 'HG_PENDING' not in encoding.environ)
491 'HG_PENDING' not in encoding.environ)
492
492
493 try:
493 try:
494 if self._fsmonitorstate.walk_on_invalidate:
494 if self._fsmonitorstate.walk_on_invalidate:
495 # Use a short timeout to query the current clock. If that
495 # Use a short timeout to query the current clock. If that
496 # takes too long then we assume that the service will be slow
496 # takes too long then we assume that the service will be slow
497 # to answer our query.
497 # to answer our query.
498 # walk_on_invalidate indicates that we prefer to walk the
498 # walk_on_invalidate indicates that we prefer to walk the
499 # tree ourselves because we can ignore portions that Watchman
499 # tree ourselves because we can ignore portions that Watchman
500 # cannot and we tend to be faster in the warmer buffer cache
500 # cannot and we tend to be faster in the warmer buffer cache
501 # cases.
501 # cases.
502 self._watchmanclient.settimeout(0.1)
502 self._watchmanclient.settimeout(0.1)
503 else:
503 else:
504 # Give Watchman more time to potentially complete its walk
504 # Give Watchman more time to potentially complete its walk
505 # and return the initial clock. In this mode we assume that
505 # and return the initial clock. In this mode we assume that
506 # the filesystem will be slower than parsing a potentially
506 # the filesystem will be slower than parsing a potentially
507 # very large Watchman result set.
507 # very large Watchman result set.
508 self._watchmanclient.settimeout(
508 self._watchmanclient.settimeout(
509 self._fsmonitorstate.timeout + 0.1)
509 self._fsmonitorstate.timeout + 0.1)
510 startclock = self._watchmanclient.getcurrentclock()
510 startclock = self._watchmanclient.getcurrentclock()
511 except Exception as ex:
511 except Exception as ex:
512 self._watchmanclient.clearconnection()
512 self._watchmanclient.clearconnection()
513 _handleunavailable(self.ui, self._fsmonitorstate, ex)
513 _handleunavailable(self.ui, self._fsmonitorstate, ex)
514 # boo, Watchman failed. bail
514 # boo, Watchman failed. bail
515 return orig(node1, node2, match, listignored, listclean,
515 return orig(node1, node2, match, listignored, listclean,
516 listunknown, listsubrepos)
516 listunknown, listsubrepos)
517
517
518 if updatestate:
518 if updatestate:
519 # We need info about unknown files. This may make things slower the
519 # We need info about unknown files. This may make things slower the
520 # first time, but whatever.
520 # first time, but whatever.
521 stateunknown = True
521 stateunknown = True
522 else:
522 else:
523 stateunknown = listunknown
523 stateunknown = listunknown
524
524
525 if updatestate:
525 if updatestate:
526 ps = poststatus(startclock)
526 ps = poststatus(startclock)
527 self.addpostdsstatus(ps)
527 self.addpostdsstatus(ps)
528
528
529 r = orig(node1, node2, match, listignored, listclean, stateunknown,
529 r = orig(node1, node2, match, listignored, listclean, stateunknown,
530 listsubrepos)
530 listsubrepos)
531 modified, added, removed, deleted, unknown, ignored, clean = r
531 modified, added, removed, deleted, unknown, ignored, clean = r
532
532
533 if not listunknown:
533 if not listunknown:
534 unknown = []
534 unknown = []
535
535
536 # don't do paranoid checks if we're not going to query Watchman anyway
536 # don't do paranoid checks if we're not going to query Watchman anyway
537 full = listclean or match.traversedir is not None
537 full = listclean or match.traversedir is not None
538 if self._fsmonitorstate.mode == 'paranoid' and not full:
538 if self._fsmonitorstate.mode == 'paranoid' and not full:
539 # run status again and fall back to the old walk this time
539 # run status again and fall back to the old walk this time
540 self.dirstate._fsmonitordisable = True
540 self.dirstate._fsmonitordisable = True
541
541
542 # shut the UI up
542 # shut the UI up
543 quiet = self.ui.quiet
543 quiet = self.ui.quiet
544 self.ui.quiet = True
544 self.ui.quiet = True
545 fout, ferr = self.ui.fout, self.ui.ferr
545 fout, ferr = self.ui.fout, self.ui.ferr
546 self.ui.fout = self.ui.ferr = open(os.devnull, 'wb')
546 self.ui.fout = self.ui.ferr = open(os.devnull, 'wb')
547
547
548 try:
548 try:
549 rv2 = orig(
549 rv2 = orig(
550 node1, node2, match, listignored, listclean, listunknown,
550 node1, node2, match, listignored, listclean, listunknown,
551 listsubrepos)
551 listsubrepos)
552 finally:
552 finally:
553 self.dirstate._fsmonitordisable = False
553 self.dirstate._fsmonitordisable = False
554 self.ui.quiet = quiet
554 self.ui.quiet = quiet
555 self.ui.fout, self.ui.ferr = fout, ferr
555 self.ui.fout, self.ui.ferr = fout, ferr
556
556
557 # clean isn't tested since it's set to True above
557 # clean isn't tested since it's set to True above
558 with self.wlock():
558 with self.wlock():
559 _cmpsets(
559 _cmpsets(
560 [modified, added, removed, deleted, unknown, ignored, clean],
560 [modified, added, removed, deleted, unknown, ignored, clean],
561 rv2)
561 rv2)
562 modified, added, removed, deleted, unknown, ignored, clean = rv2
562 modified, added, removed, deleted, unknown, ignored, clean = rv2
563
563
564 return scmutil.status(
564 return scmutil.status(
565 modified, added, removed, deleted, unknown, ignored, clean)
565 modified, added, removed, deleted, unknown, ignored, clean)
566
566
567 class poststatus(object):
567 class poststatus(object):
568 def __init__(self, startclock):
568 def __init__(self, startclock):
569 self._startclock = startclock
569 self._startclock = startclock
570
570
571 def __call__(self, wctx, status):
571 def __call__(self, wctx, status):
572 clock = wctx.repo()._fsmonitorstate.getlastclock() or self._startclock
572 clock = wctx.repo()._fsmonitorstate.getlastclock() or self._startclock
573 hashignore = _hashignore(wctx.repo().dirstate._ignore)
573 hashignore = _hashignore(wctx.repo().dirstate._ignore)
574 notefiles = (status.modified + status.added + status.removed +
574 notefiles = (status.modified + status.added + status.removed +
575 status.deleted + status.unknown)
575 status.deleted + status.unknown)
576 wctx.repo()._fsmonitorstate.set(clock, hashignore, notefiles)
576 wctx.repo()._fsmonitorstate.set(clock, hashignore, notefiles)
577
577
578 def makedirstate(repo, dirstate):
578 def makedirstate(repo, dirstate):
579 class fsmonitordirstate(dirstate.__class__):
579 class fsmonitordirstate(dirstate.__class__):
580 def _fsmonitorinit(self, repo):
580 def _fsmonitorinit(self, repo):
581 # _fsmonitordisable is used in paranoid mode
581 # _fsmonitordisable is used in paranoid mode
582 self._fsmonitordisable = False
582 self._fsmonitordisable = False
583 self._fsmonitorstate = repo._fsmonitorstate
583 self._fsmonitorstate = repo._fsmonitorstate
584 self._watchmanclient = repo._watchmanclient
584 self._watchmanclient = repo._watchmanclient
585 self._repo = weakref.proxy(repo)
585 self._repo = weakref.proxy(repo)
586
586
587 def walk(self, *args, **kwargs):
587 def walk(self, *args, **kwargs):
588 orig = super(fsmonitordirstate, self).walk
588 orig = super(fsmonitordirstate, self).walk
589 if self._fsmonitordisable:
589 if self._fsmonitordisable:
590 return orig(*args, **kwargs)
590 return orig(*args, **kwargs)
591 return overridewalk(orig, self, *args, **kwargs)
591 return overridewalk(orig, self, *args, **kwargs)
592
592
593 def rebuild(self, *args, **kwargs):
593 def rebuild(self, *args, **kwargs):
594 self._fsmonitorstate.invalidate()
594 self._fsmonitorstate.invalidate()
595 return super(fsmonitordirstate, self).rebuild(*args, **kwargs)
595 return super(fsmonitordirstate, self).rebuild(*args, **kwargs)
596
596
597 def invalidate(self, *args, **kwargs):
597 def invalidate(self, *args, **kwargs):
598 self._fsmonitorstate.invalidate()
598 self._fsmonitorstate.invalidate()
599 return super(fsmonitordirstate, self).invalidate(*args, **kwargs)
599 return super(fsmonitordirstate, self).invalidate(*args, **kwargs)
600
600
601 dirstate.__class__ = fsmonitordirstate
601 dirstate.__class__ = fsmonitordirstate
602 dirstate._fsmonitorinit(repo)
602 dirstate._fsmonitorinit(repo)
603
603
604 def wrapdirstate(orig, self):
604 def wrapdirstate(orig, self):
605 ds = orig(self)
605 ds = orig(self)
606 # only override the dirstate when Watchman is available for the repo
606 # only override the dirstate when Watchman is available for the repo
607 if util.safehasattr(self, '_fsmonitorstate'):
607 if util.safehasattr(self, '_fsmonitorstate'):
608 makedirstate(self, ds)
608 makedirstate(self, ds)
609 return ds
609 return ds
610
610
611 def extsetup(ui):
611 def extsetup(ui):
612 extensions.wrapfilecache(
612 extensions.wrapfilecache(
613 localrepo.localrepository, 'dirstate', wrapdirstate)
613 localrepo.localrepository, 'dirstate', wrapdirstate)
614 if pycompat.isdarwin:
614 if pycompat.isdarwin:
615 # An assist for avoiding the dangling-symlink fsevents bug
615 # An assist for avoiding the dangling-symlink fsevents bug
616 extensions.wrapfunction(os, 'symlink', wrapsymlink)
616 extensions.wrapfunction(os, 'symlink', wrapsymlink)
617
617
618 extensions.wrapfunction(merge, 'update', wrapupdate)
618 extensions.wrapfunction(merge, 'update', wrapupdate)
619
619
620 def wrapsymlink(orig, source, link_name):
620 def wrapsymlink(orig, source, link_name):
621 ''' if we create a dangling symlink, also touch the parent dir
621 ''' if we create a dangling symlink, also touch the parent dir
622 to encourage fsevents notifications to work more correctly '''
622 to encourage fsevents notifications to work more correctly '''
623 try:
623 try:
624 return orig(source, link_name)
624 return orig(source, link_name)
625 finally:
625 finally:
626 try:
626 try:
627 os.utime(os.path.dirname(link_name), None)
627 os.utime(os.path.dirname(link_name), None)
628 except OSError:
628 except OSError:
629 pass
629 pass
630
630
631 class state_update(object):
631 class state_update(object):
632 ''' This context manager is responsible for dispatching the state-enter
632 ''' This context manager is responsible for dispatching the state-enter
633 and state-leave signals to the watchman service. The enter and leave
633 and state-leave signals to the watchman service. The enter and leave
634 methods can be invoked manually (for scenarios where context manager
634 methods can be invoked manually (for scenarios where context manager
635 semantics are not possible). If parameters oldnode and newnode are None,
635 semantics are not possible). If parameters oldnode and newnode are None,
636 they will be populated based on current working copy in enter and
636 they will be populated based on current working copy in enter and
637 leave, respectively. Similarly, if the distance is none, it will be
637 leave, respectively. Similarly, if the distance is none, it will be
638 calculated based on the oldnode and newnode in the leave method.'''
638 calculated based on the oldnode and newnode in the leave method.'''
639
639
640 def __init__(self, repo, name, oldnode=None, newnode=None, distance=None,
640 def __init__(self, repo, name, oldnode=None, newnode=None, distance=None,
641 partial=False):
641 partial=False):
642 self.repo = repo.unfiltered()
642 self.repo = repo.unfiltered()
643 self.name = name
643 self.name = name
644 self.oldnode = oldnode
644 self.oldnode = oldnode
645 self.newnode = newnode
645 self.newnode = newnode
646 self.distance = distance
646 self.distance = distance
647 self.partial = partial
647 self.partial = partial
648 self._lock = None
648 self._lock = None
649 self.need_leave = False
649 self.need_leave = False
650
650
651 def __enter__(self):
651 def __enter__(self):
652 self.enter()
652 self.enter()
653
653
654 def enter(self):
654 def enter(self):
655 # Make sure we have a wlock prior to sending notifications to watchman.
655 # Make sure we have a wlock prior to sending notifications to watchman.
656 # We don't want to race with other actors. In the update case,
656 # We don't want to race with other actors. In the update case,
657 # merge.update is going to take the wlock almost immediately. We are
657 # merge.update is going to take the wlock almost immediately. We are
658 # effectively extending the lock around several short sanity checks.
658 # effectively extending the lock around several short sanity checks.
659 if self.oldnode is None:
659 if self.oldnode is None:
660 self.oldnode = self.repo['.'].node()
660 self.oldnode = self.repo['.'].node()
661
661
662 if self.repo.currentwlock() is None:
662 if self.repo.currentwlock() is None:
663 if util.safehasattr(self.repo, 'wlocknostateupdate'):
663 if util.safehasattr(self.repo, 'wlocknostateupdate'):
664 self._lock = self.repo.wlocknostateupdate()
664 self._lock = self.repo.wlocknostateupdate()
665 else:
665 else:
666 self._lock = self.repo.wlock()
666 self._lock = self.repo.wlock()
667 self.need_leave = self._state(
667 self.need_leave = self._state(
668 'state-enter',
668 'state-enter',
669 hex(self.oldnode))
669 hex(self.oldnode))
670 return self
670 return self
671
671
672 def __exit__(self, type_, value, tb):
672 def __exit__(self, type_, value, tb):
673 abort = True if type_ else False
673 abort = True if type_ else False
674 self.exit(abort=abort)
674 self.exit(abort=abort)
675
675
676 def exit(self, abort=False):
676 def exit(self, abort=False):
677 try:
677 try:
678 if self.need_leave:
678 if self.need_leave:
679 status = 'failed' if abort else 'ok'
679 status = 'failed' if abort else 'ok'
680 if self.newnode is None:
680 if self.newnode is None:
681 self.newnode = self.repo['.'].node()
681 self.newnode = self.repo['.'].node()
682 if self.distance is None:
682 if self.distance is None:
683 self.distance = calcdistance(
683 self.distance = calcdistance(
684 self.repo, self.oldnode, self.newnode)
684 self.repo, self.oldnode, self.newnode)
685 self._state(
685 self._state(
686 'state-leave',
686 'state-leave',
687 hex(self.newnode),
687 hex(self.newnode),
688 status=status)
688 status=status)
689 finally:
689 finally:
690 self.need_leave = False
690 self.need_leave = False
691 if self._lock:
691 if self._lock:
692 self._lock.release()
692 self._lock.release()
693
693
694 def _state(self, cmd, commithash, status='ok'):
694 def _state(self, cmd, commithash, status='ok'):
695 if not util.safehasattr(self.repo, '_watchmanclient'):
695 if not util.safehasattr(self.repo, '_watchmanclient'):
696 return False
696 return False
697 try:
697 try:
698 self.repo._watchmanclient.command(cmd, {
698 self.repo._watchmanclient.command(cmd, {
699 'name': self.name,
699 'name': self.name,
700 'metadata': {
700 'metadata': {
701 # the target revision
701 # the target revision
702 'rev': commithash,
702 'rev': commithash,
703 # approximate number of commits between current and target
703 # approximate number of commits between current and target
704 'distance': self.distance if self.distance else 0,
704 'distance': self.distance if self.distance else 0,
705 # success/failure (only really meaningful for state-leave)
705 # success/failure (only really meaningful for state-leave)
706 'status': status,
706 'status': status,
707 # whether the working copy parent is changing
707 # whether the working copy parent is changing
708 'partial': self.partial,
708 'partial': self.partial,
709 }})
709 }})
710 return True
710 return True
711 except Exception as e:
711 except Exception as e:
712 # Swallow any errors; fire and forget
712 # Swallow any errors; fire and forget
713 self.repo.ui.log(
713 self.repo.ui.log(
714 'watchman', 'Exception %s while running %s\n', e, cmd)
714 'watchman', 'Exception %s while running %s\n', e, cmd)
715 return False
715 return False
716
716
717 # Estimate the distance between two nodes
717 # Estimate the distance between two nodes
718 def calcdistance(repo, oldnode, newnode):
718 def calcdistance(repo, oldnode, newnode):
719 anc = repo.changelog.ancestor(oldnode, newnode)
719 anc = repo.changelog.ancestor(oldnode, newnode)
720 ancrev = repo[anc].rev()
720 ancrev = repo[anc].rev()
721 distance = (abs(repo[oldnode].rev() - ancrev)
721 distance = (abs(repo[oldnode].rev() - ancrev)
722 + abs(repo[newnode].rev() - ancrev))
722 + abs(repo[newnode].rev() - ancrev))
723 return distance
723 return distance
724
724
725 # Bracket working copy updates with calls to the watchman state-enter
725 # Bracket working copy updates with calls to the watchman state-enter
726 # and state-leave commands. This allows clients to perform more intelligent
726 # and state-leave commands. This allows clients to perform more intelligent
727 # settling during bulk file change scenarios
727 # settling during bulk file change scenarios
728 # https://facebook.github.io/watchman/docs/cmd/subscribe.html#advanced-settling
728 # https://facebook.github.io/watchman/docs/cmd/subscribe.html#advanced-settling
729 def wrapupdate(orig, repo, node, branchmerge, force, ancestor=None,
729 def wrapupdate(orig, repo, node, branchmerge, force, ancestor=None,
730 mergeancestor=False, labels=None, matcher=None, **kwargs):
730 mergeancestor=False, labels=None, matcher=None, **kwargs):
731
731
732 distance = 0
732 distance = 0
733 partial = True
733 partial = True
734 oldnode = repo['.'].node()
734 oldnode = repo['.'].node()
735 newnode = repo[node].node()
735 newnode = repo[node].node()
736 if matcher is None or matcher.always():
736 if matcher is None or matcher.always():
737 partial = False
737 partial = False
738 distance = calcdistance(repo.unfiltered(), oldnode, newnode)
738 distance = calcdistance(repo.unfiltered(), oldnode, newnode)
739
739
740 with state_update(repo, name="hg.update", oldnode=oldnode, newnode=newnode,
740 with state_update(repo, name="hg.update", oldnode=oldnode, newnode=newnode,
741 distance=distance, partial=partial):
741 distance=distance, partial=partial):
742 return orig(
742 return orig(
743 repo, node, branchmerge, force, ancestor, mergeancestor,
743 repo, node, branchmerge, force, ancestor, mergeancestor,
744 labels, matcher, **kwargs)
744 labels, matcher, **kwargs)
745
745
746 def repo_has_depth_one_nested_repo(repo):
746 def repo_has_depth_one_nested_repo(repo):
747 for f in repo.wvfs.listdir():
747 for f in repo.wvfs.listdir():
748 if os.path.isdir(os.path.join(repo.root, f, '.hg')):
748 if os.path.isdir(os.path.join(repo.root, f, '.hg')):
749 msg = 'fsmonitor: sub-repository %r detected, fsmonitor disabled\n'
749 msg = 'fsmonitor: sub-repository %r detected, fsmonitor disabled\n'
750 repo.ui.debug(msg % f)
750 repo.ui.debug(msg % f)
751 return True
751 return True
752 return False
752 return False
753
753
754 def reposetup(ui, repo):
754 def reposetup(ui, repo):
755 # We don't work with largefiles or inotify
755 # We don't work with largefiles or inotify
756 exts = extensions.enabled()
756 exts = extensions.enabled()
757 for ext in _blacklist:
757 for ext in _blacklist:
758 if ext in exts:
758 if ext in exts:
759 ui.warn(_('The fsmonitor extension is incompatible with the %s '
759 ui.warn(_('The fsmonitor extension is incompatible with the %s '
760 'extension and has been disabled.\n') % ext)
760 'extension and has been disabled.\n') % ext)
761 return
761 return
762
762
763 if repo.local():
763 if repo.local():
764 # We don't work with subrepos either.
764 # We don't work with subrepos either.
765 #
765 #
766 # if repo[None].substate can cause a dirstate parse, which is too
766 # if repo[None].substate can cause a dirstate parse, which is too
767 # slow. Instead, look for a file called hgsubstate,
767 # slow. Instead, look for a file called hgsubstate,
768 if repo.wvfs.exists('.hgsubstate') or repo.wvfs.exists('.hgsub'):
768 if repo.wvfs.exists('.hgsubstate') or repo.wvfs.exists('.hgsub'):
769 return
769 return
770
770
771 if repo_has_depth_one_nested_repo(repo):
771 if repo_has_depth_one_nested_repo(repo):
772 return
772 return
773
773
774 fsmonitorstate = state.state(repo)
774 fsmonitorstate = state.state(repo)
775 if fsmonitorstate.mode == 'off':
775 if fsmonitorstate.mode == 'off':
776 return
776 return
777
777
778 try:
778 try:
779 client = watchmanclient.client(repo)
779 client = watchmanclient.client(repo)
780 except Exception as ex:
780 except Exception as ex:
781 _handleunavailable(ui, fsmonitorstate, ex)
781 _handleunavailable(ui, fsmonitorstate, ex)
782 return
782 return
783
783
784 repo._fsmonitorstate = fsmonitorstate
784 repo._fsmonitorstate = fsmonitorstate
785 repo._watchmanclient = client
785 repo._watchmanclient = client
786
786
787 dirstate, cached = localrepo.isfilecached(repo, 'dirstate')
787 dirstate, cached = localrepo.isfilecached(repo, 'dirstate')
788 if cached:
788 if cached:
789 # at this point since fsmonitorstate wasn't present,
789 # at this point since fsmonitorstate wasn't present,
790 # repo.dirstate is not a fsmonitordirstate
790 # repo.dirstate is not a fsmonitordirstate
791 makedirstate(repo, dirstate)
791 makedirstate(repo, dirstate)
792
792
793 class fsmonitorrepo(repo.__class__):
793 class fsmonitorrepo(repo.__class__):
794 def status(self, *args, **kwargs):
794 def status(self, *args, **kwargs):
795 orig = super(fsmonitorrepo, self).status
795 orig = super(fsmonitorrepo, self).status
796 return overridestatus(orig, self, *args, **kwargs)
796 return overridestatus(orig, self, *args, **kwargs)
797
797
798 def wlocknostateupdate(self, *args, **kwargs):
798 def wlocknostateupdate(self, *args, **kwargs):
799 return super(fsmonitorrepo, self).wlock(*args, **kwargs)
799 return super(fsmonitorrepo, self).wlock(*args, **kwargs)
800
800
801 def wlock(self, *args, **kwargs):
801 def wlock(self, *args, **kwargs):
802 l = super(fsmonitorrepo, self).wlock(*args, **kwargs)
802 l = super(fsmonitorrepo, self).wlock(*args, **kwargs)
803 if not ui.configbool(
803 if not ui.configbool(
804 "experimental", "fsmonitor.transaction_notify"):
804 "experimental", "fsmonitor.transaction_notify"):
805 return l
805 return l
806 if l.held != 1:
806 if l.held != 1:
807 return l
807 return l
808 origrelease = l.releasefn
808 origrelease = l.releasefn
809
809
810 def staterelease():
810 def staterelease():
811 if origrelease:
811 if origrelease:
812 origrelease()
812 origrelease()
813 if l.stateupdate:
813 if l.stateupdate:
814 l.stateupdate.exit()
814 l.stateupdate.exit()
815 l.stateupdate = None
815 l.stateupdate = None
816
816
817 try:
817 try:
818 l.stateupdate = None
818 l.stateupdate = None
819 l.stateupdate = state_update(self, name="hg.transaction")
819 l.stateupdate = state_update(self, name="hg.transaction")
820 l.stateupdate.enter()
820 l.stateupdate.enter()
821 l.releasefn = staterelease
821 l.releasefn = staterelease
822 except Exception as e:
822 except Exception as e:
823 # Swallow any errors; fire and forget
823 # Swallow any errors; fire and forget
824 self.ui.log(
824 self.ui.log(
825 'watchman', 'Exception in state update %s\n', e)
825 'watchman', 'Exception in state update %s\n', e)
826 return l
826 return l
827
827
828 repo.__class__ = fsmonitorrepo
828 repo.__class__ = fsmonitorrepo
@@ -1,341 +1,341 b''
1 # Copyright 2005, 2006 Benoit Boissinot <benoit.boissinot@ens-lyon.org>
1 # Copyright 2005, 2006 Benoit Boissinot <benoit.boissinot@ens-lyon.org>
2 #
2 #
3 # This software may be used and distributed according to the terms of the
3 # This software may be used and distributed according to the terms of the
4 # GNU General Public License version 2 or any later version.
4 # GNU General Public License version 2 or any later version.
5
5
6 '''commands to sign and verify changesets'''
6 '''commands to sign and verify changesets'''
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import binascii
10 import binascii
11 import os
11 import os
12
12
13 from mercurial.i18n import _
13 from mercurial.i18n import _
14 from mercurial import (
14 from mercurial import (
15 cmdutil,
15 cmdutil,
16 error,
16 error,
17 help,
17 help,
18 match,
18 match,
19 node as hgnode,
19 node as hgnode,
20 pycompat,
20 pycompat,
21 registrar,
21 registrar,
22 )
22 )
23 from mercurial.utils import (
23 from mercurial.utils import (
24 dateutil,
24 dateutil,
25 procutil,
25 procutil,
26 )
26 )
27
27
28 cmdtable = {}
28 cmdtable = {}
29 command = registrar.command(cmdtable)
29 command = registrar.command(cmdtable)
30 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
30 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
31 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
31 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
32 # be specifying the version(s) of Mercurial they are tested with, or
32 # be specifying the version(s) of Mercurial they are tested with, or
33 # leave the attribute unspecified.
33 # leave the attribute unspecified.
34 testedwith = 'ships-with-hg-core'
34 testedwith = 'ships-with-hg-core'
35
35
36 configtable = {}
36 configtable = {}
37 configitem = registrar.configitem(configtable)
37 configitem = registrar.configitem(configtable)
38
38
39 configitem('gpg', 'cmd',
39 configitem('gpg', 'cmd',
40 default='gpg',
40 default='gpg',
41 )
41 )
42 configitem('gpg', 'key',
42 configitem('gpg', 'key',
43 default=None,
43 default=None,
44 )
44 )
45 configitem('gpg', '.*',
45 configitem('gpg', '.*',
46 default=None,
46 default=None,
47 generic=True,
47 generic=True,
48 )
48 )
49
49
50 # Custom help category
50 # Custom help category
51 _HELP_CATEGORY = 'gpg'
51 _HELP_CATEGORY = 'gpg'
52
52
53 class gpg(object):
53 class gpg(object):
54 def __init__(self, path, key=None):
54 def __init__(self, path, key=None):
55 self.path = path
55 self.path = path
56 self.key = (key and " --local-user \"%s\"" % key) or ""
56 self.key = (key and " --local-user \"%s\"" % key) or ""
57
57
58 def sign(self, data):
58 def sign(self, data):
59 gpgcmd = "%s --sign --detach-sign%s" % (self.path, self.key)
59 gpgcmd = "%s --sign --detach-sign%s" % (self.path, self.key)
60 return procutil.filter(data, gpgcmd)
60 return procutil.filter(data, gpgcmd)
61
61
62 def verify(self, data, sig):
62 def verify(self, data, sig):
63 """ returns of the good and bad signatures"""
63 """ returns of the good and bad signatures"""
64 sigfile = datafile = None
64 sigfile = datafile = None
65 try:
65 try:
66 # create temporary files
66 # create temporary files
67 fd, sigfile = pycompat.mkstemp(prefix="hg-gpg-", suffix=".sig")
67 fd, sigfile = pycompat.mkstemp(prefix="hg-gpg-", suffix=".sig")
68 fp = os.fdopen(fd, r'wb')
68 fp = os.fdopen(fd, r'wb')
69 fp.write(sig)
69 fp.write(sig)
70 fp.close()
70 fp.close()
71 fd, datafile = pycompat.mkstemp(prefix="hg-gpg-", suffix=".txt")
71 fd, datafile = pycompat.mkstemp(prefix="hg-gpg-", suffix=".txt")
72 fp = os.fdopen(fd, r'wb')
72 fp = os.fdopen(fd, r'wb')
73 fp.write(data)
73 fp.write(data)
74 fp.close()
74 fp.close()
75 gpgcmd = ("%s --logger-fd 1 --status-fd 1 --verify "
75 gpgcmd = ("%s --logger-fd 1 --status-fd 1 --verify "
76 "\"%s\" \"%s\"" % (self.path, sigfile, datafile))
76 "\"%s\" \"%s\"" % (self.path, sigfile, datafile))
77 ret = procutil.filter("", gpgcmd)
77 ret = procutil.filter("", gpgcmd)
78 finally:
78 finally:
79 for f in (sigfile, datafile):
79 for f in (sigfile, datafile):
80 try:
80 try:
81 if f:
81 if f:
82 os.unlink(f)
82 os.unlink(f)
83 except OSError:
83 except OSError:
84 pass
84 pass
85 keys = []
85 keys = []
86 key, fingerprint = None, None
86 key, fingerprint = None, None
87 for l in ret.splitlines():
87 for l in ret.splitlines():
88 # see DETAILS in the gnupg documentation
88 # see DETAILS in the gnupg documentation
89 # filter the logger output
89 # filter the logger output
90 if not l.startswith("[GNUPG:]"):
90 if not l.startswith("[GNUPG:]"):
91 continue
91 continue
92 l = l[9:]
92 l = l[9:]
93 if l.startswith("VALIDSIG"):
93 if l.startswith("VALIDSIG"):
94 # fingerprint of the primary key
94 # fingerprint of the primary key
95 fingerprint = l.split()[10]
95 fingerprint = l.split()[10]
96 elif l.startswith("ERRSIG"):
96 elif l.startswith("ERRSIG"):
97 key = l.split(" ", 3)[:2]
97 key = l.split(" ", 3)[:2]
98 key.append("")
98 key.append("")
99 fingerprint = None
99 fingerprint = None
100 elif (l.startswith("GOODSIG") or
100 elif (l.startswith("GOODSIG") or
101 l.startswith("EXPSIG") or
101 l.startswith("EXPSIG") or
102 l.startswith("EXPKEYSIG") or
102 l.startswith("EXPKEYSIG") or
103 l.startswith("BADSIG")):
103 l.startswith("BADSIG")):
104 if key is not None:
104 if key is not None:
105 keys.append(key + [fingerprint])
105 keys.append(key + [fingerprint])
106 key = l.split(" ", 2)
106 key = l.split(" ", 2)
107 fingerprint = None
107 fingerprint = None
108 if key is not None:
108 if key is not None:
109 keys.append(key + [fingerprint])
109 keys.append(key + [fingerprint])
110 return keys
110 return keys
111
111
112 def newgpg(ui, **opts):
112 def newgpg(ui, **opts):
113 """create a new gpg instance"""
113 """create a new gpg instance"""
114 gpgpath = ui.config("gpg", "cmd")
114 gpgpath = ui.config("gpg", "cmd")
115 gpgkey = opts.get(r'key')
115 gpgkey = opts.get(r'key')
116 if not gpgkey:
116 if not gpgkey:
117 gpgkey = ui.config("gpg", "key")
117 gpgkey = ui.config("gpg", "key")
118 return gpg(gpgpath, gpgkey)
118 return gpg(gpgpath, gpgkey)
119
119
120 def sigwalk(repo):
120 def sigwalk(repo):
121 """
121 """
122 walk over every sig, yielding a pair
122 walk over every sig, yielding a pair
123 ((node, version, sig), (filename, linenumber))
123 ((node, version, sig), (filename, linenumber))
124 """
124 """
125 def parsefile(fileiter, context):
125 def parsefile(fileiter, context):
126 ln = 1
126 ln = 1
127 for l in fileiter:
127 for l in fileiter:
128 if not l:
128 if not l:
129 continue
129 continue
130 yield (l.split(" ", 2), (context, ln))
130 yield (l.split(" ", 2), (context, ln))
131 ln += 1
131 ln += 1
132
132
133 # read the heads
133 # read the heads
134 fl = repo.file(".hgsigs")
134 fl = repo.file(".hgsigs")
135 for r in reversed(fl.heads()):
135 for r in reversed(fl.heads()):
136 fn = ".hgsigs|%s" % hgnode.short(r)
136 fn = ".hgsigs|%s" % hgnode.short(r)
137 for item in parsefile(fl.read(r).splitlines(), fn):
137 for item in parsefile(fl.read(r).splitlines(), fn):
138 yield item
138 yield item
139 try:
139 try:
140 # read local signatures
140 # read local signatures
141 fn = "localsigs"
141 fn = "localsigs"
142 for item in parsefile(repo.vfs(fn), fn):
142 for item in parsefile(repo.vfs(fn), fn):
143 yield item
143 yield item
144 except IOError:
144 except IOError:
145 pass
145 pass
146
146
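sigwalk() reads one signature per non-empty line, first from every head of the committed .hgsigs filelog and then from the uncommitted localsigs file. A hedged illustration of the line format it parses; the node and signature below are shortened placeholders, not real values:

    # each line is "<node hex> <version> <base64 signature>", as written by sign()
    line = "e63c23eaa88a 0 iQEcBAABAgAG..."
    sigdata = line.split(" ", 2)
    # -> ['e63c23eaa88a', '0', 'iQEcBAABAgAG...'], which sigwalk yields together
    #    with a (filename, linenumber) context such as ('.hgsigs|e63c23eaa88a', 1)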
147 def getkeys(ui, repo, mygpg, sigdata, context):
147 def getkeys(ui, repo, mygpg, sigdata, context):
148 """get the keys who signed a data"""
148 """get the keys who signed a data"""
149 fn, ln = context
149 fn, ln = context
150 node, version, sig = sigdata
150 node, version, sig = sigdata
151 prefix = "%s:%d" % (fn, ln)
151 prefix = "%s:%d" % (fn, ln)
152 node = hgnode.bin(node)
152 node = hgnode.bin(node)
153
153
154 data = node2txt(repo, node, version)
154 data = node2txt(repo, node, version)
155 sig = binascii.a2b_base64(sig)
155 sig = binascii.a2b_base64(sig)
156 keys = mygpg.verify(data, sig)
156 keys = mygpg.verify(data, sig)
157
157
158 validkeys = []
158 validkeys = []
159 # warn for expired key and/or sigs
159 # warn for expired key and/or sigs
160 for key in keys:
160 for key in keys:
161 if key[0] == "ERRSIG":
161 if key[0] == "ERRSIG":
162 ui.write(_("%s Unknown key ID \"%s\"\n") % (prefix, key[1]))
162 ui.write(_("%s Unknown key ID \"%s\"\n") % (prefix, key[1]))
163 continue
163 continue
164 if key[0] == "BADSIG":
164 if key[0] == "BADSIG":
165 ui.write(_("%s Bad signature from \"%s\"\n") % (prefix, key[2]))
165 ui.write(_("%s Bad signature from \"%s\"\n") % (prefix, key[2]))
166 continue
166 continue
167 if key[0] == "EXPSIG":
167 if key[0] == "EXPSIG":
168 ui.write(_("%s Note: Signature has expired"
168 ui.write(_("%s Note: Signature has expired"
169 " (signed by: \"%s\")\n") % (prefix, key[2]))
169 " (signed by: \"%s\")\n") % (prefix, key[2]))
170 elif key[0] == "EXPKEYSIG":
170 elif key[0] == "EXPKEYSIG":
171 ui.write(_("%s Note: This key has expired"
171 ui.write(_("%s Note: This key has expired"
172 " (signed by: \"%s\")\n") % (prefix, key[2]))
172 " (signed by: \"%s\")\n") % (prefix, key[2]))
173 validkeys.append((key[1], key[2], key[3]))
173 validkeys.append((key[1], key[2], key[3]))
174 return validkeys
174 return validkeys
175
175
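verify() condenses GnuPG's --status-fd output into entries of the form [sigtype, keyid, user, fingerprint], and getkeys() keeps the usable ones as (keyid, user, fingerprint) tuples while warning about bad, unknown, or expired signatures. A hedged illustration with invented values:

    # a "[GNUPG:] GOODSIG ..." line paired with the fingerprint from VALIDSIG
    key = ["GOODSIG", "1234567890ABCDEF", "Alice <alice@example.com>",
           "0123456789ABCDEF0123456789ABCDEF01234567"]
    validkey = (key[1], key[2], key[3])   # what getkeys() appends to validkeys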
176 @command("sigs", [], _('hg sigs'), helpcategory=_HELP_CATEGORY)
176 @command("sigs", [], _('hg sigs'), helpcategory=_HELP_CATEGORY)
177 def sigs(ui, repo):
177 def sigs(ui, repo):
178 """list signed changesets"""
178 """list signed changesets"""
179 mygpg = newgpg(ui)
179 mygpg = newgpg(ui)
180 revs = {}
180 revs = {}
181
181
182 for data, context in sigwalk(repo):
182 for data, context in sigwalk(repo):
183 node, version, sig = data
183 node, version, sig = data
184 fn, ln = context
184 fn, ln = context
185 try:
185 try:
186 n = repo.lookup(node)
186 n = repo.lookup(node)
187 except KeyError:
187 except KeyError:
188 ui.warn(_("%s:%d node does not exist\n") % (fn, ln))
188 ui.warn(_("%s:%d node does not exist\n") % (fn, ln))
189 continue
189 continue
190 r = repo.changelog.rev(n)
190 r = repo.changelog.rev(n)
191 keys = getkeys(ui, repo, mygpg, data, context)
191 keys = getkeys(ui, repo, mygpg, data, context)
192 if not keys:
192 if not keys:
193 continue
193 continue
194 revs.setdefault(r, [])
194 revs.setdefault(r, [])
195 revs[r].extend(keys)
195 revs[r].extend(keys)
196 for rev in sorted(revs, reverse=True):
196 for rev in sorted(revs, reverse=True):
197 for k in revs[rev]:
197 for k in revs[rev]:
198 r = "%5d:%s" % (rev, hgnode.hex(repo.changelog.node(rev)))
198 r = "%5d:%s" % (rev, hgnode.hex(repo.changelog.node(rev)))
199 ui.write("%-30s %s\n" % (keystr(ui, k), r))
199 ui.write("%-30s %s\n" % (keystr(ui, k), r))
200
200
201 @command("sigcheck", [], _('hg sigcheck REV'), helpcategory=_HELP_CATEGORY)
201 @command("sigcheck", [], _('hg sigcheck REV'), helpcategory=_HELP_CATEGORY)
202 def sigcheck(ui, repo, rev):
202 def sigcheck(ui, repo, rev):
203 """verify all the signatures there may be for a particular revision"""
203 """verify all the signatures there may be for a particular revision"""
204 mygpg = newgpg(ui)
204 mygpg = newgpg(ui)
205 rev = repo.lookup(rev)
205 rev = repo.lookup(rev)
206 hexrev = hgnode.hex(rev)
206 hexrev = hgnode.hex(rev)
207 keys = []
207 keys = []
208
208
209 for data, context in sigwalk(repo):
209 for data, context in sigwalk(repo):
210 node, version, sig = data
210 node, version, sig = data
211 if node == hexrev:
211 if node == hexrev:
212 k = getkeys(ui, repo, mygpg, data, context)
212 k = getkeys(ui, repo, mygpg, data, context)
213 if k:
213 if k:
214 keys.extend(k)
214 keys.extend(k)
215
215
216 if not keys:
216 if not keys:
217 ui.write(_("no valid signature for %s\n") % hgnode.short(rev))
217 ui.write(_("no valid signature for %s\n") % hgnode.short(rev))
218 return
218 return
219
219
220 # print summary
220 # print summary
221 ui.write(_("%s is signed by:\n") % hgnode.short(rev))
221 ui.write(_("%s is signed by:\n") % hgnode.short(rev))
222 for key in keys:
222 for key in keys:
223 ui.write(" %s\n" % keystr(ui, key))
223 ui.write(" %s\n" % keystr(ui, key))
224
224
225 def keystr(ui, key):
225 def keystr(ui, key):
226 """associate a string to a key (username, comment)"""
226 """associate a string to a key (username, comment)"""
227 keyid, user, fingerprint = key
227 keyid, user, fingerprint = key
228 comment = ui.config("gpg", fingerprint)
228 comment = ui.config("gpg", fingerprint)
229 if comment:
229 if comment:
230 return "%s (%s)" % (user, comment)
230 return "%s (%s)" % (user, comment)
231 else:
231 else:
232 return user
232 return user
233
233
234 @command("sign",
234 @command("sign",
235 [('l', 'local', None, _('make the signature local')),
235 [('l', 'local', None, _('make the signature local')),
236 ('f', 'force', None, _('sign even if the sigfile is modified')),
236 ('f', 'force', None, _('sign even if the sigfile is modified')),
237 ('', 'no-commit', None, _('do not commit the sigfile after signing')),
237 ('', 'no-commit', None, _('do not commit the sigfile after signing')),
238 ('k', 'key', '',
238 ('k', 'key', '',
239 _('the key id to sign with'), _('ID')),
239 _('the key id to sign with'), _('ID')),
240 ('m', 'message', '',
240 ('m', 'message', '',
241 _('use text as commit message'), _('TEXT')),
241 _('use text as commit message'), _('TEXT')),
242 ('e', 'edit', False, _('invoke editor on commit messages')),
242 ('e', 'edit', False, _('invoke editor on commit messages')),
243 ] + cmdutil.commitopts2,
243 ] + cmdutil.commitopts2,
244 _('hg sign [OPTION]... [REV]...'),
244 _('hg sign [OPTION]... [REV]...'),
245 helpcategory=_HELP_CATEGORY)
245 helpcategory=_HELP_CATEGORY)
246 def sign(ui, repo, *revs, **opts):
246 def sign(ui, repo, *revs, **opts):
247 """add a signature for the current or given revision
247 """add a signature for the current or given revision
248
248
249 If no revision is given, the parent of the working directory is used,
249 If no revision is given, the parent of the working directory is used,
250 or tip if no revision is checked out.
250 or tip if no revision is checked out.
251
251
252 The ``gpg.cmd`` config setting can be used to specify the command
252 The ``gpg.cmd`` config setting can be used to specify the command
253 to run. A default key can be specified with ``gpg.key``.
253 to run. A default key can be specified with ``gpg.key``.
254
254
255 See :hg:`help dates` for a list of formats valid for -d/--date.
255 See :hg:`help dates` for a list of formats valid for -d/--date.
256 """
256 """
257 with repo.wlock():
257 with repo.wlock():
258 return _dosign(ui, repo, *revs, **opts)
258 return _dosign(ui, repo, *revs, **opts)
259
259
260 def _dosign(ui, repo, *revs, **opts):
260 def _dosign(ui, repo, *revs, **opts):
261 mygpg = newgpg(ui, **opts)
261 mygpg = newgpg(ui, **opts)
262 opts = pycompat.byteskwargs(opts)
262 opts = pycompat.byteskwargs(opts)
263 sigver = "0"
263 sigver = "0"
264 sigmessage = ""
264 sigmessage = ""
265
265
266 date = opts.get('date')
266 date = opts.get('date')
267 if date:
267 if date:
268 opts['date'] = dateutil.parsedate(date)
268 opts['date'] = dateutil.parsedate(date)
269
269
270 if revs:
270 if revs:
271 nodes = [repo.lookup(n) for n in revs]
271 nodes = [repo.lookup(n) for n in revs]
272 else:
272 else:
273 nodes = [node for node in repo.dirstate.parents()
273 nodes = [node for node in repo.dirstate.parents()
274 if node != hgnode.nullid]
274 if node != hgnode.nullid]
275 if len(nodes) > 1:
275 if len(nodes) > 1:
276 raise error.Abort(_('uncommitted merge - please provide a '
276 raise error.Abort(_('uncommitted merge - please provide a '
277 'specific revision'))
277 'specific revision'))
278 if not nodes:
278 if not nodes:
279 nodes = [repo.changelog.tip()]
279 nodes = [repo.changelog.tip()]
280
280
281 for n in nodes:
281 for n in nodes:
282 hexnode = hgnode.hex(n)
282 hexnode = hgnode.hex(n)
283 ui.write(_("signing %d:%s\n") % (repo.changelog.rev(n),
283 ui.write(_("signing %d:%s\n") % (repo.changelog.rev(n),
284 hgnode.short(n)))
284 hgnode.short(n)))
285 # build data
285 # build data
286 data = node2txt(repo, n, sigver)
286 data = node2txt(repo, n, sigver)
287 sig = mygpg.sign(data)
287 sig = mygpg.sign(data)
288 if not sig:
288 if not sig:
289 raise error.Abort(_("error while signing"))
289 raise error.Abort(_("error while signing"))
290 sig = binascii.b2a_base64(sig)
290 sig = binascii.b2a_base64(sig)
291 sig = sig.replace("\n", "")
291 sig = sig.replace("\n", "")
292 sigmessage += "%s %s %s\n" % (hexnode, sigver, sig)
292 sigmessage += "%s %s %s\n" % (hexnode, sigver, sig)
293
293
294 # write it
294 # write it
295 if opts['local']:
295 if opts['local']:
296 repo.vfs.append("localsigs", sigmessage)
296 repo.vfs.append("localsigs", sigmessage)
297 return
297 return
298
298
299 if not opts["force"]:
299 if not opts["force"]:
300 msigs = match.exact(repo.root, '', ['.hgsigs'])
300 msigs = match.exact(['.hgsigs'])
301 if any(repo.status(match=msigs, unknown=True, ignored=True)):
301 if any(repo.status(match=msigs, unknown=True, ignored=True)):
302 raise error.Abort(_("working copy of .hgsigs is changed "),
302 raise error.Abort(_("working copy of .hgsigs is changed "),
303 hint=_("please commit .hgsigs manually"))
303 hint=_("please commit .hgsigs manually"))
304
304
305 sigsfile = repo.wvfs(".hgsigs", "ab")
305 sigsfile = repo.wvfs(".hgsigs", "ab")
306 sigsfile.write(sigmessage)
306 sigsfile.write(sigmessage)
307 sigsfile.close()
307 sigsfile.close()
308
308
309 if '.hgsigs' not in repo.dirstate:
309 if '.hgsigs' not in repo.dirstate:
310 repo[None].add([".hgsigs"])
310 repo[None].add([".hgsigs"])
311
311
312 if opts["no_commit"]:
312 if opts["no_commit"]:
313 return
313 return
314
314
315 message = opts['message']
315 message = opts['message']
316 if not message:
316 if not message:
317 # we don't translate commit messages
317 # we don't translate commit messages
318 message = "\n".join(["Added signature for changeset %s"
318 message = "\n".join(["Added signature for changeset %s"
319 % hgnode.short(n)
319 % hgnode.short(n)
320 for n in nodes])
320 for n in nodes])
321 try:
321 try:
322 editor = cmdutil.getcommiteditor(editform='gpg.sign',
322 editor = cmdutil.getcommiteditor(editform='gpg.sign',
323 **pycompat.strkwargs(opts))
323 **pycompat.strkwargs(opts))
324 repo.commit(message, opts['user'], opts['date'], match=msigs,
324 repo.commit(message, opts['user'], opts['date'], match=msigs,
325 editor=editor)
325 editor=editor)
326 except ValueError as inst:
326 except ValueError as inst:
327 raise error.Abort(pycompat.bytestr(inst))
327 raise error.Abort(pycompat.bytestr(inst))
328
328
329 def node2txt(repo, node, ver):
329 def node2txt(repo, node, ver):
330 """map a manifest into some text"""
330 """map a manifest into some text"""
331 if ver == "0":
331 if ver == "0":
332 return "%s\n" % hgnode.hex(node)
332 return "%s\n" % hgnode.hex(node)
333 else:
333 else:
334 raise error.Abort(_("unknown signature version"))
334 raise error.Abort(_("unknown signature version"))
335
335
336 def extsetup(ui):
336 def extsetup(ui):
337 # Add our category before "Repository maintenance".
337 # Add our category before "Repository maintenance".
338 help.CATEGORY_ORDER.insert(
338 help.CATEGORY_ORDER.insert(
339 help.CATEGORY_ORDER.index(command.CATEGORY_MAINTENANCE),
339 help.CATEGORY_ORDER.index(command.CATEGORY_MAINTENANCE),
340 _HELP_CATEGORY)
340 _HELP_CATEGORY)
341 help.CATEGORY_NAMES[_HELP_CATEGORY] = 'GPG signing'
341 help.CATEGORY_NAMES[_HELP_CATEGORY] = 'GPG signing'
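The only functional change to this file is the msigs = match.exact(...) call shown above: with this changeset, the always()/never()/exact() matcher constructors no longer accept the unused root and cwd arguments. A hedged sketch of the call-site migration:

    # before (root and cwd were accepted but ignored):
    #   msigs = match.exact(repo.root, '', ['.hgsigs'])
    # after:
    msigs = match.exact(['.hgsigs'])
    m = match.always()    # likewise, no arguments any more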
@@ -1,675 +1,675 @@
1 # Copyright 2009-2010 Gregory P. Ward
1 # Copyright 2009-2010 Gregory P. Ward
2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
3 # Copyright 2010-2011 Fog Creek Software
3 # Copyright 2010-2011 Fog Creek Software
4 # Copyright 2010-2011 Unity Technologies
4 # Copyright 2010-2011 Unity Technologies
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8
8
9 '''largefiles utility code: must not import other modules in this package.'''
9 '''largefiles utility code: must not import other modules in this package.'''
10 from __future__ import absolute_import
10 from __future__ import absolute_import
11
11
12 import copy
12 import copy
13 import hashlib
13 import hashlib
14 import os
14 import os
15 import stat
15 import stat
16
16
17 from mercurial.i18n import _
17 from mercurial.i18n import _
18 from mercurial.node import hex
18 from mercurial.node import hex
19
19
20 from mercurial import (
20 from mercurial import (
21 dirstate,
21 dirstate,
22 encoding,
22 encoding,
23 error,
23 error,
24 httpconnection,
24 httpconnection,
25 match as matchmod,
25 match as matchmod,
26 node,
26 node,
27 pycompat,
27 pycompat,
28 scmutil,
28 scmutil,
29 sparse,
29 sparse,
30 util,
30 util,
31 vfs as vfsmod,
31 vfs as vfsmod,
32 )
32 )
33
33
34 shortname = '.hglf'
34 shortname = '.hglf'
35 shortnameslash = shortname + '/'
35 shortnameslash = shortname + '/'
36 longname = 'largefiles'
36 longname = 'largefiles'
37
37
38 # -- Private worker functions ------------------------------------------
38 # -- Private worker functions ------------------------------------------
39
39
40 def getminsize(ui, assumelfiles, opt, default=10):
40 def getminsize(ui, assumelfiles, opt, default=10):
41 lfsize = opt
41 lfsize = opt
42 if not lfsize and assumelfiles:
42 if not lfsize and assumelfiles:
43 lfsize = ui.config(longname, 'minsize', default=default)
43 lfsize = ui.config(longname, 'minsize', default=default)
44 if lfsize:
44 if lfsize:
45 try:
45 try:
46 lfsize = float(lfsize)
46 lfsize = float(lfsize)
47 except ValueError:
47 except ValueError:
48 raise error.Abort(_('largefiles: size must be number (not %s)\n')
48 raise error.Abort(_('largefiles: size must be number (not %s)\n')
49 % lfsize)
49 % lfsize)
50 if lfsize is None:
50 if lfsize is None:
51 raise error.Abort(_('minimum size for largefiles must be specified'))
51 raise error.Abort(_('minimum size for largefiles must be specified'))
52 return lfsize
52 return lfsize
53
53
54 def link(src, dest):
54 def link(src, dest):
55 """Try to create hardlink - if that fails, efficiently make a copy."""
55 """Try to create hardlink - if that fails, efficiently make a copy."""
56 util.makedirs(os.path.dirname(dest))
56 util.makedirs(os.path.dirname(dest))
57 try:
57 try:
58 util.oslink(src, dest)
58 util.oslink(src, dest)
59 except OSError:
59 except OSError:
60 # if hardlinks fail, fall back to an atomic copy
60 # if hardlinks fail, fall back to an atomic copy
61 with open(src, 'rb') as srcf, util.atomictempfile(dest) as dstf:
61 with open(src, 'rb') as srcf, util.atomictempfile(dest) as dstf:
62 for chunk in util.filechunkiter(srcf):
62 for chunk in util.filechunkiter(srcf):
63 dstf.write(chunk)
63 dstf.write(chunk)
64 os.chmod(dest, os.stat(src).st_mode)
64 os.chmod(dest, os.stat(src).st_mode)
65
65
66 def usercachepath(ui, hash):
66 def usercachepath(ui, hash):
67 '''Return the correct location in the "global" largefiles cache for a file
67 '''Return the correct location in the "global" largefiles cache for a file
68 with the given hash.
68 with the given hash.
69 This cache is used for sharing of largefiles across repositories - both
69 This cache is used for sharing of largefiles across repositories - both
70 to preserve download bandwidth and storage space.'''
70 to preserve download bandwidth and storage space.'''
71 return os.path.join(_usercachedir(ui), hash)
71 return os.path.join(_usercachedir(ui), hash)
72
72
73 def _usercachedir(ui, name=longname):
73 def _usercachedir(ui, name=longname):
74 '''Return the location of the "global" largefiles cache.'''
74 '''Return the location of the "global" largefiles cache.'''
75 path = ui.configpath(name, 'usercache')
75 path = ui.configpath(name, 'usercache')
76 if path:
76 if path:
77 return path
77 return path
78 if pycompat.iswindows:
78 if pycompat.iswindows:
79 appdata = encoding.environ.get('LOCALAPPDATA',\
79 appdata = encoding.environ.get('LOCALAPPDATA',\
80 encoding.environ.get('APPDATA'))
80 encoding.environ.get('APPDATA'))
81 if appdata:
81 if appdata:
82 return os.path.join(appdata, name)
82 return os.path.join(appdata, name)
83 elif pycompat.isdarwin:
83 elif pycompat.isdarwin:
84 home = encoding.environ.get('HOME')
84 home = encoding.environ.get('HOME')
85 if home:
85 if home:
86 return os.path.join(home, 'Library', 'Caches', name)
86 return os.path.join(home, 'Library', 'Caches', name)
87 elif pycompat.isposix:
87 elif pycompat.isposix:
88 path = encoding.environ.get('XDG_CACHE_HOME')
88 path = encoding.environ.get('XDG_CACHE_HOME')
89 if path:
89 if path:
90 return os.path.join(path, name)
90 return os.path.join(path, name)
91 home = encoding.environ.get('HOME')
91 home = encoding.environ.get('HOME')
92 if home:
92 if home:
93 return os.path.join(home, '.cache', name)
93 return os.path.join(home, '.cache', name)
94 else:
94 else:
95 raise error.Abort(_('unknown operating system: %s\n')
95 raise error.Abort(_('unknown operating system: %s\n')
96 % pycompat.osname)
96 % pycompat.osname)
97 raise error.Abort(_('unknown %s usercache location') % name)
97 raise error.Abort(_('unknown %s usercache location') % name)
98
98
99 def inusercache(ui, hash):
99 def inusercache(ui, hash):
100 path = usercachepath(ui, hash)
100 path = usercachepath(ui, hash)
101 return os.path.exists(path)
101 return os.path.exists(path)
102
102
103 def findfile(repo, hash):
103 def findfile(repo, hash):
104 '''Return store path of the largefile with the specified hash.
104 '''Return store path of the largefile with the specified hash.
105 As a side effect, the file might be linked from user cache.
105 As a side effect, the file might be linked from user cache.
106 Return None if the file can't be found locally.'''
106 Return None if the file can't be found locally.'''
107 path, exists = findstorepath(repo, hash)
107 path, exists = findstorepath(repo, hash)
108 if exists:
108 if exists:
109 repo.ui.note(_('found %s in store\n') % hash)
109 repo.ui.note(_('found %s in store\n') % hash)
110 return path
110 return path
111 elif inusercache(repo.ui, hash):
111 elif inusercache(repo.ui, hash):
112 repo.ui.note(_('found %s in system cache\n') % hash)
112 repo.ui.note(_('found %s in system cache\n') % hash)
113 path = storepath(repo, hash)
113 path = storepath(repo, hash)
114 link(usercachepath(repo.ui, hash), path)
114 link(usercachepath(repo.ui, hash), path)
115 return path
115 return path
116 return None
116 return None
117
117
118 class largefilesdirstate(dirstate.dirstate):
118 class largefilesdirstate(dirstate.dirstate):
119 def __getitem__(self, key):
119 def __getitem__(self, key):
120 return super(largefilesdirstate, self).__getitem__(unixpath(key))
120 return super(largefilesdirstate, self).__getitem__(unixpath(key))
121 def normal(self, f):
121 def normal(self, f):
122 return super(largefilesdirstate, self).normal(unixpath(f))
122 return super(largefilesdirstate, self).normal(unixpath(f))
123 def remove(self, f):
123 def remove(self, f):
124 return super(largefilesdirstate, self).remove(unixpath(f))
124 return super(largefilesdirstate, self).remove(unixpath(f))
125 def add(self, f):
125 def add(self, f):
126 return super(largefilesdirstate, self).add(unixpath(f))
126 return super(largefilesdirstate, self).add(unixpath(f))
127 def drop(self, f):
127 def drop(self, f):
128 return super(largefilesdirstate, self).drop(unixpath(f))
128 return super(largefilesdirstate, self).drop(unixpath(f))
129 def forget(self, f):
129 def forget(self, f):
130 return super(largefilesdirstate, self).forget(unixpath(f))
130 return super(largefilesdirstate, self).forget(unixpath(f))
131 def normallookup(self, f):
131 def normallookup(self, f):
132 return super(largefilesdirstate, self).normallookup(unixpath(f))
132 return super(largefilesdirstate, self).normallookup(unixpath(f))
133 def _ignore(self, f):
133 def _ignore(self, f):
134 return False
134 return False
135 def write(self, tr=False):
135 def write(self, tr=False):
136 # (1) disable PENDING mode always
136 # (1) disable PENDING mode always
137 # (lfdirstate isn't yet managed as a part of the transaction)
137 # (lfdirstate isn't yet managed as a part of the transaction)
138 # (2) avoid develwarn 'use dirstate.write with ....'
138 # (2) avoid develwarn 'use dirstate.write with ....'
139 super(largefilesdirstate, self).write(None)
139 super(largefilesdirstate, self).write(None)
140
140
141 def openlfdirstate(ui, repo, create=True):
141 def openlfdirstate(ui, repo, create=True):
142 '''
142 '''
143 Return a dirstate object that tracks largefiles: i.e. its root is
143 Return a dirstate object that tracks largefiles: i.e. its root is
144 the repo root, but it is saved in .hg/largefiles/dirstate.
144 the repo root, but it is saved in .hg/largefiles/dirstate.
145 '''
145 '''
146 vfs = repo.vfs
146 vfs = repo.vfs
147 lfstoredir = longname
147 lfstoredir = longname
148 opener = vfsmod.vfs(vfs.join(lfstoredir))
148 opener = vfsmod.vfs(vfs.join(lfstoredir))
149 lfdirstate = largefilesdirstate(opener, ui, repo.root,
149 lfdirstate = largefilesdirstate(opener, ui, repo.root,
150 repo.dirstate._validate,
150 repo.dirstate._validate,
151 lambda: sparse.matcher(repo))
151 lambda: sparse.matcher(repo))
152
152
153 # If the largefiles dirstate does not exist, populate and create
153 # If the largefiles dirstate does not exist, populate and create
154 # it. This ensures that we create it on the first meaningful
154 # it. This ensures that we create it on the first meaningful
155 # largefiles operation in a new clone.
155 # largefiles operation in a new clone.
156 if create and not vfs.exists(vfs.join(lfstoredir, 'dirstate')):
156 if create and not vfs.exists(vfs.join(lfstoredir, 'dirstate')):
157 matcher = getstandinmatcher(repo)
157 matcher = getstandinmatcher(repo)
158 standins = repo.dirstate.walk(matcher, subrepos=[], unknown=False,
158 standins = repo.dirstate.walk(matcher, subrepos=[], unknown=False,
159 ignored=False)
159 ignored=False)
160
160
161 if len(standins) > 0:
161 if len(standins) > 0:
162 vfs.makedirs(lfstoredir)
162 vfs.makedirs(lfstoredir)
163
163
164 for standin in standins:
164 for standin in standins:
165 lfile = splitstandin(standin)
165 lfile = splitstandin(standin)
166 lfdirstate.normallookup(lfile)
166 lfdirstate.normallookup(lfile)
167 return lfdirstate
167 return lfdirstate
168
168
169 def lfdirstatestatus(lfdirstate, repo):
169 def lfdirstatestatus(lfdirstate, repo):
170 pctx = repo['.']
170 pctx = repo['.']
171 match = matchmod.always(repo.root, repo.getcwd())
171 match = matchmod.always()
172 unsure, s = lfdirstate.status(match, subrepos=[], ignored=False,
172 unsure, s = lfdirstate.status(match, subrepos=[], ignored=False,
173 clean=False, unknown=False)
173 clean=False, unknown=False)
174 modified, clean = s.modified, s.clean
174 modified, clean = s.modified, s.clean
175 for lfile in unsure:
175 for lfile in unsure:
176 try:
176 try:
177 fctx = pctx[standin(lfile)]
177 fctx = pctx[standin(lfile)]
178 except LookupError:
178 except LookupError:
179 fctx = None
179 fctx = None
180 if not fctx or readasstandin(fctx) != hashfile(repo.wjoin(lfile)):
180 if not fctx or readasstandin(fctx) != hashfile(repo.wjoin(lfile)):
181 modified.append(lfile)
181 modified.append(lfile)
182 else:
182 else:
183 clean.append(lfile)
183 clean.append(lfile)
184 lfdirstate.normal(lfile)
184 lfdirstate.normal(lfile)
185 return s
185 return s
186
186
187 def listlfiles(repo, rev=None, matcher=None):
187 def listlfiles(repo, rev=None, matcher=None):
188 '''return a list of largefiles in the working copy or the
188 '''return a list of largefiles in the working copy or the
189 specified changeset'''
189 specified changeset'''
190
190
191 if matcher is None:
191 if matcher is None:
192 matcher = getstandinmatcher(repo)
192 matcher = getstandinmatcher(repo)
193
193
194 # ignore unknown files in working directory
194 # ignore unknown files in working directory
195 return [splitstandin(f)
195 return [splitstandin(f)
196 for f in repo[rev].walk(matcher)
196 for f in repo[rev].walk(matcher)
197 if rev is not None or repo.dirstate[f] != '?']
197 if rev is not None or repo.dirstate[f] != '?']
198
198
199 def instore(repo, hash, forcelocal=False):
199 def instore(repo, hash, forcelocal=False):
200 '''Return true if a largefile with the given hash exists in the store'''
200 '''Return true if a largefile with the given hash exists in the store'''
201 return os.path.exists(storepath(repo, hash, forcelocal))
201 return os.path.exists(storepath(repo, hash, forcelocal))
202
202
203 def storepath(repo, hash, forcelocal=False):
203 def storepath(repo, hash, forcelocal=False):
204 '''Return the correct location in the repository largefiles store for a
204 '''Return the correct location in the repository largefiles store for a
205 file with the given hash.'''
205 file with the given hash.'''
206 if not forcelocal and repo.shared():
206 if not forcelocal and repo.shared():
207 return repo.vfs.reljoin(repo.sharedpath, longname, hash)
207 return repo.vfs.reljoin(repo.sharedpath, longname, hash)
208 return repo.vfs.join(longname, hash)
208 return repo.vfs.join(longname, hash)
209
209
210 def findstorepath(repo, hash):
210 def findstorepath(repo, hash):
211 '''Search through the local store path(s) to find the file for the given
211 '''Search through the local store path(s) to find the file for the given
212 hash. If the file is not found, its path in the primary store is returned.
212 hash. If the file is not found, its path in the primary store is returned.
213 The return value is a tuple of (path, exists(path)).
213 The return value is a tuple of (path, exists(path)).
214 '''
214 '''
215 # For shared repos, the primary store is in the share source. But for
215 # For shared repos, the primary store is in the share source. But for
216 # backward compatibility, force a lookup in the local store if it wasn't
216 # backward compatibility, force a lookup in the local store if it wasn't
217 # found in the share source.
217 # found in the share source.
218 path = storepath(repo, hash, False)
218 path = storepath(repo, hash, False)
219
219
220 if instore(repo, hash):
220 if instore(repo, hash):
221 return (path, True)
221 return (path, True)
222 elif repo.shared() and instore(repo, hash, True):
222 elif repo.shared() and instore(repo, hash, True):
223 return storepath(repo, hash, True), True
223 return storepath(repo, hash, True), True
224
224
225 return (path, False)
225 return (path, False)
226
226
227 def copyfromcache(repo, hash, filename):
227 def copyfromcache(repo, hash, filename):
228 '''Copy the specified largefile from the repo or system cache to
228 '''Copy the specified largefile from the repo or system cache to
229 filename in the repository. Return true on success or false if the
229 filename in the repository. Return true on success or false if the
230 file was not found in either cache (which should not happen:
230 file was not found in either cache (which should not happen:
231 this is meant to be called only after ensuring that the needed
231 this is meant to be called only after ensuring that the needed
232 largefile exists in the cache).'''
232 largefile exists in the cache).'''
233 wvfs = repo.wvfs
233 wvfs = repo.wvfs
234 path = findfile(repo, hash)
234 path = findfile(repo, hash)
235 if path is None:
235 if path is None:
236 return False
236 return False
237 wvfs.makedirs(wvfs.dirname(wvfs.join(filename)))
237 wvfs.makedirs(wvfs.dirname(wvfs.join(filename)))
238 # The write may fail before the file is fully written, but we
238 # The write may fail before the file is fully written, but we
239 # don't use atomic writes in the working copy.
239 # don't use atomic writes in the working copy.
240 with open(path, 'rb') as srcfd, wvfs(filename, 'wb') as destfd:
240 with open(path, 'rb') as srcfd, wvfs(filename, 'wb') as destfd:
241 gothash = copyandhash(
241 gothash = copyandhash(
242 util.filechunkiter(srcfd), destfd)
242 util.filechunkiter(srcfd), destfd)
243 if gothash != hash:
243 if gothash != hash:
244 repo.ui.warn(_('%s: data corruption in %s with hash %s\n')
244 repo.ui.warn(_('%s: data corruption in %s with hash %s\n')
245 % (filename, path, gothash))
245 % (filename, path, gothash))
246 wvfs.unlink(filename)
246 wvfs.unlink(filename)
247 return False
247 return False
248 return True
248 return True
249
249
250 def copytostore(repo, ctx, file, fstandin):
250 def copytostore(repo, ctx, file, fstandin):
251 wvfs = repo.wvfs
251 wvfs = repo.wvfs
252 hash = readasstandin(ctx[fstandin])
252 hash = readasstandin(ctx[fstandin])
253 if instore(repo, hash):
253 if instore(repo, hash):
254 return
254 return
255 if wvfs.exists(file):
255 if wvfs.exists(file):
256 copytostoreabsolute(repo, wvfs.join(file), hash)
256 copytostoreabsolute(repo, wvfs.join(file), hash)
257 else:
257 else:
258 repo.ui.warn(_("%s: largefile %s not available from local store\n") %
258 repo.ui.warn(_("%s: largefile %s not available from local store\n") %
259 (file, hash))
259 (file, hash))
260
260
261 def copyalltostore(repo, node):
261 def copyalltostore(repo, node):
262 '''Copy all largefiles in a given revision to the store'''
262 '''Copy all largefiles in a given revision to the store'''
263
263
264 ctx = repo[node]
264 ctx = repo[node]
265 for filename in ctx.files():
265 for filename in ctx.files():
266 realfile = splitstandin(filename)
266 realfile = splitstandin(filename)
267 if realfile is not None and filename in ctx.manifest():
267 if realfile is not None and filename in ctx.manifest():
268 copytostore(repo, ctx, realfile, filename)
268 copytostore(repo, ctx, realfile, filename)
269
269
270 def copytostoreabsolute(repo, file, hash):
270 def copytostoreabsolute(repo, file, hash):
271 if inusercache(repo.ui, hash):
271 if inusercache(repo.ui, hash):
272 link(usercachepath(repo.ui, hash), storepath(repo, hash))
272 link(usercachepath(repo.ui, hash), storepath(repo, hash))
273 else:
273 else:
274 util.makedirs(os.path.dirname(storepath(repo, hash)))
274 util.makedirs(os.path.dirname(storepath(repo, hash)))
275 with open(file, 'rb') as srcf:
275 with open(file, 'rb') as srcf:
276 with util.atomictempfile(storepath(repo, hash),
276 with util.atomictempfile(storepath(repo, hash),
277 createmode=repo.store.createmode) as dstf:
277 createmode=repo.store.createmode) as dstf:
278 for chunk in util.filechunkiter(srcf):
278 for chunk in util.filechunkiter(srcf):
279 dstf.write(chunk)
279 dstf.write(chunk)
280 linktousercache(repo, hash)
280 linktousercache(repo, hash)
281
281
282 def linktousercache(repo, hash):
282 def linktousercache(repo, hash):
283 '''Link / copy the largefile with the specified hash from the store
283 '''Link / copy the largefile with the specified hash from the store
284 to the cache.'''
284 to the cache.'''
285 path = usercachepath(repo.ui, hash)
285 path = usercachepath(repo.ui, hash)
286 link(storepath(repo, hash), path)
286 link(storepath(repo, hash), path)
287
287
288 def getstandinmatcher(repo, rmatcher=None):
288 def getstandinmatcher(repo, rmatcher=None):
289 '''Return a match object that applies rmatcher to the standin directory'''
289 '''Return a match object that applies rmatcher to the standin directory'''
290 wvfs = repo.wvfs
290 wvfs = repo.wvfs
291 standindir = shortname
291 standindir = shortname
292
292
293 # no warnings about missing files or directories
293 # no warnings about missing files or directories
294 badfn = lambda f, msg: None
294 badfn = lambda f, msg: None
295
295
296 if rmatcher and not rmatcher.always():
296 if rmatcher and not rmatcher.always():
297 pats = [wvfs.join(standindir, pat) for pat in rmatcher.files()]
297 pats = [wvfs.join(standindir, pat) for pat in rmatcher.files()]
298 if not pats:
298 if not pats:
299 pats = [wvfs.join(standindir)]
299 pats = [wvfs.join(standindir)]
300 match = scmutil.match(repo[None], pats, badfn=badfn)
300 match = scmutil.match(repo[None], pats, badfn=badfn)
301 else:
301 else:
302 # no patterns: relative to repo root
302 # no patterns: relative to repo root
303 match = scmutil.match(repo[None], [wvfs.join(standindir)], badfn=badfn)
303 match = scmutil.match(repo[None], [wvfs.join(standindir)], badfn=badfn)
304 return match
304 return match
305
305
306 def composestandinmatcher(repo, rmatcher):
306 def composestandinmatcher(repo, rmatcher):
307 '''Return a matcher that accepts standins corresponding to the
307 '''Return a matcher that accepts standins corresponding to the
308 files accepted by rmatcher. Pass the list of files in the matcher
308 files accepted by rmatcher. Pass the list of files in the matcher
309 as the paths specified by the user.'''
309 as the paths specified by the user.'''
310 smatcher = getstandinmatcher(repo, rmatcher)
310 smatcher = getstandinmatcher(repo, rmatcher)
311 isstandin = smatcher.matchfn
311 isstandin = smatcher.matchfn
312 def composedmatchfn(f):
312 def composedmatchfn(f):
313 return isstandin(f) and rmatcher.matchfn(splitstandin(f))
313 return isstandin(f) and rmatcher.matchfn(splitstandin(f))
314 smatcher.matchfn = composedmatchfn
314 smatcher.matchfn = composedmatchfn
315
315
316 return smatcher
316 return smatcher
317
317
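composestandinmatcher() accepts a path only when it is a standin and its largefile counterpart satisfies the caller's matcher. A hedged, simplified illustration of composedmatchfn using plain lambdas instead of real matcher objects:

    >>> rmatch = lambda f: f.endswith('.bin')                 # pretend user matcher
    >>> isstandin = lambda f: f.startswith('.hglf/')
    >>> composed = lambda f: isstandin(f) and rmatch(f[len('.hglf/'):])
    >>> composed('.hglf/sub/big.bin')
    True
    >>> composed('sub/big.bin')
    False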
318 def standin(filename):
318 def standin(filename):
319 '''Return the repo-relative path to the standin for the specified big
319 '''Return the repo-relative path to the standin for the specified big
320 file.'''
320 file.'''
321 # Notes:
321 # Notes:
322 # 1) Some callers want an absolute path, but for instance addlargefiles
322 # 1) Some callers want an absolute path, but for instance addlargefiles
323 # needs it repo-relative so it can be passed to repo[None].add(). So
323 # needs it repo-relative so it can be passed to repo[None].add(). So
324 # leave it up to the caller to use repo.wjoin() to get an absolute path.
324 # leave it up to the caller to use repo.wjoin() to get an absolute path.
325 # 2) Join with '/' because that's what dirstate always uses, even on
325 # 2) Join with '/' because that's what dirstate always uses, even on
326 # Windows. Change existing separator to '/' first in case we are
326 # Windows. Change existing separator to '/' first in case we are
327 # passed filenames from an external source (like the command line).
327 # passed filenames from an external source (like the command line).
328 return shortnameslash + util.pconvert(filename)
328 return shortnameslash + util.pconvert(filename)
329
329
330 def isstandin(filename):
330 def isstandin(filename):
331 '''Return true if filename is a big file standin. filename must be
331 '''Return true if filename is a big file standin. filename must be
332 in Mercurial's internal form (slash-separated).'''
332 in Mercurial's internal form (slash-separated).'''
333 return filename.startswith(shortnameslash)
333 return filename.startswith(shortnameslash)
334
334
335 def splitstandin(filename):
335 def splitstandin(filename):
336 # Split on / because that's what dirstate always uses, even on Windows.
336 # Split on / because that's what dirstate always uses, even on Windows.
337 # Change local separator to / first just in case we are passed filenames
337 # Change local separator to / first just in case we are passed filenames
338 # from an external source (like the command line).
338 # from an external source (like the command line).
339 bits = util.pconvert(filename).split('/', 1)
339 bits = util.pconvert(filename).split('/', 1)
340 if len(bits) == 2 and bits[0] == shortname:
340 if len(bits) == 2 and bits[0] == shortname:
341 return bits[1]
341 return bits[1]
342 else:
342 else:
343 return None
343 return None
344
344
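standin() and splitstandin() are inverse path mappings between a largefile and its standin under .hglf/, always using '/' separators. A doctest-style illustration, assuming the definitions above:

    >>> standin('sub/big.bin')
    '.hglf/sub/big.bin'
    >>> splitstandin('.hglf/sub/big.bin')
    'sub/big.bin'
    >>> splitstandin('sub/big.bin') is None   # not a standin path
    True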
345 def updatestandin(repo, lfile, standin):
345 def updatestandin(repo, lfile, standin):
346 """Re-calculate hash value of lfile and write it into standin
346 """Re-calculate hash value of lfile and write it into standin
347
347
348 This assumes that "lfutil.standin(lfile) == standin", for efficiency.
348 This assumes that "lfutil.standin(lfile) == standin", for efficiency.
349 """
349 """
350 file = repo.wjoin(lfile)
350 file = repo.wjoin(lfile)
351 if repo.wvfs.exists(lfile):
351 if repo.wvfs.exists(lfile):
352 hash = hashfile(file)
352 hash = hashfile(file)
353 executable = getexecutable(file)
353 executable = getexecutable(file)
354 writestandin(repo, standin, hash, executable)
354 writestandin(repo, standin, hash, executable)
355 else:
355 else:
356 raise error.Abort(_('%s: file not found!') % lfile)
356 raise error.Abort(_('%s: file not found!') % lfile)
357
357
358 def readasstandin(fctx):
358 def readasstandin(fctx):
359 '''read hex hash from given filectx of standin file
359 '''read hex hash from given filectx of standin file
360
360
361 This encapsulates how "standin" data is stored into storage layer.'''
361 This encapsulates how "standin" data is stored into storage layer.'''
362 return fctx.data().strip()
362 return fctx.data().strip()
363
363
364 def writestandin(repo, standin, hash, executable):
364 def writestandin(repo, standin, hash, executable):
365 '''write hash to <repo.root>/<standin>'''
365 '''write hash to <repo.root>/<standin>'''
366 repo.wwrite(standin, hash + '\n', executable and 'x' or '')
366 repo.wwrite(standin, hash + '\n', executable and 'x' or '')
367
367
368 def copyandhash(instream, outfile):
368 def copyandhash(instream, outfile):
369 '''Read bytes from instream (iterable) and write them to outfile,
369 '''Read bytes from instream (iterable) and write them to outfile,
370 computing the SHA-1 hash of the data along the way. Return the hash.'''
370 computing the SHA-1 hash of the data along the way. Return the hash.'''
371 hasher = hashlib.sha1('')
371 hasher = hashlib.sha1('')
372 for data in instream:
372 for data in instream:
373 hasher.update(data)
373 hasher.update(data)
374 outfile.write(data)
374 outfile.write(data)
375 return hex(hasher.digest())
375 return hex(hasher.digest())
376
376
377 def hashfile(file):
377 def hashfile(file):
378 if not os.path.exists(file):
378 if not os.path.exists(file):
379 return ''
379 return ''
380 with open(file, 'rb') as fd:
380 with open(file, 'rb') as fd:
381 return hexsha1(fd)
381 return hexsha1(fd)
382
382
383 def getexecutable(filename):
383 def getexecutable(filename):
384 mode = os.stat(filename).st_mode
384 mode = os.stat(filename).st_mode
385 return ((mode & stat.S_IXUSR) and
385 return ((mode & stat.S_IXUSR) and
386 (mode & stat.S_IXGRP) and
386 (mode & stat.S_IXGRP) and
387 (mode & stat.S_IXOTH))
387 (mode & stat.S_IXOTH))
388
388
389 def urljoin(first, second, *arg):
389 def urljoin(first, second, *arg):
390 def join(left, right):
390 def join(left, right):
391 if not left.endswith('/'):
391 if not left.endswith('/'):
392 left += '/'
392 left += '/'
393 if right.startswith('/'):
393 if right.startswith('/'):
394 right = right[1:]
394 right = right[1:]
395 return left + right
395 return left + right
396
396
397 url = join(first, second)
397 url = join(first, second)
398 for a in arg:
398 for a in arg:
399 url = join(url, a)
399 url = join(url, a)
400 return url
400 return url
401
401
402 def hexsha1(fileobj):
402 def hexsha1(fileobj):
403 """hexsha1 returns the hex-encoded sha1 sum of the data in the file-like
403 """hexsha1 returns the hex-encoded sha1 sum of the data in the file-like
404 object"""
404 object"""
405 h = hashlib.sha1()
405 h = hashlib.sha1()
406 for chunk in util.filechunkiter(fileobj):
406 for chunk in util.filechunkiter(fileobj):
407 h.update(chunk)
407 h.update(chunk)
408 return hex(h.digest())
408 return hex(h.digest())
409
409
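hashfile()/hexsha1() produce the 40-character hex SHA-1 that names a largefile in the store and is recorded in its standin. A small standard-library check of the same computation, with illustrative data only:

    import hashlib
    data = b'hello largefiles\n'                 # stands in for the file contents
    print(hashlib.sha1(data).hexdigest())        # same digest hexsha1() would compute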
410 def httpsendfile(ui, filename):
410 def httpsendfile(ui, filename):
411 return httpconnection.httpsendfile(ui, filename, 'rb')
411 return httpconnection.httpsendfile(ui, filename, 'rb')
412
412
413 def unixpath(path):
413 def unixpath(path):
414 '''Return a version of path normalized for use with the lfdirstate.'''
414 '''Return a version of path normalized for use with the lfdirstate.'''
415 return util.pconvert(os.path.normpath(path))
415 return util.pconvert(os.path.normpath(path))
416
416
417 def islfilesrepo(repo):
417 def islfilesrepo(repo):
418 '''Return true if the repo is a largefile repo.'''
418 '''Return true if the repo is a largefile repo.'''
419 if ('largefiles' in repo.requirements and
419 if ('largefiles' in repo.requirements and
420 any(shortnameslash in f[0] for f in repo.store.datafiles())):
420 any(shortnameslash in f[0] for f in repo.store.datafiles())):
421 return True
421 return True
422
422
423 return any(openlfdirstate(repo.ui, repo, False))
423 return any(openlfdirstate(repo.ui, repo, False))
424
424
425 class storeprotonotcapable(Exception):
425 class storeprotonotcapable(Exception):
426 def __init__(self, storetypes):
426 def __init__(self, storetypes):
427 self.storetypes = storetypes
427 self.storetypes = storetypes
428
428
429 def getstandinsstate(repo):
429 def getstandinsstate(repo):
430 standins = []
430 standins = []
431 matcher = getstandinmatcher(repo)
431 matcher = getstandinmatcher(repo)
432 wctx = repo[None]
432 wctx = repo[None]
433 for standin in repo.dirstate.walk(matcher, subrepos=[], unknown=False,
433 for standin in repo.dirstate.walk(matcher, subrepos=[], unknown=False,
434 ignored=False):
434 ignored=False):
435 lfile = splitstandin(standin)
435 lfile = splitstandin(standin)
436 try:
436 try:
437 hash = readasstandin(wctx[standin])
437 hash = readasstandin(wctx[standin])
438 except IOError:
438 except IOError:
439 hash = None
439 hash = None
440 standins.append((lfile, hash))
440 standins.append((lfile, hash))
441 return standins
441 return standins
442
442
443 def synclfdirstate(repo, lfdirstate, lfile, normallookup):
443 def synclfdirstate(repo, lfdirstate, lfile, normallookup):
444 lfstandin = standin(lfile)
444 lfstandin = standin(lfile)
445 if lfstandin in repo.dirstate:
445 if lfstandin in repo.dirstate:
446 stat = repo.dirstate._map[lfstandin]
446 stat = repo.dirstate._map[lfstandin]
447 state, mtime = stat[0], stat[3]
447 state, mtime = stat[0], stat[3]
448 else:
448 else:
449 state, mtime = '?', -1
449 state, mtime = '?', -1
450 if state == 'n':
450 if state == 'n':
451 if (normallookup or mtime < 0 or
451 if (normallookup or mtime < 0 or
452 not repo.wvfs.exists(lfile)):
452 not repo.wvfs.exists(lfile)):
453 # state 'n' doesn't ensure 'clean' in this case
453 # state 'n' doesn't ensure 'clean' in this case
454 lfdirstate.normallookup(lfile)
454 lfdirstate.normallookup(lfile)
455 else:
455 else:
456 lfdirstate.normal(lfile)
456 lfdirstate.normal(lfile)
457 elif state == 'm':
457 elif state == 'm':
458 lfdirstate.normallookup(lfile)
458 lfdirstate.normallookup(lfile)
459 elif state == 'r':
459 elif state == 'r':
460 lfdirstate.remove(lfile)
460 lfdirstate.remove(lfile)
461 elif state == 'a':
461 elif state == 'a':
462 lfdirstate.add(lfile)
462 lfdirstate.add(lfile)
463 elif state == '?':
463 elif state == '?':
464 lfdirstate.drop(lfile)
464 lfdirstate.drop(lfile)
465
465
466 def markcommitted(orig, ctx, node):
466 def markcommitted(orig, ctx, node):
467 repo = ctx.repo()
467 repo = ctx.repo()
468
468
469 orig(node)
469 orig(node)
470
470
471 # ATTENTION: "ctx.files()" may differ from "repo[node].files()"
471 # ATTENTION: "ctx.files()" may differ from "repo[node].files()"
472 # because files coming from the 2nd parent are omitted in the latter.
472 # because files coming from the 2nd parent are omitted in the latter.
473 #
473 #
474 # The former should be used to get targets of "synclfdirstate",
474 # The former should be used to get targets of "synclfdirstate",
475 # because such files:
475 # because such files:
476 # - are marked as "a" by "patch.patch()" (e.g. via transplant), and
476 # - are marked as "a" by "patch.patch()" (e.g. via transplant), and
477 # - have to be marked as "n" after commit, but
477 # - have to be marked as "n" after commit, but
478 # - aren't listed in "repo[node].files()"
478 # - aren't listed in "repo[node].files()"
479
479
480 lfdirstate = openlfdirstate(repo.ui, repo)
480 lfdirstate = openlfdirstate(repo.ui, repo)
481 for f in ctx.files():
481 for f in ctx.files():
482 lfile = splitstandin(f)
482 lfile = splitstandin(f)
483 if lfile is not None:
483 if lfile is not None:
484 synclfdirstate(repo, lfdirstate, lfile, False)
484 synclfdirstate(repo, lfdirstate, lfile, False)
485 lfdirstate.write()
485 lfdirstate.write()
486
486
487 # As part of committing, copy all of the largefiles into the cache.
487 # As part of committing, copy all of the largefiles into the cache.
488 #
488 #
489 # Using "node" instead of "ctx" implies additional "repo[node]"
489 # Using "node" instead of "ctx" implies additional "repo[node]"
490 # lookup during copyalltostore(), but can omit redundant check for
490 # lookup during copyalltostore(), but can omit redundant check for
491 # files coming from the 2nd parent, which should exist in store
491 # files coming from the 2nd parent, which should exist in store
492 # at merging.
492 # at merging.
493 copyalltostore(repo, node)
493 copyalltostore(repo, node)
494
494
495 def getlfilestoupdate(oldstandins, newstandins):
495 def getlfilestoupdate(oldstandins, newstandins):
496 changedstandins = set(oldstandins).symmetric_difference(set(newstandins))
496 changedstandins = set(oldstandins).symmetric_difference(set(newstandins))
497 filelist = []
497 filelist = []
498 for f in changedstandins:
498 for f in changedstandins:
499 if f[0] not in filelist:
499 if f[0] not in filelist:
500 filelist.append(f[0])
500 filelist.append(f[0])
501 return filelist
501 return filelist
502
502
503 def getlfilestoupload(repo, missing, addfunc):
503 def getlfilestoupload(repo, missing, addfunc):
504 makeprogress = repo.ui.makeprogress
504 makeprogress = repo.ui.makeprogress
505 with makeprogress(_('finding outgoing largefiles'),
505 with makeprogress(_('finding outgoing largefiles'),
506 unit=_('revisions'), total=len(missing)) as progress:
506 unit=_('revisions'), total=len(missing)) as progress:
507 for i, n in enumerate(missing):
507 for i, n in enumerate(missing):
508 progress.update(i)
508 progress.update(i)
509 parents = [p for p in repo[n].parents() if p != node.nullid]
509 parents = [p for p in repo[n].parents() if p != node.nullid]
510
510
511 oldlfstatus = repo.lfstatus
511 oldlfstatus = repo.lfstatus
512 repo.lfstatus = False
512 repo.lfstatus = False
513 try:
513 try:
514 ctx = repo[n]
514 ctx = repo[n]
515 finally:
515 finally:
516 repo.lfstatus = oldlfstatus
516 repo.lfstatus = oldlfstatus
517
517
518 files = set(ctx.files())
518 files = set(ctx.files())
519 if len(parents) == 2:
519 if len(parents) == 2:
520 mc = ctx.manifest()
520 mc = ctx.manifest()
521 mp1 = ctx.p1().manifest()
521 mp1 = ctx.p1().manifest()
522 mp2 = ctx.p2().manifest()
522 mp2 = ctx.p2().manifest()
523 for f in mp1:
523 for f in mp1:
524 if f not in mc:
524 if f not in mc:
525 files.add(f)
525 files.add(f)
526 for f in mp2:
526 for f in mp2:
527 if f not in mc:
527 if f not in mc:
528 files.add(f)
528 files.add(f)
529 for f in mc:
529 for f in mc:
530 if mc[f] != mp1.get(f, None) or mc[f] != mp2.get(f, None):
530 if mc[f] != mp1.get(f, None) or mc[f] != mp2.get(f, None):
531 files.add(f)
531 files.add(f)
532 for fn in files:
532 for fn in files:
533 if isstandin(fn) and fn in ctx:
533 if isstandin(fn) and fn in ctx:
534 addfunc(fn, readasstandin(ctx[fn]))
534 addfunc(fn, readasstandin(ctx[fn]))
535
535
536 def updatestandinsbymatch(repo, match):
536 def updatestandinsbymatch(repo, match):
537 '''Update standins in the working directory according to specified match
537 '''Update standins in the working directory according to specified match
538
538
539 This returns (possibly modified) ``match`` object to be used for
539 This returns (possibly modified) ``match`` object to be used for
540 subsequent commit process.
540 subsequent commit process.
541 '''
541 '''
542
542
543 ui = repo.ui
543 ui = repo.ui
544
544
545 # Case 1: user calls commit with no specific files or
545 # Case 1: user calls commit with no specific files or
546 # include/exclude patterns: refresh and commit all files that
546 # include/exclude patterns: refresh and commit all files that
547 # are "dirty".
547 # are "dirty".
548 if match is None or match.always():
548 if match is None or match.always():
549 # Spend a bit of time here to get a list of files we know
549 # Spend a bit of time here to get a list of files we know
550 # are modified so we can compare only against those.
550 # are modified so we can compare only against those.
551 # It can cost a lot of time (several seconds)
551 # It can cost a lot of time (several seconds)
552 # otherwise to update all standins if the largefiles are
552 # otherwise to update all standins if the largefiles are
553 # large.
553 # large.
554 lfdirstate = openlfdirstate(ui, repo)
554 lfdirstate = openlfdirstate(ui, repo)
555 dirtymatch = matchmod.always(repo.root, repo.getcwd())
555 dirtymatch = matchmod.always()
556 unsure, s = lfdirstate.status(dirtymatch, subrepos=[], ignored=False,
556 unsure, s = lfdirstate.status(dirtymatch, subrepos=[], ignored=False,
557 clean=False, unknown=False)
557 clean=False, unknown=False)
558 modifiedfiles = unsure + s.modified + s.added + s.removed
558 modifiedfiles = unsure + s.modified + s.added + s.removed
559 lfiles = listlfiles(repo)
559 lfiles = listlfiles(repo)
560 # this only loops through largefiles that exist (not
560 # this only loops through largefiles that exist (not
561 # removed/renamed)
561 # removed/renamed)
562 for lfile in lfiles:
562 for lfile in lfiles:
563 if lfile in modifiedfiles:
563 if lfile in modifiedfiles:
564 fstandin = standin(lfile)
564 fstandin = standin(lfile)
565 if repo.wvfs.exists(fstandin):
565 if repo.wvfs.exists(fstandin):
566 # this handles the case where a rebase is being
566 # this handles the case where a rebase is being
567 # performed and the working copy is not updated
567 # performed and the working copy is not updated
568 # yet.
568 # yet.
569 if repo.wvfs.exists(lfile):
569 if repo.wvfs.exists(lfile):
570 updatestandin(repo, lfile, fstandin)
570 updatestandin(repo, lfile, fstandin)
571
571
572 return match
572 return match
573
573
574 lfiles = listlfiles(repo)
574 lfiles = listlfiles(repo)
575 match._files = repo._subdirlfs(match.files(), lfiles)
575 match._files = repo._subdirlfs(match.files(), lfiles)
576
576
577 # Case 2: user calls commit with specified patterns: refresh
577 # Case 2: user calls commit with specified patterns: refresh
578 # any matching big files.
578 # any matching big files.
579 smatcher = composestandinmatcher(repo, match)
579 smatcher = composestandinmatcher(repo, match)
580 standins = repo.dirstate.walk(smatcher, subrepos=[], unknown=False,
580 standins = repo.dirstate.walk(smatcher, subrepos=[], unknown=False,
581 ignored=False)
581 ignored=False)
582
582
583 # No matching big files: get out of the way and pass control to
583 # No matching big files: get out of the way and pass control to
584 # the usual commit() method.
584 # the usual commit() method.
585 if not standins:
585 if not standins:
586 return match
586 return match
587
587
588 # Refresh all matching big files. It's possible that the
588 # Refresh all matching big files. It's possible that the
589 # commit will end up failing, in which case the big files will
589 # commit will end up failing, in which case the big files will
590 # stay refreshed. No harm done: the user modified them and
590 # stay refreshed. No harm done: the user modified them and
591 # asked to commit them, so sooner or later we're going to
591 # asked to commit them, so sooner or later we're going to
592 # refresh the standins. Might as well leave them refreshed.
592 # refresh the standins. Might as well leave them refreshed.
593 lfdirstate = openlfdirstate(ui, repo)
593 lfdirstate = openlfdirstate(ui, repo)
594 for fstandin in standins:
594 for fstandin in standins:
595 lfile = splitstandin(fstandin)
595 lfile = splitstandin(fstandin)
596 if lfdirstate[lfile] != 'r':
596 if lfdirstate[lfile] != 'r':
597 updatestandin(repo, lfile, fstandin)
597 updatestandin(repo, lfile, fstandin)
598
598
599 # Cook up a new matcher that only matches regular files or
599 # Cook up a new matcher that only matches regular files or
600 # standins corresponding to the big files requested by the
600 # standins corresponding to the big files requested by the
601 # user. Have to modify _files to prevent commit() from
601 # user. Have to modify _files to prevent commit() from
602 # complaining "not tracked" for big files.
602 # complaining "not tracked" for big files.
603 match = copy.copy(match)
603 match = copy.copy(match)
604 origmatchfn = match.matchfn
604 origmatchfn = match.matchfn
605
605
606 # Check both the list of largefiles and the list of
606 # Check both the list of largefiles and the list of
607 # standins because if a largefile was removed, it
607 # standins because if a largefile was removed, it
608 # won't be in the list of largefiles at this point
608 # won't be in the list of largefiles at this point
609 match._files += sorted(standins)
609 match._files += sorted(standins)
610
610
611 actualfiles = []
611 actualfiles = []
612 for f in match._files:
612 for f in match._files:
613 fstandin = standin(f)
613 fstandin = standin(f)
614
614
615 # For largefiles, only one of the normal file and the standin should be
615 # For largefiles, only one of the normal file and the standin should be
616 # committed (except when one of them is a removal). In the case of a
616 # committed (except when one of them is a removal). In the case of a
617 # standin removal, drop the normal file if it is unknown to dirstate.
617 # standin removal, drop the normal file if it is unknown to dirstate.
618 # Thus, skip plain largefile names but keep the standin.
618 # Thus, skip plain largefile names but keep the standin.
619 if f in lfiles or fstandin in standins:
619 if f in lfiles or fstandin in standins:
620 if repo.dirstate[fstandin] != 'r':
620 if repo.dirstate[fstandin] != 'r':
621 if repo.dirstate[f] != 'r':
621 if repo.dirstate[f] != 'r':
622 continue
622 continue
623 elif repo.dirstate[f] == '?':
623 elif repo.dirstate[f] == '?':
624 continue
624 continue
625
625
626 actualfiles.append(f)
626 actualfiles.append(f)
627 match._files = actualfiles
627 match._files = actualfiles
628
628
629 def matchfn(f):
629 def matchfn(f):
630 if origmatchfn(f):
630 if origmatchfn(f):
631 return f not in lfiles
631 return f not in lfiles
632 else:
632 else:
633 return f in standins
633 return f in standins
634
634
635 match.matchfn = matchfn
635 match.matchfn = matchfn
636
636
637 return match
637 return match
638
638
639 class automatedcommithook(object):
639 class automatedcommithook(object):
640 '''Stateful hook to update standins at the first commit after resuming
640 '''Stateful hook to update standins at the first commit after resuming
641
641
642 For efficiency, updating standins in the working directory should
642 For efficiency, updating standins in the working directory should
643 be avoided during automated committing (like rebase, transplant and
643 be avoided during automated committing (like rebase, transplant and
644 so on), because they should be updated before committing.
644 so on), because they should be updated before committing.
645
645
646 But the first commit after resuming automated committing (e.g. ``rebase
646 But the first commit after resuming automated committing (e.g. ``rebase
647 --continue``) should update them, because largefiles may be
647 --continue``) should update them, because largefiles may be
648 modified manually.
648 modified manually.
649 '''
649 '''
650 def __init__(self, resuming):
650 def __init__(self, resuming):
651 self.resuming = resuming
651 self.resuming = resuming
652
652
653 def __call__(self, repo, match):
653 def __call__(self, repo, match):
654 if self.resuming:
654 if self.resuming:
655 self.resuming = False # avoids updating at subsequent commits
655 self.resuming = False # avoids updating at subsequent commits
656 return updatestandinsbymatch(repo, match)
656 return updatestandinsbymatch(repo, match)
657 else:
657 else:
658 return match
658 return match
659
659
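For context, a hedged sketch of how this hook is typically installed around an automated-commit command; the ``repo._lfcommithooks`` list is assumed to be the integration point provided by the largefiles reposetup, and the surrounding command is illustrative:

    # sketch only, not part of this changeset
    resuming = opts.get(r'continue')
    repo._lfcommithooks.append(automatedcommithook(resuming))
    try:
        result = orig(ui, repo, **opts)   # e.g. the wrapped rebase command
    finally:
        repo._lfcommithooks.pop()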
660 def getstatuswriter(ui, repo, forcibly=None):
660 def getstatuswriter(ui, repo, forcibly=None):
661 '''Return the function to write largefiles-specific status out
661 '''Return the function to write largefiles-specific status out
662
662
663 If ``forcibly`` is ``None``, this returns the last element of
663 If ``forcibly`` is ``None``, this returns the last element of
664 ``repo._lfstatuswriters`` as the "default" writer function.
664 ``repo._lfstatuswriters`` as the "default" writer function.
665
665
666 Otherwise, this returns a function that always writes status messages
666 Otherwise, this returns a function that always writes status messages
667 out (or always ignores them, if ``not forcibly``).
667 out (or always ignores them, if ``not forcibly``).
668 '''
668 '''
669 if forcibly is None and util.safehasattr(repo, '_largefilesenabled'):
669 if forcibly is None and util.safehasattr(repo, '_largefilesenabled'):
670 return repo._lfstatuswriters[-1]
670 return repo._lfstatuswriters[-1]
671 else:
671 else:
672 if forcibly:
672 if forcibly:
673 return ui.status # forcibly WRITE OUT
673 return ui.status # forcibly WRITE OUT
674 else:
674 else:
675 return lambda *msg, **opts: None # forcibly IGNORE
675 return lambda *msg, **opts: None # forcibly IGNORE
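To make the ``forcibly`` semantics above concrete, a small usage sketch (the call sites and the message text are illustrative only):

    writer = getstatuswriter(ui, repo)                  # default: repo._lfstatuswriters[-1]
    writer = getstatuswriter(ui, repo, forcibly=True)   # always write via ui.status
    writer = getstatuswriter(ui, repo, forcibly=False)  # silently drop status messages
    writer(_('%d largefiles updated\n') % numupdated)   # hypothetical message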
@@ -1,1506 +1,1503 b''
1 # Copyright 2009-2010 Gregory P. Ward
1 # Copyright 2009-2010 Gregory P. Ward
2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
3 # Copyright 2010-2011 Fog Creek Software
3 # Copyright 2010-2011 Fog Creek Software
4 # Copyright 2010-2011 Unity Technologies
4 # Copyright 2010-2011 Unity Technologies
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8
8
9 '''Overridden Mercurial commands and functions for the largefiles extension'''
9 '''Overridden Mercurial commands and functions for the largefiles extension'''
10 from __future__ import absolute_import
10 from __future__ import absolute_import
11
11
12 import copy
12 import copy
13 import os
13 import os
14
14
15 from mercurial.i18n import _
15 from mercurial.i18n import _
16
16
17 from mercurial.hgweb import (
17 from mercurial.hgweb import (
18 webcommands,
18 webcommands,
19 )
19 )
20
20
21 from mercurial import (
21 from mercurial import (
22 archival,
22 archival,
23 cmdutil,
23 cmdutil,
24 copies as copiesmod,
24 copies as copiesmod,
25 error,
25 error,
26 exchange,
26 exchange,
27 extensions,
27 extensions,
28 exthelper,
28 exthelper,
29 filemerge,
29 filemerge,
30 hg,
30 hg,
31 logcmdutil,
31 logcmdutil,
32 match as matchmod,
32 match as matchmod,
33 merge,
33 merge,
34 pathutil,
34 pathutil,
35 pycompat,
35 pycompat,
36 scmutil,
36 scmutil,
37 smartset,
37 smartset,
38 subrepo,
38 subrepo,
39 upgrade,
39 upgrade,
40 url as urlmod,
40 url as urlmod,
41 util,
41 util,
42 )
42 )
43
43
44 from . import (
44 from . import (
45 lfcommands,
45 lfcommands,
46 lfutil,
46 lfutil,
47 storefactory,
47 storefactory,
48 )
48 )
49
49
50 eh = exthelper.exthelper()
50 eh = exthelper.exthelper()
51
51
52 # -- Utility functions: commonly/repeatedly needed functionality ---------------
52 # -- Utility functions: commonly/repeatedly needed functionality ---------------
53
53
54 def composelargefilematcher(match, manifest):
54 def composelargefilematcher(match, manifest):
55 '''create a matcher that matches only the largefiles in the original
55 '''create a matcher that matches only the largefiles in the original
56 matcher'''
56 matcher'''
57 m = copy.copy(match)
57 m = copy.copy(match)
58 lfile = lambda f: lfutil.standin(f) in manifest
58 lfile = lambda f: lfutil.standin(f) in manifest
59 m._files = [lf for lf in m._files if lfile(lf)]
59 m._files = [lf for lf in m._files if lfile(lf)]
60 m._fileset = set(m._files)
60 m._fileset = set(m._files)
61 m.always = lambda: False
61 m.always = lambda: False
62 origmatchfn = m.matchfn
62 origmatchfn = m.matchfn
63 m.matchfn = lambda f: lfile(f) and origmatchfn(f)
63 m.matchfn = lambda f: lfile(f) and origmatchfn(f)
64 return m
64 return m
65
65
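As a concrete illustration of the composition above (a sketch; the file names are made up):

    m = composelargefilematcher(match, repo[None].manifest())
    m.matchfn('big.iso')    # True only if '.hglf/big.iso' is in the manifest
                            # and the original matcher accepted 'big.iso'
    m.matchfn('README')     # False when no '.hglf/README' standin is tracked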
66 def composenormalfilematcher(match, manifest, exclude=None):
66 def composenormalfilematcher(match, manifest, exclude=None):
67 excluded = set()
67 excluded = set()
68 if exclude is not None:
68 if exclude is not None:
69 excluded.update(exclude)
69 excluded.update(exclude)
70
70
71 m = copy.copy(match)
71 m = copy.copy(match)
72 notlfile = lambda f: not (lfutil.isstandin(f) or lfutil.standin(f) in
72 notlfile = lambda f: not (lfutil.isstandin(f) or lfutil.standin(f) in
73 manifest or f in excluded)
73 manifest or f in excluded)
74 m._files = [lf for lf in m._files if notlfile(lf)]
74 m._files = [lf for lf in m._files if notlfile(lf)]
75 m._fileset = set(m._files)
75 m._fileset = set(m._files)
76 m.always = lambda: False
76 m.always = lambda: False
77 origmatchfn = m.matchfn
77 origmatchfn = m.matchfn
78 m.matchfn = lambda f: notlfile(f) and origmatchfn(f)
78 m.matchfn = lambda f: notlfile(f) and origmatchfn(f)
79 return m
79 return m
80
80
81 def addlargefiles(ui, repo, isaddremove, matcher, uipathfn, **opts):
81 def addlargefiles(ui, repo, isaddremove, matcher, uipathfn, **opts):
82 large = opts.get(r'large')
82 large = opts.get(r'large')
83 lfsize = lfutil.getminsize(
83 lfsize = lfutil.getminsize(
84 ui, lfutil.islfilesrepo(repo), opts.get(r'lfsize'))
84 ui, lfutil.islfilesrepo(repo), opts.get(r'lfsize'))
85
85
86 lfmatcher = None
86 lfmatcher = None
87 if lfutil.islfilesrepo(repo):
87 if lfutil.islfilesrepo(repo):
88 lfpats = ui.configlist(lfutil.longname, 'patterns')
88 lfpats = ui.configlist(lfutil.longname, 'patterns')
89 if lfpats:
89 if lfpats:
90 lfmatcher = matchmod.match(repo.root, '', list(lfpats))
90 lfmatcher = matchmod.match(repo.root, '', list(lfpats))
91
91
92 lfnames = []
92 lfnames = []
93 m = matcher
93 m = matcher
94
94
95 wctx = repo[None]
95 wctx = repo[None]
96 for f in wctx.walk(matchmod.badmatch(m, lambda x, y: None)):
96 for f in wctx.walk(matchmod.badmatch(m, lambda x, y: None)):
97 exact = m.exact(f)
97 exact = m.exact(f)
98 lfile = lfutil.standin(f) in wctx
98 lfile = lfutil.standin(f) in wctx
99 nfile = f in wctx
99 nfile = f in wctx
100 exists = lfile or nfile
100 exists = lfile or nfile
101
101
102 # Don't warn the user when they attempt to add a normal tracked file.
102 # Don't warn the user when they attempt to add a normal tracked file.
103 # The normal add code will do that for us.
103 # The normal add code will do that for us.
104 if exact and exists:
104 if exact and exists:
105 if lfile:
105 if lfile:
106 ui.warn(_('%s already a largefile\n') % uipathfn(f))
106 ui.warn(_('%s already a largefile\n') % uipathfn(f))
107 continue
107 continue
108
108
109 if (exact or not exists) and not lfutil.isstandin(f):
109 if (exact or not exists) and not lfutil.isstandin(f):
110 # In case the file was removed previously, but not committed
110 # In case the file was removed previously, but not committed
111 # (issue3507)
111 # (issue3507)
112 if not repo.wvfs.exists(f):
112 if not repo.wvfs.exists(f):
113 continue
113 continue
114
114
115 abovemin = (lfsize and
115 abovemin = (lfsize and
116 repo.wvfs.lstat(f).st_size >= lfsize * 1024 * 1024)
116 repo.wvfs.lstat(f).st_size >= lfsize * 1024 * 1024)
117 if large or abovemin or (lfmatcher and lfmatcher(f)):
117 if large or abovemin or (lfmatcher and lfmatcher(f)):
118 lfnames.append(f)
118 lfnames.append(f)
119 if ui.verbose or not exact:
119 if ui.verbose or not exact:
120 ui.status(_('adding %s as a largefile\n') % uipathfn(f))
120 ui.status(_('adding %s as a largefile\n') % uipathfn(f))
121
121
122 bad = []
122 bad = []
123
123
124 # Need to lock, otherwise there could be a race condition between
124 # Need to lock, otherwise there could be a race condition between
125 # when standins are created and added to the repo.
125 # when standins are created and added to the repo.
126 with repo.wlock():
126 with repo.wlock():
127 if not opts.get(r'dry_run'):
127 if not opts.get(r'dry_run'):
128 standins = []
128 standins = []
129 lfdirstate = lfutil.openlfdirstate(ui, repo)
129 lfdirstate = lfutil.openlfdirstate(ui, repo)
130 for f in lfnames:
130 for f in lfnames:
131 standinname = lfutil.standin(f)
131 standinname = lfutil.standin(f)
132 lfutil.writestandin(repo, standinname, hash='',
132 lfutil.writestandin(repo, standinname, hash='',
133 executable=lfutil.getexecutable(repo.wjoin(f)))
133 executable=lfutil.getexecutable(repo.wjoin(f)))
134 standins.append(standinname)
134 standins.append(standinname)
135 if lfdirstate[f] == 'r':
135 if lfdirstate[f] == 'r':
136 lfdirstate.normallookup(f)
136 lfdirstate.normallookup(f)
137 else:
137 else:
138 lfdirstate.add(f)
138 lfdirstate.add(f)
139 lfdirstate.write()
139 lfdirstate.write()
140 bad += [lfutil.splitstandin(f)
140 bad += [lfutil.splitstandin(f)
141 for f in repo[None].add(standins)
141 for f in repo[None].add(standins)
142 if f in m.files()]
142 if f in m.files()]
143
143
144 added = [f for f in lfnames if f not in bad]
144 added = [f for f in lfnames if f not in bad]
145 return added, bad
145 return added, bad
146
146
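The size and pattern checks in addlargefiles() above are driven by the wrapped ``hg add`` options and the largefiles config section. A hedged sketch of typical inputs; the ``patterns`` key is the one read by the configlist call above, while ``minsize`` is assumed to be the key consulted by lfutil.getminsize, and all values are illustrative:

    # hg add --large big.iso
    # hg add --lfsize 20 media/
    #
    # [largefiles]
    # minsize = 10
    # patterns = *.iso re:.*\.zip$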
147 def removelargefiles(ui, repo, isaddremove, matcher, uipathfn, dryrun, **opts):
147 def removelargefiles(ui, repo, isaddremove, matcher, uipathfn, dryrun, **opts):
148 after = opts.get(r'after')
148 after = opts.get(r'after')
149 m = composelargefilematcher(matcher, repo[None].manifest())
149 m = composelargefilematcher(matcher, repo[None].manifest())
150 try:
150 try:
151 repo.lfstatus = True
151 repo.lfstatus = True
152 s = repo.status(match=m, clean=not isaddremove)
152 s = repo.status(match=m, clean=not isaddremove)
153 finally:
153 finally:
154 repo.lfstatus = False
154 repo.lfstatus = False
155 manifest = repo[None].manifest()
155 manifest = repo[None].manifest()
156 modified, added, deleted, clean = [[f for f in list
156 modified, added, deleted, clean = [[f for f in list
157 if lfutil.standin(f) in manifest]
157 if lfutil.standin(f) in manifest]
158 for list in (s.modified, s.added,
158 for list in (s.modified, s.added,
159 s.deleted, s.clean)]
159 s.deleted, s.clean)]
160
160
161 def warn(files, msg):
161 def warn(files, msg):
162 for f in files:
162 for f in files:
163 ui.warn(msg % uipathfn(f))
163 ui.warn(msg % uipathfn(f))
164 return int(len(files) > 0)
164 return int(len(files) > 0)
165
165
166 if after:
166 if after:
167 remove = deleted
167 remove = deleted
168 result = warn(modified + added + clean,
168 result = warn(modified + added + clean,
169 _('not removing %s: file still exists\n'))
169 _('not removing %s: file still exists\n'))
170 else:
170 else:
171 remove = deleted + clean
171 remove = deleted + clean
172 result = warn(modified, _('not removing %s: file is modified (use -f'
172 result = warn(modified, _('not removing %s: file is modified (use -f'
173 ' to force removal)\n'))
173 ' to force removal)\n'))
174 result = warn(added, _('not removing %s: file has been marked for add'
174 result = warn(added, _('not removing %s: file has been marked for add'
175 ' (use forget to undo)\n')) or result
175 ' (use forget to undo)\n')) or result
176
176
177 # Need to lock because standin files are deleted then removed from the
177 # Need to lock because standin files are deleted then removed from the
178 # repository and we could race in-between.
178 # repository and we could race in-between.
179 with repo.wlock():
179 with repo.wlock():
180 lfdirstate = lfutil.openlfdirstate(ui, repo)
180 lfdirstate = lfutil.openlfdirstate(ui, repo)
181 for f in sorted(remove):
181 for f in sorted(remove):
182 if ui.verbose or not m.exact(f):
182 if ui.verbose or not m.exact(f):
183 ui.status(_('removing %s\n') % uipathfn(f))
183 ui.status(_('removing %s\n') % uipathfn(f))
184
184
185 if not dryrun:
185 if not dryrun:
186 if not after:
186 if not after:
187 repo.wvfs.unlinkpath(f, ignoremissing=True)
187 repo.wvfs.unlinkpath(f, ignoremissing=True)
188
188
189 if dryrun:
189 if dryrun:
190 return result
190 return result
191
191
192 remove = [lfutil.standin(f) for f in remove]
192 remove = [lfutil.standin(f) for f in remove]
193 # If this is being called by addremove, let the original addremove
193 # If this is being called by addremove, let the original addremove
194 # function handle this.
194 # function handle this.
195 if not isaddremove:
195 if not isaddremove:
196 for f in remove:
196 for f in remove:
197 repo.wvfs.unlinkpath(f, ignoremissing=True)
197 repo.wvfs.unlinkpath(f, ignoremissing=True)
198 repo[None].forget(remove)
198 repo[None].forget(remove)
199
199
200 for f in remove:
200 for f in remove:
201 lfutil.synclfdirstate(repo, lfdirstate, lfutil.splitstandin(f),
201 lfutil.synclfdirstate(repo, lfdirstate, lfutil.splitstandin(f),
202 False)
202 False)
203
203
204 lfdirstate.write()
204 lfdirstate.write()
205
205
206 return result
206 return result
207
207
208 # For overriding mercurial.hgweb.webcommands so that largefiles will
208 # For overriding mercurial.hgweb.webcommands so that largefiles will
209 # appear at their right place in the manifests.
209 # appear at their right place in the manifests.
210 @eh.wrapfunction(webcommands, 'decodepath')
210 @eh.wrapfunction(webcommands, 'decodepath')
211 def decodepath(orig, path):
211 def decodepath(orig, path):
212 return lfutil.splitstandin(path) or path
212 return lfutil.splitstandin(path) or path
213
213
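In effect, the wrapped webcommands.decodepath() maps standin paths back to their largefile names (the paths below are made up; '.hglf/' is the standin prefix used throughout the extension):

    webcommands.decodepath('.hglf/data/huge.bin')   # -> 'data/huge.bin'
    webcommands.decodepath('data/small.txt')        # -> 'data/small.txt' (unchanged)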
214 # -- Wrappers: modify existing commands --------------------------------
214 # -- Wrappers: modify existing commands --------------------------------
215
215
216 @eh.wrapcommand('add',
216 @eh.wrapcommand('add',
217 opts=[('', 'large', None, _('add as largefile')),
217 opts=[('', 'large', None, _('add as largefile')),
218 ('', 'normal', None, _('add as normal file')),
218 ('', 'normal', None, _('add as normal file')),
219 ('', 'lfsize', '', _('add all files above this size (in megabytes) '
219 ('', 'lfsize', '', _('add all files above this size (in megabytes) '
220 'as largefiles (default: 10)'))])
220 'as largefiles (default: 10)'))])
221 def overrideadd(orig, ui, repo, *pats, **opts):
221 def overrideadd(orig, ui, repo, *pats, **opts):
222 if opts.get(r'normal') and opts.get(r'large'):
222 if opts.get(r'normal') and opts.get(r'large'):
223 raise error.Abort(_('--normal cannot be used with --large'))
223 raise error.Abort(_('--normal cannot be used with --large'))
224 return orig(ui, repo, *pats, **opts)
224 return orig(ui, repo, *pats, **opts)
225
225
226 @eh.wrapfunction(cmdutil, 'add')
226 @eh.wrapfunction(cmdutil, 'add')
227 def cmdutiladd(orig, ui, repo, matcher, prefix, uipathfn, explicitonly, **opts):
227 def cmdutiladd(orig, ui, repo, matcher, prefix, uipathfn, explicitonly, **opts):
228 # The --normal flag short circuits this override
228 # The --normal flag short circuits this override
229 if opts.get(r'normal'):
229 if opts.get(r'normal'):
230 return orig(ui, repo, matcher, prefix, uipathfn, explicitonly, **opts)
230 return orig(ui, repo, matcher, prefix, uipathfn, explicitonly, **opts)
231
231
232 ladded, lbad = addlargefiles(ui, repo, False, matcher, uipathfn, **opts)
232 ladded, lbad = addlargefiles(ui, repo, False, matcher, uipathfn, **opts)
233 normalmatcher = composenormalfilematcher(matcher, repo[None].manifest(),
233 normalmatcher = composenormalfilematcher(matcher, repo[None].manifest(),
234 ladded)
234 ladded)
235 bad = orig(ui, repo, normalmatcher, prefix, uipathfn, explicitonly, **opts)
235 bad = orig(ui, repo, normalmatcher, prefix, uipathfn, explicitonly, **opts)
236
236
237 bad.extend(f for f in lbad)
237 bad.extend(f for f in lbad)
238 return bad
238 return bad
239
239
240 @eh.wrapfunction(cmdutil, 'remove')
240 @eh.wrapfunction(cmdutil, 'remove')
241 def cmdutilremove(orig, ui, repo, matcher, prefix, uipathfn, after, force,
241 def cmdutilremove(orig, ui, repo, matcher, prefix, uipathfn, after, force,
242 subrepos, dryrun):
242 subrepos, dryrun):
243 normalmatcher = composenormalfilematcher(matcher, repo[None].manifest())
243 normalmatcher = composenormalfilematcher(matcher, repo[None].manifest())
244 result = orig(ui, repo, normalmatcher, prefix, uipathfn, after, force,
244 result = orig(ui, repo, normalmatcher, prefix, uipathfn, after, force,
245 subrepos, dryrun)
245 subrepos, dryrun)
246 return removelargefiles(ui, repo, False, matcher, uipathfn, dryrun,
246 return removelargefiles(ui, repo, False, matcher, uipathfn, dryrun,
247 after=after, force=force) or result
247 after=after, force=force) or result
248
248
249 @eh.wrapfunction(subrepo.hgsubrepo, 'status')
249 @eh.wrapfunction(subrepo.hgsubrepo, 'status')
250 def overridestatusfn(orig, repo, rev2, **opts):
250 def overridestatusfn(orig, repo, rev2, **opts):
251 try:
251 try:
252 repo._repo.lfstatus = True
252 repo._repo.lfstatus = True
253 return orig(repo, rev2, **opts)
253 return orig(repo, rev2, **opts)
254 finally:
254 finally:
255 repo._repo.lfstatus = False
255 repo._repo.lfstatus = False
256
256
257 @eh.wrapcommand('status')
257 @eh.wrapcommand('status')
258 def overridestatus(orig, ui, repo, *pats, **opts):
258 def overridestatus(orig, ui, repo, *pats, **opts):
259 try:
259 try:
260 repo.lfstatus = True
260 repo.lfstatus = True
261 return orig(ui, repo, *pats, **opts)
261 return orig(ui, repo, *pats, **opts)
262 finally:
262 finally:
263 repo.lfstatus = False
263 repo.lfstatus = False
264
264
265 @eh.wrapfunction(subrepo.hgsubrepo, 'dirty')
265 @eh.wrapfunction(subrepo.hgsubrepo, 'dirty')
266 def overridedirty(orig, repo, ignoreupdate=False, missing=False):
266 def overridedirty(orig, repo, ignoreupdate=False, missing=False):
267 try:
267 try:
268 repo._repo.lfstatus = True
268 repo._repo.lfstatus = True
269 return orig(repo, ignoreupdate=ignoreupdate, missing=missing)
269 return orig(repo, ignoreupdate=ignoreupdate, missing=missing)
270 finally:
270 finally:
271 repo._repo.lfstatus = False
271 repo._repo.lfstatus = False
272
272
273 @eh.wrapcommand('log')
273 @eh.wrapcommand('log')
274 def overridelog(orig, ui, repo, *pats, **opts):
274 def overridelog(orig, ui, repo, *pats, **opts):
275 def overridematchandpats(orig, ctx, pats=(), opts=None, globbed=False,
275 def overridematchandpats(orig, ctx, pats=(), opts=None, globbed=False,
276 default='relpath', badfn=None):
276 default='relpath', badfn=None):
277 """Matcher that merges root directory with .hglf, suitable for log.
277 """Matcher that merges root directory with .hglf, suitable for log.
278 It is still possible to match .hglf directly.
278 It is still possible to match .hglf directly.
279 For any listed files run log on the standin too.
279 For any listed files run log on the standin too.
280 matchfn tries both the given filename and with .hglf stripped.
280 matchfn tries both the given filename and with .hglf stripped.
281 """
281 """
282 if opts is None:
282 if opts is None:
283 opts = {}
283 opts = {}
284 matchandpats = orig(ctx, pats, opts, globbed, default, badfn=badfn)
284 matchandpats = orig(ctx, pats, opts, globbed, default, badfn=badfn)
285 m, p = copy.copy(matchandpats)
285 m, p = copy.copy(matchandpats)
286
286
287 if m.always():
287 if m.always():
288 # We want to match everything anyway, so there's no benefit trying
288 # We want to match everything anyway, so there's no benefit trying
289 # to add standins.
289 # to add standins.
290 return matchandpats
290 return matchandpats
291
291
292 pats = set(p)
292 pats = set(p)
293
293
294 def fixpats(pat, tostandin=lfutil.standin):
294 def fixpats(pat, tostandin=lfutil.standin):
295 if pat.startswith('set:'):
295 if pat.startswith('set:'):
296 return pat
296 return pat
297
297
298 kindpat = matchmod._patsplit(pat, None)
298 kindpat = matchmod._patsplit(pat, None)
299
299
300 if kindpat[0] is not None:
300 if kindpat[0] is not None:
301 return kindpat[0] + ':' + tostandin(kindpat[1])
301 return kindpat[0] + ':' + tostandin(kindpat[1])
302 return tostandin(kindpat[1])
302 return tostandin(kindpat[1])
303
303
304 cwd = repo.getcwd()
304 cwd = repo.getcwd()
305 if cwd:
305 if cwd:
306 hglf = lfutil.shortname
306 hglf = lfutil.shortname
307 back = util.pconvert(repo.pathto(hglf)[:-len(hglf)])
307 back = util.pconvert(repo.pathto(hglf)[:-len(hglf)])
308
308
309 def tostandin(f):
309 def tostandin(f):
310 # The file may already be a standin, so truncate the back
310 # The file may already be a standin, so truncate the back
311 # prefix and test before mangling it. This avoids turning
311 # prefix and test before mangling it. This avoids turning
312 # 'glob:../.hglf/foo*' into 'glob:../.hglf/../.hglf/foo*'.
312 # 'glob:../.hglf/foo*' into 'glob:../.hglf/../.hglf/foo*'.
313 if f.startswith(back) and lfutil.splitstandin(f[len(back):]):
313 if f.startswith(back) and lfutil.splitstandin(f[len(back):]):
314 return f
314 return f
315
315
316 # An absolute path is from outside the repo, so truncate the
316 # An absolute path is from outside the repo, so truncate the
317 # path to the root before building the standin. Otherwise cwd
317 # path to the root before building the standin. Otherwise cwd
318 # is somewhere in the repo, relative to root, and needs to be
318 # is somewhere in the repo, relative to root, and needs to be
319 # prepended before building the standin.
319 # prepended before building the standin.
320 if os.path.isabs(cwd):
320 if os.path.isabs(cwd):
321 f = f[len(back):]
321 f = f[len(back):]
322 else:
322 else:
323 f = cwd + '/' + f
323 f = cwd + '/' + f
324 return back + lfutil.standin(f)
324 return back + lfutil.standin(f)
325 else:
325 else:
326 def tostandin(f):
326 def tostandin(f):
327 if lfutil.isstandin(f):
327 if lfutil.isstandin(f):
328 return f
328 return f
329 return lfutil.standin(f)
329 return lfutil.standin(f)
330 pats.update(fixpats(f, tostandin) for f in p)
330 pats.update(fixpats(f, tostandin) for f in p)
331
331
332 for i in range(0, len(m._files)):
332 for i in range(0, len(m._files)):
333 # Don't add '.hglf' to m.files, since that is already covered by '.'
333 # Don't add '.hglf' to m.files, since that is already covered by '.'
334 if m._files[i] == '.':
334 if m._files[i] == '.':
335 continue
335 continue
336 standin = lfutil.standin(m._files[i])
336 standin = lfutil.standin(m._files[i])
337 # If the "standin" is a directory, append instead of replace to
337 # If the "standin" is a directory, append instead of replace to
338 # support naming a directory on the command line with only
338 # support naming a directory on the command line with only
339 # largefiles. The original directory is kept to support normal
339 # largefiles. The original directory is kept to support normal
340 # files.
340 # files.
341 if standin in ctx:
341 if standin in ctx:
342 m._files[i] = standin
342 m._files[i] = standin
343 elif m._files[i] not in ctx and repo.wvfs.isdir(standin):
343 elif m._files[i] not in ctx and repo.wvfs.isdir(standin):
344 m._files.append(standin)
344 m._files.append(standin)
345
345
346 m._fileset = set(m._files)
346 m._fileset = set(m._files)
347 m.always = lambda: False
347 m.always = lambda: False
348 origmatchfn = m.matchfn
348 origmatchfn = m.matchfn
349 def lfmatchfn(f):
349 def lfmatchfn(f):
350 lf = lfutil.splitstandin(f)
350 lf = lfutil.splitstandin(f)
351 if lf is not None and origmatchfn(lf):
351 if lf is not None and origmatchfn(lf):
352 return True
352 return True
353 r = origmatchfn(f)
353 r = origmatchfn(f)
354 return r
354 return r
355 m.matchfn = lfmatchfn
355 m.matchfn = lfmatchfn
356
356
357 ui.debug('updated patterns: %s\n' % ', '.join(sorted(pats)))
357 ui.debug('updated patterns: %s\n' % ', '.join(sorted(pats)))
358 return m, pats
358 return m, pats
359
359
360 # For hg log --patch, the match object is used in two different senses:
360 # For hg log --patch, the match object is used in two different senses:
361 # (1) to determine what revisions should be printed out, and
361 # (1) to determine what revisions should be printed out, and
362 # (2) to determine what files to print out diffs for.
362 # (2) to determine what files to print out diffs for.
363 # The magic matchandpats override should be used for case (1) but not for
363 # The magic matchandpats override should be used for case (1) but not for
364 # case (2).
364 # case (2).
365 oldmatchandpats = scmutil.matchandpats
365 oldmatchandpats = scmutil.matchandpats
366 def overridemakefilematcher(orig, repo, pats, opts, badfn=None):
366 def overridemakefilematcher(orig, repo, pats, opts, badfn=None):
367 wctx = repo[None]
367 wctx = repo[None]
368 match, pats = oldmatchandpats(wctx, pats, opts, badfn=badfn)
368 match, pats = oldmatchandpats(wctx, pats, opts, badfn=badfn)
369 return lambda ctx: match
369 return lambda ctx: match
370
370
371 wrappedmatchandpats = extensions.wrappedfunction(scmutil, 'matchandpats',
371 wrappedmatchandpats = extensions.wrappedfunction(scmutil, 'matchandpats',
372 overridematchandpats)
372 overridematchandpats)
373 wrappedmakefilematcher = extensions.wrappedfunction(
373 wrappedmakefilematcher = extensions.wrappedfunction(
374 logcmdutil, '_makenofollowfilematcher', overridemakefilematcher)
374 logcmdutil, '_makenofollowfilematcher', overridemakefilematcher)
375 with wrappedmatchandpats, wrappedmakefilematcher:
375 with wrappedmatchandpats, wrappedmakefilematcher:
376 return orig(ui, repo, *pats, **opts)
376 return orig(ui, repo, *pats, **opts)
377
377
378 @eh.wrapcommand('verify',
378 @eh.wrapcommand('verify',
379 opts=[('', 'large', None,
379 opts=[('', 'large', None,
380 _('verify that all largefiles in the current revision exist')),
380 _('verify that all largefiles in the current revision exist')),
381 ('', 'lfa', None,
381 ('', 'lfa', None,
382 _('verify largefiles in all revisions, not just current')),
382 _('verify largefiles in all revisions, not just current')),
383 ('', 'lfc', None,
383 ('', 'lfc', None,
384 _('verify local largefile contents, not just existence'))])
384 _('verify local largefile contents, not just existence'))])
385 def overrideverify(orig, ui, repo, *pats, **opts):
385 def overrideverify(orig, ui, repo, *pats, **opts):
386 large = opts.pop(r'large', False)
386 large = opts.pop(r'large', False)
387 all = opts.pop(r'lfa', False)
387 all = opts.pop(r'lfa', False)
388 contents = opts.pop(r'lfc', False)
388 contents = opts.pop(r'lfc', False)
389
389
390 result = orig(ui, repo, *pats, **opts)
390 result = orig(ui, repo, *pats, **opts)
391 if large or all or contents:
391 if large or all or contents:
392 result = result or lfcommands.verifylfiles(ui, repo, all, contents)
392 result = result or lfcommands.verifylfiles(ui, repo, all, contents)
393 return result
393 return result
394
394
395 @eh.wrapcommand('debugstate',
395 @eh.wrapcommand('debugstate',
396 opts=[('', 'large', None, _('display largefiles dirstate'))])
396 opts=[('', 'large', None, _('display largefiles dirstate'))])
397 def overridedebugstate(orig, ui, repo, *pats, **opts):
397 def overridedebugstate(orig, ui, repo, *pats, **opts):
398 large = opts.pop(r'large', False)
398 large = opts.pop(r'large', False)
399 if large:
399 if large:
400 class fakerepo(object):
400 class fakerepo(object):
401 dirstate = lfutil.openlfdirstate(ui, repo)
401 dirstate = lfutil.openlfdirstate(ui, repo)
402 orig(ui, fakerepo, *pats, **opts)
402 orig(ui, fakerepo, *pats, **opts)
403 else:
403 else:
404 orig(ui, repo, *pats, **opts)
404 orig(ui, repo, *pats, **opts)
405
405
406 # Before starting the manifest merge, merge.updates will call
406 # Before starting the manifest merge, merge.updates will call
407 # _checkunknownfile to check if there are any files in the merged-in
407 # _checkunknownfile to check if there are any files in the merged-in
408 # changeset that collide with unknown files in the working copy.
408 # changeset that collide with unknown files in the working copy.
409 #
409 #
410 # The largefiles are seen as unknown, so this prevents us from merging
410 # The largefiles are seen as unknown, so this prevents us from merging
411 # in a file 'foo' if we already have a largefile with the same name.
411 # in a file 'foo' if we already have a largefile with the same name.
412 #
412 #
413 # The overridden function filters the unknown files by removing any
413 # The overridden function filters the unknown files by removing any
414 # largefiles. This makes the merge proceed and we can then handle this
414 # largefiles. This makes the merge proceed and we can then handle this
415 # case further in the overridden calculateupdates function below.
415 # case further in the overridden calculateupdates function below.
416 @eh.wrapfunction(merge, '_checkunknownfile')
416 @eh.wrapfunction(merge, '_checkunknownfile')
417 def overridecheckunknownfile(origfn, repo, wctx, mctx, f, f2=None):
417 def overridecheckunknownfile(origfn, repo, wctx, mctx, f, f2=None):
418 if lfutil.standin(repo.dirstate.normalize(f)) in wctx:
418 if lfutil.standin(repo.dirstate.normalize(f)) in wctx:
419 return False
419 return False
420 return origfn(repo, wctx, mctx, f, f2)
420 return origfn(repo, wctx, mctx, f, f2)
421
421
422 # The manifest merge handles conflicts on the manifest level. We want
422 # The manifest merge handles conflicts on the manifest level. We want
423 # to handle changes in largefile-ness of files at this level too.
423 # to handle changes in largefile-ness of files at this level too.
424 #
424 #
425 # The strategy is to run the original calculateupdates and then process
425 # The strategy is to run the original calculateupdates and then process
426 # the action list it outputs. There are two cases we need to deal with:
426 # the action list it outputs. There are two cases we need to deal with:
427 #
427 #
428 # 1. Normal file in p1, largefile in p2. Here the largefile is
428 # 1. Normal file in p1, largefile in p2. Here the largefile is
429 # detected via its standin file, which will enter the working copy
429 # detected via its standin file, which will enter the working copy
430 # with a "get" action. It is not "merge" since the standin is all
430 # with a "get" action. It is not "merge" since the standin is all
431 # Mercurial is concerned with at this level -- the link to the
431 # Mercurial is concerned with at this level -- the link to the
432 # existing normal file is not relevant here.
432 # existing normal file is not relevant here.
433 #
433 #
434 # 2. Largefile in p1, normal file in p2. Here we get a "merge" action
434 # 2. Largefile in p1, normal file in p2. Here we get a "merge" action
435 # since the largefile will be present in the working copy and
435 # since the largefile will be present in the working copy and
436 # different from the normal file in p2. Mercurial therefore
436 # different from the normal file in p2. Mercurial therefore
437 # triggers a merge action.
437 # triggers a merge action.
438 #
438 #
439 # In both cases, we prompt the user and emit new actions to either
439 # In both cases, we prompt the user and emit new actions to either
440 # remove the standin (if the normal file was kept) or to remove the
440 # remove the standin (if the normal file was kept) or to remove the
441 # normal file and get the standin (if the largefile was kept). The
441 # normal file and get the standin (if the largefile was kept). The
442 # default prompt answer is to use the largefile version since it was
442 # default prompt answer is to use the largefile version since it was
443 # presumably changed on purpose.
443 # presumably changed on purpose.
444 #
444 #
445 # Finally, the merge.applyupdates function will then take care of
445 # Finally, the merge.applyupdates function will then take care of
446 # writing the files into the working copy and lfcommands.updatelfiles
446 # writing the files into the working copy and lfcommands.updatelfiles
447 # will update the largefiles.
447 # will update the largefiles.
448 @eh.wrapfunction(merge, 'calculateupdates')
448 @eh.wrapfunction(merge, 'calculateupdates')
449 def overridecalculateupdates(origfn, repo, p1, p2, pas, branchmerge, force,
449 def overridecalculateupdates(origfn, repo, p1, p2, pas, branchmerge, force,
450 acceptremote, *args, **kwargs):
450 acceptremote, *args, **kwargs):
451 overwrite = force and not branchmerge
451 overwrite = force and not branchmerge
452 actions, diverge, renamedelete = origfn(
452 actions, diverge, renamedelete = origfn(
453 repo, p1, p2, pas, branchmerge, force, acceptremote, *args, **kwargs)
453 repo, p1, p2, pas, branchmerge, force, acceptremote, *args, **kwargs)
454
454
455 if overwrite:
455 if overwrite:
456 return actions, diverge, renamedelete
456 return actions, diverge, renamedelete
457
457
458 # Convert to dictionary with filename as key and action as value.
458 # Convert to dictionary with filename as key and action as value.
459 lfiles = set()
459 lfiles = set()
460 for f in actions:
460 for f in actions:
461 splitstandin = lfutil.splitstandin(f)
461 splitstandin = lfutil.splitstandin(f)
462 if splitstandin in p1:
462 if splitstandin in p1:
463 lfiles.add(splitstandin)
463 lfiles.add(splitstandin)
464 elif lfutil.standin(f) in p1:
464 elif lfutil.standin(f) in p1:
465 lfiles.add(f)
465 lfiles.add(f)
466
466
467 for lfile in sorted(lfiles):
467 for lfile in sorted(lfiles):
468 standin = lfutil.standin(lfile)
468 standin = lfutil.standin(lfile)
469 (lm, largs, lmsg) = actions.get(lfile, (None, None, None))
469 (lm, largs, lmsg) = actions.get(lfile, (None, None, None))
470 (sm, sargs, smsg) = actions.get(standin, (None, None, None))
470 (sm, sargs, smsg) = actions.get(standin, (None, None, None))
471 if sm in ('g', 'dc') and lm != 'r':
471 if sm in ('g', 'dc') and lm != 'r':
472 if sm == 'dc':
472 if sm == 'dc':
473 f1, f2, fa, move, anc = sargs
473 f1, f2, fa, move, anc = sargs
474 sargs = (p2[f2].flags(), False)
474 sargs = (p2[f2].flags(), False)
475 # Case 1: normal file in the working copy, largefile in
475 # Case 1: normal file in the working copy, largefile in
476 # the second parent
476 # the second parent
477 usermsg = _('remote turned local normal file %s into a largefile\n'
477 usermsg = _('remote turned local normal file %s into a largefile\n'
478 'use (l)argefile or keep (n)ormal file?'
478 'use (l)argefile or keep (n)ormal file?'
479 '$$ &Largefile $$ &Normal file') % lfile
479 '$$ &Largefile $$ &Normal file') % lfile
480 if repo.ui.promptchoice(usermsg, 0) == 0: # pick remote largefile
480 if repo.ui.promptchoice(usermsg, 0) == 0: # pick remote largefile
481 actions[lfile] = ('r', None, 'replaced by standin')
481 actions[lfile] = ('r', None, 'replaced by standin')
482 actions[standin] = ('g', sargs, 'replaces standin')
482 actions[standin] = ('g', sargs, 'replaces standin')
483 else: # keep local normal file
483 else: # keep local normal file
484 actions[lfile] = ('k', None, 'replaces standin')
484 actions[lfile] = ('k', None, 'replaces standin')
485 if branchmerge:
485 if branchmerge:
486 actions[standin] = ('k', None, 'replaced by non-standin')
486 actions[standin] = ('k', None, 'replaced by non-standin')
487 else:
487 else:
488 actions[standin] = ('r', None, 'replaced by non-standin')
488 actions[standin] = ('r', None, 'replaced by non-standin')
489 elif lm in ('g', 'dc') and sm != 'r':
489 elif lm in ('g', 'dc') and sm != 'r':
490 if lm == 'dc':
490 if lm == 'dc':
491 f1, f2, fa, move, anc = largs
491 f1, f2, fa, move, anc = largs
492 largs = (p2[f2].flags(), False)
492 largs = (p2[f2].flags(), False)
493 # Case 2: largefile in the working copy, normal file in
493 # Case 2: largefile in the working copy, normal file in
494 # the second parent
494 # the second parent
495 usermsg = _('remote turned local largefile %s into a normal file\n'
495 usermsg = _('remote turned local largefile %s into a normal file\n'
496 'keep (l)argefile or use (n)ormal file?'
496 'keep (l)argefile or use (n)ormal file?'
497 '$$ &Largefile $$ &Normal file') % lfile
497 '$$ &Largefile $$ &Normal file') % lfile
498 if repo.ui.promptchoice(usermsg, 0) == 0: # keep local largefile
498 if repo.ui.promptchoice(usermsg, 0) == 0: # keep local largefile
499 if branchmerge:
499 if branchmerge:
500 # largefile can be restored from standin safely
500 # largefile can be restored from standin safely
501 actions[lfile] = ('k', None, 'replaced by standin')
501 actions[lfile] = ('k', None, 'replaced by standin')
502 actions[standin] = ('k', None, 'replaces standin')
502 actions[standin] = ('k', None, 'replaces standin')
503 else:
503 else:
504 # "lfile" should be marked as "removed" without
504 # "lfile" should be marked as "removed" without
505 # removal of itself
505 # removal of itself
506 actions[lfile] = ('lfmr', None,
506 actions[lfile] = ('lfmr', None,
507 'forget non-standin largefile')
507 'forget non-standin largefile')
508
508
509 # linear-merge should treat this largefile as 're-added'
509 # linear-merge should treat this largefile as 're-added'
510 actions[standin] = ('a', None, 'keep standin')
510 actions[standin] = ('a', None, 'keep standin')
511 else: # pick remote normal file
511 else: # pick remote normal file
512 actions[lfile] = ('g', largs, 'replaces standin')
512 actions[lfile] = ('g', largs, 'replaces standin')
513 actions[standin] = ('r', None, 'replaced by non-standin')
513 actions[standin] = ('r', None, 'replaced by non-standin')
514
514
515 return actions, diverge, renamedelete
515 return actions, diverge, renamedelete
516
516
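For reference, the shape of the action records the Case 1/Case 2 logic above produces (taken from the branches of overridecalculateupdates; 'foo' is an illustrative file name):

    # Case 1, user picks the remote largefile:
    actions['foo']       = ('r', None, 'replaced by standin')
    actions['.hglf/foo'] = ('g', sargs, 'replaces standin')
    # Case 2, user keeps the local largefile during a linear update:
    actions['foo']       = ('lfmr', None, 'forget non-standin largefile')
    actions['.hglf/foo'] = ('a', None, 'keep standin')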
517 @eh.wrapfunction(merge, 'recordupdates')
517 @eh.wrapfunction(merge, 'recordupdates')
518 def mergerecordupdates(orig, repo, actions, branchmerge):
518 def mergerecordupdates(orig, repo, actions, branchmerge):
519 if 'lfmr' in actions:
519 if 'lfmr' in actions:
520 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
520 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
521 for lfile, args, msg in actions['lfmr']:
521 for lfile, args, msg in actions['lfmr']:
522 # this should be executed before 'orig', to execute 'remove'
522 # this should be executed before 'orig', to execute 'remove'
523 # before all other actions
523 # before all other actions
524 repo.dirstate.remove(lfile)
524 repo.dirstate.remove(lfile)
525 # make sure lfile doesn't get synclfdirstate'd as normal
525 # make sure lfile doesn't get synclfdirstate'd as normal
526 lfdirstate.add(lfile)
526 lfdirstate.add(lfile)
527 lfdirstate.write()
527 lfdirstate.write()
528
528
529 return orig(repo, actions, branchmerge)
529 return orig(repo, actions, branchmerge)
530
530
531 # Override filemerge to prompt the user about how they wish to merge
531 # Override filemerge to prompt the user about how they wish to merge
532 # largefiles. This will handle identical edits without prompting the user.
532 # largefiles. This will handle identical edits without prompting the user.
533 @eh.wrapfunction(filemerge, '_filemerge')
533 @eh.wrapfunction(filemerge, '_filemerge')
534 def overridefilemerge(origfn, premerge, repo, wctx, mynode, orig, fcd, fco, fca,
534 def overridefilemerge(origfn, premerge, repo, wctx, mynode, orig, fcd, fco, fca,
535 labels=None):
535 labels=None):
536 if not lfutil.isstandin(orig) or fcd.isabsent() or fco.isabsent():
536 if not lfutil.isstandin(orig) or fcd.isabsent() or fco.isabsent():
537 return origfn(premerge, repo, wctx, mynode, orig, fcd, fco, fca,
537 return origfn(premerge, repo, wctx, mynode, orig, fcd, fco, fca,
538 labels=labels)
538 labels=labels)
539
539
540 ahash = lfutil.readasstandin(fca).lower()
540 ahash = lfutil.readasstandin(fca).lower()
541 dhash = lfutil.readasstandin(fcd).lower()
541 dhash = lfutil.readasstandin(fcd).lower()
542 ohash = lfutil.readasstandin(fco).lower()
542 ohash = lfutil.readasstandin(fco).lower()
543 if (ohash != ahash and
543 if (ohash != ahash and
544 ohash != dhash and
544 ohash != dhash and
545 (dhash == ahash or
545 (dhash == ahash or
546 repo.ui.promptchoice(
546 repo.ui.promptchoice(
547 _('largefile %s has a merge conflict\nancestor was %s\n'
547 _('largefile %s has a merge conflict\nancestor was %s\n'
548 'keep (l)ocal %s or\ntake (o)ther %s?'
548 'keep (l)ocal %s or\ntake (o)ther %s?'
549 '$$ &Local $$ &Other') %
549 '$$ &Local $$ &Other') %
550 (lfutil.splitstandin(orig), ahash, dhash, ohash),
550 (lfutil.splitstandin(orig), ahash, dhash, ohash),
551 0) == 1)):
551 0) == 1)):
552 repo.wwrite(fcd.path(), fco.data(), fco.flags())
552 repo.wwrite(fcd.path(), fco.data(), fco.flags())
553 return True, 0, False
553 return True, 0, False
554
554
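The condition above reduces to a short decision table on the three standin hashes (this only restates the boolean logic of this function):

    # ohash == ahash                   -> other side unchanged: keep local, no prompt
    # ohash == dhash                   -> both sides agree:     keep local, no prompt
    # dhash == ahash and ohash differs -> only other changed:   take other, no prompt
    # all three differ                 -> prompt the user: keep (l)ocal or take (o)ther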
555 @eh.wrapfunction(copiesmod, 'pathcopies')
555 @eh.wrapfunction(copiesmod, 'pathcopies')
556 def copiespathcopies(orig, ctx1, ctx2, match=None):
556 def copiespathcopies(orig, ctx1, ctx2, match=None):
557 copies = orig(ctx1, ctx2, match=match)
557 copies = orig(ctx1, ctx2, match=match)
558 updated = {}
558 updated = {}
559
559
560 for k, v in copies.iteritems():
560 for k, v in copies.iteritems():
561 updated[lfutil.splitstandin(k) or k] = lfutil.splitstandin(v) or v
561 updated[lfutil.splitstandin(k) or k] = lfutil.splitstandin(v) or v
562
562
563 return updated
563 return updated
564
564
565 # Copy first changes the matchers to match standins instead of
565 # Copy first changes the matchers to match standins instead of
566 # largefiles. Then it overrides util.copyfile so that the override
566 # largefiles. Then it overrides util.copyfile so that the override
567 # checks whether the destination largefile already exists. It also keeps a
567 # checks whether the destination largefile already exists. It also keeps a
568 # list of copied files so that the largefiles can be copied and the
568 # list of copied files so that the largefiles can be copied and the
569 # dirstate updated.
569 # dirstate updated.
570 @eh.wrapfunction(cmdutil, 'copy')
570 @eh.wrapfunction(cmdutil, 'copy')
571 def overridecopy(orig, ui, repo, pats, opts, rename=False):
571 def overridecopy(orig, ui, repo, pats, opts, rename=False):
572 # doesn't remove largefile on rename
572 # doesn't remove largefile on rename
573 if len(pats) < 2:
573 if len(pats) < 2:
574 # this isn't legal, let the original function deal with it
574 # this isn't legal, let the original function deal with it
575 return orig(ui, repo, pats, opts, rename)
575 return orig(ui, repo, pats, opts, rename)
576
576
577 # This could copy both lfiles and normal files in one command,
577 # This could copy both lfiles and normal files in one command,
578 # but we don't want to do that. First replace their matcher to
578 # but we don't want to do that. First replace their matcher to
579 # only match normal files and run it, then replace it to just
579 # only match normal files and run it, then replace it to just
580 # match largefiles and run it again.
580 # match largefiles and run it again.
581 nonormalfiles = False
581 nonormalfiles = False
582 nolfiles = False
582 nolfiles = False
583 manifest = repo[None].manifest()
583 manifest = repo[None].manifest()
584 def normalfilesmatchfn(orig, ctx, pats=(), opts=None, globbed=False,
584 def normalfilesmatchfn(orig, ctx, pats=(), opts=None, globbed=False,
585 default='relpath', badfn=None):
585 default='relpath', badfn=None):
586 if opts is None:
586 if opts is None:
587 opts = {}
587 opts = {}
588 match = orig(ctx, pats, opts, globbed, default, badfn=badfn)
588 match = orig(ctx, pats, opts, globbed, default, badfn=badfn)
589 return composenormalfilematcher(match, manifest)
589 return composenormalfilematcher(match, manifest)
590 with extensions.wrappedfunction(scmutil, 'match', normalfilesmatchfn):
590 with extensions.wrappedfunction(scmutil, 'match', normalfilesmatchfn):
591 try:
591 try:
592 result = orig(ui, repo, pats, opts, rename)
592 result = orig(ui, repo, pats, opts, rename)
593 except error.Abort as e:
593 except error.Abort as e:
594 if pycompat.bytestr(e) != _('no files to copy'):
594 if pycompat.bytestr(e) != _('no files to copy'):
595 raise e
595 raise e
596 else:
596 else:
597 nonormalfiles = True
597 nonormalfiles = True
598 result = 0
598 result = 0
599
599
600 # The first rename can cause our current working directory to be removed.
600 # The first rename can cause our current working directory to be removed.
601 # In that case there is nothing left to copy/rename so just quit.
601 # In that case there is nothing left to copy/rename so just quit.
602 try:
602 try:
603 repo.getcwd()
603 repo.getcwd()
604 except OSError:
604 except OSError:
605 return result
605 return result
606
606
607 def makestandin(relpath):
607 def makestandin(relpath):
608 path = pathutil.canonpath(repo.root, repo.getcwd(), relpath)
608 path = pathutil.canonpath(repo.root, repo.getcwd(), relpath)
609 return repo.wvfs.join(lfutil.standin(path))
609 return repo.wvfs.join(lfutil.standin(path))
610
610
611 fullpats = scmutil.expandpats(pats)
611 fullpats = scmutil.expandpats(pats)
612 dest = fullpats[-1]
612 dest = fullpats[-1]
613
613
614 if os.path.isdir(dest):
614 if os.path.isdir(dest):
615 if not os.path.isdir(makestandin(dest)):
615 if not os.path.isdir(makestandin(dest)):
616 os.makedirs(makestandin(dest))
616 os.makedirs(makestandin(dest))
617
617
618 try:
618 try:
619 # When we call orig below it creates the standins, but we don't add
619 # When we call orig below it creates the standins, but we don't add
620 # them to the dirstate until later, so lock during that time.
620 # them to the dirstate until later, so lock during that time.
621 wlock = repo.wlock()
621 wlock = repo.wlock()
622
622
623 manifest = repo[None].manifest()
623 manifest = repo[None].manifest()
624 def overridematch(orig, ctx, pats=(), opts=None, globbed=False,
624 def overridematch(orig, ctx, pats=(), opts=None, globbed=False,
625 default='relpath', badfn=None):
625 default='relpath', badfn=None):
626 if opts is None:
626 if opts is None:
627 opts = {}
627 opts = {}
628 newpats = []
628 newpats = []
629 # The patterns were previously mangled to add the standin
629 # The patterns were previously mangled to add the standin
630 # directory; we need to remove that now
630 # directory; we need to remove that now
631 for pat in pats:
631 for pat in pats:
632 if matchmod.patkind(pat) is None and lfutil.shortname in pat:
632 if matchmod.patkind(pat) is None and lfutil.shortname in pat:
633 newpats.append(pat.replace(lfutil.shortname, ''))
633 newpats.append(pat.replace(lfutil.shortname, ''))
634 else:
634 else:
635 newpats.append(pat)
635 newpats.append(pat)
636 match = orig(ctx, newpats, opts, globbed, default, badfn=badfn)
636 match = orig(ctx, newpats, opts, globbed, default, badfn=badfn)
637 m = copy.copy(match)
637 m = copy.copy(match)
638 lfile = lambda f: lfutil.standin(f) in manifest
638 lfile = lambda f: lfutil.standin(f) in manifest
639 m._files = [lfutil.standin(f) for f in m._files if lfile(f)]
639 m._files = [lfutil.standin(f) for f in m._files if lfile(f)]
640 m._fileset = set(m._files)
640 m._fileset = set(m._files)
641 origmatchfn = m.matchfn
641 origmatchfn = m.matchfn
642 def matchfn(f):
642 def matchfn(f):
643 lfile = lfutil.splitstandin(f)
643 lfile = lfutil.splitstandin(f)
644 return (lfile is not None and
644 return (lfile is not None and
645 (f in manifest) and
645 (f in manifest) and
646 origmatchfn(lfile) or
646 origmatchfn(lfile) or
647 None)
647 None)
648 m.matchfn = matchfn
648 m.matchfn = matchfn
649 return m
649 return m
650 listpats = []
650 listpats = []
651 for pat in pats:
651 for pat in pats:
652 if matchmod.patkind(pat) is not None:
652 if matchmod.patkind(pat) is not None:
653 listpats.append(pat)
653 listpats.append(pat)
654 else:
654 else:
655 listpats.append(makestandin(pat))
655 listpats.append(makestandin(pat))
656
656
657 copiedfiles = []
657 copiedfiles = []
658 def overridecopyfile(orig, src, dest, *args, **kwargs):
658 def overridecopyfile(orig, src, dest, *args, **kwargs):
659 if (lfutil.shortname in src and
659 if (lfutil.shortname in src and
660 dest.startswith(repo.wjoin(lfutil.shortname))):
660 dest.startswith(repo.wjoin(lfutil.shortname))):
661 destlfile = dest.replace(lfutil.shortname, '')
661 destlfile = dest.replace(lfutil.shortname, '')
662 if not opts['force'] and os.path.exists(destlfile):
662 if not opts['force'] and os.path.exists(destlfile):
663 raise IOError('',
663 raise IOError('',
664 _('destination largefile already exists'))
664 _('destination largefile already exists'))
665 copiedfiles.append((src, dest))
665 copiedfiles.append((src, dest))
666 orig(src, dest, *args, **kwargs)
666 orig(src, dest, *args, **kwargs)
667 with extensions.wrappedfunction(util, 'copyfile', overridecopyfile), \
667 with extensions.wrappedfunction(util, 'copyfile', overridecopyfile), \
668 extensions.wrappedfunction(scmutil, 'match', overridematch):
668 extensions.wrappedfunction(scmutil, 'match', overridematch):
669 result += orig(ui, repo, listpats, opts, rename)
669 result += orig(ui, repo, listpats, opts, rename)
670
670
671 lfdirstate = lfutil.openlfdirstate(ui, repo)
671 lfdirstate = lfutil.openlfdirstate(ui, repo)
672 for (src, dest) in copiedfiles:
672 for (src, dest) in copiedfiles:
673 if (lfutil.shortname in src and
673 if (lfutil.shortname in src and
674 dest.startswith(repo.wjoin(lfutil.shortname))):
674 dest.startswith(repo.wjoin(lfutil.shortname))):
675 srclfile = src.replace(repo.wjoin(lfutil.standin('')), '')
675 srclfile = src.replace(repo.wjoin(lfutil.standin('')), '')
676 destlfile = dest.replace(repo.wjoin(lfutil.standin('')), '')
676 destlfile = dest.replace(repo.wjoin(lfutil.standin('')), '')
677 destlfiledir = repo.wvfs.dirname(repo.wjoin(destlfile)) or '.'
677 destlfiledir = repo.wvfs.dirname(repo.wjoin(destlfile)) or '.'
678 if not os.path.isdir(destlfiledir):
678 if not os.path.isdir(destlfiledir):
679 os.makedirs(destlfiledir)
679 os.makedirs(destlfiledir)
680 if rename:
680 if rename:
681 os.rename(repo.wjoin(srclfile), repo.wjoin(destlfile))
681 os.rename(repo.wjoin(srclfile), repo.wjoin(destlfile))
682
682
683 # The file is gone, but this deletes any empty parent
683 # The file is gone, but this deletes any empty parent
684 # directories as a side-effect.
684 # directories as a side-effect.
685 repo.wvfs.unlinkpath(srclfile, ignoremissing=True)
685 repo.wvfs.unlinkpath(srclfile, ignoremissing=True)
686 lfdirstate.remove(srclfile)
686 lfdirstate.remove(srclfile)
687 else:
687 else:
688 util.copyfile(repo.wjoin(srclfile),
688 util.copyfile(repo.wjoin(srclfile),
689 repo.wjoin(destlfile))
689 repo.wjoin(destlfile))
690
690
691 lfdirstate.add(destlfile)
691 lfdirstate.add(destlfile)
692 lfdirstate.write()
692 lfdirstate.write()
693 except error.Abort as e:
693 except error.Abort as e:
694 if pycompat.bytestr(e) != _('no files to copy'):
694 if pycompat.bytestr(e) != _('no files to copy'):
695 raise e
695 raise e
696 else:
696 else:
697 nolfiles = True
697 nolfiles = True
698 finally:
698 finally:
699 wlock.release()
699 wlock.release()
700
700
701 if nolfiles and nonormalfiles:
701 if nolfiles and nonormalfiles:
702 raise error.Abort(_('no files to copy'))
702 raise error.Abort(_('no files to copy'))
703
703
704 return result
704 return result
705
705
706 # When the user calls revert, we have to be careful to not revert any
706 # When the user calls revert, we have to be careful to not revert any
707 # changes to other largefiles accidentally. This means we have to keep
707 # changes to other largefiles accidentally. This means we have to keep
708 # track of the largefiles that are being reverted so we only pull down
708 # track of the largefiles that are being reverted so we only pull down
709 # the necessary largefiles.
709 # the necessary largefiles.
710 #
710 #
711 # Standins are only updated (to match the hash of largefiles) before
711 # Standins are only updated (to match the hash of largefiles) before
712 # commits. Update the standins then run the original revert, changing
712 # commits. Update the standins then run the original revert, changing
713 # the matcher to hit standins instead of largefiles. Based on the
713 # the matcher to hit standins instead of largefiles. Based on the
714 # resulting standins update the largefiles.
714 # resulting standins update the largefiles.
715 @eh.wrapfunction(cmdutil, 'revert')
715 @eh.wrapfunction(cmdutil, 'revert')
716 def overriderevert(orig, ui, repo, ctx, parents, *pats, **opts):
716 def overriderevert(orig, ui, repo, ctx, parents, *pats, **opts):
717 # Because we put the standins in a bad state (by updating them)
717 # Because we put the standins in a bad state (by updating them)
718 # and then return them to a correct state we need to lock to
718 # and then return them to a correct state we need to lock to
719 # prevent others from changing them in their incorrect state.
719 # prevent others from changing them in their incorrect state.
720 with repo.wlock():
720 with repo.wlock():
721 lfdirstate = lfutil.openlfdirstate(ui, repo)
721 lfdirstate = lfutil.openlfdirstate(ui, repo)
722 s = lfutil.lfdirstatestatus(lfdirstate, repo)
722 s = lfutil.lfdirstatestatus(lfdirstate, repo)
723 lfdirstate.write()
723 lfdirstate.write()
724 for lfile in s.modified:
724 for lfile in s.modified:
725 lfutil.updatestandin(repo, lfile, lfutil.standin(lfile))
725 lfutil.updatestandin(repo, lfile, lfutil.standin(lfile))
726 for lfile in s.deleted:
726 for lfile in s.deleted:
727 fstandin = lfutil.standin(lfile)
727 fstandin = lfutil.standin(lfile)
728 if (repo.wvfs.exists(fstandin)):
728 if (repo.wvfs.exists(fstandin)):
729 repo.wvfs.unlink(fstandin)
729 repo.wvfs.unlink(fstandin)
730
730
731 oldstandins = lfutil.getstandinsstate(repo)
731 oldstandins = lfutil.getstandinsstate(repo)
732
732
733 def overridematch(orig, mctx, pats=(), opts=None, globbed=False,
733 def overridematch(orig, mctx, pats=(), opts=None, globbed=False,
734 default='relpath', badfn=None):
734 default='relpath', badfn=None):
735 if opts is None:
735 if opts is None:
736 opts = {}
736 opts = {}
737 match = orig(mctx, pats, opts, globbed, default, badfn=badfn)
737 match = orig(mctx, pats, opts, globbed, default, badfn=badfn)
738 m = copy.copy(match)
738 m = copy.copy(match)
739
739
740 # revert supports recursing into subrepos, and though largefiles
740 # revert supports recursing into subrepos, and though largefiles
741 # currently doesn't work correctly in that case, this match is
741 # currently doesn't work correctly in that case, this match is
742 # called, so the lfdirstate above may not be the correct one for
742 # called, so the lfdirstate above may not be the correct one for
743 # this invocation of match.
743 # this invocation of match.
744 lfdirstate = lfutil.openlfdirstate(mctx.repo().ui, mctx.repo(),
744 lfdirstate = lfutil.openlfdirstate(mctx.repo().ui, mctx.repo(),
745 False)
745 False)
746
746
747 wctx = repo[None]
747 wctx = repo[None]
748 matchfiles = []
748 matchfiles = []
749 for f in m._files:
749 for f in m._files:
750 standin = lfutil.standin(f)
750 standin = lfutil.standin(f)
751 if standin in ctx or standin in mctx:
751 if standin in ctx or standin in mctx:
752 matchfiles.append(standin)
752 matchfiles.append(standin)
753 elif standin in wctx or lfdirstate[f] == 'r':
753 elif standin in wctx or lfdirstate[f] == 'r':
754 continue
754 continue
755 else:
755 else:
756 matchfiles.append(f)
756 matchfiles.append(f)
757 m._files = matchfiles
757 m._files = matchfiles
758 m._fileset = set(m._files)
758 m._fileset = set(m._files)
759 origmatchfn = m.matchfn
759 origmatchfn = m.matchfn
760 def matchfn(f):
760 def matchfn(f):
761 lfile = lfutil.splitstandin(f)
761 lfile = lfutil.splitstandin(f)
762 if lfile is not None:
762 if lfile is not None:
763 return (origmatchfn(lfile) and
763 return (origmatchfn(lfile) and
764 (f in ctx or f in mctx))
764 (f in ctx or f in mctx))
765 return origmatchfn(f)
765 return origmatchfn(f)
766 m.matchfn = matchfn
766 m.matchfn = matchfn
767 return m
767 return m
768 with extensions.wrappedfunction(scmutil, 'match', overridematch):
768 with extensions.wrappedfunction(scmutil, 'match', overridematch):
769 orig(ui, repo, ctx, parents, *pats, **opts)
769 orig(ui, repo, ctx, parents, *pats, **opts)
770
770
771 newstandins = lfutil.getstandinsstate(repo)
771 newstandins = lfutil.getstandinsstate(repo)
772 filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
772 filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
773 # lfdirstate should be 'normallookup'-ed for updated files,
773 # lfdirstate should be 'normallookup'-ed for updated files,
774 # because reverting doesn't touch dirstate for 'normal' files
774 # because reverting doesn't touch dirstate for 'normal' files
775 # when a target revision is explicitly specified: in such a case,
775 # when a target revision is explicitly specified: in such a case,
776 # 'n' and a valid timestamp in dirstate do not ensure that the target
776 # 'n' and a valid timestamp in dirstate do not ensure that the target
777 # (standin) file is 'clean'.
777 # (standin) file is 'clean'.
778 lfcommands.updatelfiles(ui, repo, filelist, printmessage=False,
778 lfcommands.updatelfiles(ui, repo, filelist, printmessage=False,
779 normallookup=True)
779 normallookup=True)
780
780
781 # after pulling changesets, we need to take some extra care to get
781 # after pulling changesets, we need to take some extra care to get
782 # largefiles updated remotely
782 # largefiles updated remotely
783 @eh.wrapcommand('pull',
783 @eh.wrapcommand('pull',
784 opts=[('', 'all-largefiles', None,
784 opts=[('', 'all-largefiles', None,
785 _('download all pulled versions of largefiles (DEPRECATED)')),
785 _('download all pulled versions of largefiles (DEPRECATED)')),
786 ('', 'lfrev', [],
786 ('', 'lfrev', [],
787 _('download largefiles for these revisions'), _('REV'))])
787 _('download largefiles for these revisions'), _('REV'))])
788 def overridepull(orig, ui, repo, source=None, **opts):
788 def overridepull(orig, ui, repo, source=None, **opts):
789 revsprepull = len(repo)
789 revsprepull = len(repo)
790 if not source:
790 if not source:
791 source = 'default'
791 source = 'default'
792 repo.lfpullsource = source
792 repo.lfpullsource = source
793 result = orig(ui, repo, source, **opts)
793 result = orig(ui, repo, source, **opts)
794 revspostpull = len(repo)
794 revspostpull = len(repo)
795 lfrevs = opts.get(r'lfrev', [])
795 lfrevs = opts.get(r'lfrev', [])
796 if opts.get(r'all_largefiles'):
796 if opts.get(r'all_largefiles'):
797 lfrevs.append('pulled()')
797 lfrevs.append('pulled()')
798 if lfrevs and revspostpull > revsprepull:
798 if lfrevs and revspostpull > revsprepull:
799 numcached = 0
799 numcached = 0
800 repo.firstpulled = revsprepull # for pulled() revset expression
800 repo.firstpulled = revsprepull # for pulled() revset expression
801 try:
801 try:
802 for rev in scmutil.revrange(repo, lfrevs):
802 for rev in scmutil.revrange(repo, lfrevs):
803 ui.note(_('pulling largefiles for revision %d\n') % rev)
803 ui.note(_('pulling largefiles for revision %d\n') % rev)
804 (cached, missing) = lfcommands.cachelfiles(ui, repo, rev)
804 (cached, missing) = lfcommands.cachelfiles(ui, repo, rev)
805 numcached += len(cached)
805 numcached += len(cached)
806 finally:
806 finally:
807 del repo.firstpulled
807 del repo.firstpulled
808 ui.status(_("%d largefiles cached\n") % numcached)
808 ui.status(_("%d largefiles cached\n") % numcached)
809 return result
809 return result
810
810
811 @eh.wrapcommand('push',
811 @eh.wrapcommand('push',
812 opts=[('', 'lfrev', [],
812 opts=[('', 'lfrev', [],
813 _('upload largefiles for these revisions'), _('REV'))])
813 _('upload largefiles for these revisions'), _('REV'))])
814 def overridepush(orig, ui, repo, *args, **kwargs):
814 def overridepush(orig, ui, repo, *args, **kwargs):
815 """Override push command and store --lfrev parameters in opargs"""
815 """Override push command and store --lfrev parameters in opargs"""
816 lfrevs = kwargs.pop(r'lfrev', None)
816 lfrevs = kwargs.pop(r'lfrev', None)
817 if lfrevs:
817 if lfrevs:
818 opargs = kwargs.setdefault(r'opargs', {})
818 opargs = kwargs.setdefault(r'opargs', {})
819 opargs['lfrevs'] = scmutil.revrange(repo, lfrevs)
819 opargs['lfrevs'] = scmutil.revrange(repo, lfrevs)
820 return orig(ui, repo, *args, **kwargs)
820 return orig(ui, repo, *args, **kwargs)
821
821
822 @eh.wrapfunction(exchange, 'pushoperation')
822 @eh.wrapfunction(exchange, 'pushoperation')
823 def exchangepushoperation(orig, *args, **kwargs):
823 def exchangepushoperation(orig, *args, **kwargs):
824 """Override pushoperation constructor and store lfrevs parameter"""
824 """Override pushoperation constructor and store lfrevs parameter"""
825 lfrevs = kwargs.pop(r'lfrevs', None)
825 lfrevs = kwargs.pop(r'lfrevs', None)
826 pushop = orig(*args, **kwargs)
826 pushop = orig(*args, **kwargs)
827 pushop.lfrevs = lfrevs
827 pushop.lfrevs = lfrevs
828 return pushop
828 return pushop
829
829
830 @eh.revsetpredicate('pulled()')
830 @eh.revsetpredicate('pulled()')
831 def pulledrevsetsymbol(repo, subset, x):
831 def pulledrevsetsymbol(repo, subset, x):
832 """Changesets that just has been pulled.
832 """Changesets that just has been pulled.
833
833
834 Only available with largefiles from pull --lfrev expressions.
834 Only available with largefiles from pull --lfrev expressions.
835
835
836 .. container:: verbose
836 .. container:: verbose
837
837
838 Some examples:
838 Some examples:
839
839
840 - pull largefiles for all new changesets::
840 - pull largefiles for all new changesets::
841
841
842 hg pull --lfrev "pulled()"
842 hg pull --lfrev "pulled()"
843
843
844 - pull largefiles for all new branch heads::
844 - pull largefiles for all new branch heads::
845
845
846 hg pull --lfrev "head(pulled()) and not closed()"
846 hg pull --lfrev "head(pulled()) and not closed()"
847
847
848 """
848 """
849
849
850 try:
850 try:
851 firstpulled = repo.firstpulled
851 firstpulled = repo.firstpulled
852 except AttributeError:
852 except AttributeError:
853 raise error.Abort(_("pulled() only available in --lfrev"))
853 raise error.Abort(_("pulled() only available in --lfrev"))
854 return smartset.baseset([r for r in subset if r >= firstpulled])
854 return smartset.baseset([r for r in subset if r >= firstpulled])
855
855
856 @eh.wrapcommand('clone',
856 @eh.wrapcommand('clone',
857 opts=[('', 'all-largefiles', None,
857 opts=[('', 'all-largefiles', None,
858 _('download all versions of all largefiles'))])
858 _('download all versions of all largefiles'))])
859 def overrideclone(orig, ui, source, dest=None, **opts):
859 def overrideclone(orig, ui, source, dest=None, **opts):
860 d = dest
860 d = dest
861 if d is None:
861 if d is None:
862 d = hg.defaultdest(source)
862 d = hg.defaultdest(source)
863 if opts.get(r'all_largefiles') and not hg.islocal(d):
863 if opts.get(r'all_largefiles') and not hg.islocal(d):
864 raise error.Abort(_(
864 raise error.Abort(_(
865 '--all-largefiles is incompatible with non-local destination %s') %
865 '--all-largefiles is incompatible with non-local destination %s') %
866 d)
866 d)
867
867
868 return orig(ui, source, dest, **opts)
868 return orig(ui, source, dest, **opts)
869
869
870 @eh.wrapfunction(hg, 'clone')
870 @eh.wrapfunction(hg, 'clone')
871 def hgclone(orig, ui, opts, *args, **kwargs):
871 def hgclone(orig, ui, opts, *args, **kwargs):
872 result = orig(ui, opts, *args, **kwargs)
872 result = orig(ui, opts, *args, **kwargs)
873
873
874 if result is not None:
874 if result is not None:
875 sourcerepo, destrepo = result
875 sourcerepo, destrepo = result
876 repo = destrepo.local()
876 repo = destrepo.local()
877
877
878 # When cloning to a remote repo (like through SSH), no repo is available
878 # When cloning to a remote repo (like through SSH), no repo is available
879 # from the peer. Therefore the largefiles can't be downloaded and the
879 # from the peer. Therefore the largefiles can't be downloaded and the
880 # hgrc can't be updated.
880 # hgrc can't be updated.
881 if not repo:
881 if not repo:
882 return result
882 return result
883
883
884 # Caching is implicitly limited to 'rev' option, since the dest repo was
884 # Caching is implicitly limited to 'rev' option, since the dest repo was
885 # truncated at that point. The user may expect a download count with
885 # truncated at that point. The user may expect a download count with
886 # this option, so attempt it whether or not this is a largefile repo.
886 # this option, so attempt it whether or not this is a largefile repo.
887 if opts.get('all_largefiles'):
887 if opts.get('all_largefiles'):
888 success, missing = lfcommands.downloadlfiles(ui, repo, None)
888 success, missing = lfcommands.downloadlfiles(ui, repo, None)
889
889
890 if missing != 0:
890 if missing != 0:
891 return None
891 return None
892
892
893 return result
893 return result
894
894
895 @eh.wrapcommand('rebase', extension='rebase')
895 @eh.wrapcommand('rebase', extension='rebase')
896 def overriderebase(orig, ui, repo, **opts):
896 def overriderebase(orig, ui, repo, **opts):
897 if not util.safehasattr(repo, '_largefilesenabled'):
897 if not util.safehasattr(repo, '_largefilesenabled'):
898 return orig(ui, repo, **opts)
898 return orig(ui, repo, **opts)
899
899
900 resuming = opts.get(r'continue')
900 resuming = opts.get(r'continue')
901 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
901 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
902 repo._lfstatuswriters.append(lambda *msg, **opts: None)
902 repo._lfstatuswriters.append(lambda *msg, **opts: None)
903 try:
903 try:
904 return orig(ui, repo, **opts)
904 return orig(ui, repo, **opts)
905 finally:
905 finally:
906 repo._lfstatuswriters.pop()
906 repo._lfstatuswriters.pop()
907 repo._lfcommithooks.pop()
907 repo._lfcommithooks.pop()
908
908
909 @eh.wrapcommand('archive')
909 @eh.wrapcommand('archive')
910 def overridearchivecmd(orig, ui, repo, dest, **opts):
910 def overridearchivecmd(orig, ui, repo, dest, **opts):
911 repo.unfiltered().lfstatus = True
911 repo.unfiltered().lfstatus = True
912
912
913 try:
913 try:
914 return orig(ui, repo.unfiltered(), dest, **opts)
914 return orig(ui, repo.unfiltered(), dest, **opts)
915 finally:
915 finally:
916 repo.unfiltered().lfstatus = False
916 repo.unfiltered().lfstatus = False
917
917
918 @eh.wrapfunction(webcommands, 'archive')
918 @eh.wrapfunction(webcommands, 'archive')
919 def hgwebarchive(orig, web):
919 def hgwebarchive(orig, web):
920 web.repo.lfstatus = True
920 web.repo.lfstatus = True
921
921
922 try:
922 try:
923 return orig(web)
923 return orig(web)
924 finally:
924 finally:
925 web.repo.lfstatus = False
925 web.repo.lfstatus = False
926
926
927 @eh.wrapfunction(archival, 'archive')
927 @eh.wrapfunction(archival, 'archive')
928 def overridearchive(orig, repo, dest, node, kind, decode=True, match=None,
928 def overridearchive(orig, repo, dest, node, kind, decode=True, match=None,
929 prefix='', mtime=None, subrepos=None):
929 prefix='', mtime=None, subrepos=None):
930 # For some reason setting repo.lfstatus in hgwebarchive only changes the
930 # For some reason setting repo.lfstatus in hgwebarchive only changes the
931 # unfiltered repo's attr, so check that as well.
931 # unfiltered repo's attr, so check that as well.
932 if not repo.lfstatus and not repo.unfiltered().lfstatus:
932 if not repo.lfstatus and not repo.unfiltered().lfstatus:
933 return orig(repo, dest, node, kind, decode, match, prefix, mtime,
933 return orig(repo, dest, node, kind, decode, match, prefix, mtime,
934 subrepos)
934 subrepos)
935
935
936 # No need to lock because we are only reading history and
936 # No need to lock because we are only reading history and
937 # largefile caches, neither of which are modified.
937 # largefile caches, neither of which are modified.
938 if node is not None:
938 if node is not None:
939 lfcommands.cachelfiles(repo.ui, repo, node)
939 lfcommands.cachelfiles(repo.ui, repo, node)
940
940
941 if kind not in archival.archivers:
941 if kind not in archival.archivers:
942 raise error.Abort(_("unknown archive type '%s'") % kind)
942 raise error.Abort(_("unknown archive type '%s'") % kind)
943
943
944 ctx = repo[node]
944 ctx = repo[node]
945
945
946 if kind == 'files':
946 if kind == 'files':
947 if prefix:
947 if prefix:
948 raise error.Abort(
948 raise error.Abort(
949 _('cannot give prefix when archiving to files'))
949 _('cannot give prefix when archiving to files'))
950 else:
950 else:
951 prefix = archival.tidyprefix(dest, kind, prefix)
951 prefix = archival.tidyprefix(dest, kind, prefix)
952
952
953 def write(name, mode, islink, getdata):
953 def write(name, mode, islink, getdata):
954 if match and not match(name):
954 if match and not match(name):
955 return
955 return
956 data = getdata()
956 data = getdata()
957 if decode:
957 if decode:
958 data = repo.wwritedata(name, data)
958 data = repo.wwritedata(name, data)
959 archiver.addfile(prefix + name, mode, islink, data)
959 archiver.addfile(prefix + name, mode, islink, data)
960
960
961 archiver = archival.archivers[kind](dest, mtime or ctx.date()[0])
961 archiver = archival.archivers[kind](dest, mtime or ctx.date()[0])
962
962
963 if repo.ui.configbool("ui", "archivemeta"):
963 if repo.ui.configbool("ui", "archivemeta"):
964 write('.hg_archival.txt', 0o644, False,
964 write('.hg_archival.txt', 0o644, False,
965 lambda: archival.buildmetadata(ctx))
965 lambda: archival.buildmetadata(ctx))
966
966
967 for f in ctx:
967 for f in ctx:
968 ff = ctx.flags(f)
968 ff = ctx.flags(f)
969 getdata = ctx[f].data
969 getdata = ctx[f].data
970 lfile = lfutil.splitstandin(f)
970 lfile = lfutil.splitstandin(f)
971 if lfile is not None:
971 if lfile is not None:
972 if node is not None:
972 if node is not None:
973 path = lfutil.findfile(repo, getdata().strip())
973 path = lfutil.findfile(repo, getdata().strip())
974
974
975 if path is None:
975 if path is None:
976 raise error.Abort(
976 raise error.Abort(
977 _('largefile %s not found in repo store or system cache')
977 _('largefile %s not found in repo store or system cache')
978 % lfile)
978 % lfile)
979 else:
979 else:
980 path = lfile
980 path = lfile
981
981
982 f = lfile
982 f = lfile
983
983
984 getdata = lambda: util.readfile(path)
984 getdata = lambda: util.readfile(path)
985 write(f, 'x' in ff and 0o755 or 0o644, 'l' in ff, getdata)
985 write(f, 'x' in ff and 0o755 or 0o644, 'l' in ff, getdata)
986
986
987 if subrepos:
987 if subrepos:
988 for subpath in sorted(ctx.substate):
988 for subpath in sorted(ctx.substate):
989 sub = ctx.workingsub(subpath)
989 sub = ctx.workingsub(subpath)
990 submatch = matchmod.subdirmatcher(subpath, match)
990 submatch = matchmod.subdirmatcher(subpath, match)
991 subprefix = prefix + subpath + '/'
991 subprefix = prefix + subpath + '/'
992 sub._repo.lfstatus = True
992 sub._repo.lfstatus = True
993 sub.archive(archiver, subprefix, submatch)
993 sub.archive(archiver, subprefix, submatch)
994
994
995 archiver.done()
995 archiver.done()
996
996
997 @eh.wrapfunction(subrepo.hgsubrepo, 'archive')
997 @eh.wrapfunction(subrepo.hgsubrepo, 'archive')
998 def hgsubrepoarchive(orig, repo, archiver, prefix, match=None, decode=True):
998 def hgsubrepoarchive(orig, repo, archiver, prefix, match=None, decode=True):
999 lfenabled = util.safehasattr(repo._repo, '_largefilesenabled')
999 lfenabled = util.safehasattr(repo._repo, '_largefilesenabled')
1000 if not lfenabled or not repo._repo.lfstatus:
1000 if not lfenabled or not repo._repo.lfstatus:
1001 return orig(repo, archiver, prefix, match, decode)
1001 return orig(repo, archiver, prefix, match, decode)
1002
1002
1003 repo._get(repo._state + ('hg',))
1003 repo._get(repo._state + ('hg',))
1004 rev = repo._state[1]
1004 rev = repo._state[1]
1005 ctx = repo._repo[rev]
1005 ctx = repo._repo[rev]
1006
1006
1007 if ctx.node() is not None:
1007 if ctx.node() is not None:
1008 lfcommands.cachelfiles(repo.ui, repo._repo, ctx.node())
1008 lfcommands.cachelfiles(repo.ui, repo._repo, ctx.node())
1009
1009
1010 def write(name, mode, islink, getdata):
1010 def write(name, mode, islink, getdata):
1011 # At this point, the standin has been replaced with the largefile name,
1011 # At this point, the standin has been replaced with the largefile name,
1012 # so the normal matcher works here without the lfutil variants.
1012 # so the normal matcher works here without the lfutil variants.
1013 if match and not match(f):
1013 if match and not match(f):
1014 return
1014 return
1015 data = getdata()
1015 data = getdata()
1016 if decode:
1016 if decode:
1017 data = repo._repo.wwritedata(name, data)
1017 data = repo._repo.wwritedata(name, data)
1018
1018
1019 archiver.addfile(prefix + name, mode, islink, data)
1019 archiver.addfile(prefix + name, mode, islink, data)
1020
1020
1021 for f in ctx:
1021 for f in ctx:
1022 ff = ctx.flags(f)
1022 ff = ctx.flags(f)
1023 getdata = ctx[f].data
1023 getdata = ctx[f].data
1024 lfile = lfutil.splitstandin(f)
1024 lfile = lfutil.splitstandin(f)
1025 if lfile is not None:
1025 if lfile is not None:
1026 if ctx.node() is not None:
1026 if ctx.node() is not None:
1027 path = lfutil.findfile(repo._repo, getdata().strip())
1027 path = lfutil.findfile(repo._repo, getdata().strip())
1028
1028
1029 if path is None:
1029 if path is None:
1030 raise error.Abort(
1030 raise error.Abort(
1031 _('largefile %s not found in repo store or system cache')
1031 _('largefile %s not found in repo store or system cache')
1032 % lfile)
1032 % lfile)
1033 else:
1033 else:
1034 path = lfile
1034 path = lfile
1035
1035
1036 f = lfile
1036 f = lfile
1037
1037
1038 getdata = lambda: util.readfile(os.path.join(prefix, path))
1038 getdata = lambda: util.readfile(os.path.join(prefix, path))
1039
1039
1040 write(f, 'x' in ff and 0o755 or 0o644, 'l' in ff, getdata)
1040 write(f, 'x' in ff and 0o755 or 0o644, 'l' in ff, getdata)
1041
1041
1042 for subpath in sorted(ctx.substate):
1042 for subpath in sorted(ctx.substate):
1043 sub = ctx.workingsub(subpath)
1043 sub = ctx.workingsub(subpath)
1044 submatch = matchmod.subdirmatcher(subpath, match)
1044 submatch = matchmod.subdirmatcher(subpath, match)
1045 subprefix = prefix + subpath + '/'
1045 subprefix = prefix + subpath + '/'
1046 sub._repo.lfstatus = True
1046 sub._repo.lfstatus = True
1047 sub.archive(archiver, subprefix, submatch, decode)
1047 sub.archive(archiver, subprefix, submatch, decode)
1048
1048
1049 # If a largefile is modified, the change is not reflected in its
1049 # If a largefile is modified, the change is not reflected in its
1050 # standin until a commit. cmdutil.bailifchanged() raises an exception
1050 # standin until a commit. cmdutil.bailifchanged() raises an exception
1051 # if the repo has uncommitted changes. Wrap it to also check if
1051 # if the repo has uncommitted changes. Wrap it to also check if
1052 # largefiles were changed. This is used by bisect, backout and fetch.
1052 # largefiles were changed. This is used by bisect, backout and fetch.
1053 @eh.wrapfunction(cmdutil, 'bailifchanged')
1053 @eh.wrapfunction(cmdutil, 'bailifchanged')
1054 def overridebailifchanged(orig, repo, *args, **kwargs):
1054 def overridebailifchanged(orig, repo, *args, **kwargs):
1055 orig(repo, *args, **kwargs)
1055 orig(repo, *args, **kwargs)
1056 repo.lfstatus = True
1056 repo.lfstatus = True
1057 s = repo.status()
1057 s = repo.status()
1058 repo.lfstatus = False
1058 repo.lfstatus = False
1059 if s.modified or s.added or s.removed or s.deleted:
1059 if s.modified or s.added or s.removed or s.deleted:
1060 raise error.Abort(_('uncommitted changes'))
1060 raise error.Abort(_('uncommitted changes'))
1061
1061
1062 @eh.wrapfunction(cmdutil, 'postcommitstatus')
1062 @eh.wrapfunction(cmdutil, 'postcommitstatus')
1063 def postcommitstatus(orig, repo, *args, **kwargs):
1063 def postcommitstatus(orig, repo, *args, **kwargs):
1064 repo.lfstatus = True
1064 repo.lfstatus = True
1065 try:
1065 try:
1066 return orig(repo, *args, **kwargs)
1066 return orig(repo, *args, **kwargs)
1067 finally:
1067 finally:
1068 repo.lfstatus = False
1068 repo.lfstatus = False
1069
1069
1070 @eh.wrapfunction(cmdutil, 'forget')
1070 @eh.wrapfunction(cmdutil, 'forget')
1071 def cmdutilforget(orig, ui, repo, match, prefix, uipathfn, explicitonly, dryrun,
1071 def cmdutilforget(orig, ui, repo, match, prefix, uipathfn, explicitonly, dryrun,
1072 interactive):
1072 interactive):
1073 normalmatcher = composenormalfilematcher(match, repo[None].manifest())
1073 normalmatcher = composenormalfilematcher(match, repo[None].manifest())
1074 bad, forgot = orig(ui, repo, normalmatcher, prefix, uipathfn, explicitonly,
1074 bad, forgot = orig(ui, repo, normalmatcher, prefix, uipathfn, explicitonly,
1075 dryrun, interactive)
1075 dryrun, interactive)
1076 m = composelargefilematcher(match, repo[None].manifest())
1076 m = composelargefilematcher(match, repo[None].manifest())
1077
1077
1078 try:
1078 try:
1079 repo.lfstatus = True
1079 repo.lfstatus = True
1080 s = repo.status(match=m, clean=True)
1080 s = repo.status(match=m, clean=True)
1081 finally:
1081 finally:
1082 repo.lfstatus = False
1082 repo.lfstatus = False
1083 manifest = repo[None].manifest()
1083 manifest = repo[None].manifest()
1084 forget = sorted(s.modified + s.added + s.deleted + s.clean)
1084 forget = sorted(s.modified + s.added + s.deleted + s.clean)
1085 forget = [f for f in forget if lfutil.standin(f) in manifest]
1085 forget = [f for f in forget if lfutil.standin(f) in manifest]
1086
1086
1087 for f in forget:
1087 for f in forget:
1088 fstandin = lfutil.standin(f)
1088 fstandin = lfutil.standin(f)
1089 if fstandin not in repo.dirstate and not repo.wvfs.isdir(fstandin):
1089 if fstandin not in repo.dirstate and not repo.wvfs.isdir(fstandin):
1090 ui.warn(_('not removing %s: file is already untracked\n')
1090 ui.warn(_('not removing %s: file is already untracked\n')
1091 % uipathfn(f))
1091 % uipathfn(f))
1092 bad.append(f)
1092 bad.append(f)
1093
1093
1094 for f in forget:
1094 for f in forget:
1095 if ui.verbose or not m.exact(f):
1095 if ui.verbose or not m.exact(f):
1096 ui.status(_('removing %s\n') % uipathfn(f))
1096 ui.status(_('removing %s\n') % uipathfn(f))
1097
1097
1098 # Need to lock because standin files are deleted then removed from the
1098 # Need to lock because standin files are deleted then removed from the
1099 # repository and we could race in-between.
1099 # repository and we could race in-between.
1100 with repo.wlock():
1100 with repo.wlock():
1101 lfdirstate = lfutil.openlfdirstate(ui, repo)
1101 lfdirstate = lfutil.openlfdirstate(ui, repo)
1102 for f in forget:
1102 for f in forget:
1103 if lfdirstate[f] == 'a':
1103 if lfdirstate[f] == 'a':
1104 lfdirstate.drop(f)
1104 lfdirstate.drop(f)
1105 else:
1105 else:
1106 lfdirstate.remove(f)
1106 lfdirstate.remove(f)
1107 lfdirstate.write()
1107 lfdirstate.write()
1108 standins = [lfutil.standin(f) for f in forget]
1108 standins = [lfutil.standin(f) for f in forget]
1109 for f in standins:
1109 for f in standins:
1110 repo.wvfs.unlinkpath(f, ignoremissing=True)
1110 repo.wvfs.unlinkpath(f, ignoremissing=True)
1111 rejected = repo[None].forget(standins)
1111 rejected = repo[None].forget(standins)
1112
1112
1113 bad.extend(f for f in rejected if f in m.files())
1113 bad.extend(f for f in rejected if f in m.files())
1114 forgot.extend(f for f in forget if f not in rejected)
1114 forgot.extend(f for f in forget if f not in rejected)
1115 return bad, forgot
1115 return bad, forgot
1116
1116
1117 def _getoutgoings(repo, other, missing, addfunc):
1117 def _getoutgoings(repo, other, missing, addfunc):
1118 """get pairs of filename and largefile hash in outgoing revisions
1118 """get pairs of filename and largefile hash in outgoing revisions
1119 in 'missing'.
1119 in 'missing'.
1120
1120
1121 largefiles already existing on 'other' repository are ignored.
1121 largefiles already existing on 'other' repository are ignored.
1122
1122
1123 'addfunc' is invoked with each unique pair of filename and
1123 'addfunc' is invoked with each unique pair of filename and
1124 largefile hash value.
1124 largefile hash value.
1125 """
1125 """
1126 knowns = set()
1126 knowns = set()
1127 lfhashes = set()
1127 lfhashes = set()
1128 def dedup(fn, lfhash):
1128 def dedup(fn, lfhash):
1129 k = (fn, lfhash)
1129 k = (fn, lfhash)
1130 if k not in knowns:
1130 if k not in knowns:
1131 knowns.add(k)
1131 knowns.add(k)
1132 lfhashes.add(lfhash)
1132 lfhashes.add(lfhash)
1133 lfutil.getlfilestoupload(repo, missing, dedup)
1133 lfutil.getlfilestoupload(repo, missing, dedup)
1134 if lfhashes:
1134 if lfhashes:
1135 lfexists = storefactory.openstore(repo, other).exists(lfhashes)
1135 lfexists = storefactory.openstore(repo, other).exists(lfhashes)
1136 for fn, lfhash in knowns:
1136 for fn, lfhash in knowns:
1137 if not lfexists[lfhash]: # lfhash doesn't exist on "other"
1137 if not lfexists[lfhash]: # lfhash doesn't exist on "other"
1138 addfunc(fn, lfhash)
1138 addfunc(fn, lfhash)
1139
1139
1140 def outgoinghook(ui, repo, other, opts, missing):
1140 def outgoinghook(ui, repo, other, opts, missing):
1141 if opts.pop('large', None):
1141 if opts.pop('large', None):
1142 lfhashes = set()
1142 lfhashes = set()
1143 if ui.debugflag:
1143 if ui.debugflag:
1144 toupload = {}
1144 toupload = {}
1145 def addfunc(fn, lfhash):
1145 def addfunc(fn, lfhash):
1146 if fn not in toupload:
1146 if fn not in toupload:
1147 toupload[fn] = []
1147 toupload[fn] = []
1148 toupload[fn].append(lfhash)
1148 toupload[fn].append(lfhash)
1149 lfhashes.add(lfhash)
1149 lfhashes.add(lfhash)
1150 def showhashes(fn):
1150 def showhashes(fn):
1151 for lfhash in sorted(toupload[fn]):
1151 for lfhash in sorted(toupload[fn]):
1152 ui.debug(' %s\n' % (lfhash))
1152 ui.debug(' %s\n' % (lfhash))
1153 else:
1153 else:
1154 toupload = set()
1154 toupload = set()
1155 def addfunc(fn, lfhash):
1155 def addfunc(fn, lfhash):
1156 toupload.add(fn)
1156 toupload.add(fn)
1157 lfhashes.add(lfhash)
1157 lfhashes.add(lfhash)
1158 def showhashes(fn):
1158 def showhashes(fn):
1159 pass
1159 pass
1160 _getoutgoings(repo, other, missing, addfunc)
1160 _getoutgoings(repo, other, missing, addfunc)
1161
1161
1162 if not toupload:
1162 if not toupload:
1163 ui.status(_('largefiles: no files to upload\n'))
1163 ui.status(_('largefiles: no files to upload\n'))
1164 else:
1164 else:
1165 ui.status(_('largefiles to upload (%d entities):\n')
1165 ui.status(_('largefiles to upload (%d entities):\n')
1166 % (len(lfhashes)))
1166 % (len(lfhashes)))
1167 for file in sorted(toupload):
1167 for file in sorted(toupload):
1168 ui.status(lfutil.splitstandin(file) + '\n')
1168 ui.status(lfutil.splitstandin(file) + '\n')
1169 showhashes(file)
1169 showhashes(file)
1170 ui.status('\n')
1170 ui.status('\n')
1171
1171
1172 @eh.wrapcommand('outgoing',
1172 @eh.wrapcommand('outgoing',
1173 opts=[('', 'large', None, _('display outgoing largefiles'))])
1173 opts=[('', 'large', None, _('display outgoing largefiles'))])
1174 def _outgoingcmd(orig, *args, **kwargs):
1174 def _outgoingcmd(orig, *args, **kwargs):
1175 # Nothing to do here other than add the extra help option; the hook above
1175 # Nothing to do here other than add the extra help option; the hook above
1176 # processes it.
1176 # processes it.
1177 return orig(*args, **kwargs)
1177 return orig(*args, **kwargs)
1178
1178
1179 def summaryremotehook(ui, repo, opts, changes):
1179 def summaryremotehook(ui, repo, opts, changes):
1180 largeopt = opts.get('large', False)
1180 largeopt = opts.get('large', False)
1181 if changes is None:
1181 if changes is None:
1182 if largeopt:
1182 if largeopt:
1183 return (False, True) # only outgoing check is needed
1183 return (False, True) # only outgoing check is needed
1184 else:
1184 else:
1185 return (False, False)
1185 return (False, False)
1186 elif largeopt:
1186 elif largeopt:
1187 url, branch, peer, outgoing = changes[1]
1187 url, branch, peer, outgoing = changes[1]
1188 if peer is None:
1188 if peer is None:
1189 # i18n: column positioning for "hg summary"
1189 # i18n: column positioning for "hg summary"
1190 ui.status(_('largefiles: (no remote repo)\n'))
1190 ui.status(_('largefiles: (no remote repo)\n'))
1191 return
1191 return
1192
1192
1193 toupload = set()
1193 toupload = set()
1194 lfhashes = set()
1194 lfhashes = set()
1195 def addfunc(fn, lfhash):
1195 def addfunc(fn, lfhash):
1196 toupload.add(fn)
1196 toupload.add(fn)
1197 lfhashes.add(lfhash)
1197 lfhashes.add(lfhash)
1198 _getoutgoings(repo, peer, outgoing.missing, addfunc)
1198 _getoutgoings(repo, peer, outgoing.missing, addfunc)
1199
1199
1200 if not toupload:
1200 if not toupload:
1201 # i18n: column positioning for "hg summary"
1201 # i18n: column positioning for "hg summary"
1202 ui.status(_('largefiles: (no files to upload)\n'))
1202 ui.status(_('largefiles: (no files to upload)\n'))
1203 else:
1203 else:
1204 # i18n: column positioning for "hg summary"
1204 # i18n: column positioning for "hg summary"
1205 ui.status(_('largefiles: %d entities for %d files to upload\n')
1205 ui.status(_('largefiles: %d entities for %d files to upload\n')
1206 % (len(lfhashes), len(toupload)))
1206 % (len(lfhashes), len(toupload)))
1207
1207
1208 @eh.wrapcommand('summary',
1208 @eh.wrapcommand('summary',
1209 opts=[('', 'large', None, _('display outgoing largefiles'))])
1209 opts=[('', 'large', None, _('display outgoing largefiles'))])
1210 def overridesummary(orig, ui, repo, *pats, **opts):
1210 def overridesummary(orig, ui, repo, *pats, **opts):
1211 try:
1211 try:
1212 repo.lfstatus = True
1212 repo.lfstatus = True
1213 orig(ui, repo, *pats, **opts)
1213 orig(ui, repo, *pats, **opts)
1214 finally:
1214 finally:
1215 repo.lfstatus = False
1215 repo.lfstatus = False
1216
1216
1217 @eh.wrapfunction(scmutil, 'addremove')
1217 @eh.wrapfunction(scmutil, 'addremove')
1218 def scmutiladdremove(orig, repo, matcher, prefix, uipathfn, opts=None):
1218 def scmutiladdremove(orig, repo, matcher, prefix, uipathfn, opts=None):
1219 if opts is None:
1219 if opts is None:
1220 opts = {}
1220 opts = {}
1221 if not lfutil.islfilesrepo(repo):
1221 if not lfutil.islfilesrepo(repo):
1222 return orig(repo, matcher, prefix, uipathfn, opts)
1222 return orig(repo, matcher, prefix, uipathfn, opts)
1223 # Get the list of missing largefiles so we can remove them
1223 # Get the list of missing largefiles so we can remove them
1224 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1224 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1225 unsure, s = lfdirstate.status(matchmod.always(repo.root, repo.getcwd()),
1225 unsure, s = lfdirstate.status(matchmod.always(), subrepos=[],
1226 subrepos=[], ignored=False, clean=False,
1226 ignored=False, clean=False, unknown=False)
1227 unknown=False)
1228
1227
1229 # Call into the normal remove code, but we want the removal of the standin
1228 # Call into the normal remove code, but we want the removal of the standin
1230 # to be handled by the original addremove. Monkey patching here makes sure
1229 # to be handled by the original addremove. Monkey patching here makes sure
1231 # we don't remove the standin in the largefiles code, preventing a very
1230 # we don't remove the standin in the largefiles code, preventing a very
1232 # confused state later.
1231 # confused state later.
1233 if s.deleted:
1232 if s.deleted:
1234 m = copy.copy(matcher)
1233 m = copy.copy(matcher)
1235
1234
1236 # The m._files and m._map attributes are not changed to the deleted list
1235 # The m._files and m._map attributes are not changed to the deleted list
1237 # because that affects the m.exact() test, which in turn governs whether
1236 # because that affects the m.exact() test, which in turn governs whether
1238 # or not the file name is printed, and how. Simply limit the original
1237 # or not the file name is printed, and how. Simply limit the original
1239 # matches to those in the deleted status list.
1238 # matches to those in the deleted status list.
1240 matchfn = m.matchfn
1239 matchfn = m.matchfn
1241 m.matchfn = lambda f: f in s.deleted and matchfn(f)
1240 m.matchfn = lambda f: f in s.deleted and matchfn(f)
1242
1241
1243 removelargefiles(repo.ui, repo, True, m, uipathfn, opts.get('dry_run'),
1242 removelargefiles(repo.ui, repo, True, m, uipathfn, opts.get('dry_run'),
1244 **pycompat.strkwargs(opts))
1243 **pycompat.strkwargs(opts))
1245 # Call into the normal add code, and any files that *should* be added as
1244 # Call into the normal add code, and any files that *should* be added as
1246 # largefiles will be
1245 # largefiles will be
1247 added, bad = addlargefiles(repo.ui, repo, True, matcher, uipathfn,
1246 added, bad = addlargefiles(repo.ui, repo, True, matcher, uipathfn,
1248 **pycompat.strkwargs(opts))
1247 **pycompat.strkwargs(opts))
1249 # Now that we've handled largefiles, hand off to the original addremove
1248 # Now that we've handled largefiles, hand off to the original addremove
1250 # function to take care of the rest. Make sure it doesn't do anything with
1249 # function to take care of the rest. Make sure it doesn't do anything with
1251 # largefiles by passing a matcher that will ignore them.
1250 # largefiles by passing a matcher that will ignore them.
1252 matcher = composenormalfilematcher(matcher, repo[None].manifest(), added)
1251 matcher = composenormalfilematcher(matcher, repo[None].manifest(), added)
1253 return orig(repo, matcher, prefix, uipathfn, opts)
1252 return orig(repo, matcher, prefix, uipathfn, opts)
1254
1253
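The status call near the top of scmutiladdremove illustrates the updated matcher construction: the always-matcher is now built with no arguments, where the removed line had passed repo.root and repo.getcwd(). Below is a minimal, hedged sketch of the new call shape, using only names visible in the surrounding code (matchmod, lfdirstate); it illustrates the calling convention rather than a standalone program, and the same shape recurs in mergeupdate further below.

    # build an accept-everything matcher; root/cwd arguments are no longer passed
    m = matchmod.always()
    # query the largefiles dirstate with that matcher, as in the call above
    unsure, s = lfdirstate.status(m, subrepos=[], ignored=False,
                                  clean=False, unknown=False)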
1255 # Calling purge with --all will cause the largefiles to be deleted.
1254 # Calling purge with --all will cause the largefiles to be deleted.
1256 # Override repo.status to prevent this from happening.
1255 # Override repo.status to prevent this from happening.
1257 @eh.wrapcommand('purge', extension='purge')
1256 @eh.wrapcommand('purge', extension='purge')
1258 def overridepurge(orig, ui, repo, *dirs, **opts):
1257 def overridepurge(orig, ui, repo, *dirs, **opts):
1259 # XXX Monkey patching a repoview will not work. The assigned attribute will
1258 # XXX Monkey patching a repoview will not work. The assigned attribute will
1260 # be set on the unfiltered repo, but we will only look up attributes in the
1259 # be set on the unfiltered repo, but we will only look up attributes in the
1261 # unfiltered repo if the lookup in the repoview object itself fails. As the
1260 # unfiltered repo if the lookup in the repoview object itself fails. As the
1262 # monkey patched method exists on the repoview class the lookup will not
1261 # monkey patched method exists on the repoview class the lookup will not
1263 # fail. As a result, the original version will shadow the monkey patched
1262 # fail. As a result, the original version will shadow the monkey patched
1264 # one, defeating the monkey patch.
1263 # one, defeating the monkey patch.
1265 #
1264 #
1266 # As a workaround we use an unfiltered repo here. We should do something
1265 # As a workaround we use an unfiltered repo here. We should do something
1267 # cleaner instead.
1266 # cleaner instead.
1268 repo = repo.unfiltered()
1267 repo = repo.unfiltered()
1269 oldstatus = repo.status
1268 oldstatus = repo.status
1270 def overridestatus(node1='.', node2=None, match=None, ignored=False,
1269 def overridestatus(node1='.', node2=None, match=None, ignored=False,
1271 clean=False, unknown=False, listsubrepos=False):
1270 clean=False, unknown=False, listsubrepos=False):
1272 r = oldstatus(node1, node2, match, ignored, clean, unknown,
1271 r = oldstatus(node1, node2, match, ignored, clean, unknown,
1273 listsubrepos)
1272 listsubrepos)
1274 lfdirstate = lfutil.openlfdirstate(ui, repo)
1273 lfdirstate = lfutil.openlfdirstate(ui, repo)
1275 unknown = [f for f in r.unknown if lfdirstate[f] == '?']
1274 unknown = [f for f in r.unknown if lfdirstate[f] == '?']
1276 ignored = [f for f in r.ignored if lfdirstate[f] == '?']
1275 ignored = [f for f in r.ignored if lfdirstate[f] == '?']
1277 return scmutil.status(r.modified, r.added, r.removed, r.deleted,
1276 return scmutil.status(r.modified, r.added, r.removed, r.deleted,
1278 unknown, ignored, r.clean)
1277 unknown, ignored, r.clean)
1279 repo.status = overridestatus
1278 repo.status = overridestatus
1280 orig(ui, repo, *dirs, **opts)
1279 orig(ui, repo, *dirs, **opts)
1281 repo.status = oldstatus
1280 repo.status = oldstatus
1282
1281
1283 @eh.wrapcommand('rollback')
1282 @eh.wrapcommand('rollback')
1284 def overriderollback(orig, ui, repo, **opts):
1283 def overriderollback(orig, ui, repo, **opts):
1285 with repo.wlock():
1284 with repo.wlock():
1286 before = repo.dirstate.parents()
1285 before = repo.dirstate.parents()
1287 orphans = set(f for f in repo.dirstate
1286 orphans = set(f for f in repo.dirstate
1288 if lfutil.isstandin(f) and repo.dirstate[f] != 'r')
1287 if lfutil.isstandin(f) and repo.dirstate[f] != 'r')
1289 result = orig(ui, repo, **opts)
1288 result = orig(ui, repo, **opts)
1290 after = repo.dirstate.parents()
1289 after = repo.dirstate.parents()
1291 if before == after:
1290 if before == after:
1292 return result # no need to restore standins
1291 return result # no need to restore standins
1293
1292
1294 pctx = repo['.']
1293 pctx = repo['.']
1295 for f in repo.dirstate:
1294 for f in repo.dirstate:
1296 if lfutil.isstandin(f):
1295 if lfutil.isstandin(f):
1297 orphans.discard(f)
1296 orphans.discard(f)
1298 if repo.dirstate[f] == 'r':
1297 if repo.dirstate[f] == 'r':
1299 repo.wvfs.unlinkpath(f, ignoremissing=True)
1298 repo.wvfs.unlinkpath(f, ignoremissing=True)
1300 elif f in pctx:
1299 elif f in pctx:
1301 fctx = pctx[f]
1300 fctx = pctx[f]
1302 repo.wwrite(f, fctx.data(), fctx.flags())
1301 repo.wwrite(f, fctx.data(), fctx.flags())
1303 else:
1302 else:
1304 # content of standin is not so important in 'a',
1303 # content of standin is not so important in 'a',
1305 # 'm' or 'n' (coming from the 2nd parent) cases
1304 # 'm' or 'n' (coming from the 2nd parent) cases
1306 lfutil.writestandin(repo, f, '', False)
1305 lfutil.writestandin(repo, f, '', False)
1307 for standin in orphans:
1306 for standin in orphans:
1308 repo.wvfs.unlinkpath(standin, ignoremissing=True)
1307 repo.wvfs.unlinkpath(standin, ignoremissing=True)
1309
1308
1310 lfdirstate = lfutil.openlfdirstate(ui, repo)
1309 lfdirstate = lfutil.openlfdirstate(ui, repo)
1311 orphans = set(lfdirstate)
1310 orphans = set(lfdirstate)
1312 lfiles = lfutil.listlfiles(repo)
1311 lfiles = lfutil.listlfiles(repo)
1313 for file in lfiles:
1312 for file in lfiles:
1314 lfutil.synclfdirstate(repo, lfdirstate, file, True)
1313 lfutil.synclfdirstate(repo, lfdirstate, file, True)
1315 orphans.discard(file)
1314 orphans.discard(file)
1316 for lfile in orphans:
1315 for lfile in orphans:
1317 lfdirstate.drop(lfile)
1316 lfdirstate.drop(lfile)
1318 lfdirstate.write()
1317 lfdirstate.write()
1319 return result
1318 return result
1320
1319
1321 @eh.wrapcommand('transplant', extension='transplant')
1320 @eh.wrapcommand('transplant', extension='transplant')
1322 def overridetransplant(orig, ui, repo, *revs, **opts):
1321 def overridetransplant(orig, ui, repo, *revs, **opts):
1323 resuming = opts.get(r'continue')
1322 resuming = opts.get(r'continue')
1324 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
1323 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
1325 repo._lfstatuswriters.append(lambda *msg, **opts: None)
1324 repo._lfstatuswriters.append(lambda *msg, **opts: None)
1326 try:
1325 try:
1327 result = orig(ui, repo, *revs, **opts)
1326 result = orig(ui, repo, *revs, **opts)
1328 finally:
1327 finally:
1329 repo._lfstatuswriters.pop()
1328 repo._lfstatuswriters.pop()
1330 repo._lfcommithooks.pop()
1329 repo._lfcommithooks.pop()
1331 return result
1330 return result
1332
1331
1333 @eh.wrapcommand('cat')
1332 @eh.wrapcommand('cat')
1334 def overridecat(orig, ui, repo, file1, *pats, **opts):
1333 def overridecat(orig, ui, repo, file1, *pats, **opts):
1335 opts = pycompat.byteskwargs(opts)
1334 opts = pycompat.byteskwargs(opts)
1336 ctx = scmutil.revsingle(repo, opts.get('rev'))
1335 ctx = scmutil.revsingle(repo, opts.get('rev'))
1337 err = 1
1336 err = 1
1338 notbad = set()
1337 notbad = set()
1339 m = scmutil.match(ctx, (file1,) + pats, opts)
1338 m = scmutil.match(ctx, (file1,) + pats, opts)
1340 origmatchfn = m.matchfn
1339 origmatchfn = m.matchfn
1341 def lfmatchfn(f):
1340 def lfmatchfn(f):
1342 if origmatchfn(f):
1341 if origmatchfn(f):
1343 return True
1342 return True
1344 lf = lfutil.splitstandin(f)
1343 lf = lfutil.splitstandin(f)
1345 if lf is None:
1344 if lf is None:
1346 return False
1345 return False
1347 notbad.add(lf)
1346 notbad.add(lf)
1348 return origmatchfn(lf)
1347 return origmatchfn(lf)
1349 m.matchfn = lfmatchfn
1348 m.matchfn = lfmatchfn
1350 origbadfn = m.bad
1349 origbadfn = m.bad
1351 def lfbadfn(f, msg):
1350 def lfbadfn(f, msg):
1352 if not f in notbad:
1351 if not f in notbad:
1353 origbadfn(f, msg)
1352 origbadfn(f, msg)
1354 m.bad = lfbadfn
1353 m.bad = lfbadfn
1355
1354
1356 origvisitdirfn = m.visitdir
1355 origvisitdirfn = m.visitdir
1357 def lfvisitdirfn(dir):
1356 def lfvisitdirfn(dir):
1358 if dir == lfutil.shortname:
1357 if dir == lfutil.shortname:
1359 return True
1358 return True
1360 ret = origvisitdirfn(dir)
1359 ret = origvisitdirfn(dir)
1361 if ret:
1360 if ret:
1362 return ret
1361 return ret
1363 lf = lfutil.splitstandin(dir)
1362 lf = lfutil.splitstandin(dir)
1364 if lf is None:
1363 if lf is None:
1365 return False
1364 return False
1366 return origvisitdirfn(lf)
1365 return origvisitdirfn(lf)
1367 m.visitdir = lfvisitdirfn
1366 m.visitdir = lfvisitdirfn
1368
1367
1369 for f in ctx.walk(m):
1368 for f in ctx.walk(m):
1370 with cmdutil.makefileobj(ctx, opts.get('output'), pathname=f) as fp:
1369 with cmdutil.makefileobj(ctx, opts.get('output'), pathname=f) as fp:
1371 lf = lfutil.splitstandin(f)
1370 lf = lfutil.splitstandin(f)
1372 if lf is None or origmatchfn(f):
1371 if lf is None or origmatchfn(f):
1373 # duplicating unreachable code from commands.cat
1372 # duplicating unreachable code from commands.cat
1374 data = ctx[f].data()
1373 data = ctx[f].data()
1375 if opts.get('decode'):
1374 if opts.get('decode'):
1376 data = repo.wwritedata(f, data)
1375 data = repo.wwritedata(f, data)
1377 fp.write(data)
1376 fp.write(data)
1378 else:
1377 else:
1379 hash = lfutil.readasstandin(ctx[f])
1378 hash = lfutil.readasstandin(ctx[f])
1380 if not lfutil.inusercache(repo.ui, hash):
1379 if not lfutil.inusercache(repo.ui, hash):
1381 store = storefactory.openstore(repo)
1380 store = storefactory.openstore(repo)
1382 success, missing = store.get([(lf, hash)])
1381 success, missing = store.get([(lf, hash)])
1383 if len(success) != 1:
1382 if len(success) != 1:
1384 raise error.Abort(
1383 raise error.Abort(
1385 _('largefile %s is not in cache and could not be '
1384 _('largefile %s is not in cache and could not be '
1386 'downloaded') % lf)
1385 'downloaded') % lf)
1387 path = lfutil.usercachepath(repo.ui, hash)
1386 path = lfutil.usercachepath(repo.ui, hash)
1388 with open(path, "rb") as fpin:
1387 with open(path, "rb") as fpin:
1389 for chunk in util.filechunkiter(fpin):
1388 for chunk in util.filechunkiter(fpin):
1390 fp.write(chunk)
1389 fp.write(chunk)
1391 err = 0
1390 err = 0
1392 return err
1391 return err
1393
1392
1394 @eh.wrapfunction(merge, 'update')
1393 @eh.wrapfunction(merge, 'update')
1395 def mergeupdate(orig, repo, node, branchmerge, force,
1394 def mergeupdate(orig, repo, node, branchmerge, force,
1396 *args, **kwargs):
1395 *args, **kwargs):
1397 matcher = kwargs.get(r'matcher', None)
1396 matcher = kwargs.get(r'matcher', None)
1398 # note if this is a partial update
1397 # note if this is a partial update
1399 partial = matcher and not matcher.always()
1398 partial = matcher and not matcher.always()
1400 with repo.wlock():
1399 with repo.wlock():
1401 # branch | | |
1400 # branch | | |
1402 # merge | force | partial | action
1401 # merge | force | partial | action
1403 # -------+-------+---------+--------------
1402 # -------+-------+---------+--------------
1404 # x | x | x | linear-merge
1403 # x | x | x | linear-merge
1405 # o | x | x | branch-merge
1404 # o | x | x | branch-merge
1406 # x | o | x | overwrite (as clean update)
1405 # x | o | x | overwrite (as clean update)
1407 # o | o | x | force-branch-merge (*1)
1406 # o | o | x | force-branch-merge (*1)
1408 # x | x | o | (*)
1407 # x | x | o | (*)
1409 # o | x | o | (*)
1408 # o | x | o | (*)
1410 # x | o | o | overwrite (as revert)
1409 # x | o | o | overwrite (as revert)
1411 # o | o | o | (*)
1410 # o | o | o | (*)
1412 #
1411 #
1413 # (*) don't care
1412 # (*) don't care
1414 # (*1) deprecated, but used internally (e.g: "rebase --collapse")
1413 # (*1) deprecated, but used internally (e.g: "rebase --collapse")
1415
1414
1416 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1415 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1417 unsure, s = lfdirstate.status(matchmod.always(repo.root,
1416 unsure, s = lfdirstate.status(matchmod.always(), subrepos=[],
1418 repo.getcwd()),
1417 ignored=False, clean=True, unknown=False)
1419 subrepos=[], ignored=False,
1420 clean=True, unknown=False)
1421 oldclean = set(s.clean)
1418 oldclean = set(s.clean)
1422 pctx = repo['.']
1419 pctx = repo['.']
1423 dctx = repo[node]
1420 dctx = repo[node]
1424 for lfile in unsure + s.modified:
1421 for lfile in unsure + s.modified:
1425 lfileabs = repo.wvfs.join(lfile)
1422 lfileabs = repo.wvfs.join(lfile)
1426 if not repo.wvfs.exists(lfileabs):
1423 if not repo.wvfs.exists(lfileabs):
1427 continue
1424 continue
1428 lfhash = lfutil.hashfile(lfileabs)
1425 lfhash = lfutil.hashfile(lfileabs)
1429 standin = lfutil.standin(lfile)
1426 standin = lfutil.standin(lfile)
1430 lfutil.writestandin(repo, standin, lfhash,
1427 lfutil.writestandin(repo, standin, lfhash,
1431 lfutil.getexecutable(lfileabs))
1428 lfutil.getexecutable(lfileabs))
1432 if (standin in pctx and
1429 if (standin in pctx and
1433 lfhash == lfutil.readasstandin(pctx[standin])):
1430 lfhash == lfutil.readasstandin(pctx[standin])):
1434 oldclean.add(lfile)
1431 oldclean.add(lfile)
1435 for lfile in s.added:
1432 for lfile in s.added:
1436 fstandin = lfutil.standin(lfile)
1433 fstandin = lfutil.standin(lfile)
1437 if fstandin not in dctx:
1434 if fstandin not in dctx:
1438 # in this case, content of standin file is meaningless
1435 # in this case, content of standin file is meaningless
1439 # (in dctx, lfile is unknown, or normal file)
1436 # (in dctx, lfile is unknown, or normal file)
1440 continue
1437 continue
1441 lfutil.updatestandin(repo, lfile, fstandin)
1438 lfutil.updatestandin(repo, lfile, fstandin)
1442 # mark all clean largefiles as dirty, just in case the update gets
1439 # mark all clean largefiles as dirty, just in case the update gets
1443 # interrupted before largefiles and lfdirstate are synchronized
1440 # interrupted before largefiles and lfdirstate are synchronized
1444 for lfile in oldclean:
1441 for lfile in oldclean:
1445 lfdirstate.normallookup(lfile)
1442 lfdirstate.normallookup(lfile)
1446 lfdirstate.write()
1443 lfdirstate.write()
1447
1444
1448 oldstandins = lfutil.getstandinsstate(repo)
1445 oldstandins = lfutil.getstandinsstate(repo)
1449 # Make sure the merge runs on disk, not in-memory. largefiles is not a
1446 # Make sure the merge runs on disk, not in-memory. largefiles is not a
1450 # good candidate for in-memory merge (large files, custom dirstate,
1447 # good candidate for in-memory merge (large files, custom dirstate,
1451 # matcher usage).
1448 # matcher usage).
1452 kwargs[r'wc'] = repo[None]
1449 kwargs[r'wc'] = repo[None]
1453 result = orig(repo, node, branchmerge, force, *args, **kwargs)
1450 result = orig(repo, node, branchmerge, force, *args, **kwargs)
1454
1451
1455 newstandins = lfutil.getstandinsstate(repo)
1452 newstandins = lfutil.getstandinsstate(repo)
1456 filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
1453 filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
1457
1454
1458 # to avoid leaving all largefiles as dirty and thus rehash them, mark
1455 # to avoid leaving all largefiles as dirty and thus rehash them, mark
1459 # all the ones that didn't change as clean
1456 # all the ones that didn't change as clean
1460 for lfile in oldclean.difference(filelist):
1457 for lfile in oldclean.difference(filelist):
1461 lfdirstate.normal(lfile)
1458 lfdirstate.normal(lfile)
1462 lfdirstate.write()
1459 lfdirstate.write()
1463
1460
1464 if branchmerge or force or partial:
1461 if branchmerge or force or partial:
1465 filelist.extend(s.deleted + s.removed)
1462 filelist.extend(s.deleted + s.removed)
1466
1463
1467 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1464 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1468 normallookup=partial)
1465 normallookup=partial)
1469
1466
1470 return result
1467 return result
1471
1468
1472 @eh.wrapfunction(scmutil, 'marktouched')
1469 @eh.wrapfunction(scmutil, 'marktouched')
1473 def scmutilmarktouched(orig, repo, files, *args, **kwargs):
1470 def scmutilmarktouched(orig, repo, files, *args, **kwargs):
1474 result = orig(repo, files, *args, **kwargs)
1471 result = orig(repo, files, *args, **kwargs)
1475
1472
1476 filelist = []
1473 filelist = []
1477 for f in files:
1474 for f in files:
1478 lf = lfutil.splitstandin(f)
1475 lf = lfutil.splitstandin(f)
1479 if lf is not None:
1476 if lf is not None:
1480 filelist.append(lf)
1477 filelist.append(lf)
1481 if filelist:
1478 if filelist:
1482 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1479 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1483 printmessage=False, normallookup=True)
1480 printmessage=False, normallookup=True)
1484
1481
1485 return result
1482 return result
1486
1483
1487 @eh.wrapfunction(upgrade, 'preservedrequirements')
1484 @eh.wrapfunction(upgrade, 'preservedrequirements')
1488 @eh.wrapfunction(upgrade, 'supporteddestrequirements')
1485 @eh.wrapfunction(upgrade, 'supporteddestrequirements')
1489 def upgraderequirements(orig, repo):
1486 def upgraderequirements(orig, repo):
1490 reqs = orig(repo)
1487 reqs = orig(repo)
1491 if 'largefiles' in repo.requirements:
1488 if 'largefiles' in repo.requirements:
1492 reqs.add('largefiles')
1489 reqs.add('largefiles')
1493 return reqs
1490 return reqs
1494
1491
1495 _lfscheme = 'largefile://'
1492 _lfscheme = 'largefile://'
1496
1493
1497 @eh.wrapfunction(urlmod, 'open')
1494 @eh.wrapfunction(urlmod, 'open')
1498 def openlargefile(orig, ui, url_, data=None):
1495 def openlargefile(orig, ui, url_, data=None):
1499 if url_.startswith(_lfscheme):
1496 if url_.startswith(_lfscheme):
1500 if data:
1497 if data:
1501 msg = "cannot use data on a 'largefile://' url"
1498 msg = "cannot use data on a 'largefile://' url"
1502 raise error.ProgrammingError(msg)
1499 raise error.ProgrammingError(msg)
1503 lfid = url_[len(_lfscheme):]
1500 lfid = url_[len(_lfscheme):]
1504 return storefactory.getlfile(ui, lfid)
1501 return storefactory.getlfile(ui, lfid)
1505 else:
1502 else:
1506 return orig(ui, url_, data=data)
1503 return orig(ui, url_, data=data)
@@ -1,393 +1,393 b''
1 # Copyright 2009-2010 Gregory P. Ward
1 # Copyright 2009-2010 Gregory P. Ward
2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
3 # Copyright 2010-2011 Fog Creek Software
3 # Copyright 2010-2011 Fog Creek Software
4 # Copyright 2010-2011 Unity Technologies
4 # Copyright 2010-2011 Unity Technologies
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8
8
9 '''setup for largefiles repositories: reposetup'''
9 '''setup for largefiles repositories: reposetup'''
10 from __future__ import absolute_import
10 from __future__ import absolute_import
11
11
12 import copy
12 import copy
13
13
14 from mercurial.i18n import _
14 from mercurial.i18n import _
15
15
16 from mercurial import (
16 from mercurial import (
17 error,
17 error,
18 localrepo,
18 localrepo,
19 match as matchmod,
19 match as matchmod,
20 scmutil,
20 scmutil,
21 )
21 )
22
22
23 from . import (
23 from . import (
24 lfcommands,
24 lfcommands,
25 lfutil,
25 lfutil,
26 )
26 )
27
27
28 def reposetup(ui, repo):
28 def reposetup(ui, repo):
29 # wire repositories should be given new wireproto functions
29 # wire repositories should be given new wireproto functions
30 # by "proto.wirereposetup()" via "hg.wirepeersetupfuncs"
30 # by "proto.wirereposetup()" via "hg.wirepeersetupfuncs"
31 if not repo.local():
31 if not repo.local():
32 return
32 return
33
33
34 class lfilesrepo(repo.__class__):
34 class lfilesrepo(repo.__class__):
35 # the mark to examine whether "repo" object enables largefiles or not
35 # the mark to examine whether "repo" object enables largefiles or not
36 _largefilesenabled = True
36 _largefilesenabled = True
37
37
38 lfstatus = False
38 lfstatus = False
39 def status_nolfiles(self, *args, **kwargs):
39 def status_nolfiles(self, *args, **kwargs):
40 return super(lfilesrepo, self).status(*args, **kwargs)
40 return super(lfilesrepo, self).status(*args, **kwargs)
41
41
42 # When lfstatus is set, return a context that gives the names
42 # When lfstatus is set, return a context that gives the names
43 # of largefiles instead of their corresponding standins and
43 # of largefiles instead of their corresponding standins and
44 # identifies the largefiles as always binary, regardless of
44 # identifies the largefiles as always binary, regardless of
45 # their actual contents.
45 # their actual contents.
46 def __getitem__(self, changeid):
46 def __getitem__(self, changeid):
47 ctx = super(lfilesrepo, self).__getitem__(changeid)
47 ctx = super(lfilesrepo, self).__getitem__(changeid)
48 if self.lfstatus:
48 if self.lfstatus:
49 class lfilesctx(ctx.__class__):
49 class lfilesctx(ctx.__class__):
50 def files(self):
50 def files(self):
51 filenames = super(lfilesctx, self).files()
51 filenames = super(lfilesctx, self).files()
52 return [lfutil.splitstandin(f) or f for f in filenames]
52 return [lfutil.splitstandin(f) or f for f in filenames]
53 def manifest(self):
53 def manifest(self):
54 man1 = super(lfilesctx, self).manifest()
54 man1 = super(lfilesctx, self).manifest()
55 class lfilesmanifest(man1.__class__):
55 class lfilesmanifest(man1.__class__):
56 def __contains__(self, filename):
56 def __contains__(self, filename):
57 orig = super(lfilesmanifest, self).__contains__
57 orig = super(lfilesmanifest, self).__contains__
58 return (orig(filename) or
58 return (orig(filename) or
59 orig(lfutil.standin(filename)))
59 orig(lfutil.standin(filename)))
60 man1.__class__ = lfilesmanifest
60 man1.__class__ = lfilesmanifest
61 return man1
61 return man1
62 def filectx(self, path, fileid=None, filelog=None):
62 def filectx(self, path, fileid=None, filelog=None):
63 orig = super(lfilesctx, self).filectx
63 orig = super(lfilesctx, self).filectx
64 try:
64 try:
65 if filelog is not None:
65 if filelog is not None:
66 result = orig(path, fileid, filelog)
66 result = orig(path, fileid, filelog)
67 else:
67 else:
68 result = orig(path, fileid)
68 result = orig(path, fileid)
69 except error.LookupError:
69 except error.LookupError:
70 # Adding a null character will cause Mercurial to
70 # Adding a null character will cause Mercurial to
71 # identify this as a binary file.
71 # identify this as a binary file.
72 if filelog is not None:
72 if filelog is not None:
73 result = orig(lfutil.standin(path), fileid,
73 result = orig(lfutil.standin(path), fileid,
74 filelog)
74 filelog)
75 else:
75 else:
76 result = orig(lfutil.standin(path), fileid)
76 result = orig(lfutil.standin(path), fileid)
77 olddata = result.data
77 olddata = result.data
78 result.data = lambda: olddata() + '\0'
78 result.data = lambda: olddata() + '\0'
79 return result
79 return result
80 ctx.__class__ = lfilesctx
80 ctx.__class__ = lfilesctx
81 return ctx
81 return ctx
82
82
83 # Figure out the status of big files and insert them into the
83 # Figure out the status of big files and insert them into the
84 # appropriate list in the result. Also removes standin files
84 # appropriate list in the result. Also removes standin files
85 # from the listing. Revert to the original status if
85 # from the listing. Revert to the original status if
86 # self.lfstatus is False.
86 # self.lfstatus is False.
87 # XXX large file status is buggy when used on repo proxy.
87 # XXX large file status is buggy when used on repo proxy.
88 # XXX this needs to be investigated.
88 # XXX this needs to be investigated.
89 @localrepo.unfilteredmethod
89 @localrepo.unfilteredmethod
90 def status(self, node1='.', node2=None, match=None, ignored=False,
90 def status(self, node1='.', node2=None, match=None, ignored=False,
91 clean=False, unknown=False, listsubrepos=False):
91 clean=False, unknown=False, listsubrepos=False):
92 listignored, listclean, listunknown = ignored, clean, unknown
92 listignored, listclean, listunknown = ignored, clean, unknown
93 orig = super(lfilesrepo, self).status
93 orig = super(lfilesrepo, self).status
94 if not self.lfstatus:
94 if not self.lfstatus:
95 return orig(node1, node2, match, listignored, listclean,
95 return orig(node1, node2, match, listignored, listclean,
96 listunknown, listsubrepos)
96 listunknown, listsubrepos)
97
97
98 # some calls in this function rely on the old version of status
98 # some calls in this function rely on the old version of status
99 self.lfstatus = False
99 self.lfstatus = False
100 ctx1 = self[node1]
100 ctx1 = self[node1]
101 ctx2 = self[node2]
101 ctx2 = self[node2]
102 working = ctx2.rev() is None
102 working = ctx2.rev() is None
103 parentworking = working and ctx1 == self['.']
103 parentworking = working and ctx1 == self['.']
104
104
105 if match is None:
105 if match is None:
106 match = matchmod.always(self.root, self.getcwd())
106 match = matchmod.always()
107
107
108 wlock = None
108 wlock = None
109 try:
109 try:
110 try:
110 try:
111 # updating the dirstate is optional
111 # updating the dirstate is optional
112 # so we don't wait on the lock
112 # so we don't wait on the lock
113 wlock = self.wlock(False)
113 wlock = self.wlock(False)
114 except error.LockError:
114 except error.LockError:
115 pass
115 pass
116
116
117 # First check if paths or patterns were specified on the
117 # First check if paths or patterns were specified on the
118 # command line. If there were, and they don't match any
118 # command line. If there were, and they don't match any
119 # largefiles, we should just bail here and let super
119 # largefiles, we should just bail here and let super
120 # handle it -- thus gaining a big performance boost.
120 # handle it -- thus gaining a big performance boost.
121 lfdirstate = lfutil.openlfdirstate(ui, self)
121 lfdirstate = lfutil.openlfdirstate(ui, self)
122 if not match.always():
122 if not match.always():
123 for f in lfdirstate:
123 for f in lfdirstate:
124 if match(f):
124 if match(f):
125 break
125 break
126 else:
126 else:
127 return orig(node1, node2, match, listignored, listclean,
127 return orig(node1, node2, match, listignored, listclean,
128 listunknown, listsubrepos)
128 listunknown, listsubrepos)
129
129
130 # Create a copy of match that matches standins instead
130 # Create a copy of match that matches standins instead
131 # of largefiles.
131 # of largefiles.
132 def tostandins(files):
132 def tostandins(files):
133 if not working:
133 if not working:
134 return files
134 return files
135 newfiles = []
135 newfiles = []
136 dirstate = self.dirstate
136 dirstate = self.dirstate
137 for f in files:
137 for f in files:
138 sf = lfutil.standin(f)
138 sf = lfutil.standin(f)
139 if sf in dirstate:
139 if sf in dirstate:
140 newfiles.append(sf)
140 newfiles.append(sf)
141 elif dirstate.hasdir(sf):
141 elif dirstate.hasdir(sf):
142 # Directory entries could be regular or
142 # Directory entries could be regular or
143 # standin, check both
143 # standin, check both
144 newfiles.extend((f, sf))
144 newfiles.extend((f, sf))
145 else:
145 else:
146 newfiles.append(f)
146 newfiles.append(f)
147 return newfiles
147 return newfiles
148
148
149 m = copy.copy(match)
149 m = copy.copy(match)
150 m._files = tostandins(m._files)
150 m._files = tostandins(m._files)
151
151
152 result = orig(node1, node2, m, ignored, clean, unknown,
152 result = orig(node1, node2, m, ignored, clean, unknown,
153 listsubrepos)
153 listsubrepos)
154 if working:
154 if working:
155
155
156 def sfindirstate(f):
156 def sfindirstate(f):
157 sf = lfutil.standin(f)
157 sf = lfutil.standin(f)
158 dirstate = self.dirstate
158 dirstate = self.dirstate
159 return sf in dirstate or dirstate.hasdir(sf)
159 return sf in dirstate or dirstate.hasdir(sf)
160
160
161 match._files = [f for f in match._files
161 match._files = [f for f in match._files
162 if sfindirstate(f)]
162 if sfindirstate(f)]
163 # Don't waste time getting the ignored and unknown
163 # Don't waste time getting the ignored and unknown
164 # files from lfdirstate
164 # files from lfdirstate
165 unsure, s = lfdirstate.status(match, subrepos=[],
165 unsure, s = lfdirstate.status(match, subrepos=[],
166 ignored=False,
166 ignored=False,
167 clean=listclean,
167 clean=listclean,
168 unknown=False)
168 unknown=False)
169 (modified, added, removed, deleted, clean) = (
169 (modified, added, removed, deleted, clean) = (
170 s.modified, s.added, s.removed, s.deleted, s.clean)
170 s.modified, s.added, s.removed, s.deleted, s.clean)
171 if parentworking:
171 if parentworking:
172 for lfile in unsure:
172 for lfile in unsure:
173 standin = lfutil.standin(lfile)
173 standin = lfutil.standin(lfile)
174 if standin not in ctx1:
174 if standin not in ctx1:
175 # from second parent
175 # from second parent
176 modified.append(lfile)
176 modified.append(lfile)
177 elif lfutil.readasstandin(ctx1[standin]) \
177 elif lfutil.readasstandin(ctx1[standin]) \
178 != lfutil.hashfile(self.wjoin(lfile)):
178 != lfutil.hashfile(self.wjoin(lfile)):
179 modified.append(lfile)
179 modified.append(lfile)
180 else:
180 else:
181 if listclean:
181 if listclean:
182 clean.append(lfile)
182 clean.append(lfile)
183 lfdirstate.normal(lfile)
183 lfdirstate.normal(lfile)
184 else:
184 else:
185 tocheck = unsure + modified + added + clean
185 tocheck = unsure + modified + added + clean
186 modified, added, clean = [], [], []
186 modified, added, clean = [], [], []
187 checkexec = self.dirstate._checkexec
187 checkexec = self.dirstate._checkexec
188
188
189 for lfile in tocheck:
189 for lfile in tocheck:
190 standin = lfutil.standin(lfile)
190 standin = lfutil.standin(lfile)
191 if standin in ctx1:
191 if standin in ctx1:
192 abslfile = self.wjoin(lfile)
192 abslfile = self.wjoin(lfile)
193 if ((lfutil.readasstandin(ctx1[standin]) !=
193 if ((lfutil.readasstandin(ctx1[standin]) !=
194 lfutil.hashfile(abslfile)) or
194 lfutil.hashfile(abslfile)) or
195 (checkexec and
195 (checkexec and
196 ('x' in ctx1.flags(standin)) !=
196 ('x' in ctx1.flags(standin)) !=
197 bool(lfutil.getexecutable(abslfile)))):
197 bool(lfutil.getexecutable(abslfile)))):
198 modified.append(lfile)
198 modified.append(lfile)
199 elif listclean:
199 elif listclean:
200 clean.append(lfile)
200 clean.append(lfile)
201 else:
201 else:
202 added.append(lfile)
202 added.append(lfile)
203
203
204 # at this point, 'removed' contains largefiles
204 # at this point, 'removed' contains largefiles
205 # marked as 'R' in the working context.
205 # marked as 'R' in the working context.
206 # then, largefiles not managed also in the target
206 # then, largefiles not managed also in the target
207 # context should be excluded from 'removed'.
207 # context should be excluded from 'removed'.
208 removed = [lfile for lfile in removed
208 removed = [lfile for lfile in removed
209 if lfutil.standin(lfile) in ctx1]
209 if lfutil.standin(lfile) in ctx1]
210
210
211 # Standins no longer found in lfdirstate have been deleted
211 # Standins no longer found in lfdirstate have been deleted
212 for standin in ctx1.walk(lfutil.getstandinmatcher(self)):
212 for standin in ctx1.walk(lfutil.getstandinmatcher(self)):
213 lfile = lfutil.splitstandin(standin)
213 lfile = lfutil.splitstandin(standin)
214 if not match(lfile):
214 if not match(lfile):
215 continue
215 continue
216 if lfile not in lfdirstate:
216 if lfile not in lfdirstate:
217 deleted.append(lfile)
217 deleted.append(lfile)
218 # Sync "largefile has been removed" back to the
218 # Sync "largefile has been removed" back to the
219 # standin. Removing a file as a side effect of
219 # standin. Removing a file as a side effect of
220 # running status is gross, but the alternatives (if
220 # running status is gross, but the alternatives (if
221 # any) are worse.
221 # any) are worse.
222 self.wvfs.unlinkpath(standin, ignoremissing=True)
222 self.wvfs.unlinkpath(standin, ignoremissing=True)
223
223
224 # Filter result lists
224 # Filter result lists
225 result = list(result)
225 result = list(result)
226
226
227 # Largefiles are not really removed when they're
227 # Largefiles are not really removed when they're
228 # still in the normal dirstate. Likewise, normal
228 # still in the normal dirstate. Likewise, normal
229 # files are not really removed if they are still in
229 # files are not really removed if they are still in
230 # lfdirstate. This happens in merges where files
230 # lfdirstate. This happens in merges where files
231 # change type.
231 # change type.
232 removed = [f for f in removed
232 removed = [f for f in removed
233 if f not in self.dirstate]
233 if f not in self.dirstate]
234 result[2] = [f for f in result[2]
234 result[2] = [f for f in result[2]
235 if f not in lfdirstate]
235 if f not in lfdirstate]
236
236
237 lfiles = set(lfdirstate._map)
237 lfiles = set(lfdirstate._map)
238 # Unknown files
238 # Unknown files
239 result[4] = set(result[4]).difference(lfiles)
239 result[4] = set(result[4]).difference(lfiles)
240 # Ignored files
240 # Ignored files
241 result[5] = set(result[5]).difference(lfiles)
241 result[5] = set(result[5]).difference(lfiles)
242 # combine normal files and largefiles
242 # combine normal files and largefiles
243 normals = [[fn for fn in filelist
243 normals = [[fn for fn in filelist
244 if not lfutil.isstandin(fn)]
244 if not lfutil.isstandin(fn)]
245 for filelist in result]
245 for filelist in result]
246 lfstatus = (modified, added, removed, deleted, [], [],
246 lfstatus = (modified, added, removed, deleted, [], [],
247 clean)
247 clean)
248 result = [sorted(list1 + list2)
248 result = [sorted(list1 + list2)
249 for (list1, list2) in zip(normals, lfstatus)]
249 for (list1, list2) in zip(normals, lfstatus)]
250 else: # not against working directory
250 else: # not against working directory
251 result = [[lfutil.splitstandin(f) or f for f in items]
251 result = [[lfutil.splitstandin(f) or f for f in items]
252 for items in result]
252 for items in result]
253
253
254 if wlock:
254 if wlock:
255 lfdirstate.write()
255 lfdirstate.write()
256
256
257 finally:
257 finally:
258 if wlock:
258 if wlock:
259 wlock.release()
259 wlock.release()
260
260
261 self.lfstatus = True
261 self.lfstatus = True
262 return scmutil.status(*result)
262 return scmutil.status(*result)
263
263
264 def commitctx(self, ctx, *args, **kwargs):
264 def commitctx(self, ctx, *args, **kwargs):
265 node = super(lfilesrepo, self).commitctx(ctx, *args, **kwargs)
265 node = super(lfilesrepo, self).commitctx(ctx, *args, **kwargs)
266 class lfilesctx(ctx.__class__):
266 class lfilesctx(ctx.__class__):
267 def markcommitted(self, node):
267 def markcommitted(self, node):
268 orig = super(lfilesctx, self).markcommitted
268 orig = super(lfilesctx, self).markcommitted
269 return lfutil.markcommitted(orig, self, node)
269 return lfutil.markcommitted(orig, self, node)
270 ctx.__class__ = lfilesctx
270 ctx.__class__ = lfilesctx
271 return node
271 return node
272
272
273 # Before commit, largefile standins have not had their
273 # Before commit, largefile standins have not had their
274 # contents updated to reflect the hash of their largefile.
274 # contents updated to reflect the hash of their largefile.
275 # Do that here.
275 # Do that here.
276 def commit(self, text="", user=None, date=None, match=None,
276 def commit(self, text="", user=None, date=None, match=None,
277 force=False, editor=False, extra=None):
277 force=False, editor=False, extra=None):
278 if extra is None:
278 if extra is None:
279 extra = {}
279 extra = {}
280 orig = super(lfilesrepo, self).commit
280 orig = super(lfilesrepo, self).commit
281
281
282 with self.wlock():
282 with self.wlock():
283 lfcommithook = self._lfcommithooks[-1]
283 lfcommithook = self._lfcommithooks[-1]
284 match = lfcommithook(self, match)
284 match = lfcommithook(self, match)
285 result = orig(text=text, user=user, date=date, match=match,
285 result = orig(text=text, user=user, date=date, match=match,
286 force=force, editor=editor, extra=extra)
286 force=force, editor=editor, extra=extra)
287 return result
287 return result
288
288
289 def push(self, remote, force=False, revs=None, newbranch=False):
289 def push(self, remote, force=False, revs=None, newbranch=False):
290 if remote.local():
290 if remote.local():
291 missing = set(self.requirements) - remote.local().supported
291 missing = set(self.requirements) - remote.local().supported
292 if missing:
292 if missing:
293 msg = _("required features are not"
293 msg = _("required features are not"
294 " supported in the destination:"
294 " supported in the destination:"
295 " %s") % (', '.join(sorted(missing)))
295 " %s") % (', '.join(sorted(missing)))
296 raise error.Abort(msg)
296 raise error.Abort(msg)
297 return super(lfilesrepo, self).push(remote, force=force, revs=revs,
297 return super(lfilesrepo, self).push(remote, force=force, revs=revs,
298 newbranch=newbranch)
298 newbranch=newbranch)
299
299
300 # TODO: _subdirlfs should be moved into "lfutil.py", because
300 # TODO: _subdirlfs should be moved into "lfutil.py", because
301 # it is referred only from "lfutil.updatestandinsbymatch"
301 # it is referred only from "lfutil.updatestandinsbymatch"
302 def _subdirlfs(self, files, lfiles):
302 def _subdirlfs(self, files, lfiles):
303 '''
303 '''
304 Adjust matched file list
304 Adjust matched file list
305 If we pass a directory to commit whose only committable files
305 If we pass a directory to commit whose only committable files
306 are largefiles, the core commit code aborts before finding
306 are largefiles, the core commit code aborts before finding
307 the largefiles.
307 the largefiles.
308 So we do the following:
308 So we do the following:
309 For directories that only have largefiles as matches,
309 For directories that only have largefiles as matches,
310 we explicitly add the largefiles to the match list and remove
310 we explicitly add the largefiles to the match list and remove
311 the directory.
311 the directory.
312 In other cases, we leave the match list unmodified.
312 In other cases, we leave the match list unmodified.
313 '''
313 '''
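# Illustrative sketch, not from this changeset: a standalone, simplified model
# of the adjustment described in the docstring above. It covers only the case
# the docstring calls out -- a directory whose matches are all largefiles --
# and ignores the mixed normal-file handling the real method also performs.
def expand_largefile_only_dirs(dirs, lfiles):
    """For each directory, substitute the largefiles it contains, if any."""
    actual = []
    for d in dirs:
        inside = [lf for lf in lfiles if lf.startswith(d + '/')]
        actual.extend(inside if inside else [d])
    return actual

# expand_largefile_only_dirs(['assets'], ['assets/big.bin', 'src/x.py'])
# -> ['assets/big.bin']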
314 actualfiles = []
314 actualfiles = []
315 dirs = []
315 dirs = []
316 regulars = []
316 regulars = []
317
317
318 for f in files:
318 for f in files:
319 if lfutil.isstandin(f + '/'):
319 if lfutil.isstandin(f + '/'):
320 raise error.Abort(
320 raise error.Abort(
321 _('file "%s" is a largefile standin') % f,
321 _('file "%s" is a largefile standin') % f,
322 hint=('commit the largefile itself instead'))
322 hint=('commit the largefile itself instead'))
323 # Scan directories
323 # Scan directories
324 if self.wvfs.isdir(f):
324 if self.wvfs.isdir(f):
325 dirs.append(f)
325 dirs.append(f)
326 else:
326 else:
327 regulars.append(f)
327 regulars.append(f)
328
328
329 for f in dirs:
329 for f in dirs:
330 matcheddir = False
330 matcheddir = False
331 d = self.dirstate.normalize(f) + '/'
331 d = self.dirstate.normalize(f) + '/'
332 # Check for matched normal files
332 # Check for matched normal files
333 for mf in regulars:
333 for mf in regulars:
334 if self.dirstate.normalize(mf).startswith(d):
334 if self.dirstate.normalize(mf).startswith(d):
335 actualfiles.append(f)
335 actualfiles.append(f)
336 matcheddir = True
336 matcheddir = True
337 break
337 break
338 if not matcheddir:
338 if not matcheddir:
339 # If no normal match, manually append
339 # If no normal match, manually append
340 # any matching largefiles
340 # any matching largefiles
341 for lf in lfiles:
341 for lf in lfiles:
342 if self.dirstate.normalize(lf).startswith(d):
342 if self.dirstate.normalize(lf).startswith(d):
343 actualfiles.append(lf)
343 actualfiles.append(lf)
344 if not matcheddir:
344 if not matcheddir:
345 # There may still be normal files in the dir, so
345 # There may still be normal files in the dir, so
346 # add a directory to the list, which
346 # add a directory to the list, which
347 # forces status/dirstate to walk all files and
347 # forces status/dirstate to walk all files and
348 # call the match function on the matcher, even
348 # call the match function on the matcher, even
349 # on case sensitive filesystems.
349 # on case sensitive filesystems.
350 actualfiles.append('.')
350 actualfiles.append('.')
351 matcheddir = True
351 matcheddir = True
352 # Nothing in dir, so re-add it
352 # Nothing in dir, so re-add it
353 # and let commit reject it
353 # and let commit reject it
354 if not matcheddir:
354 if not matcheddir:
355 actualfiles.append(f)
355 actualfiles.append(f)
356
356
357 # Always add normal files
357 # Always add normal files
358 actualfiles += regulars
358 actualfiles += regulars
359 return actualfiles
359 return actualfiles
360
360
361 repo.__class__ = lfilesrepo
361 repo.__class__ = lfilesrepo
362
362
363 # stack of hooks being executed before committing.
363 # stack of hooks being executed before committing.
364 # only last element ("_lfcommithooks[-1]") is used for each committing.
364 # only last element ("_lfcommithooks[-1]") is used for each committing.
365 repo._lfcommithooks = [lfutil.updatestandinsbymatch]
365 repo._lfcommithooks = [lfutil.updatestandinsbymatch]
366
366
367 # Stack of status writer functions taking "*msg, **opts" arguments
367 # Stack of status writer functions taking "*msg, **opts" arguments
368 # like "ui.status()". Only last element ("_lfstatuswriters[-1]")
368 # like "ui.status()". Only last element ("_lfstatuswriters[-1]")
369 # is used to write status out.
369 # is used to write status out.
370 repo._lfstatuswriters = [ui.status]
370 repo._lfstatuswriters = [ui.status]
371
371
372 def prepushoutgoinghook(pushop):
372 def prepushoutgoinghook(pushop):
373 """Push largefiles for pushop before pushing revisions."""
373 """Push largefiles for pushop before pushing revisions."""
374 lfrevs = pushop.lfrevs
374 lfrevs = pushop.lfrevs
375 if lfrevs is None:
375 if lfrevs is None:
376 lfrevs = pushop.outgoing.missing
376 lfrevs = pushop.outgoing.missing
377 if lfrevs:
377 if lfrevs:
378 toupload = set()
378 toupload = set()
379 addfunc = lambda fn, lfhash: toupload.add(lfhash)
379 addfunc = lambda fn, lfhash: toupload.add(lfhash)
380 lfutil.getlfilestoupload(pushop.repo, lfrevs,
380 lfutil.getlfilestoupload(pushop.repo, lfrevs,
381 addfunc)
381 addfunc)
382 lfcommands.uploadlfiles(ui, pushop.repo, pushop.remote, toupload)
382 lfcommands.uploadlfiles(ui, pushop.repo, pushop.remote, toupload)
383 repo.prepushoutgoinghooks.add("largefiles", prepushoutgoinghook)
383 repo.prepushoutgoinghooks.add("largefiles", prepushoutgoinghook)
384
384
385 def checkrequireslfiles(ui, repo, **kwargs):
385 def checkrequireslfiles(ui, repo, **kwargs):
386 if 'largefiles' not in repo.requirements and any(
386 if 'largefiles' not in repo.requirements and any(
387 lfutil.shortname+'/' in f[0] for f in repo.store.datafiles()):
387 lfutil.shortname+'/' in f[0] for f in repo.store.datafiles()):
388 repo.requirements.add('largefiles')
388 repo.requirements.add('largefiles')
389 repo._writerequirements()
389 repo._writerequirements()
390
390
391 ui.setconfig('hooks', 'changegroup.lfiles', checkrequireslfiles,
391 ui.setconfig('hooks', 'changegroup.lfiles', checkrequireslfiles,
392 'largefiles')
392 'largefiles')
393 ui.setconfig('hooks', 'commit.lfiles', checkrequireslfiles, 'largefiles')
393 ui.setconfig('hooks', 'commit.lfiles', checkrequireslfiles, 'largefiles')
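# Illustrative sketch, not from this changeset: the file above contains one of
# the call-site updates in this diff, matchmod.always(self.root, self.getcwd())
# becoming matchmod.always(). A minimal standalone demo of the zero-argument
# form, assuming Mercurial itself is importable:
from mercurial import match as matchmod

m = matchmod.always()
assert m.always()       # reports itself as a match-everything matcher
assert m('any/path')    # and accepts any path it is asked about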
@@ -1,404 +1,404 @@
1 # remotefilelogserver.py - server logic for a remotefilelog server
1 # remotefilelogserver.py - server logic for a remotefilelog server
2 #
2 #
3 # Copyright 2013 Facebook, Inc.
3 # Copyright 2013 Facebook, Inc.
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7 from __future__ import absolute_import
7 from __future__ import absolute_import
8
8
9 import errno
9 import errno
10 import os
10 import os
11 import stat
11 import stat
12 import time
12 import time
13 import zlib
13 import zlib
14
14
15 from mercurial.i18n import _
15 from mercurial.i18n import _
16 from mercurial.node import bin, hex, nullid
16 from mercurial.node import bin, hex, nullid
17 from mercurial import (
17 from mercurial import (
18 changegroup,
18 changegroup,
19 changelog,
19 changelog,
20 context,
20 context,
21 error,
21 error,
22 extensions,
22 extensions,
23 match,
23 match,
24 store,
24 store,
25 streamclone,
25 streamclone,
26 util,
26 util,
27 wireprotoserver,
27 wireprotoserver,
28 wireprototypes,
28 wireprototypes,
29 wireprotov1server,
29 wireprotov1server,
30 )
30 )
31 from . import (
31 from . import (
32 constants,
32 constants,
33 shallowutil,
33 shallowutil,
34 )
34 )
35
35
36 _sshv1server = wireprotoserver.sshv1protocolhandler
36 _sshv1server = wireprotoserver.sshv1protocolhandler
37
37
38 def setupserver(ui, repo):
38 def setupserver(ui, repo):
39 """Sets up a normal Mercurial repo so it can serve files to shallow repos.
39 """Sets up a normal Mercurial repo so it can serve files to shallow repos.
40 """
40 """
41 onetimesetup(ui)
41 onetimesetup(ui)
42
42
43 # don't send files to shallow clients during pulls
43 # don't send files to shallow clients during pulls
44 def generatefiles(orig, self, changedfiles, linknodes, commonrevs, source,
44 def generatefiles(orig, self, changedfiles, linknodes, commonrevs, source,
45 *args, **kwargs):
45 *args, **kwargs):
46 caps = self._bundlecaps or []
46 caps = self._bundlecaps or []
47 if constants.BUNDLE2_CAPABLITY in caps:
47 if constants.BUNDLE2_CAPABLITY in caps:
48 # only send files that don't match the specified patterns
48 # only send files that don't match the specified patterns
49 includepattern = None
49 includepattern = None
50 excludepattern = None
50 excludepattern = None
51 for cap in (self._bundlecaps or []):
51 for cap in (self._bundlecaps or []):
52 if cap.startswith("includepattern="):
52 if cap.startswith("includepattern="):
53 includepattern = cap[len("includepattern="):].split('\0')
53 includepattern = cap[len("includepattern="):].split('\0')
54 elif cap.startswith("excludepattern="):
54 elif cap.startswith("excludepattern="):
55 excludepattern = cap[len("excludepattern="):].split('\0')
55 excludepattern = cap[len("excludepattern="):].split('\0')
56
56
57 m = match.always(repo.root, '')
57 m = match.always()
58 if includepattern or excludepattern:
58 if includepattern or excludepattern:
59 m = match.match(repo.root, '', None,
59 m = match.match(repo.root, '', None,
60 includepattern, excludepattern)
60 includepattern, excludepattern)
61
61
62 changedfiles = list([f for f in changedfiles if not m(f)])
62 changedfiles = list([f for f in changedfiles if not m(f)])
63 return orig(self, changedfiles, linknodes, commonrevs, source,
63 return orig(self, changedfiles, linknodes, commonrevs, source,
64 *args, **kwargs)
64 *args, **kwargs)
65
65
66 extensions.wrapfunction(
66 extensions.wrapfunction(
67 changegroup.cgpacker, 'generatefiles', generatefiles)
67 changegroup.cgpacker, 'generatefiles', generatefiles)
68
68
69 onetime = False
69 onetime = False
70 def onetimesetup(ui):
70 def onetimesetup(ui):
71 """Configures the wireprotocol for both clients and servers.
71 """Configures the wireprotocol for both clients and servers.
72 """
72 """
73 global onetime
73 global onetime
74 if onetime:
74 if onetime:
75 return
75 return
76 onetime = True
76 onetime = True
77
77
78 # support file content requests
78 # support file content requests
79 wireprotov1server.wireprotocommand(
79 wireprotov1server.wireprotocommand(
80 'x_rfl_getflogheads', 'path', permission='pull')(getflogheads)
80 'x_rfl_getflogheads', 'path', permission='pull')(getflogheads)
81 wireprotov1server.wireprotocommand(
81 wireprotov1server.wireprotocommand(
82 'x_rfl_getfiles', '', permission='pull')(getfiles)
82 'x_rfl_getfiles', '', permission='pull')(getfiles)
83 wireprotov1server.wireprotocommand(
83 wireprotov1server.wireprotocommand(
84 'x_rfl_getfile', 'file node', permission='pull')(getfile)
84 'x_rfl_getfile', 'file node', permission='pull')(getfile)
85
85
86 class streamstate(object):
86 class streamstate(object):
87 match = None
87 match = None
88 shallowremote = False
88 shallowremote = False
89 noflatmf = False
89 noflatmf = False
90 state = streamstate()
90 state = streamstate()
91
91
92 def stream_out_shallow(repo, proto, other):
92 def stream_out_shallow(repo, proto, other):
93 includepattern = None
93 includepattern = None
94 excludepattern = None
94 excludepattern = None
95 raw = other.get('includepattern')
95 raw = other.get('includepattern')
96 if raw:
96 if raw:
97 includepattern = raw.split('\0')
97 includepattern = raw.split('\0')
98 raw = other.get('excludepattern')
98 raw = other.get('excludepattern')
99 if raw:
99 if raw:
100 excludepattern = raw.split('\0')
100 excludepattern = raw.split('\0')
101
101
102 oldshallow = state.shallowremote
102 oldshallow = state.shallowremote
103 oldmatch = state.match
103 oldmatch = state.match
104 oldnoflatmf = state.noflatmf
104 oldnoflatmf = state.noflatmf
105 try:
105 try:
106 state.shallowremote = True
106 state.shallowremote = True
107 state.match = match.always(repo.root, '')
107 state.match = match.always()
108 state.noflatmf = other.get('noflatmanifest') == 'True'
108 state.noflatmf = other.get('noflatmanifest') == 'True'
109 if includepattern or excludepattern:
109 if includepattern or excludepattern:
110 state.match = match.match(repo.root, '', None,
110 state.match = match.match(repo.root, '', None,
111 includepattern, excludepattern)
111 includepattern, excludepattern)
112 streamres = wireprotov1server.stream(repo, proto)
112 streamres = wireprotov1server.stream(repo, proto)
113
113
114 # Force the first value to execute, so the file list is computed
114 # Force the first value to execute, so the file list is computed
115 # within the try/finally scope
115 # within the try/finally scope
116 first = next(streamres.gen)
116 first = next(streamres.gen)
117 second = next(streamres.gen)
117 second = next(streamres.gen)
118 def gen():
118 def gen():
119 yield first
119 yield first
120 yield second
120 yield second
121 for value in streamres.gen:
121 for value in streamres.gen:
122 yield value
122 yield value
123 return wireprototypes.streamres(gen())
123 return wireprototypes.streamres(gen())
124 finally:
124 finally:
125 state.shallowremote = oldshallow
125 state.shallowremote = oldshallow
126 state.match = oldmatch
126 state.match = oldmatch
127 state.noflatmf = oldnoflatmf
127 state.noflatmf = oldnoflatmf
128
128
129 wireprotov1server.commands['stream_out_shallow'] = (stream_out_shallow, '*')
129 wireprotov1server.commands['stream_out_shallow'] = (stream_out_shallow, '*')
130
130
131 # don't clone filelogs to shallow clients
131 # don't clone filelogs to shallow clients
132 def _walkstreamfiles(orig, repo, matcher=None):
132 def _walkstreamfiles(orig, repo, matcher=None):
133 if state.shallowremote:
133 if state.shallowremote:
134 # if we are shallow ourselves, stream our local commits
134 # if we are shallow ourselves, stream our local commits
135 if shallowutil.isenabled(repo):
135 if shallowutil.isenabled(repo):
136 striplen = len(repo.store.path) + 1
136 striplen = len(repo.store.path) + 1
137 readdir = repo.store.rawvfs.readdir
137 readdir = repo.store.rawvfs.readdir
138 visit = [os.path.join(repo.store.path, 'data')]
138 visit = [os.path.join(repo.store.path, 'data')]
139 while visit:
139 while visit:
140 p = visit.pop()
140 p = visit.pop()
141 for f, kind, st in readdir(p, stat=True):
141 for f, kind, st in readdir(p, stat=True):
142 fp = p + '/' + f
142 fp = p + '/' + f
143 if kind == stat.S_IFREG:
143 if kind == stat.S_IFREG:
144 if not fp.endswith('.i') and not fp.endswith('.d'):
144 if not fp.endswith('.i') and not fp.endswith('.d'):
145 n = util.pconvert(fp[striplen:])
145 n = util.pconvert(fp[striplen:])
146 yield (store.decodedir(n), n, st.st_size)
146 yield (store.decodedir(n), n, st.st_size)
147 if kind == stat.S_IFDIR:
147 if kind == stat.S_IFDIR:
148 visit.append(fp)
148 visit.append(fp)
149
149
150 if 'treemanifest' in repo.requirements:
150 if 'treemanifest' in repo.requirements:
151 for (u, e, s) in repo.store.datafiles():
151 for (u, e, s) in repo.store.datafiles():
152 if (u.startswith('meta/') and
152 if (u.startswith('meta/') and
153 (u.endswith('.i') or u.endswith('.d'))):
153 (u.endswith('.i') or u.endswith('.d'))):
154 yield (u, e, s)
154 yield (u, e, s)
155
155
156 # Return .d and .i files that do not match the shallow pattern
156 # Return .d and .i files that do not match the shallow pattern
157 match = state.match
157 match = state.match
158 if match and not match.always():
158 if match and not match.always():
159 for (u, e, s) in repo.store.datafiles():
159 for (u, e, s) in repo.store.datafiles():
160 f = u[5:-2] # trim data/... and .i/.d
160 f = u[5:-2] # trim data/... and .i/.d
161 if not state.match(f):
161 if not state.match(f):
162 yield (u, e, s)
162 yield (u, e, s)
163
163
164 for x in repo.store.topfiles():
164 for x in repo.store.topfiles():
165 if state.noflatmf and x[0][:11] == '00manifest.':
165 if state.noflatmf and x[0][:11] == '00manifest.':
166 continue
166 continue
167 yield x
167 yield x
168
168
169 elif shallowutil.isenabled(repo):
169 elif shallowutil.isenabled(repo):
170 # don't allow cloning from a shallow repo to a full repo
170 # don't allow cloning from a shallow repo to a full repo
171 # since it would require fetching every version of every
171 # since it would require fetching every version of every
172 # file in order to create the revlogs.
172 # file in order to create the revlogs.
173 raise error.Abort(_("Cannot clone from a shallow repo "
173 raise error.Abort(_("Cannot clone from a shallow repo "
174 "to a full repo."))
174 "to a full repo."))
175 else:
175 else:
176 for x in orig(repo, matcher):
176 for x in orig(repo, matcher):
177 yield x
177 yield x
178
178
179 extensions.wrapfunction(streamclone, '_walkstreamfiles', _walkstreamfiles)
179 extensions.wrapfunction(streamclone, '_walkstreamfiles', _walkstreamfiles)
180
180
181 # expose remotefilelog capabilities
181 # expose remotefilelog capabilities
182 def _capabilities(orig, repo, proto):
182 def _capabilities(orig, repo, proto):
183 caps = orig(repo, proto)
183 caps = orig(repo, proto)
184 if (shallowutil.isenabled(repo) or ui.configbool('remotefilelog',
184 if (shallowutil.isenabled(repo) or ui.configbool('remotefilelog',
185 'server')):
185 'server')):
186 if isinstance(proto, _sshv1server):
186 if isinstance(proto, _sshv1server):
187 # legacy getfiles method which only works over ssh
187 # legacy getfiles method which only works over ssh
188 caps.append(constants.NETWORK_CAP_LEGACY_SSH_GETFILES)
188 caps.append(constants.NETWORK_CAP_LEGACY_SSH_GETFILES)
189 caps.append('x_rfl_getflogheads')
189 caps.append('x_rfl_getflogheads')
190 caps.append('x_rfl_getfile')
190 caps.append('x_rfl_getfile')
191 return caps
191 return caps
192 extensions.wrapfunction(wireprotov1server, '_capabilities', _capabilities)
192 extensions.wrapfunction(wireprotov1server, '_capabilities', _capabilities)
193
193
194 def _adjustlinkrev(orig, self, *args, **kwargs):
194 def _adjustlinkrev(orig, self, *args, **kwargs):
195 # When generating file blobs, taking the real path is too slow on large
195 # When generating file blobs, taking the real path is too slow on large
196 # repos, so force it to just return the linkrev directly.
196 # repos, so force it to just return the linkrev directly.
197 repo = self._repo
197 repo = self._repo
198 if util.safehasattr(repo, 'forcelinkrev') and repo.forcelinkrev:
198 if util.safehasattr(repo, 'forcelinkrev') and repo.forcelinkrev:
199 return self._filelog.linkrev(self._filelog.rev(self._filenode))
199 return self._filelog.linkrev(self._filelog.rev(self._filenode))
200 return orig(self, *args, **kwargs)
200 return orig(self, *args, **kwargs)
201
201
202 extensions.wrapfunction(
202 extensions.wrapfunction(
203 context.basefilectx, '_adjustlinkrev', _adjustlinkrev)
203 context.basefilectx, '_adjustlinkrev', _adjustlinkrev)
204
204
205 def _iscmd(orig, cmd):
205 def _iscmd(orig, cmd):
206 if cmd == 'x_rfl_getfiles':
206 if cmd == 'x_rfl_getfiles':
207 return False
207 return False
208 return orig(cmd)
208 return orig(cmd)
209
209
210 extensions.wrapfunction(wireprotoserver, 'iscmd', _iscmd)
210 extensions.wrapfunction(wireprotoserver, 'iscmd', _iscmd)
211
211
212 def _loadfileblob(repo, cachepath, path, node):
212 def _loadfileblob(repo, cachepath, path, node):
213 filecachepath = os.path.join(cachepath, path, hex(node))
213 filecachepath = os.path.join(cachepath, path, hex(node))
214 if not os.path.exists(filecachepath) or os.path.getsize(filecachepath) == 0:
214 if not os.path.exists(filecachepath) or os.path.getsize(filecachepath) == 0:
215 filectx = repo.filectx(path, fileid=node)
215 filectx = repo.filectx(path, fileid=node)
216 if filectx.node() == nullid:
216 if filectx.node() == nullid:
217 repo.changelog = changelog.changelog(repo.svfs)
217 repo.changelog = changelog.changelog(repo.svfs)
218 filectx = repo.filectx(path, fileid=node)
218 filectx = repo.filectx(path, fileid=node)
219
219
220 text = createfileblob(filectx)
220 text = createfileblob(filectx)
221 # TODO configurable compression engines
221 # TODO configurable compression engines
222 text = zlib.compress(text)
222 text = zlib.compress(text)
223
223
224 # everything should be user & group read/writable
224 # everything should be user & group read/writable
225 oldumask = os.umask(0o002)
225 oldumask = os.umask(0o002)
226 try:
226 try:
227 dirname = os.path.dirname(filecachepath)
227 dirname = os.path.dirname(filecachepath)
228 if not os.path.exists(dirname):
228 if not os.path.exists(dirname):
229 try:
229 try:
230 os.makedirs(dirname)
230 os.makedirs(dirname)
231 except OSError as ex:
231 except OSError as ex:
232 if ex.errno != errno.EEXIST:
232 if ex.errno != errno.EEXIST:
233 raise
233 raise
234
234
235 f = None
235 f = None
236 try:
236 try:
237 f = util.atomictempfile(filecachepath, "wb")
237 f = util.atomictempfile(filecachepath, "wb")
238 f.write(text)
238 f.write(text)
239 except (IOError, OSError):
239 except (IOError, OSError):
240 # Don't abort if the user only has permission to read,
240 # Don't abort if the user only has permission to read,
241 # and not write.
241 # and not write.
242 pass
242 pass
243 finally:
243 finally:
244 if f:
244 if f:
245 f.close()
245 f.close()
246 finally:
246 finally:
247 os.umask(oldumask)
247 os.umask(oldumask)
248 else:
248 else:
249 with open(filecachepath, "rb") as f:
249 with open(filecachepath, "rb") as f:
250 text = f.read()
250 text = f.read()
251 return text
251 return text
252
252
253 def getflogheads(repo, proto, path):
253 def getflogheads(repo, proto, path):
254 """A server api for requesting a filelog's heads
254 """A server api for requesting a filelog's heads
255 """
255 """
256 flog = repo.file(path)
256 flog = repo.file(path)
257 heads = flog.heads()
257 heads = flog.heads()
258 return '\n'.join((hex(head) for head in heads if head != nullid))
258 return '\n'.join((hex(head) for head in heads if head != nullid))
259
259
260 def getfile(repo, proto, file, node):
260 def getfile(repo, proto, file, node):
261 """A server api for requesting a particular version of a file. Can be used
261 """A server api for requesting a particular version of a file. Can be used
262 in batches to request many files at once. The return protocol is:
262 in batches to request many files at once. The return protocol is:
263 <errorcode>\0<data/errormsg> where <errorcode> is 0 for success or
263 <errorcode>\0<data/errormsg> where <errorcode> is 0 for success or
264 non-zero for an error.
264 non-zero for an error.
265
265
266 data is a compressed blob with revlog flag and ancestors information. See
266 data is a compressed blob with revlog flag and ancestors information. See
267 createfileblob for its content.
267 createfileblob for its content.
268 """
268 """
269 if shallowutil.isenabled(repo):
269 if shallowutil.isenabled(repo):
270 return '1\0' + _('cannot fetch remote files from shallow repo')
270 return '1\0' + _('cannot fetch remote files from shallow repo')
271 cachepath = repo.ui.config("remotefilelog", "servercachepath")
271 cachepath = repo.ui.config("remotefilelog", "servercachepath")
272 if not cachepath:
272 if not cachepath:
273 cachepath = os.path.join(repo.path, "remotefilelogcache")
273 cachepath = os.path.join(repo.path, "remotefilelogcache")
274 node = bin(node.strip())
274 node = bin(node.strip())
275 if node == nullid:
275 if node == nullid:
276 return '0\0'
276 return '0\0'
277 return '0\0' + _loadfileblob(repo, cachepath, file, node)
277 return '0\0' + _loadfileblob(repo, cachepath, file, node)
278
278
279 def getfiles(repo, proto):
279 def getfiles(repo, proto):
280 """A server api for requesting particular versions of particular files.
280 """A server api for requesting particular versions of particular files.
281 """
281 """
282 if shallowutil.isenabled(repo):
282 if shallowutil.isenabled(repo):
283 raise error.Abort(_('cannot fetch remote files from shallow repo'))
283 raise error.Abort(_('cannot fetch remote files from shallow repo'))
284 if not isinstance(proto, _sshv1server):
284 if not isinstance(proto, _sshv1server):
285 raise error.Abort(_('cannot fetch remote files over non-ssh protocol'))
285 raise error.Abort(_('cannot fetch remote files over non-ssh protocol'))
286
286
287 def streamer():
287 def streamer():
288 fin = proto._fin
288 fin = proto._fin
289
289
290 cachepath = repo.ui.config("remotefilelog", "servercachepath")
290 cachepath = repo.ui.config("remotefilelog", "servercachepath")
291 if not cachepath:
291 if not cachepath:
292 cachepath = os.path.join(repo.path, "remotefilelogcache")
292 cachepath = os.path.join(repo.path, "remotefilelogcache")
293
293
294 while True:
294 while True:
295 request = fin.readline()[:-1]
295 request = fin.readline()[:-1]
296 if not request:
296 if not request:
297 break
297 break
298
298
299 node = bin(request[:40])
299 node = bin(request[:40])
300 if node == nullid:
300 if node == nullid:
301 yield '0\n'
301 yield '0\n'
302 continue
302 continue
303
303
304 path = request[40:]
304 path = request[40:]
305
305
306 text = _loadfileblob(repo, cachepath, path, node)
306 text = _loadfileblob(repo, cachepath, path, node)
307
307
308 yield '%d\n%s' % (len(text), text)
308 yield '%d\n%s' % (len(text), text)
309
309
310 # it would be better to only flush after processing a whole batch
310 # it would be better to only flush after processing a whole batch
311 # but currently we don't know if there are more requests coming
311 # but currently we don't know if there are more requests coming
312 proto._fout.flush()
312 proto._fout.flush()
313 return wireprototypes.streamres(streamer())
313 return wireprototypes.streamres(streamer())
314
314
315 def createfileblob(filectx):
315 def createfileblob(filectx):
316 """
316 """
317 format:
317 format:
318 v0:
318 v0:
319 str(len(rawtext)) + '\0' + rawtext + ancestortext
319 str(len(rawtext)) + '\0' + rawtext + ancestortext
320 v1:
320 v1:
321 'v1' + '\n' + metalist + '\0' + rawtext + ancestortext
321 'v1' + '\n' + metalist + '\0' + rawtext + ancestortext
322 metalist := metalist + '\n' + meta | meta
322 metalist := metalist + '\n' + meta | meta
323 meta := sizemeta | flagmeta
323 meta := sizemeta | flagmeta
324 sizemeta := METAKEYSIZE + str(len(rawtext))
324 sizemeta := METAKEYSIZE + str(len(rawtext))
325 flagmeta := METAKEYFLAG + str(flag)
325 flagmeta := METAKEYFLAG + str(flag)
326
326
327 note: sizemeta must exist. METAKEYFLAG and METAKEYSIZE must have a
327 note: sizemeta must exist. METAKEYFLAG and METAKEYSIZE must have a
328 length of 1.
328 length of 1.
329 """
329 """
330 flog = filectx.filelog()
330 flog = filectx.filelog()
331 frev = filectx.filerev()
331 frev = filectx.filerev()
332 revlogflags = flog._revlog.flags(frev)
332 revlogflags = flog._revlog.flags(frev)
333 if revlogflags == 0:
333 if revlogflags == 0:
334 # normal files
334 # normal files
335 text = filectx.data()
335 text = filectx.data()
336 else:
336 else:
337 # lfs, read raw revision data
337 # lfs, read raw revision data
338 text = flog.revision(frev, raw=True)
338 text = flog.revision(frev, raw=True)
339
339
340 repo = filectx._repo
340 repo = filectx._repo
341
341
342 ancestors = [filectx]
342 ancestors = [filectx]
343
343
344 try:
344 try:
345 repo.forcelinkrev = True
345 repo.forcelinkrev = True
346 ancestors.extend([f for f in filectx.ancestors()])
346 ancestors.extend([f for f in filectx.ancestors()])
347
347
348 ancestortext = ""
348 ancestortext = ""
349 for ancestorctx in ancestors:
349 for ancestorctx in ancestors:
350 parents = ancestorctx.parents()
350 parents = ancestorctx.parents()
351 p1 = nullid
351 p1 = nullid
352 p2 = nullid
352 p2 = nullid
353 if len(parents) > 0:
353 if len(parents) > 0:
354 p1 = parents[0].filenode()
354 p1 = parents[0].filenode()
355 if len(parents) > 1:
355 if len(parents) > 1:
356 p2 = parents[1].filenode()
356 p2 = parents[1].filenode()
357
357
358 copyname = ""
358 copyname = ""
359 rename = ancestorctx.renamed()
359 rename = ancestorctx.renamed()
360 if rename:
360 if rename:
361 copyname = rename[0]
361 copyname = rename[0]
362 linknode = ancestorctx.node()
362 linknode = ancestorctx.node()
363 ancestortext += "%s%s%s%s%s\0" % (
363 ancestortext += "%s%s%s%s%s\0" % (
364 ancestorctx.filenode(), p1, p2, linknode,
364 ancestorctx.filenode(), p1, p2, linknode,
365 copyname)
365 copyname)
366 finally:
366 finally:
367 repo.forcelinkrev = False
367 repo.forcelinkrev = False
368
368
369 header = shallowutil.buildfileblobheader(len(text), revlogflags)
369 header = shallowutil.buildfileblobheader(len(text), revlogflags)
370
370
371 return "%s\0%s%s" % (header, text, ancestortext)
371 return "%s\0%s%s" % (header, text, ancestortext)
372
372
373 def gcserver(ui, repo):
373 def gcserver(ui, repo):
374 if not repo.ui.configbool("remotefilelog", "server"):
374 if not repo.ui.configbool("remotefilelog", "server"):
375 return
375 return
376
376
377 neededfiles = set()
377 neededfiles = set()
378 heads = repo.revs("heads(tip~25000:) - null")
378 heads = repo.revs("heads(tip~25000:) - null")
379
379
380 cachepath = repo.vfs.join("remotefilelogcache")
380 cachepath = repo.vfs.join("remotefilelogcache")
381 for head in heads:
381 for head in heads:
382 mf = repo[head].manifest()
382 mf = repo[head].manifest()
383 for filename, filenode in mf.iteritems():
383 for filename, filenode in mf.iteritems():
384 filecachepath = os.path.join(cachepath, filename, hex(filenode))
384 filecachepath = os.path.join(cachepath, filename, hex(filenode))
385 neededfiles.add(filecachepath)
385 neededfiles.add(filecachepath)
386
386
387 # delete unneeded older files
387 # delete unneeded older files
388 days = repo.ui.configint("remotefilelog", "serverexpiration")
388 days = repo.ui.configint("remotefilelog", "serverexpiration")
389 expiration = time.time() - (days * 24 * 60 * 60)
389 expiration = time.time() - (days * 24 * 60 * 60)
390
390
391 progress = ui.makeprogress(_("removing old server cache"), unit="files")
391 progress = ui.makeprogress(_("removing old server cache"), unit="files")
392 progress.update(0)
392 progress.update(0)
393 for root, dirs, files in os.walk(cachepath):
393 for root, dirs, files in os.walk(cachepath):
394 for file in files:
394 for file in files:
395 filepath = os.path.join(root, file)
395 filepath = os.path.join(root, file)
396 progress.increment()
396 progress.increment()
397 if filepath in neededfiles:
397 if filepath in neededfiles:
398 continue
398 continue
399
399
400 stat = os.stat(filepath)
400 stat = os.stat(filepath)
401 if stat.st_mtime < expiration:
401 if stat.st_mtime < expiration:
402 os.remove(filepath)
402 os.remove(filepath)
403
403
404 progress.complete()
404 progress.complete()
@@ -1,293 +1,293 @@
1 # shallowbundle.py - bundle10 implementation for use with shallow repositories
1 # shallowbundle.py - bundle10 implementation for use with shallow repositories
2 #
2 #
3 # Copyright 2013 Facebook, Inc.
3 # Copyright 2013 Facebook, Inc.
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7 from __future__ import absolute_import
7 from __future__ import absolute_import
8
8
9 from mercurial.i18n import _
9 from mercurial.i18n import _
10 from mercurial.node import bin, hex, nullid
10 from mercurial.node import bin, hex, nullid
11 from mercurial import (
11 from mercurial import (
12 bundlerepo,
12 bundlerepo,
13 changegroup,
13 changegroup,
14 error,
14 error,
15 match,
15 match,
16 mdiff,
16 mdiff,
17 pycompat,
17 pycompat,
18 )
18 )
19 from . import (
19 from . import (
20 constants,
20 constants,
21 remotefilelog,
21 remotefilelog,
22 shallowutil,
22 shallowutil,
23 )
23 )
24
24
25 NoFiles = 0
25 NoFiles = 0
26 LocalFiles = 1
26 LocalFiles = 1
27 AllFiles = 2
27 AllFiles = 2
28
28
29 def shallowgroup(cls, self, nodelist, rlog, lookup, units=None, reorder=None):
29 def shallowgroup(cls, self, nodelist, rlog, lookup, units=None, reorder=None):
30 if not isinstance(rlog, remotefilelog.remotefilelog):
30 if not isinstance(rlog, remotefilelog.remotefilelog):
31 for c in super(cls, self).group(nodelist, rlog, lookup,
31 for c in super(cls, self).group(nodelist, rlog, lookup,
32 units=units):
32 units=units):
33 yield c
33 yield c
34 return
34 return
35
35
36 if len(nodelist) == 0:
36 if len(nodelist) == 0:
37 yield self.close()
37 yield self.close()
38 return
38 return
39
39
40 nodelist = shallowutil.sortnodes(nodelist, rlog.parents)
40 nodelist = shallowutil.sortnodes(nodelist, rlog.parents)
41
41
42 # add the parent of the first rev
42 # add the parent of the first rev
43 p = rlog.parents(nodelist[0])[0]
43 p = rlog.parents(nodelist[0])[0]
44 nodelist.insert(0, p)
44 nodelist.insert(0, p)
45
45
46 # build deltas
46 # build deltas
47 for i in pycompat.xrange(len(nodelist) - 1):
47 for i in pycompat.xrange(len(nodelist) - 1):
48 prev, curr = nodelist[i], nodelist[i + 1]
48 prev, curr = nodelist[i], nodelist[i + 1]
49 linknode = lookup(curr)
49 linknode = lookup(curr)
50 for c in self.nodechunk(rlog, curr, prev, linknode):
50 for c in self.nodechunk(rlog, curr, prev, linknode):
51 yield c
51 yield c
52
52
53 yield self.close()
53 yield self.close()
54
54
55 class shallowcg1packer(changegroup.cgpacker):
55 class shallowcg1packer(changegroup.cgpacker):
56 def generate(self, commonrevs, clnodes, fastpathlinkrev, source):
56 def generate(self, commonrevs, clnodes, fastpathlinkrev, source):
57 if shallowutil.isenabled(self._repo):
57 if shallowutil.isenabled(self._repo):
58 fastpathlinkrev = False
58 fastpathlinkrev = False
59
59
60 return super(shallowcg1packer, self).generate(commonrevs, clnodes,
60 return super(shallowcg1packer, self).generate(commonrevs, clnodes,
61 fastpathlinkrev, source)
61 fastpathlinkrev, source)
62
62
63 def group(self, nodelist, rlog, lookup, units=None, reorder=None):
63 def group(self, nodelist, rlog, lookup, units=None, reorder=None):
64 return shallowgroup(shallowcg1packer, self, nodelist, rlog, lookup,
64 return shallowgroup(shallowcg1packer, self, nodelist, rlog, lookup,
65 units=units)
65 units=units)
66
66
67 def generatefiles(self, changedfiles, *args):
67 def generatefiles(self, changedfiles, *args):
68 try:
68 try:
69 linknodes, commonrevs, source = args
69 linknodes, commonrevs, source = args
70 except ValueError:
70 except ValueError:
71 commonrevs, source, mfdicts, fastpathlinkrev, fnodes, clrevs = args
71 commonrevs, source, mfdicts, fastpathlinkrev, fnodes, clrevs = args
72 if shallowutil.isenabled(self._repo):
72 if shallowutil.isenabled(self._repo):
73 repo = self._repo
73 repo = self._repo
74 if isinstance(repo, bundlerepo.bundlerepository):
74 if isinstance(repo, bundlerepo.bundlerepository):
75 # If the bundle contains filelogs, we can't pull from it, since
75 # If the bundle contains filelogs, we can't pull from it, since
76 # bundlerepo is heavily tied to revlogs. Instead, require that
76 # bundlerepo is heavily tied to revlogs. Instead, require that
77 # the user use `hg unbundle`.
77 # the user use `hg unbundle`.
78 # Force load the filelog data.
78 # Force load the filelog data.
79 bundlerepo.bundlerepository.file(repo, 'foo')
79 bundlerepo.bundlerepository.file(repo, 'foo')
80 if repo._cgfilespos:
80 if repo._cgfilespos:
81 raise error.Abort("cannot pull from full bundles",
81 raise error.Abort("cannot pull from full bundles",
82 hint="use `hg unbundle` instead")
82 hint="use `hg unbundle` instead")
83 return []
83 return []
84 filestosend = self.shouldaddfilegroups(source)
84 filestosend = self.shouldaddfilegroups(source)
85 if filestosend == NoFiles:
85 if filestosend == NoFiles:
86 changedfiles = list([f for f in changedfiles
86 changedfiles = list([f for f in changedfiles
87 if not repo.shallowmatch(f)])
87 if not repo.shallowmatch(f)])
88
88
89 return super(shallowcg1packer, self).generatefiles(
89 return super(shallowcg1packer, self).generatefiles(
90 changedfiles, *args)
90 changedfiles, *args)
91
91
92 def shouldaddfilegroups(self, source):
92 def shouldaddfilegroups(self, source):
93 repo = self._repo
93 repo = self._repo
94 if not shallowutil.isenabled(repo):
94 if not shallowutil.isenabled(repo):
95 return AllFiles
95 return AllFiles
96
96
97 if source == "push" or source == "bundle":
97 if source == "push" or source == "bundle":
98 return AllFiles
98 return AllFiles
99
99
100 caps = self._bundlecaps or []
100 caps = self._bundlecaps or []
101 if source == "serve" or source == "pull":
101 if source == "serve" or source == "pull":
102 if constants.BUNDLE2_CAPABLITY in caps:
102 if constants.BUNDLE2_CAPABLITY in caps:
103 return LocalFiles
103 return LocalFiles
104 else:
104 else:
105 # Serving to a full repo requires us to serve everything
105 # Serving to a full repo requires us to serve everything
106 repo.ui.warn(_("pulling from a shallow repo\n"))
106 repo.ui.warn(_("pulling from a shallow repo\n"))
107 return AllFiles
107 return AllFiles
108
108
109 return NoFiles
109 return NoFiles
110
110
111 def prune(self, rlog, missing, commonrevs):
111 def prune(self, rlog, missing, commonrevs):
112 if not isinstance(rlog, remotefilelog.remotefilelog):
112 if not isinstance(rlog, remotefilelog.remotefilelog):
113 return super(shallowcg1packer, self).prune(rlog, missing,
113 return super(shallowcg1packer, self).prune(rlog, missing,
114 commonrevs)
114 commonrevs)
115
115
116 repo = self._repo
116 repo = self._repo
117 results = []
117 results = []
118 for fnode in missing:
118 for fnode in missing:
119 fctx = repo.filectx(rlog.filename, fileid=fnode)
119 fctx = repo.filectx(rlog.filename, fileid=fnode)
120 if fctx.linkrev() not in commonrevs:
120 if fctx.linkrev() not in commonrevs:
121 results.append(fnode)
121 results.append(fnode)
122 return results
122 return results
123
123
124 def nodechunk(self, revlog, node, prevnode, linknode):
124 def nodechunk(self, revlog, node, prevnode, linknode):
125 prefix = ''
125 prefix = ''
126 if prevnode == nullid:
126 if prevnode == nullid:
127 delta = revlog.revision(node, raw=True)
127 delta = revlog.revision(node, raw=True)
128 prefix = mdiff.trivialdiffheader(len(delta))
128 prefix = mdiff.trivialdiffheader(len(delta))
129 else:
129 else:
130 # Actually uses remotefilelog.revdiff which works on nodes, not revs
130 # Actually uses remotefilelog.revdiff which works on nodes, not revs
131 delta = revlog.revdiff(prevnode, node)
131 delta = revlog.revdiff(prevnode, node)
132 p1, p2 = revlog.parents(node)
132 p1, p2 = revlog.parents(node)
133 flags = revlog.flags(node)
133 flags = revlog.flags(node)
134 meta = self.builddeltaheader(node, p1, p2, prevnode, linknode, flags)
134 meta = self.builddeltaheader(node, p1, p2, prevnode, linknode, flags)
135 meta += prefix
135 meta += prefix
136 l = len(meta) + len(delta)
136 l = len(meta) + len(delta)
137 yield changegroup.chunkheader(l)
137 yield changegroup.chunkheader(l)
138 yield meta
138 yield meta
139 yield delta
139 yield delta
140
140
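nodechunk() above frames each file revision as a length-prefixed chunk: a delta header naming the node, its parents, the delta base, the linknode and flags, an optional trivial-diff prefix when a full text is sent, then the delta bytes. The sketch below only illustrates the length arithmetic; the exact header layout and whether the length field counts itself are assumptions here, not a wire-format specification.

import struct

def chunkheader(payload_len):
    # Assumed framing: a 4-byte big-endian length that includes itself.
    return struct.pack(">l", payload_len + 4)

def trivialdiffheader(textlen):
    # A delta that replaces nothing and appends textlen bytes of full text.
    return struct.pack(">lll", 0, 0, textlen)

text = b"hello world\n"
meta = b"<node/p1/p2/linknode/flags header>"   # placeholder header bytes
delta = trivialdiffheader(len(text)) + text
frame = chunkheader(len(meta) + len(delta)) + meta + delta
print(len(frame))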
141 def makechangegroup(orig, repo, outgoing, version, source, *args, **kwargs):
141 def makechangegroup(orig, repo, outgoing, version, source, *args, **kwargs):
142 if not shallowutil.isenabled(repo):
142 if not shallowutil.isenabled(repo):
143 return orig(repo, outgoing, version, source, *args, **kwargs)
143 return orig(repo, outgoing, version, source, *args, **kwargs)
144
144
145 original = repo.shallowmatch
145 original = repo.shallowmatch
146 try:
146 try:
147 # if serving, only send files the client has patterns for
147 # if serving, only send files the client has patterns for
148 if source == 'serve':
148 if source == 'serve':
149 bundlecaps = kwargs.get(r'bundlecaps')
149 bundlecaps = kwargs.get(r'bundlecaps')
150 includepattern = None
150 includepattern = None
151 excludepattern = None
151 excludepattern = None
152 for cap in (bundlecaps or []):
152 for cap in (bundlecaps or []):
153 if cap.startswith("includepattern="):
153 if cap.startswith("includepattern="):
154 raw = cap[len("includepattern="):]
154 raw = cap[len("includepattern="):]
155 if raw:
155 if raw:
156 includepattern = raw.split('\0')
156 includepattern = raw.split('\0')
157 elif cap.startswith("excludepattern="):
157 elif cap.startswith("excludepattern="):
158 raw = cap[len("excludepattern="):]
158 raw = cap[len("excludepattern="):]
159 if raw:
159 if raw:
160 excludepattern = raw.split('\0')
160 excludepattern = raw.split('\0')
161 if includepattern or excludepattern:
161 if includepattern or excludepattern:
162 repo.shallowmatch = match.match(repo.root, '', None,
162 repo.shallowmatch = match.match(repo.root, '', None,
163 includepattern, excludepattern)
163 includepattern, excludepattern)
164 else:
164 else:
165 repo.shallowmatch = match.always(repo.root, '')
165 repo.shallowmatch = match.always()
166 return orig(repo, outgoing, version, source, *args, **kwargs)
166 return orig(repo, outgoing, version, source, *args, **kwargs)
167 finally:
167 finally:
168 repo.shallowmatch = original
168 repo.shallowmatch = original
169
169
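When serving, makechangegroup() above recovers narrow include/exclude patterns from bundle capabilities, where several patterns are packed into one capability string separated by NUL bytes. A self-contained sketch of that parsing (the capability names match the strings used above, but the sample values are made up):

def parse_pattern_caps(bundlecaps):
    include = exclude = None
    for cap in bundlecaps or []:
        if cap.startswith(b"includepattern="):
            raw = cap[len(b"includepattern="):]
            include = raw.split(b"\0") if raw else None
        elif cap.startswith(b"excludepattern="):
            raw = cap[len(b"excludepattern="):]
            exclude = raw.split(b"\0") if raw else None
    return include, exclude

caps = [b"includepattern=path:foo\0path:bar", b"excludepattern="]
print(parse_pattern_caps(caps))
# ([b'path:foo', b'path:bar'], None)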
170 def addchangegroupfiles(orig, repo, source, revmap, trp, expectedfiles, *args):
170 def addchangegroupfiles(orig, repo, source, revmap, trp, expectedfiles, *args):
171 if not shallowutil.isenabled(repo):
171 if not shallowutil.isenabled(repo):
172 return orig(repo, source, revmap, trp, expectedfiles, *args)
172 return orig(repo, source, revmap, trp, expectedfiles, *args)
173
173
174 newfiles = 0
174 newfiles = 0
175 visited = set()
175 visited = set()
176 revisiondatas = {}
176 revisiondatas = {}
177 queue = []
177 queue = []
178
178
179 # Normal Mercurial processes each file one at a time, adding all
179 # Normal Mercurial processes each file one at a time, adding all
180 # the new revisions for that file at once. In remotefilelog a file
180 # the new revisions for that file at once. In remotefilelog a file
181 # revision may depend on a different file's revision (in the case
181 # revision may depend on a different file's revision (in the case
182 # of a rename/copy), so we must lay all revisions down across all
182 # of a rename/copy), so we must lay all revisions down across all
183 # files in topological order.
183 # files in topological order.
184
184
185 # read all the file chunks but don't add them
185 # read all the file chunks but don't add them
186 progress = repo.ui.makeprogress(_('files'), total=expectedfiles)
186 progress = repo.ui.makeprogress(_('files'), total=expectedfiles)
187 while True:
187 while True:
188 chunkdata = source.filelogheader()
188 chunkdata = source.filelogheader()
189 if not chunkdata:
189 if not chunkdata:
190 break
190 break
191 f = chunkdata["filename"]
191 f = chunkdata["filename"]
192 repo.ui.debug("adding %s revisions\n" % f)
192 repo.ui.debug("adding %s revisions\n" % f)
193 progress.increment()
193 progress.increment()
194
194
195 if not repo.shallowmatch(f):
195 if not repo.shallowmatch(f):
196 fl = repo.file(f)
196 fl = repo.file(f)
197 deltas = source.deltaiter()
197 deltas = source.deltaiter()
198 fl.addgroup(deltas, revmap, trp)
198 fl.addgroup(deltas, revmap, trp)
199 continue
199 continue
200
200
201 chain = None
201 chain = None
202 while True:
202 while True:
203 # returns: (node, p1, p2, cs, deltabase, delta, flags) or None
203 # returns: (node, p1, p2, cs, deltabase, delta, flags) or None
204 revisiondata = source.deltachunk(chain)
204 revisiondata = source.deltachunk(chain)
205 if not revisiondata:
205 if not revisiondata:
206 break
206 break
207
207
208 chain = revisiondata[0]
208 chain = revisiondata[0]
209
209
210 revisiondatas[(f, chain)] = revisiondata
210 revisiondatas[(f, chain)] = revisiondata
211 queue.append((f, chain))
211 queue.append((f, chain))
212
212
213 if f not in visited:
213 if f not in visited:
214 newfiles += 1
214 newfiles += 1
215 visited.add(f)
215 visited.add(f)
216
216
217 if chain is None:
217 if chain is None:
218 raise error.Abort(_("received file revlog group is empty"))
218 raise error.Abort(_("received file revlog group is empty"))
219
219
220 processed = set()
220 processed = set()
221 def available(f, node, depf, depnode):
221 def available(f, node, depf, depnode):
222 if depnode != nullid and (depf, depnode) not in processed:
222 if depnode != nullid and (depf, depnode) not in processed:
223 if (depf, depnode) not in revisiondatas:
223 if (depf, depnode) not in revisiondatas:
224 # It's not in the changegroup, assume it's already
224 # It's not in the changegroup, assume it's already
225 # in the repo
225 # in the repo
226 return True
226 return True
227 # re-add self to queue
227 # re-add self to queue
228 queue.insert(0, (f, node))
228 queue.insert(0, (f, node))
229 # add dependency in front
229 # add dependency in front
230 queue.insert(0, (depf, depnode))
230 queue.insert(0, (depf, depnode))
231 return False
231 return False
232 return True
232 return True
233
233
234 skipcount = 0
234 skipcount = 0
235
235
236 # Prefetch the non-bundled revisions that we will need
236 # Prefetch the non-bundled revisions that we will need
237 prefetchfiles = []
237 prefetchfiles = []
238 for f, node in queue:
238 for f, node in queue:
239 revisiondata = revisiondatas[(f, node)]
239 revisiondata = revisiondatas[(f, node)]
240 # revisiondata: (node, p1, p2, cs, deltabase, delta, flags)
240 # revisiondata: (node, p1, p2, cs, deltabase, delta, flags)
241 dependents = [revisiondata[1], revisiondata[2], revisiondata[4]]
241 dependents = [revisiondata[1], revisiondata[2], revisiondata[4]]
242
242
243 for dependent in dependents:
243 for dependent in dependents:
244 if dependent == nullid or (f, dependent) in revisiondatas:
244 if dependent == nullid or (f, dependent) in revisiondatas:
245 continue
245 continue
246 prefetchfiles.append((f, hex(dependent)))
246 prefetchfiles.append((f, hex(dependent)))
247
247
248 repo.fileservice.prefetch(prefetchfiles)
248 repo.fileservice.prefetch(prefetchfiles)
249
249
250 # Apply the revisions in topological order such that a revision
250 # Apply the revisions in topological order such that a revision
251 # is only written once its deltabase and parents have been written.
251 # is only written once its deltabase and parents have been written.
252 while queue:
252 while queue:
253 f, node = queue.pop(0)
253 f, node = queue.pop(0)
254 if (f, node) in processed:
254 if (f, node) in processed:
255 continue
255 continue
256
256
257 skipcount += 1
257 skipcount += 1
258 if skipcount > len(queue) + 1:
258 if skipcount > len(queue) + 1:
259 raise error.Abort(_("circular node dependency"))
259 raise error.Abort(_("circular node dependency"))
260
260
261 fl = repo.file(f)
261 fl = repo.file(f)
262
262
263 revisiondata = revisiondatas[(f, node)]
263 revisiondata = revisiondatas[(f, node)]
264 # revisiondata: (node, p1, p2, cs, deltabase, delta, flags)
264 # revisiondata: (node, p1, p2, cs, deltabase, delta, flags)
265 node, p1, p2, linknode, deltabase, delta, flags = revisiondata
265 node, p1, p2, linknode, deltabase, delta, flags = revisiondata
266
266
267 if not available(f, node, f, deltabase):
267 if not available(f, node, f, deltabase):
268 continue
268 continue
269
269
270 base = fl.revision(deltabase, raw=True)
270 base = fl.revision(deltabase, raw=True)
271 text = mdiff.patch(base, delta)
271 text = mdiff.patch(base, delta)
272 if not isinstance(text, bytes):
272 if not isinstance(text, bytes):
273 text = bytes(text)
273 text = bytes(text)
274
274
275 meta, text = shallowutil.parsemeta(text)
275 meta, text = shallowutil.parsemeta(text)
276 if 'copy' in meta:
276 if 'copy' in meta:
277 copyfrom = meta['copy']
277 copyfrom = meta['copy']
278 copynode = bin(meta['copyrev'])
278 copynode = bin(meta['copyrev'])
279 if not available(f, node, copyfrom, copynode):
279 if not available(f, node, copyfrom, copynode):
280 continue
280 continue
281
281
282 for p in [p1, p2]:
282 for p in [p1, p2]:
283 if p != nullid:
283 if p != nullid:
284 if not available(f, node, f, p):
284 if not available(f, node, f, p):
285 continue
285 continue
286
286
287 fl.add(text, meta, trp, linknode, p1, p2)
287 fl.add(text, meta, trp, linknode, p1, p2)
288 processed.add((f, node))
288 processed.add((f, node))
289 skipcount = 0
289 skipcount = 0
290
290
291 progress.complete()
291 progress.complete()
292
292
293 return len(revisiondatas), newfiles
293 return len(revisiondatas), newfiles
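addchangegroupfiles() above cannot simply append revisions file by file: a revision may depend on its delta base, its parents, or a copy source in another file, so the loop keeps deferring a revision until its dependencies have been written, and aborts when a whole pass makes no progress (a cycle). The standalone sketch below reproduces only that scheduling idea, with slightly restructured bookkeeping (a deferral counter instead of front-of-queue reinsertion); nodes and dependencies are plain strings, not remotefilelog objects.

def apply_in_dependency_order(deps, queue):
    """deps maps a node to the nodes that must be applied before it."""
    queue = list(queue)
    applied, order, deferred = set(), [], 0
    while queue:
        node = queue.pop(0)
        # A dependency that is neither applied nor queued is assumed to be
        # in the repository already, mirroring available() above.
        pending = [d for d in deps.get(node, [])
                   if d not in applied and d in queue]
        if pending:
            queue.append(node)            # retry once its dependencies are done
            deferred += 1
            if deferred > len(queue):     # a full pass without progress
                raise RuntimeError("circular node dependency")
            continue
        order.append(node)
        applied.add(node)
        deferred = 0
    return order

print(apply_in_dependency_order({"b": ["a"], "c": ["b"]}, ["c", "b", "a"]))
# ['a', 'b', 'c']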
@@ -1,305 +1,305 @@
1 # shallowrepo.py - shallow repository that uses remote filelogs
1 # shallowrepo.py - shallow repository that uses remote filelogs
2 #
2 #
3 # Copyright 2013 Facebook, Inc.
3 # Copyright 2013 Facebook, Inc.
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7 from __future__ import absolute_import
7 from __future__ import absolute_import
8
8
9 import os
9 import os
10
10
11 from mercurial.i18n import _
11 from mercurial.i18n import _
12 from mercurial.node import hex, nullid, nullrev
12 from mercurial.node import hex, nullid, nullrev
13 from mercurial import (
13 from mercurial import (
14 encoding,
14 encoding,
15 error,
15 error,
16 localrepo,
16 localrepo,
17 match,
17 match,
18 scmutil,
18 scmutil,
19 sparse,
19 sparse,
20 util,
20 util,
21 )
21 )
22 from mercurial.utils import procutil
22 from mercurial.utils import procutil
23 from . import (
23 from . import (
24 connectionpool,
24 connectionpool,
25 constants,
25 constants,
26 contentstore,
26 contentstore,
27 datapack,
27 datapack,
28 fileserverclient,
28 fileserverclient,
29 historypack,
29 historypack,
30 metadatastore,
30 metadatastore,
31 remotefilectx,
31 remotefilectx,
32 remotefilelog,
32 remotefilelog,
33 shallowutil,
33 shallowutil,
34 )
34 )
35
35
36 if util.safehasattr(util, '_hgexecutable'):
36 if util.safehasattr(util, '_hgexecutable'):
37 # Before 5be286db
37 # Before 5be286db
38 _hgexecutable = util.hgexecutable
38 _hgexecutable = util.hgexecutable
39 else:
39 else:
40 from mercurial.utils import procutil
40 from mercurial.utils import procutil
41 _hgexecutable = procutil.hgexecutable
41 _hgexecutable = procutil.hgexecutable
42
42
43 # These make*stores functions are global so that other extensions can replace
43 # These make*stores functions are global so that other extensions can replace
44 # them.
44 # them.
45 def makelocalstores(repo):
45 def makelocalstores(repo):
46 """In-repo stores, like .hg/store/data; can not be discarded."""
46 """In-repo stores, like .hg/store/data; can not be discarded."""
47 localpath = os.path.join(repo.svfs.vfs.base, 'data')
47 localpath = os.path.join(repo.svfs.vfs.base, 'data')
48 if not os.path.exists(localpath):
48 if not os.path.exists(localpath):
49 os.makedirs(localpath)
49 os.makedirs(localpath)
50
50
51 # Instantiate local data stores
51 # Instantiate local data stores
52 localcontent = contentstore.remotefilelogcontentstore(
52 localcontent = contentstore.remotefilelogcontentstore(
53 repo, localpath, repo.name, shared=False)
53 repo, localpath, repo.name, shared=False)
54 localmetadata = metadatastore.remotefilelogmetadatastore(
54 localmetadata = metadatastore.remotefilelogmetadatastore(
55 repo, localpath, repo.name, shared=False)
55 repo, localpath, repo.name, shared=False)
56 return localcontent, localmetadata
56 return localcontent, localmetadata
57
57
58 def makecachestores(repo):
58 def makecachestores(repo):
59 """Typically machine-wide, cache of remote data; can be discarded."""
59 """Typically machine-wide, cache of remote data; can be discarded."""
60 # Instantiate shared cache stores
60 # Instantiate shared cache stores
61 cachepath = shallowutil.getcachepath(repo.ui)
61 cachepath = shallowutil.getcachepath(repo.ui)
62 cachecontent = contentstore.remotefilelogcontentstore(
62 cachecontent = contentstore.remotefilelogcontentstore(
63 repo, cachepath, repo.name, shared=True)
63 repo, cachepath, repo.name, shared=True)
64 cachemetadata = metadatastore.remotefilelogmetadatastore(
64 cachemetadata = metadatastore.remotefilelogmetadatastore(
65 repo, cachepath, repo.name, shared=True)
65 repo, cachepath, repo.name, shared=True)
66
66
67 repo.sharedstore = cachecontent
67 repo.sharedstore = cachecontent
68 repo.shareddatastores.append(cachecontent)
68 repo.shareddatastores.append(cachecontent)
69 repo.sharedhistorystores.append(cachemetadata)
69 repo.sharedhistorystores.append(cachemetadata)
70
70
71 return cachecontent, cachemetadata
71 return cachecontent, cachemetadata
72
72
73 def makeremotestores(repo, cachecontent, cachemetadata):
73 def makeremotestores(repo, cachecontent, cachemetadata):
74 """These stores fetch data from a remote server."""
74 """These stores fetch data from a remote server."""
75 # Instantiate remote stores
75 # Instantiate remote stores
76 repo.fileservice = fileserverclient.fileserverclient(repo)
76 repo.fileservice = fileserverclient.fileserverclient(repo)
77 remotecontent = contentstore.remotecontentstore(
77 remotecontent = contentstore.remotecontentstore(
78 repo.ui, repo.fileservice, cachecontent)
78 repo.ui, repo.fileservice, cachecontent)
79 remotemetadata = metadatastore.remotemetadatastore(
79 remotemetadata = metadatastore.remotemetadatastore(
80 repo.ui, repo.fileservice, cachemetadata)
80 repo.ui, repo.fileservice, cachemetadata)
81 return remotecontent, remotemetadata
81 return remotecontent, remotemetadata
82
82
83 def makepackstores(repo):
83 def makepackstores(repo):
84 """Packs are more efficient (to read from) cache stores."""
84 """Packs are more efficient (to read from) cache stores."""
85 # Instantiate pack stores
85 # Instantiate pack stores
86 packpath = shallowutil.getcachepackpath(repo,
86 packpath = shallowutil.getcachepackpath(repo,
87 constants.FILEPACK_CATEGORY)
87 constants.FILEPACK_CATEGORY)
88 packcontentstore = datapack.datapackstore(repo.ui, packpath)
88 packcontentstore = datapack.datapackstore(repo.ui, packpath)
89 packmetadatastore = historypack.historypackstore(repo.ui, packpath)
89 packmetadatastore = historypack.historypackstore(repo.ui, packpath)
90
90
91 repo.shareddatastores.append(packcontentstore)
91 repo.shareddatastores.append(packcontentstore)
92 repo.sharedhistorystores.append(packmetadatastore)
92 repo.sharedhistorystores.append(packmetadatastore)
93 shallowutil.reportpackmetrics(repo.ui, 'filestore', packcontentstore,
93 shallowutil.reportpackmetrics(repo.ui, 'filestore', packcontentstore,
94 packmetadatastore)
94 packmetadatastore)
95 return packcontentstore, packmetadatastore
95 return packcontentstore, packmetadatastore
96
96
97 def makeunionstores(repo):
97 def makeunionstores(repo):
98 """Union stores iterate the other stores and return the first result."""
98 """Union stores iterate the other stores and return the first result."""
99 repo.shareddatastores = []
99 repo.shareddatastores = []
100 repo.sharedhistorystores = []
100 repo.sharedhistorystores = []
101
101
102 packcontentstore, packmetadatastore = makepackstores(repo)
102 packcontentstore, packmetadatastore = makepackstores(repo)
103 cachecontent, cachemetadata = makecachestores(repo)
103 cachecontent, cachemetadata = makecachestores(repo)
104 localcontent, localmetadata = makelocalstores(repo)
104 localcontent, localmetadata = makelocalstores(repo)
105 remotecontent, remotemetadata = makeremotestores(repo, cachecontent,
105 remotecontent, remotemetadata = makeremotestores(repo, cachecontent,
106 cachemetadata)
106 cachemetadata)
107
107
108 # Instantiate union stores
108 # Instantiate union stores
109 repo.contentstore = contentstore.unioncontentstore(
109 repo.contentstore = contentstore.unioncontentstore(
110 packcontentstore, cachecontent,
110 packcontentstore, cachecontent,
111 localcontent, remotecontent, writestore=localcontent)
111 localcontent, remotecontent, writestore=localcontent)
112 repo.metadatastore = metadatastore.unionmetadatastore(
112 repo.metadatastore = metadatastore.unionmetadatastore(
113 packmetadatastore, cachemetadata, localmetadata, remotemetadata,
113 packmetadatastore, cachemetadata, localmetadata, remotemetadata,
114 writestore=localmetadata)
114 writestore=localmetadata)
115
115
116 fileservicedatawrite = cachecontent
116 fileservicedatawrite = cachecontent
117 fileservicehistorywrite = cachemetadata
117 fileservicehistorywrite = cachemetadata
118 repo.fileservice.setstore(repo.contentstore, repo.metadatastore,
118 repo.fileservice.setstore(repo.contentstore, repo.metadatastore,
119 fileservicedatawrite, fileservicehistorywrite)
119 fileservicedatawrite, fileservicehistorywrite)
120 shallowutil.reportpackmetrics(repo.ui, 'filestore',
120 shallowutil.reportpackmetrics(repo.ui, 'filestore',
121 packcontentstore, packmetadatastore)
121 packcontentstore, packmetadatastore)
122
122
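makeunionstores() above stacks pack, cache, local and remote stores behind a single union store, and a lookup walks the stores in order until one answers. The toy classes below mimic that first-hit behaviour; the class and method names are invented for illustration and do not correspond to remotefilelog's store interface.

class DictStore(object):
    def __init__(self, data):
        self.data = data
    def get(self, name, node):
        return self.data[(name, node)]

class UnionStore(object):
    def __init__(self, *stores):
        self.stores = stores
    def get(self, name, node):
        for store in self.stores:
            try:
                return store.get(name, node)   # first store that has it wins
            except KeyError:
                continue
        raise KeyError((name, node))

cache = DictStore({("foo", "n2"): b"cached text"})
local = DictStore({("foo", "n1"): b"local text"})
union = UnionStore(cache, local)
print(union.get("foo", "n1"))   # misses the cache, falls through to local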
123 def wraprepo(repo):
123 def wraprepo(repo):
124 class shallowrepository(repo.__class__):
124 class shallowrepository(repo.__class__):
125 @util.propertycache
125 @util.propertycache
126 def name(self):
126 def name(self):
127 return self.ui.config('remotefilelog', 'reponame')
127 return self.ui.config('remotefilelog', 'reponame')
128
128
129 @util.propertycache
129 @util.propertycache
130 def fallbackpath(self):
130 def fallbackpath(self):
131 path = repo.ui.config("remotefilelog", "fallbackpath",
131 path = repo.ui.config("remotefilelog", "fallbackpath",
132 repo.ui.config('paths', 'default'))
132 repo.ui.config('paths', 'default'))
133 if not path:
133 if not path:
134 raise error.Abort("no remotefilelog server "
134 raise error.Abort("no remotefilelog server "
135 "configured - is your .hg/hgrc trusted?")
135 "configured - is your .hg/hgrc trusted?")
136
136
137 return path
137 return path
138
138
139 def maybesparsematch(self, *revs, **kwargs):
139 def maybesparsematch(self, *revs, **kwargs):
140 '''
140 '''
141 A wrapper that allows the remotefilelog to invoke sparsematch() if
141 A wrapper that allows the remotefilelog to invoke sparsematch() if
142 this is a sparse repository, or returns None if this is not a
142 this is a sparse repository, or returns None if this is not a
143 sparse repository.
143 sparse repository.
144 '''
144 '''
145 if revs:
145 if revs:
146 ret = sparse.matcher(repo, revs=revs)
146 ret = sparse.matcher(repo, revs=revs)
147 else:
147 else:
148 ret = sparse.matcher(repo)
148 ret = sparse.matcher(repo)
149
149
150 if ret.always():
150 if ret.always():
151 return None
151 return None
152 return ret
152 return ret
153
153
154 def file(self, f):
154 def file(self, f):
155 if f[0] == '/':
155 if f[0] == '/':
156 f = f[1:]
156 f = f[1:]
157
157
158 if self.shallowmatch(f):
158 if self.shallowmatch(f):
159 return remotefilelog.remotefilelog(self.svfs, f, self)
159 return remotefilelog.remotefilelog(self.svfs, f, self)
160 else:
160 else:
161 return super(shallowrepository, self).file(f)
161 return super(shallowrepository, self).file(f)
162
162
163 def filectx(self, path, *args, **kwargs):
163 def filectx(self, path, *args, **kwargs):
164 if self.shallowmatch(path):
164 if self.shallowmatch(path):
165 return remotefilectx.remotefilectx(self, path, *args, **kwargs)
165 return remotefilectx.remotefilectx(self, path, *args, **kwargs)
166 else:
166 else:
167 return super(shallowrepository, self).filectx(path, *args,
167 return super(shallowrepository, self).filectx(path, *args,
168 **kwargs)
168 **kwargs)
169
169
170 @localrepo.unfilteredmethod
170 @localrepo.unfilteredmethod
171 def commitctx(self, ctx, error=False):
171 def commitctx(self, ctx, error=False):
172 """Add a new revision to current repository.
172 """Add a new revision to current repository.
173 Revision information is passed via the context argument.
173 Revision information is passed via the context argument.
174 """
174 """
175
175
176 # some contexts already have manifest nodes, they don't need any
176 # some contexts already have manifest nodes; they don't need any
176 # some contexts already have manifest nodes; they don't need any
177 # prefetching (for example, if we're just editing a commit message
177 # prefetching (for example, if we're just editing a commit message
178 # we can reuse the manifest)
178 # we can reuse the manifest)
179 if not ctx.manifestnode():
180 # prefetch files that will likely be compared
180 # prefetch files that will likely be compared
181 m1 = ctx.p1().manifest()
181 m1 = ctx.p1().manifest()
182 files = []
182 files = []
183 for f in ctx.modified() + ctx.added():
183 for f in ctx.modified() + ctx.added():
184 fparent1 = m1.get(f, nullid)
184 fparent1 = m1.get(f, nullid)
185 if fparent1 != nullid:
185 if fparent1 != nullid:
186 files.append((f, hex(fparent1)))
186 files.append((f, hex(fparent1)))
187 self.fileservice.prefetch(files)
187 self.fileservice.prefetch(files)
188 return super(shallowrepository, self).commitctx(ctx,
188 return super(shallowrepository, self).commitctx(ctx,
189 error=error)
189 error=error)
190
190
191 def backgroundprefetch(self, revs, base=None, repack=False, pats=None,
191 def backgroundprefetch(self, revs, base=None, repack=False, pats=None,
192 opts=None):
192 opts=None):
193 """Runs prefetch in background with optional repack
193 """Runs prefetch in background with optional repack
194 """
194 """
195 cmd = [_hgexecutable(), '-R', repo.origroot, 'prefetch']
195 cmd = [_hgexecutable(), '-R', repo.origroot, 'prefetch']
196 if repack:
196 if repack:
197 cmd.append('--repack')
197 cmd.append('--repack')
198 if revs:
198 if revs:
199 cmd += ['-r', revs]
199 cmd += ['-r', revs]
200 procutil.runbgcommand(cmd, encoding.environ)
200 procutil.runbgcommand(cmd, encoding.environ)
201
201
202 def prefetch(self, revs, base=None, pats=None, opts=None):
202 def prefetch(self, revs, base=None, pats=None, opts=None):
203 """Prefetches all the necessary file revisions for the given revs
203 """Prefetches all the necessary file revisions for the given revs
204 Optionally runs repack in background
204 Optionally runs repack in background
205 """
205 """
206 with repo._lock(repo.svfs, 'prefetchlock', True, None, None,
206 with repo._lock(repo.svfs, 'prefetchlock', True, None, None,
207 _('prefetching in %s') % repo.origroot):
207 _('prefetching in %s') % repo.origroot):
208 self._prefetch(revs, base, pats, opts)
208 self._prefetch(revs, base, pats, opts)
209
209
210 def _prefetch(self, revs, base=None, pats=None, opts=None):
210 def _prefetch(self, revs, base=None, pats=None, opts=None):
211 fallbackpath = self.fallbackpath
211 fallbackpath = self.fallbackpath
212 if fallbackpath:
212 if fallbackpath:
213 # If we know a rev is on the server, we should fetch the server
213 # If we know a rev is on the server, we should fetch the server
214 # version of those files, since our local file versions might
214 # version of those files, since our local file versions might
215 # become obsolete if the local commits are stripped.
215 # become obsolete if the local commits are stripped.
216 localrevs = repo.revs('outgoing(%s)', fallbackpath)
216 localrevs = repo.revs('outgoing(%s)', fallbackpath)
217 if base is not None and base != nullrev:
217 if base is not None and base != nullrev:
218 serverbase = list(repo.revs('first(reverse(::%s) - %ld)',
218 serverbase = list(repo.revs('first(reverse(::%s) - %ld)',
219 base, localrevs))
219 base, localrevs))
220 if serverbase:
220 if serverbase:
221 base = serverbase[0]
221 base = serverbase[0]
222 else:
222 else:
223 localrevs = repo
223 localrevs = repo
224
224
225 mfl = repo.manifestlog
225 mfl = repo.manifestlog
226 mfrevlog = mfl.getstorage('')
226 mfrevlog = mfl.getstorage('')
227 if base is not None:
227 if base is not None:
228 mfdict = mfl[repo[base].manifestnode()].read()
228 mfdict = mfl[repo[base].manifestnode()].read()
229 skip = set(mfdict.iteritems())
229 skip = set(mfdict.iteritems())
230 else:
230 else:
231 skip = set()
231 skip = set()
232
232
233 # Copy the skip set so the sets start large and avoid constant resizing;
233 # Copy the skip set so the sets start large and avoid constant resizing;
234 # it is likely to be very similar to the prefetch set anyway.
234 # it is likely to be very similar to the prefetch set anyway.
235 files = skip.copy()
235 files = skip.copy()
236 serverfiles = skip.copy()
236 serverfiles = skip.copy()
237 visited = set()
237 visited = set()
238 visited.add(nullrev)
238 visited.add(nullrev)
239 revcount = len(revs)
239 revcount = len(revs)
240 progress = self.ui.makeprogress(_('prefetching'), total=revcount)
240 progress = self.ui.makeprogress(_('prefetching'), total=revcount)
241 progress.update(0)
241 progress.update(0)
242 for rev in sorted(revs):
242 for rev in sorted(revs):
243 ctx = repo[rev]
243 ctx = repo[rev]
244 if pats:
244 if pats:
245 m = scmutil.match(ctx, pats, opts)
245 m = scmutil.match(ctx, pats, opts)
246 sparsematch = repo.maybesparsematch(rev)
246 sparsematch = repo.maybesparsematch(rev)
247
247
248 mfnode = ctx.manifestnode()
248 mfnode = ctx.manifestnode()
249 mfrev = mfrevlog.rev(mfnode)
249 mfrev = mfrevlog.rev(mfnode)
250
250
251 # Decompressing manifests is expensive.
251 # Decompressing manifests is expensive.
252 # When possible, only read the deltas.
252 # When possible, only read the deltas.
253 p1, p2 = mfrevlog.parentrevs(mfrev)
253 p1, p2 = mfrevlog.parentrevs(mfrev)
254 if p1 in visited and p2 in visited:
254 if p1 in visited and p2 in visited:
255 mfdict = mfl[mfnode].readfast()
255 mfdict = mfl[mfnode].readfast()
256 else:
256 else:
257 mfdict = mfl[mfnode].read()
257 mfdict = mfl[mfnode].read()
258
258
259 diff = mfdict.iteritems()
259 diff = mfdict.iteritems()
260 if pats:
260 if pats:
261 diff = (pf for pf in diff if m(pf[0]))
261 diff = (pf for pf in diff if m(pf[0]))
262 if sparsematch:
262 if sparsematch:
263 diff = (pf for pf in diff if sparsematch(pf[0]))
263 diff = (pf for pf in diff if sparsematch(pf[0]))
264 if rev not in localrevs:
264 if rev not in localrevs:
265 serverfiles.update(diff)
265 serverfiles.update(diff)
266 else:
266 else:
267 files.update(diff)
267 files.update(diff)
268
268
269 visited.add(mfrev)
269 visited.add(mfrev)
270 progress.increment()
270 progress.increment()
271
271
272 files.difference_update(skip)
272 files.difference_update(skip)
273 serverfiles.difference_update(skip)
273 serverfiles.difference_update(skip)
274 progress.complete()
274 progress.complete()
275
275
276 # Fetch files known to be on the server
276 # Fetch files known to be on the server
277 if serverfiles:
277 if serverfiles:
278 results = [(path, hex(fnode)) for (path, fnode) in serverfiles]
278 results = [(path, hex(fnode)) for (path, fnode) in serverfiles]
279 repo.fileservice.prefetch(results, force=True)
279 repo.fileservice.prefetch(results, force=True)
280
280
281 # Fetch files that may or may not be on the server
281 # Fetch files that may or may not be on the server
282 if files:
282 if files:
283 results = [(path, hex(fnode)) for (path, fnode) in files]
283 results = [(path, hex(fnode)) for (path, fnode) in files]
284 repo.fileservice.prefetch(results)
284 repo.fileservice.prefetch(results)
285
285
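_prefetch() above seeds its result sets with the base manifest's (path, node) pairs so the sets start near their final size, then subtracts the same pairs at the end so only genuinely new file versions are fetched. The snippet below is just that set arithmetic on toy data:

base = {("a.txt", "n1"), ("b.txt", "n2")}
seen_in_revs = {("a.txt", "n1"), ("b.txt", "n3"), ("c.txt", "n4")}

files = base.copy()            # pre-sized, and similar to the final set
files.update(seen_in_revs)
files.difference_update(base)  # drop pairs the base revision already had
print(sorted(files))           # [('b.txt', 'n3'), ('c.txt', 'n4')]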
286 def close(self):
286 def close(self):
287 super(shallowrepository, self).close()
287 super(shallowrepository, self).close()
288 self.connectionpool.close()
288 self.connectionpool.close()
289
289
290 repo.__class__ = shallowrepository
290 repo.__class__ = shallowrepository
291
291
292 repo.shallowmatch = match.always(repo.root, '')
292 repo.shallowmatch = match.always()
293
293
294 makeunionstores(repo)
294 makeunionstores(repo)
295
295
296 repo.includepattern = repo.ui.configlist("remotefilelog", "includepattern",
296 repo.includepattern = repo.ui.configlist("remotefilelog", "includepattern",
297 None)
297 None)
298 repo.excludepattern = repo.ui.configlist("remotefilelog", "excludepattern",
298 repo.excludepattern = repo.ui.configlist("remotefilelog", "excludepattern",
299 None)
299 None)
300 if not util.safehasattr(repo, 'connectionpool'):
300 if not util.safehasattr(repo, 'connectionpool'):
301 repo.connectionpool = connectionpool.connectionpool(repo)
301 repo.connectionpool = connectionpool.connectionpool(repo)
302
302
303 if repo.includepattern or repo.excludepattern:
303 if repo.includepattern or repo.excludepattern:
304 repo.shallowmatch = match.match(repo.root, '', None,
304 repo.shallowmatch = match.match(repo.root, '', None,
305 repo.includepattern, repo.excludepattern)
305 repo.includepattern, repo.excludepattern)
@@ -1,347 +1,347 @@
1 # sparse.py - allow sparse checkouts of the working directory
1 # sparse.py - allow sparse checkouts of the working directory
2 #
2 #
3 # Copyright 2014 Facebook, Inc.
3 # Copyright 2014 Facebook, Inc.
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 """allow sparse checkouts of the working directory (EXPERIMENTAL)
8 """allow sparse checkouts of the working directory (EXPERIMENTAL)
9
9
10 (This extension is not yet protected by backwards compatibility
10 (This extension is not yet protected by backwards compatibility
11 guarantees. Any aspect may break in future releases until this
11 guarantees. Any aspect may break in future releases until this
12 notice is removed.)
12 notice is removed.)
13
13
14 This extension allows the working directory to only consist of a
14 This extension allows the working directory to only consist of a
15 subset of files for the revision. This allows specific files or
15 subset of files for the revision. This allows specific files or
16 directories to be explicitly included or excluded. Many repository
16 directories to be explicitly included or excluded. Many repository
17 operations have performance proportional to the number of files in
17 operations have performance proportional to the number of files in
18 the working directory. So only realizing a subset of files in the
18 the working directory. So only realizing a subset of files in the
19 working directory can improve performance.
19 working directory can improve performance.
20
20
21 Sparse Config Files
21 Sparse Config Files
22 -------------------
22 -------------------
23
23
24 The set of files that are part of a sparse checkout are defined by
24 The set of files that are part of a sparse checkout are defined by
25 a sparse config file. The file defines 3 things: includes (files to
25 a sparse config file. The file defines 3 things: includes (files to
26 include in the sparse checkout), excludes (files to exclude from the
26 include in the sparse checkout), excludes (files to exclude from the
27 sparse checkout), and profiles (links to other config files).
27 sparse checkout), and profiles (links to other config files).
28
28
29 The file format is newline delimited. Empty lines and lines beginning
29 The file format is newline delimited. Empty lines and lines beginning
30 with ``#`` are ignored.
30 with ``#`` are ignored.
31
31
32 Lines beginning with ``%include `` denote another sparse config file
32 Lines beginning with ``%include `` denote another sparse config file
33 to include. e.g. ``%include tests.sparse``. The filename is relative
33 to include. e.g. ``%include tests.sparse``. The filename is relative
34 to the repository root.
34 to the repository root.
35
35
36 The special lines ``[include]`` and ``[exclude]`` denote the section
36 The special lines ``[include]`` and ``[exclude]`` denote the section
37 for includes and excludes that follow, respectively. It is illegal to
37 for includes and excludes that follow, respectively. It is illegal to
38 have ``[include]`` after ``[exclude]``.
38 have ``[include]`` after ``[exclude]``.
39
39
40 Non-special lines resemble file patterns to be added to either includes
40 Non-special lines resemble file patterns to be added to either includes
41 or excludes. The syntax of these lines is documented by :hg:`help patterns`.
41 or excludes. The syntax of these lines is documented by :hg:`help patterns`.
42 Patterns are interpreted as ``glob:`` by default and match against the
42 Patterns are interpreted as ``glob:`` by default and match against the
43 root of the repository.
43 root of the repository.
44
44
45 Exclusion patterns take precedence over inclusion patterns. So even
45 Exclusion patterns take precedence over inclusion patterns. So even
46 if a file is explicitly included, an ``[exclude]`` entry can remove it.
46 if a file is explicitly included, an ``[exclude]`` entry can remove it.
47
47
48 For example, say you have a repository with 3 directories, ``frontend/``,
48 For example, say you have a repository with 3 directories, ``frontend/``,
49 ``backend/``, and ``tools/``. ``frontend/`` and ``backend/`` correspond
49 ``backend/``, and ``tools/``. ``frontend/`` and ``backend/`` correspond
50 to different projects and it is uncommon for someone working on one
50 to different projects and it is uncommon for someone working on one
51 to need the files for the other. But ``tools/`` contains files shared
51 to need the files for the other. But ``tools/`` contains files shared
52 between both projects. Your sparse config files may resemble::
52 between both projects. Your sparse config files may resemble::
53
53
54 # frontend.sparse
54 # frontend.sparse
55 frontend/**
55 frontend/**
56 tools/**
56 tools/**
57
57
58 # backend.sparse
58 # backend.sparse
59 backend/**
59 backend/**
60 tools/**
60 tools/**
61
61
62 Say the backend grows in size. Or there's a directory with thousands
62 Say the backend grows in size. Or there's a directory with thousands
63 of files you wish to exclude. You can modify the profile to exclude
63 of files you wish to exclude. You can modify the profile to exclude
64 certain files::
64 certain files::
65
65
66 [include]
66 [include]
67 backend/**
67 backend/**
68 tools/**
68 tools/**
69
69
70 [exclude]
70 [exclude]
71 tools/tests/**
71 tools/tests/**
72 """
72 """
73
73
74 from __future__ import absolute_import
74 from __future__ import absolute_import
75
75
76 from mercurial.i18n import _
76 from mercurial.i18n import _
77 from mercurial import (
77 from mercurial import (
78 commands,
78 commands,
79 dirstate,
79 dirstate,
80 error,
80 error,
81 extensions,
81 extensions,
82 hg,
82 hg,
83 logcmdutil,
83 logcmdutil,
84 match as matchmod,
84 match as matchmod,
85 pycompat,
85 pycompat,
86 registrar,
86 registrar,
87 sparse,
87 sparse,
88 util,
88 util,
89 )
89 )
90
90
91 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
91 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
92 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
92 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
93 # be specifying the version(s) of Mercurial they are tested with, or
93 # be specifying the version(s) of Mercurial they are tested with, or
94 # leave the attribute unspecified.
94 # leave the attribute unspecified.
95 testedwith = 'ships-with-hg-core'
95 testedwith = 'ships-with-hg-core'
96
96
97 cmdtable = {}
97 cmdtable = {}
98 command = registrar.command(cmdtable)
98 command = registrar.command(cmdtable)
99
99
100 def extsetup(ui):
100 def extsetup(ui):
101 sparse.enabled = True
101 sparse.enabled = True
102
102
103 _setupclone(ui)
103 _setupclone(ui)
104 _setuplog(ui)
104 _setuplog(ui)
105 _setupadd(ui)
105 _setupadd(ui)
106 _setupdirstate(ui)
106 _setupdirstate(ui)
107
107
108 def replacefilecache(cls, propname, replacement):
108 def replacefilecache(cls, propname, replacement):
109 """Replace a filecache property with a new class. This allows changing the
109 """Replace a filecache property with a new class. This allows changing the
110 cache invalidation condition."""
110 cache invalidation condition."""
111 origcls = cls
111 origcls = cls
112 assert callable(replacement)
112 assert callable(replacement)
113 while cls is not object:
113 while cls is not object:
114 if propname in cls.__dict__:
114 if propname in cls.__dict__:
115 orig = cls.__dict__[propname]
115 orig = cls.__dict__[propname]
116 setattr(cls, propname, replacement(orig))
116 setattr(cls, propname, replacement(orig))
117 break
117 break
118 cls = cls.__bases__[0]
118 cls = cls.__bases__[0]
119
119
120 if cls is object:
120 if cls is object:
121 raise AttributeError(_("type '%s' has no property '%s'") % (origcls,
121 raise AttributeError(_("type '%s' has no property '%s'") % (origcls,
122 propname))
122 propname))
123
123
124 def _setuplog(ui):
124 def _setuplog(ui):
125 entry = commands.table['log|history']
125 entry = commands.table['log|history']
126 entry[1].append(('', 'sparse', None,
126 entry[1].append(('', 'sparse', None,
127 "limit to changesets affecting the sparse checkout"))
127 "limit to changesets affecting the sparse checkout"))
128
128
129 def _initialrevs(orig, repo, opts):
129 def _initialrevs(orig, repo, opts):
130 revs = orig(repo, opts)
130 revs = orig(repo, opts)
131 if opts.get('sparse'):
131 if opts.get('sparse'):
132 sparsematch = sparse.matcher(repo)
132 sparsematch = sparse.matcher(repo)
133 def ctxmatch(rev):
133 def ctxmatch(rev):
134 ctx = repo[rev]
134 ctx = repo[rev]
135 return any(f for f in ctx.files() if sparsematch(f))
135 return any(f for f in ctx.files() if sparsematch(f))
136 revs = revs.filter(ctxmatch)
136 revs = revs.filter(ctxmatch)
137 return revs
137 return revs
138 extensions.wrapfunction(logcmdutil, '_initialrevs', _initialrevs)
138 extensions.wrapfunction(logcmdutil, '_initialrevs', _initialrevs)
139
139
140 def _clonesparsecmd(orig, ui, repo, *args, **opts):
140 def _clonesparsecmd(orig, ui, repo, *args, **opts):
141 include_pat = opts.get(r'include')
141 include_pat = opts.get(r'include')
142 exclude_pat = opts.get(r'exclude')
142 exclude_pat = opts.get(r'exclude')
143 enableprofile_pat = opts.get(r'enable_profile')
143 enableprofile_pat = opts.get(r'enable_profile')
144 narrow_pat = opts.get(r'narrow')
144 narrow_pat = opts.get(r'narrow')
145 include = exclude = enableprofile = False
145 include = exclude = enableprofile = False
146 if include_pat:
146 if include_pat:
147 pat = include_pat
147 pat = include_pat
148 include = True
148 include = True
149 if exclude_pat:
149 if exclude_pat:
150 pat = exclude_pat
150 pat = exclude_pat
151 exclude = True
151 exclude = True
152 if enableprofile_pat:
152 if enableprofile_pat:
153 pat = enableprofile_pat
153 pat = enableprofile_pat
154 enableprofile = True
154 enableprofile = True
155 if sum([include, exclude, enableprofile]) > 1:
155 if sum([include, exclude, enableprofile]) > 1:
156 raise error.Abort(_("too many flags specified."))
156 raise error.Abort(_("too many flags specified."))
157 # if --narrow is passed, it means they are includes and excludes for narrow
157 # if --narrow is passed, it means they are includes and excludes for narrow
158 # clone
158 # clone
159 if not narrow_pat and (include or exclude or enableprofile):
159 if not narrow_pat and (include or exclude or enableprofile):
160 def clonesparse(orig, self, node, overwrite, *args, **kwargs):
160 def clonesparse(orig, self, node, overwrite, *args, **kwargs):
161 sparse.updateconfig(self.unfiltered(), pat, {}, include=include,
161 sparse.updateconfig(self.unfiltered(), pat, {}, include=include,
162 exclude=exclude, enableprofile=enableprofile,
162 exclude=exclude, enableprofile=enableprofile,
163 usereporootpaths=True)
163 usereporootpaths=True)
164 return orig(self, node, overwrite, *args, **kwargs)
164 return orig(self, node, overwrite, *args, **kwargs)
165 extensions.wrapfunction(hg, 'updaterepo', clonesparse)
165 extensions.wrapfunction(hg, 'updaterepo', clonesparse)
166 return orig(ui, repo, *args, **opts)
166 return orig(ui, repo, *args, **opts)
167
167
168 def _setupclone(ui):
168 def _setupclone(ui):
169 entry = commands.table['clone']
169 entry = commands.table['clone']
170 entry[1].append(('', 'enable-profile', [],
170 entry[1].append(('', 'enable-profile', [],
171 'enable a sparse profile'))
171 'enable a sparse profile'))
172 entry[1].append(('', 'include', [],
172 entry[1].append(('', 'include', [],
173 'include sparse pattern'))
173 'include sparse pattern'))
174 entry[1].append(('', 'exclude', [],
174 entry[1].append(('', 'exclude', [],
175 'exclude sparse pattern'))
175 'exclude sparse pattern'))
176 extensions.wrapcommand(commands.table, 'clone', _clonesparsecmd)
176 extensions.wrapcommand(commands.table, 'clone', _clonesparsecmd)
177
177
178 def _setupadd(ui):
178 def _setupadd(ui):
179 entry = commands.table['add']
179 entry = commands.table['add']
180 entry[1].append(('s', 'sparse', None,
180 entry[1].append(('s', 'sparse', None,
181 'also include directories of added files in sparse config'))
181 'also include directories of added files in sparse config'))
182
182
183 def _add(orig, ui, repo, *pats, **opts):
183 def _add(orig, ui, repo, *pats, **opts):
184 if opts.get(r'sparse'):
184 if opts.get(r'sparse'):
185 dirs = set()
185 dirs = set()
186 for pat in pats:
186 for pat in pats:
187 dirname, basename = util.split(pat)
187 dirname, basename = util.split(pat)
188 dirs.add(dirname)
188 dirs.add(dirname)
189 sparse.updateconfig(repo, list(dirs), opts, include=True)
189 sparse.updateconfig(repo, list(dirs), opts, include=True)
190 return orig(ui, repo, *pats, **opts)
190 return orig(ui, repo, *pats, **opts)
191
191
192 extensions.wrapcommand(commands.table, 'add', _add)
192 extensions.wrapcommand(commands.table, 'add', _add)
193
193
194 def _setupdirstate(ui):
194 def _setupdirstate(ui):
195 """Modify the dirstate to prevent stat'ing excluded files,
195 """Modify the dirstate to prevent stat'ing excluded files,
196 and to prevent modifications to files outside the checkout.
196 and to prevent modifications to files outside the checkout.
197 """
197 """
198
198
199 def walk(orig, self, match, subrepos, unknown, ignored, full=True):
199 def walk(orig, self, match, subrepos, unknown, ignored, full=True):
200 # hack to not exclude explicitly-specified paths so that they can
200 # hack to not exclude explicitly-specified paths so that they can
201 # be warned later on e.g. dirstate.add()
201 # be warned later on e.g. dirstate.add()
202 em = matchmod.exact(None, None, match.files())
202 em = matchmod.exact(match.files())
203 sm = matchmod.unionmatcher([self._sparsematcher, em])
203 sm = matchmod.unionmatcher([self._sparsematcher, em])
204 match = matchmod.intersectmatchers(match, sm)
204 match = matchmod.intersectmatchers(match, sm)
205 return orig(self, match, subrepos, unknown, ignored, full)
205 return orig(self, match, subrepos, unknown, ignored, full)
206
206
207 extensions.wrapfunction(dirstate.dirstate, 'walk', walk)
207 extensions.wrapfunction(dirstate.dirstate, 'walk', walk)
208
208
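The walk() wrapper above keeps explicitly named files visible by unioning the sparse matcher with an exact matcher over match.files(), then intersecting the result with the caller's original matcher. Expressed with plain predicate functions (purely illustrative, not Mercurial matcher objects):

def union(*preds):
    return lambda path: any(p(path) for p in preds)

def intersect(*preds):
    return lambda path: all(p(path) for p in preds)

sparse_match = lambda path: path.startswith("backend/")
explicit = {"frontend/app.py"}                 # files named on the command line
exact_match = lambda path: path in explicit
original_match = lambda path: True             # whatever the caller asked for

walk_match = intersect(original_match, union(sparse_match, exact_match))
print(walk_match("backend/db.py"))      # True: inside the sparse checkout
print(walk_match("frontend/app.py"))    # True: outside, but explicitly named
print(walk_match("frontend/ui.py"))     # False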
209 # dirstate.rebuild should not add non-matching files
209 # dirstate.rebuild should not add non-matching files
210 def _rebuild(orig, self, parent, allfiles, changedfiles=None):
210 def _rebuild(orig, self, parent, allfiles, changedfiles=None):
211 matcher = self._sparsematcher
211 matcher = self._sparsematcher
212 if not matcher.always():
212 if not matcher.always():
213 allfiles = [f for f in allfiles if matcher(f)]
213 allfiles = [f for f in allfiles if matcher(f)]
214 if changedfiles:
214 if changedfiles:
215 changedfiles = [f for f in changedfiles if matcher(f)]
215 changedfiles = [f for f in changedfiles if matcher(f)]
216
216
217 if changedfiles is not None:
217 if changedfiles is not None:
218 # In _rebuild, these files will be deleted from the dirstate
218 # In _rebuild, these files will be deleted from the dirstate
219 # when they are not found to be in allfiles
219 # when they are not found to be in allfiles
220 dirstatefilestoremove = set(f for f in self if not matcher(f))
220 dirstatefilestoremove = set(f for f in self if not matcher(f))
221 changedfiles = dirstatefilestoremove.union(changedfiles)
221 changedfiles = dirstatefilestoremove.union(changedfiles)
222
222
223 return orig(self, parent, allfiles, changedfiles)
223 return orig(self, parent, allfiles, changedfiles)
224 extensions.wrapfunction(dirstate.dirstate, 'rebuild', _rebuild)
224 extensions.wrapfunction(dirstate.dirstate, 'rebuild', _rebuild)
225
225
226 # Prevent adding files that are outside the sparse checkout
226 # Prevent adding files that are outside the sparse checkout
227 editfuncs = ['normal', 'add', 'normallookup', 'copy', 'remove', 'merge']
227 editfuncs = ['normal', 'add', 'normallookup', 'copy', 'remove', 'merge']
228 hint = _('include file with `hg debugsparse --include <pattern>` or use ' +
228 hint = _('include file with `hg debugsparse --include <pattern>` or use ' +
229 '`hg add -s <file>` to include file directory while adding')
229 '`hg add -s <file>` to include file directory while adding')
230 for func in editfuncs:
230 for func in editfuncs:
231 def _wrapper(orig, self, *args):
231 def _wrapper(orig, self, *args):
232 sparsematch = self._sparsematcher
232 sparsematch = self._sparsematcher
233 if not sparsematch.always():
233 if not sparsematch.always():
234 for f in args:
234 for f in args:
235 if (f is not None and not sparsematch(f) and
235 if (f is not None and not sparsematch(f) and
236 f not in self):
236 f not in self):
237 raise error.Abort(_("cannot add '%s' - it is outside "
237 raise error.Abort(_("cannot add '%s' - it is outside "
238 "the sparse checkout") % f,
238 "the sparse checkout") % f,
239 hint=hint)
239 hint=hint)
240 return orig(self, *args)
240 return orig(self, *args)
241 extensions.wrapfunction(dirstate.dirstate, func, _wrapper)
241 extensions.wrapfunction(dirstate.dirstate, func, _wrapper)
242
242
243 @command('debugsparse', [
243 @command('debugsparse', [
244 ('I', 'include', False, _('include files in the sparse checkout')),
244 ('I', 'include', False, _('include files in the sparse checkout')),
245 ('X', 'exclude', False, _('exclude files in the sparse checkout')),
245 ('X', 'exclude', False, _('exclude files in the sparse checkout')),
246 ('d', 'delete', False, _('delete an include/exclude rule')),
246 ('d', 'delete', False, _('delete an include/exclude rule')),
247 ('f', 'force', False, _('allow changing rules even with pending changes')),
247 ('f', 'force', False, _('allow changing rules even with pending changes')),
248 ('', 'enable-profile', False, _('enables the specified profile')),
248 ('', 'enable-profile', False, _('enables the specified profile')),
249 ('', 'disable-profile', False, _('disables the specified profile')),
249 ('', 'disable-profile', False, _('disables the specified profile')),
250 ('', 'import-rules', False, _('imports rules from a file')),
250 ('', 'import-rules', False, _('imports rules from a file')),
251 ('', 'clear-rules', False, _('clears local include/exclude rules')),
251 ('', 'clear-rules', False, _('clears local include/exclude rules')),
252 ('', 'refresh', False, _('updates the working directory after sparseness changes')),
252 ('', 'refresh', False, _('updates the working directory after sparseness changes')),
253 ('', 'reset', False, _('makes the repo full again')),
253 ('', 'reset', False, _('makes the repo full again')),
254 ] + commands.templateopts,
254 ] + commands.templateopts,
255 _('[--OPTION] PATTERN...'),
255 _('[--OPTION] PATTERN...'),
256 helpbasic=True)
256 helpbasic=True)
257 def debugsparse(ui, repo, *pats, **opts):
257 def debugsparse(ui, repo, *pats, **opts):
258 """make the current checkout sparse, or edit the existing checkout
258 """make the current checkout sparse, or edit the existing checkout
259
259
260 The sparse command is used to make the current checkout sparse.
260 The sparse command is used to make the current checkout sparse.
261 This means files that don't meet the sparse condition will not be
261 This means files that don't meet the sparse condition will not be
262 written to disk, or show up in any working copy operations. It does
262 written to disk, or show up in any working copy operations. It does
263 not affect files in history in any way.
263 not affect files in history in any way.
264
264
265 Passing no arguments prints the currently applied sparse rules.
265 Passing no arguments prints the currently applied sparse rules.
266
266
267 --include and --exclude are used to add and remove files from the sparse
267 --include and --exclude are used to add and remove files from the sparse
268 checkout. The effects of adding an include or exclude rule are applied
268 checkout. The effects of adding an include or exclude rule are applied
269 immediately. If applying the new rule would cause a file with pending
269 immediately. If applying the new rule would cause a file with pending
270 changes to be added or removed, the command will fail. Pass --force to
270 changes to be added or removed, the command will fail. Pass --force to
271 force a rule change even with pending changes (the changes on disk will
271 force a rule change even with pending changes (the changes on disk will
272 be preserved).
272 be preserved).
273
273
274 --delete removes an existing include/exclude rule. The effects are
274 --delete removes an existing include/exclude rule. The effects are
275 immediate.
275 immediate.
276
276
277 --refresh refreshes the files on disk based on the sparse rules. This is
277 --refresh refreshes the files on disk based on the sparse rules. This is
278 only necessary if .hg/sparse was changed by hand.
278 only necessary if .hg/sparse was changed by hand.
279
279
280 --enable-profile and --disable-profile accept a path to a .hgsparse file.
280 --enable-profile and --disable-profile accept a path to a .hgsparse file.
281 This allows defining sparse checkouts and tracking them inside the
281 This allows defining sparse checkouts and tracking them inside the
282 repository. This is useful for defining commonly used sparse checkouts for
282 repository. This is useful for defining commonly used sparse checkouts for
283 many people to use. As the profile definition changes over time, the sparse
283 many people to use. As the profile definition changes over time, the sparse
284 checkout will automatically be updated appropriately, depending on which
284 checkout will automatically be updated appropriately, depending on which
285 changeset is checked out. Changes to .hgsparse are not applied until they
285 changeset is checked out. Changes to .hgsparse are not applied until they
286 have been committed.
286 have been committed.
287
287
288 --import-rules accepts a path to a file containing rules in the .hgsparse
288 --import-rules accepts a path to a file containing rules in the .hgsparse
289 format, allowing you to add --include, --exclude and --enable-profile rules
289 format, allowing you to add --include, --exclude and --enable-profile rules
290 in bulk. Like the --include, --exclude and --enable-profile switches, the
290 in bulk. Like the --include, --exclude and --enable-profile switches, the
291 changes are applied immediately.
291 changes are applied immediately.
292
292
293 --clear-rules removes all local include and exclude rules, while leaving
293 --clear-rules removes all local include and exclude rules, while leaving
294 any enabled profiles in place.
294 any enabled profiles in place.
295
295
296 Returns 0 if editing the sparse checkout succeeds.
296 Returns 0 if editing the sparse checkout succeeds.
297 """
297 """
298 opts = pycompat.byteskwargs(opts)
298 opts = pycompat.byteskwargs(opts)
299 include = opts.get('include')
299 include = opts.get('include')
300 exclude = opts.get('exclude')
300 exclude = opts.get('exclude')
301 force = opts.get('force')
301 force = opts.get('force')
302 enableprofile = opts.get('enable_profile')
302 enableprofile = opts.get('enable_profile')
303 disableprofile = opts.get('disable_profile')
303 disableprofile = opts.get('disable_profile')
304 importrules = opts.get('import_rules')
304 importrules = opts.get('import_rules')
305 clearrules = opts.get('clear_rules')
305 clearrules = opts.get('clear_rules')
306 delete = opts.get('delete')
306 delete = opts.get('delete')
307 refresh = opts.get('refresh')
307 refresh = opts.get('refresh')
308 reset = opts.get('reset')
308 reset = opts.get('reset')
309 count = sum([include, exclude, enableprofile, disableprofile, delete,
309 count = sum([include, exclude, enableprofile, disableprofile, delete,
310 importrules, refresh, clearrules, reset])
310 importrules, refresh, clearrules, reset])
311 if count > 1:
311 if count > 1:
312 raise error.Abort(_("too many flags specified"))
312 raise error.Abort(_("too many flags specified"))
313
313
314 if count == 0:
314 if count == 0:
315 if repo.vfs.exists('sparse'):
315 if repo.vfs.exists('sparse'):
316 ui.status(repo.vfs.read("sparse") + "\n")
316 ui.status(repo.vfs.read("sparse") + "\n")
317 temporaryincludes = sparse.readtemporaryincludes(repo)
317 temporaryincludes = sparse.readtemporaryincludes(repo)
318 if temporaryincludes:
318 if temporaryincludes:
319 ui.status(_("Temporarily Included Files (for merge/rebase):\n"))
319 ui.status(_("Temporarily Included Files (for merge/rebase):\n"))
320 ui.status(("\n".join(temporaryincludes) + "\n"))
320 ui.status(("\n".join(temporaryincludes) + "\n"))
321 else:
321 else:
322 ui.status(_('repo is not sparse\n'))
322 ui.status(_('repo is not sparse\n'))
323 return
323 return
324
324
325 if include or exclude or delete or reset or enableprofile or disableprofile:
325 if include or exclude or delete or reset or enableprofile or disableprofile:
326 sparse.updateconfig(repo, pats, opts, include=include, exclude=exclude,
326 sparse.updateconfig(repo, pats, opts, include=include, exclude=exclude,
327 reset=reset, delete=delete,
327 reset=reset, delete=delete,
328 enableprofile=enableprofile,
328 enableprofile=enableprofile,
329 disableprofile=disableprofile, force=force)
329 disableprofile=disableprofile, force=force)
330
330
331 if importrules:
331 if importrules:
332 sparse.importfromfiles(repo, opts, pats, force=force)
332 sparse.importfromfiles(repo, opts, pats, force=force)
333
333
334 if clearrules:
334 if clearrules:
335 sparse.clearrules(repo, force=force)
335 sparse.clearrules(repo, force=force)
336
336
337 if refresh:
337 if refresh:
338 try:
338 try:
339 wlock = repo.wlock()
339 wlock = repo.wlock()
340 fcounts = map(
340 fcounts = map(
341 len,
341 len,
342 sparse.refreshwdir(repo, repo.status(), sparse.matcher(repo),
342 sparse.refreshwdir(repo, repo.status(), sparse.matcher(repo),
343 force=force))
343 force=force))
344 sparse.printchanges(ui, opts, added=fcounts[0], dropped=fcounts[1],
344 sparse.printchanges(ui, opts, added=fcounts[0], dropped=fcounts[1],
345 conflicting=fcounts[2])
345 conflicting=fcounts[2])
346 finally:
346 finally:
347 wlock.release()
347 wlock.release()
@@ -1,765 +1,765 @@
1 # Patch transplanting extension for Mercurial
1 # Patch transplanting extension for Mercurial
2 #
2 #
3 # Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
3 # Copyright 2006, 2007 Brendan Cully <brendan@kublai.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 '''command to transplant changesets from another branch
8 '''command to transplant changesets from another branch
9
9
10 This extension allows you to transplant changes to another parent revision,
10 This extension allows you to transplant changes to another parent revision,
11 possibly in another repository. The transplant is done using 'diff' patches.
11 possibly in another repository. The transplant is done using 'diff' patches.
12
12
13 Transplanted patches are recorded in .hg/transplant/transplants, as a
13 Transplanted patches are recorded in .hg/transplant/transplants, as a
14 map from a changeset hash to its hash in the source repository.
14 map from a changeset hash to its hash in the source repository.
15 '''
15 '''
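
Transplants are stored one entry per line as two hex node ids separated by a colon (see the read()/write() methods of the transplants class below). A minimal sketch of that on-disk format, using made-up node values:

    from binascii import unhexlify

    # hypothetical contents of .hg/transplant/transplants: the local node,
    # then the node it was transplanted from in the source repository
    sample = (b"1111111111111111111111111111111111111111:"
              b"2222222222222222222222222222222222222222\n")

    transplants = {}
    for line in sample.splitlines():
        lnode, rnode = map(unhexlify, line.split(b':'))
        # keyed by the source node, mirroring transplants.read() below
        transplants.setdefault(rnode, []).append(lnode)
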
16 from __future__ import absolute_import
16 from __future__ import absolute_import
17
17
18 import os
18 import os
19
19
20 from mercurial.i18n import _
20 from mercurial.i18n import _
21 from mercurial import (
21 from mercurial import (
22 bundlerepo,
22 bundlerepo,
23 cmdutil,
23 cmdutil,
24 error,
24 error,
25 exchange,
25 exchange,
26 hg,
26 hg,
27 logcmdutil,
27 logcmdutil,
28 match,
28 match,
29 merge,
29 merge,
30 node as nodemod,
30 node as nodemod,
31 patch,
31 patch,
32 pycompat,
32 pycompat,
33 registrar,
33 registrar,
34 revlog,
34 revlog,
35 revset,
35 revset,
36 scmutil,
36 scmutil,
37 smartset,
37 smartset,
38 util,
38 util,
39 vfs as vfsmod,
39 vfs as vfsmod,
40 )
40 )
41 from mercurial.utils import (
41 from mercurial.utils import (
42 procutil,
42 procutil,
43 stringutil,
43 stringutil,
44 )
44 )
45
45
46 class TransplantError(error.Abort):
46 class TransplantError(error.Abort):
47 pass
47 pass
48
48
49 cmdtable = {}
49 cmdtable = {}
50 command = registrar.command(cmdtable)
50 command = registrar.command(cmdtable)
51 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
51 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
52 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
52 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
53 # be specifying the version(s) of Mercurial they are tested with, or
53 # be specifying the version(s) of Mercurial they are tested with, or
54 # leave the attribute unspecified.
54 # leave the attribute unspecified.
55 testedwith = 'ships-with-hg-core'
55 testedwith = 'ships-with-hg-core'
56
56
57 configtable = {}
57 configtable = {}
58 configitem = registrar.configitem(configtable)
58 configitem = registrar.configitem(configtable)
59
59
60 configitem('transplant', 'filter',
60 configitem('transplant', 'filter',
61 default=None,
61 default=None,
62 )
62 )
63 configitem('transplant', 'log',
63 configitem('transplant', 'log',
64 default=None,
64 default=None,
65 )
65 )
66
66
67 class transplantentry(object):
67 class transplantentry(object):
68 def __init__(self, lnode, rnode):
68 def __init__(self, lnode, rnode):
69 self.lnode = lnode
69 self.lnode = lnode
70 self.rnode = rnode
70 self.rnode = rnode
71
71
72 class transplants(object):
72 class transplants(object):
73 def __init__(self, path=None, transplantfile=None, opener=None):
73 def __init__(self, path=None, transplantfile=None, opener=None):
74 self.path = path
74 self.path = path
75 self.transplantfile = transplantfile
75 self.transplantfile = transplantfile
76 self.opener = opener
76 self.opener = opener
77
77
78 if not opener:
78 if not opener:
79 self.opener = vfsmod.vfs(self.path)
79 self.opener = vfsmod.vfs(self.path)
80 self.transplants = {}
80 self.transplants = {}
81 self.dirty = False
81 self.dirty = False
82 self.read()
82 self.read()
83
83
84 def read(self):
84 def read(self):
85 abspath = os.path.join(self.path, self.transplantfile)
85 abspath = os.path.join(self.path, self.transplantfile)
86 if self.transplantfile and os.path.exists(abspath):
86 if self.transplantfile and os.path.exists(abspath):
87 for line in self.opener.read(self.transplantfile).splitlines():
87 for line in self.opener.read(self.transplantfile).splitlines():
88 lnode, rnode = map(revlog.bin, line.split(':'))
88 lnode, rnode = map(revlog.bin, line.split(':'))
89 list = self.transplants.setdefault(rnode, [])
89 list = self.transplants.setdefault(rnode, [])
90 list.append(transplantentry(lnode, rnode))
90 list.append(transplantentry(lnode, rnode))
91
91
92 def write(self):
92 def write(self):
93 if self.dirty and self.transplantfile:
93 if self.dirty and self.transplantfile:
94 if not os.path.isdir(self.path):
94 if not os.path.isdir(self.path):
95 os.mkdir(self.path)
95 os.mkdir(self.path)
96 fp = self.opener(self.transplantfile, 'w')
96 fp = self.opener(self.transplantfile, 'w')
97 for list in self.transplants.itervalues():
97 for list in self.transplants.itervalues():
98 for t in list:
98 for t in list:
99 l, r = map(nodemod.hex, (t.lnode, t.rnode))
99 l, r = map(nodemod.hex, (t.lnode, t.rnode))
100 fp.write(l + ':' + r + '\n')
100 fp.write(l + ':' + r + '\n')
101 fp.close()
101 fp.close()
102 self.dirty = False
102 self.dirty = False
103
103
104 def get(self, rnode):
104 def get(self, rnode):
105 return self.transplants.get(rnode) or []
105 return self.transplants.get(rnode) or []
106
106
107 def set(self, lnode, rnode):
107 def set(self, lnode, rnode):
108 list = self.transplants.setdefault(rnode, [])
108 list = self.transplants.setdefault(rnode, [])
109 list.append(transplantentry(lnode, rnode))
109 list.append(transplantentry(lnode, rnode))
110 self.dirty = True
110 self.dirty = True
111
111
112 def remove(self, transplant):
112 def remove(self, transplant):
113 list = self.transplants.get(transplant.rnode)
113 list = self.transplants.get(transplant.rnode)
114 if list:
114 if list:
115 del list[list.index(transplant)]
115 del list[list.index(transplant)]
116 self.dirty = True
116 self.dirty = True
117
117
118 class transplanter(object):
118 class transplanter(object):
119 def __init__(self, ui, repo, opts):
119 def __init__(self, ui, repo, opts):
120 self.ui = ui
120 self.ui = ui
121 self.path = repo.vfs.join('transplant')
121 self.path = repo.vfs.join('transplant')
122 self.opener = vfsmod.vfs(self.path)
122 self.opener = vfsmod.vfs(self.path)
123 self.transplants = transplants(self.path, 'transplants',
123 self.transplants = transplants(self.path, 'transplants',
124 opener=self.opener)
124 opener=self.opener)
125 def getcommiteditor():
125 def getcommiteditor():
126 editform = cmdutil.mergeeditform(repo[None], 'transplant')
126 editform = cmdutil.mergeeditform(repo[None], 'transplant')
127 return cmdutil.getcommiteditor(editform=editform,
127 return cmdutil.getcommiteditor(editform=editform,
128 **pycompat.strkwargs(opts))
128 **pycompat.strkwargs(opts))
129 self.getcommiteditor = getcommiteditor
129 self.getcommiteditor = getcommiteditor
130
130
131 def applied(self, repo, node, parent):
131 def applied(self, repo, node, parent):
132 '''returns True if a node is already an ancestor of parent
132 '''returns True if a node is already an ancestor of parent
133 or is parent or has already been transplanted'''
133 or is parent or has already been transplanted'''
134 if hasnode(repo, parent):
134 if hasnode(repo, parent):
135 parentrev = repo.changelog.rev(parent)
135 parentrev = repo.changelog.rev(parent)
136 if hasnode(repo, node):
136 if hasnode(repo, node):
137 rev = repo.changelog.rev(node)
137 rev = repo.changelog.rev(node)
138 reachable = repo.changelog.ancestors([parentrev], rev,
138 reachable = repo.changelog.ancestors([parentrev], rev,
139 inclusive=True)
139 inclusive=True)
140 if rev in reachable:
140 if rev in reachable:
141 return True
141 return True
142 for t in self.transplants.get(node):
142 for t in self.transplants.get(node):
143 # it might have been stripped
143 # it might have been stripped
144 if not hasnode(repo, t.lnode):
144 if not hasnode(repo, t.lnode):
145 self.transplants.remove(t)
145 self.transplants.remove(t)
146 return False
146 return False
147 lnoderev = repo.changelog.rev(t.lnode)
147 lnoderev = repo.changelog.rev(t.lnode)
148 if lnoderev in repo.changelog.ancestors([parentrev], lnoderev,
148 if lnoderev in repo.changelog.ancestors([parentrev], lnoderev,
149 inclusive=True):
149 inclusive=True):
150 return True
150 return True
151 return False
151 return False
152
152
153 def apply(self, repo, source, revmap, merges, opts=None):
153 def apply(self, repo, source, revmap, merges, opts=None):
154 '''apply the revisions in revmap one by one in revision order'''
154 '''apply the revisions in revmap one by one in revision order'''
155 if opts is None:
155 if opts is None:
156 opts = {}
156 opts = {}
157 revs = sorted(revmap)
157 revs = sorted(revmap)
158 p1 = repo.dirstate.p1()
158 p1 = repo.dirstate.p1()
159 pulls = []
159 pulls = []
160 diffopts = patch.difffeatureopts(self.ui, opts)
160 diffopts = patch.difffeatureopts(self.ui, opts)
161 diffopts.git = True
161 diffopts.git = True
162
162
163 lock = tr = None
163 lock = tr = None
164 try:
164 try:
165 lock = repo.lock()
165 lock = repo.lock()
166 tr = repo.transaction('transplant')
166 tr = repo.transaction('transplant')
167 for rev in revs:
167 for rev in revs:
168 node = revmap[rev]
168 node = revmap[rev]
169 revstr = '%d:%s' % (rev, nodemod.short(node))
169 revstr = '%d:%s' % (rev, nodemod.short(node))
170
170
171 if self.applied(repo, node, p1):
171 if self.applied(repo, node, p1):
172 self.ui.warn(_('skipping already applied revision %s\n') %
172 self.ui.warn(_('skipping already applied revision %s\n') %
173 revstr)
173 revstr)
174 continue
174 continue
175
175
176 parents = source.changelog.parents(node)
176 parents = source.changelog.parents(node)
177 if not (opts.get('filter') or opts.get('log')):
177 if not (opts.get('filter') or opts.get('log')):
178 # If the changeset parent is the same as the
178 # If the changeset parent is the same as the
179 # wdir's parent, just pull it.
179 # wdir's parent, just pull it.
180 if parents[0] == p1:
180 if parents[0] == p1:
181 pulls.append(node)
181 pulls.append(node)
182 p1 = node
182 p1 = node
183 continue
183 continue
184 if pulls:
184 if pulls:
185 if source != repo:
185 if source != repo:
186 exchange.pull(repo, source.peer(), heads=pulls)
186 exchange.pull(repo, source.peer(), heads=pulls)
187 merge.update(repo, pulls[-1], branchmerge=False,
187 merge.update(repo, pulls[-1], branchmerge=False,
188 force=False)
188 force=False)
189 p1 = repo.dirstate.p1()
189 p1 = repo.dirstate.p1()
190 pulls = []
190 pulls = []
191
191
192 domerge = False
192 domerge = False
193 if node in merges:
193 if node in merges:
194 # pulling all the merge revs at once would mean we
194 # pulling all the merge revs at once would mean we
195 # couldn't transplant after the latest even if
195 # couldn't transplant after the latest even if
196 # transplants before them fail.
196 # transplants before them fail.
197 domerge = True
197 domerge = True
198 if not hasnode(repo, node):
198 if not hasnode(repo, node):
199 exchange.pull(repo, source.peer(), heads=[node])
199 exchange.pull(repo, source.peer(), heads=[node])
200
200
201 skipmerge = False
201 skipmerge = False
202 if parents[1] != revlog.nullid:
202 if parents[1] != revlog.nullid:
203 if not opts.get('parent'):
203 if not opts.get('parent'):
204 self.ui.note(_('skipping merge changeset %d:%s\n')
204 self.ui.note(_('skipping merge changeset %d:%s\n')
205 % (rev, nodemod.short(node)))
205 % (rev, nodemod.short(node)))
206 skipmerge = True
206 skipmerge = True
207 else:
207 else:
208 parent = source.lookup(opts['parent'])
208 parent = source.lookup(opts['parent'])
209 if parent not in parents:
209 if parent not in parents:
210 raise error.Abort(_('%s is not a parent of %s') %
210 raise error.Abort(_('%s is not a parent of %s') %
211 (nodemod.short(parent),
211 (nodemod.short(parent),
212 nodemod.short(node)))
212 nodemod.short(node)))
213 else:
213 else:
214 parent = parents[0]
214 parent = parents[0]
215
215
216 if skipmerge:
216 if skipmerge:
217 patchfile = None
217 patchfile = None
218 else:
218 else:
219 fd, patchfile = pycompat.mkstemp(prefix='hg-transplant-')
219 fd, patchfile = pycompat.mkstemp(prefix='hg-transplant-')
220 fp = os.fdopen(fd, r'wb')
220 fp = os.fdopen(fd, r'wb')
221 gen = patch.diff(source, parent, node, opts=diffopts)
221 gen = patch.diff(source, parent, node, opts=diffopts)
222 for chunk in gen:
222 for chunk in gen:
223 fp.write(chunk)
223 fp.write(chunk)
224 fp.close()
224 fp.close()
225
225
226 del revmap[rev]
226 del revmap[rev]
227 if patchfile or domerge:
227 if patchfile or domerge:
228 try:
228 try:
229 try:
229 try:
230 n = self.applyone(repo, node,
230 n = self.applyone(repo, node,
231 source.changelog.read(node),
231 source.changelog.read(node),
232 patchfile, merge=domerge,
232 patchfile, merge=domerge,
233 log=opts.get('log'),
233 log=opts.get('log'),
234 filter=opts.get('filter'))
234 filter=opts.get('filter'))
235 except TransplantError:
235 except TransplantError:
236 # Do not rollback, it is up to the user to
236 # Do not rollback, it is up to the user to
237 # fix the merge or cancel everything
237 # fix the merge or cancel everything
238 tr.close()
238 tr.close()
239 raise
239 raise
240 if n and domerge:
240 if n and domerge:
241 self.ui.status(_('%s merged at %s\n') % (revstr,
241 self.ui.status(_('%s merged at %s\n') % (revstr,
242 nodemod.short(n)))
242 nodemod.short(n)))
243 elif n:
243 elif n:
244 self.ui.status(_('%s transplanted to %s\n')
244 self.ui.status(_('%s transplanted to %s\n')
245 % (nodemod.short(node),
245 % (nodemod.short(node),
246 nodemod.short(n)))
246 nodemod.short(n)))
247 finally:
247 finally:
248 if patchfile:
248 if patchfile:
249 os.unlink(patchfile)
249 os.unlink(patchfile)
250 tr.close()
250 tr.close()
251 if pulls:
251 if pulls:
252 exchange.pull(repo, source.peer(), heads=pulls)
252 exchange.pull(repo, source.peer(), heads=pulls)
253 merge.update(repo, pulls[-1], branchmerge=False, force=False)
253 merge.update(repo, pulls[-1], branchmerge=False, force=False)
254 finally:
254 finally:
255 self.saveseries(revmap, merges)
255 self.saveseries(revmap, merges)
256 self.transplants.write()
256 self.transplants.write()
257 if tr:
257 if tr:
258 tr.release()
258 tr.release()
259 if lock:
259 if lock:
260 lock.release()
260 lock.release()
261
261
262 def filter(self, filter, node, changelog, patchfile):
262 def filter(self, filter, node, changelog, patchfile):
263 '''arbitrarily rewrite changeset before applying it'''
263 '''arbitrarily rewrite changeset before applying it'''
264
264
265 self.ui.status(_('filtering %s\n') % patchfile)
265 self.ui.status(_('filtering %s\n') % patchfile)
266 user, date, msg = (changelog[1], changelog[2], changelog[4])
266 user, date, msg = (changelog[1], changelog[2], changelog[4])
267 fd, headerfile = pycompat.mkstemp(prefix='hg-transplant-')
267 fd, headerfile = pycompat.mkstemp(prefix='hg-transplant-')
268 fp = os.fdopen(fd, r'wb')
268 fp = os.fdopen(fd, r'wb')
269 fp.write("# HG changeset patch\n")
269 fp.write("# HG changeset patch\n")
270 fp.write("# User %s\n" % user)
270 fp.write("# User %s\n" % user)
271 fp.write("# Date %d %d\n" % date)
271 fp.write("# Date %d %d\n" % date)
272 fp.write(msg + '\n')
272 fp.write(msg + '\n')
273 fp.close()
273 fp.close()
274
274
275 try:
275 try:
276 self.ui.system('%s %s %s' % (filter,
276 self.ui.system('%s %s %s' % (filter,
277 procutil.shellquote(headerfile),
277 procutil.shellquote(headerfile),
278 procutil.shellquote(patchfile)),
278 procutil.shellquote(patchfile)),
279 environ={'HGUSER': changelog[1],
279 environ={'HGUSER': changelog[1],
280 'HGREVISION': nodemod.hex(node),
280 'HGREVISION': nodemod.hex(node),
281 },
281 },
282 onerr=error.Abort, errprefix=_('filter failed'),
282 onerr=error.Abort, errprefix=_('filter failed'),
283 blockedtag='transplant_filter')
283 blockedtag='transplant_filter')
284 user, date, msg = self.parselog(open(headerfile, 'rb'))[1:4]
284 user, date, msg = self.parselog(open(headerfile, 'rb'))[1:4]
285 finally:
285 finally:
286 os.unlink(headerfile)
286 os.unlink(headerfile)
287
287
288 return (user, date, msg)
288 return (user, date, msg)
289
289
290 def applyone(self, repo, node, cl, patchfile, merge=False, log=False,
290 def applyone(self, repo, node, cl, patchfile, merge=False, log=False,
291 filter=None):
291 filter=None):
292 '''apply the patch in patchfile to the repository as a transplant'''
292 '''apply the patch in patchfile to the repository as a transplant'''
293 (manifest, user, (time, timezone), files, message) = cl[:5]
293 (manifest, user, (time, timezone), files, message) = cl[:5]
294 date = "%d %d" % (time, timezone)
294 date = "%d %d" % (time, timezone)
295 extra = {'transplant_source': node}
295 extra = {'transplant_source': node}
296 if filter:
296 if filter:
297 (user, date, message) = self.filter(filter, node, cl, patchfile)
297 (user, date, message) = self.filter(filter, node, cl, patchfile)
298
298
299 if log:
299 if log:
300 # we don't translate messages inserted into commits
300 # we don't translate messages inserted into commits
301 message += '\n(transplanted from %s)' % nodemod.hex(node)
301 message += '\n(transplanted from %s)' % nodemod.hex(node)
302
302
303 self.ui.status(_('applying %s\n') % nodemod.short(node))
303 self.ui.status(_('applying %s\n') % nodemod.short(node))
304 self.ui.note('%s %s\n%s\n' % (user, date, message))
304 self.ui.note('%s %s\n%s\n' % (user, date, message))
305
305
306 if not patchfile and not merge:
306 if not patchfile and not merge:
307 raise error.Abort(_('can only omit patchfile if merging'))
307 raise error.Abort(_('can only omit patchfile if merging'))
308 if patchfile:
308 if patchfile:
309 try:
309 try:
310 files = set()
310 files = set()
311 patch.patch(self.ui, repo, patchfile, files=files, eolmode=None)
311 patch.patch(self.ui, repo, patchfile, files=files, eolmode=None)
312 files = list(files)
312 files = list(files)
313 except Exception as inst:
313 except Exception as inst:
314 seriespath = os.path.join(self.path, 'series')
314 seriespath = os.path.join(self.path, 'series')
315 if os.path.exists(seriespath):
315 if os.path.exists(seriespath):
316 os.unlink(seriespath)
316 os.unlink(seriespath)
317 p1 = repo.dirstate.p1()
317 p1 = repo.dirstate.p1()
318 p2 = node
318 p2 = node
319 self.log(user, date, message, p1, p2, merge=merge)
319 self.log(user, date, message, p1, p2, merge=merge)
320 self.ui.write(stringutil.forcebytestr(inst) + '\n')
320 self.ui.write(stringutil.forcebytestr(inst) + '\n')
321 raise TransplantError(_('fix up the working directory and run '
321 raise TransplantError(_('fix up the working directory and run '
322 'hg transplant --continue'))
322 'hg transplant --continue'))
323 else:
323 else:
324 files = None
324 files = None
325 if merge:
325 if merge:
326 p1 = repo.dirstate.p1()
326 p1 = repo.dirstate.p1()
327 repo.setparents(p1, node)
327 repo.setparents(p1, node)
328 m = match.always(repo.root, '')
328 m = match.always()
329 else:
329 else:
330 m = match.exact(repo.root, '', files)
330 m = match.exact(files)
331
331
332 n = repo.commit(message, user, date, extra=extra, match=m,
332 n = repo.commit(message, user, date, extra=extra, match=m,
333 editor=self.getcommiteditor())
333 editor=self.getcommiteditor())
334 if not n:
334 if not n:
335 self.ui.warn(_('skipping emptied changeset %s\n') %
335 self.ui.warn(_('skipping emptied changeset %s\n') %
336 nodemod.short(node))
336 nodemod.short(node))
337 return None
337 return None
338 if not merge:
338 if not merge:
339 self.transplants.set(n, node)
339 self.transplants.set(n, node)
340
340
341 return n
341 return n
342
342
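
The two matcher calls in applyone() above use the narrowed signatures match.always() and match.exact(files): the former accepts every path (the merge case), the latter restricts the commit to exactly the files touched by the patch. A rough sketch of the difference, assuming the mercurial package is importable:

    from mercurial import match as matchmod

    m_all = matchmod.always()                 # matches any path
    m_exact = matchmod.exact([b'a.txt'])      # matches only the listed files

    assert m_all(b'any/path/at/all')
    assert m_exact(b'a.txt') and not m_exact(b'b.txt')
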
343 def canresume(self):
343 def canresume(self):
344 return os.path.exists(os.path.join(self.path, 'journal'))
344 return os.path.exists(os.path.join(self.path, 'journal'))
345
345
346 def resume(self, repo, source, opts):
346 def resume(self, repo, source, opts):
347 '''recover last transaction and apply remaining changesets'''
347 '''recover last transaction and apply remaining changesets'''
348 if os.path.exists(os.path.join(self.path, 'journal')):
348 if os.path.exists(os.path.join(self.path, 'journal')):
349 n, node = self.recover(repo, source, opts)
349 n, node = self.recover(repo, source, opts)
350 if n:
350 if n:
351 self.ui.status(_('%s transplanted as %s\n') %
351 self.ui.status(_('%s transplanted as %s\n') %
352 (nodemod.short(node),
352 (nodemod.short(node),
353 nodemod.short(n)))
353 nodemod.short(n)))
354 else:
354 else:
355 self.ui.status(_('%s skipped due to empty diff\n')
355 self.ui.status(_('%s skipped due to empty diff\n')
356 % (nodemod.short(node),))
356 % (nodemod.short(node),))
357 seriespath = os.path.join(self.path, 'series')
357 seriespath = os.path.join(self.path, 'series')
358 if not os.path.exists(seriespath):
358 if not os.path.exists(seriespath):
359 self.transplants.write()
359 self.transplants.write()
360 return
360 return
361 nodes, merges = self.readseries()
361 nodes, merges = self.readseries()
362 revmap = {}
362 revmap = {}
363 for n in nodes:
363 for n in nodes:
364 revmap[source.changelog.rev(n)] = n
364 revmap[source.changelog.rev(n)] = n
365 os.unlink(seriespath)
365 os.unlink(seriespath)
366
366
367 self.apply(repo, source, revmap, merges, opts)
367 self.apply(repo, source, revmap, merges, opts)
368
368
369 def recover(self, repo, source, opts):
369 def recover(self, repo, source, opts):
370 '''commit working directory using journal metadata'''
370 '''commit working directory using journal metadata'''
371 node, user, date, message, parents = self.readlog()
371 node, user, date, message, parents = self.readlog()
372 merge = False
372 merge = False
373
373
374 if not user or not date or not message or not parents[0]:
374 if not user or not date or not message or not parents[0]:
375 raise error.Abort(_('transplant log file is corrupt'))
375 raise error.Abort(_('transplant log file is corrupt'))
376
376
377 parent = parents[0]
377 parent = parents[0]
378 if len(parents) > 1:
378 if len(parents) > 1:
379 if opts.get('parent'):
379 if opts.get('parent'):
380 parent = source.lookup(opts['parent'])
380 parent = source.lookup(opts['parent'])
381 if parent not in parents:
381 if parent not in parents:
382 raise error.Abort(_('%s is not a parent of %s') %
382 raise error.Abort(_('%s is not a parent of %s') %
383 (nodemod.short(parent),
383 (nodemod.short(parent),
384 nodemod.short(node)))
384 nodemod.short(node)))
385 else:
385 else:
386 merge = True
386 merge = True
387
387
388 extra = {'transplant_source': node}
388 extra = {'transplant_source': node}
389 try:
389 try:
390 p1 = repo.dirstate.p1()
390 p1 = repo.dirstate.p1()
391 if p1 != parent:
391 if p1 != parent:
392 raise error.Abort(_('working directory not at transplant '
392 raise error.Abort(_('working directory not at transplant '
393 'parent %s') % nodemod.hex(parent))
393 'parent %s') % nodemod.hex(parent))
394 if merge:
394 if merge:
395 repo.setparents(p1, parents[1])
395 repo.setparents(p1, parents[1])
396 modified, added, removed, deleted = repo.status()[:4]
396 modified, added, removed, deleted = repo.status()[:4]
397 if merge or modified or added or removed or deleted:
397 if merge or modified or added or removed or deleted:
398 n = repo.commit(message, user, date, extra=extra,
398 n = repo.commit(message, user, date, extra=extra,
399 editor=self.getcommiteditor())
399 editor=self.getcommiteditor())
400 if not n:
400 if not n:
401 raise error.Abort(_('commit failed'))
401 raise error.Abort(_('commit failed'))
402 if not merge:
402 if not merge:
403 self.transplants.set(n, node)
403 self.transplants.set(n, node)
404 else:
404 else:
405 n = None
405 n = None
406 self.unlog()
406 self.unlog()
407
407
408 return n, node
408 return n, node
409 finally:
409 finally:
410 # TODO: get rid of this meaningless try/finally enclosing.
410 # TODO: get rid of this meaningless try/finally enclosing.
411 # this is kept only to reduce changes in a patch.
411 # this is kept only to reduce changes in a patch.
412 pass
412 pass
413
413
414 def readseries(self):
414 def readseries(self):
415 nodes = []
415 nodes = []
416 merges = []
416 merges = []
417 cur = nodes
417 cur = nodes
418 for line in self.opener.read('series').splitlines():
418 for line in self.opener.read('series').splitlines():
419 if line.startswith('# Merges'):
419 if line.startswith('# Merges'):
420 cur = merges
420 cur = merges
421 continue
421 continue
422 cur.append(revlog.bin(line))
422 cur.append(revlog.bin(line))
423
423
424 return (nodes, merges)
424 return (nodes, merges)
425
425
426 def saveseries(self, revmap, merges):
426 def saveseries(self, revmap, merges):
427 if not revmap:
427 if not revmap:
428 return
428 return
429
429
430 if not os.path.isdir(self.path):
430 if not os.path.isdir(self.path):
431 os.mkdir(self.path)
431 os.mkdir(self.path)
432 series = self.opener('series', 'w')
432 series = self.opener('series', 'w')
433 for rev in sorted(revmap):
433 for rev in sorted(revmap):
434 series.write(nodemod.hex(revmap[rev]) + '\n')
434 series.write(nodemod.hex(revmap[rev]) + '\n')
435 if merges:
435 if merges:
436 series.write('# Merges\n')
436 series.write('# Merges\n')
437 for m in merges:
437 for m in merges:
438 series.write(nodemod.hex(m) + '\n')
438 series.write(nodemod.hex(m) + '\n')
439 series.close()
439 series.close()
440
440
441 def parselog(self, fp):
441 def parselog(self, fp):
442 parents = []
442 parents = []
443 message = []
443 message = []
444 node = revlog.nullid
444 node = revlog.nullid
445 inmsg = False
445 inmsg = False
446 user = None
446 user = None
447 date = None
447 date = None
448 for line in fp.read().splitlines():
448 for line in fp.read().splitlines():
449 if inmsg:
449 if inmsg:
450 message.append(line)
450 message.append(line)
451 elif line.startswith('# User '):
451 elif line.startswith('# User '):
452 user = line[7:]
452 user = line[7:]
453 elif line.startswith('# Date '):
453 elif line.startswith('# Date '):
454 date = line[7:]
454 date = line[7:]
455 elif line.startswith('# Node ID '):
455 elif line.startswith('# Node ID '):
456 node = revlog.bin(line[10:])
456 node = revlog.bin(line[10:])
457 elif line.startswith('# Parent '):
457 elif line.startswith('# Parent '):
458 parents.append(revlog.bin(line[9:]))
458 parents.append(revlog.bin(line[9:]))
459 elif not line.startswith('# '):
459 elif not line.startswith('# '):
460 inmsg = True
460 inmsg = True
461 message.append(line)
461 message.append(line)
462 if None in (user, date):
462 if None in (user, date):
463 raise error.Abort(_("filter corrupted changeset (no user or date)"))
463 raise error.Abort(_("filter corrupted changeset (no user or date)"))
464 return (node, user, date, '\n'.join(message), parents)
464 return (node, user, date, '\n'.join(message), parents)
465
465
466 def log(self, user, date, message, p1, p2, merge=False):
466 def log(self, user, date, message, p1, p2, merge=False):
467 '''journal changelog metadata for later recover'''
467 '''journal changelog metadata for later recover'''
468
468
469 if not os.path.isdir(self.path):
469 if not os.path.isdir(self.path):
470 os.mkdir(self.path)
470 os.mkdir(self.path)
471 fp = self.opener('journal', 'w')
471 fp = self.opener('journal', 'w')
472 fp.write('# User %s\n' % user)
472 fp.write('# User %s\n' % user)
473 fp.write('# Date %s\n' % date)
473 fp.write('# Date %s\n' % date)
474 fp.write('# Node ID %s\n' % nodemod.hex(p2))
474 fp.write('# Node ID %s\n' % nodemod.hex(p2))
475 fp.write('# Parent ' + nodemod.hex(p1) + '\n')
475 fp.write('# Parent ' + nodemod.hex(p1) + '\n')
476 if merge:
476 if merge:
477 fp.write('# Parent ' + nodemod.hex(p2) + '\n')
477 fp.write('# Parent ' + nodemod.hex(p2) + '\n')
478 fp.write(message.rstrip() + '\n')
478 fp.write(message.rstrip() + '\n')
479 fp.close()
479 fp.close()
480
480
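
The journal written by log() above uses the same '# Key value' header lines that parselog() reads back. A hypothetical journal for a non-merge transplant, sketched as plain bytes:

    node_hex = b'1' * 40        # made-up source changeset id
    parent_hex = b'2' * 40      # made-up working-directory parent
    journal = (b'# User alice\n'
               b'# Date 0 0\n'
               b'# Node ID ' + node_hex + b'\n'
               b'# Parent ' + parent_hex + b'\n'
               b'transplant this change\n')
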
481 def readlog(self):
481 def readlog(self):
482 return self.parselog(self.opener('journal'))
482 return self.parselog(self.opener('journal'))
483
483
484 def unlog(self):
484 def unlog(self):
485 '''remove changelog journal'''
485 '''remove changelog journal'''
486 absdst = os.path.join(self.path, 'journal')
486 absdst = os.path.join(self.path, 'journal')
487 if os.path.exists(absdst):
487 if os.path.exists(absdst):
488 os.unlink(absdst)
488 os.unlink(absdst)
489
489
490 def transplantfilter(self, repo, source, root):
490 def transplantfilter(self, repo, source, root):
491 def matchfn(node):
491 def matchfn(node):
492 if self.applied(repo, node, root):
492 if self.applied(repo, node, root):
493 return False
493 return False
494 if source.changelog.parents(node)[1] != revlog.nullid:
494 if source.changelog.parents(node)[1] != revlog.nullid:
495 return False
495 return False
496 extra = source.changelog.read(node)[5]
496 extra = source.changelog.read(node)[5]
497 cnode = extra.get('transplant_source')
497 cnode = extra.get('transplant_source')
498 if cnode and self.applied(repo, cnode, root):
498 if cnode and self.applied(repo, cnode, root):
499 return False
499 return False
500 return True
500 return True
501
501
502 return matchfn
502 return matchfn
503
503
504 def hasnode(repo, node):
504 def hasnode(repo, node):
505 try:
505 try:
506 return repo.changelog.rev(node) is not None
506 return repo.changelog.rev(node) is not None
507 except error.StorageError:
507 except error.StorageError:
508 return False
508 return False
509
509
510 def browserevs(ui, repo, nodes, opts):
510 def browserevs(ui, repo, nodes, opts):
511 '''interactively transplant changesets'''
511 '''interactively transplant changesets'''
512 displayer = logcmdutil.changesetdisplayer(ui, repo, opts)
512 displayer = logcmdutil.changesetdisplayer(ui, repo, opts)
513 transplants = []
513 transplants = []
514 merges = []
514 merges = []
515 prompt = _('apply changeset? [ynmpcq?]:'
515 prompt = _('apply changeset? [ynmpcq?]:'
516 '$$ &yes, transplant this changeset'
516 '$$ &yes, transplant this changeset'
517 '$$ &no, skip this changeset'
517 '$$ &no, skip this changeset'
518 '$$ &merge at this changeset'
518 '$$ &merge at this changeset'
519 '$$ show &patch'
519 '$$ show &patch'
520 '$$ &commit selected changesets'
520 '$$ &commit selected changesets'
521 '$$ &quit and cancel transplant'
521 '$$ &quit and cancel transplant'
522 '$$ &? (show this help)')
522 '$$ &? (show this help)')
523 for node in nodes:
523 for node in nodes:
524 displayer.show(repo[node])
524 displayer.show(repo[node])
525 action = None
525 action = None
526 while not action:
526 while not action:
527 choice = ui.promptchoice(prompt)
527 choice = ui.promptchoice(prompt)
528 action = 'ynmpcq?'[choice:choice + 1]
528 action = 'ynmpcq?'[choice:choice + 1]
529 if action == '?':
529 if action == '?':
530 for c, t in ui.extractchoices(prompt)[1]:
530 for c, t in ui.extractchoices(prompt)[1]:
531 ui.write('%s: %s\n' % (c, t))
531 ui.write('%s: %s\n' % (c, t))
532 action = None
532 action = None
533 elif action == 'p':
533 elif action == 'p':
534 parent = repo.changelog.parents(node)[0]
534 parent = repo.changelog.parents(node)[0]
535 for chunk in patch.diff(repo, parent, node):
535 for chunk in patch.diff(repo, parent, node):
536 ui.write(chunk)
536 ui.write(chunk)
537 action = None
537 action = None
538 if action == 'y':
538 if action == 'y':
539 transplants.append(node)
539 transplants.append(node)
540 elif action == 'm':
540 elif action == 'm':
541 merges.append(node)
541 merges.append(node)
542 elif action == 'c':
542 elif action == 'c':
543 break
543 break
544 elif action == 'q':
544 elif action == 'q':
545 transplants = ()
545 transplants = ()
546 merges = ()
546 merges = ()
547 break
547 break
548 displayer.close()
548 displayer.close()
549 return (transplants, merges)
549 return (transplants, merges)
550
550
551 @command('transplant',
551 @command('transplant',
552 [('s', 'source', '', _('transplant changesets from REPO'), _('REPO')),
552 [('s', 'source', '', _('transplant changesets from REPO'), _('REPO')),
553 ('b', 'branch', [], _('use this source changeset as head'), _('REV')),
553 ('b', 'branch', [], _('use this source changeset as head'), _('REV')),
554 ('a', 'all', None, _('pull all changesets up to the --branch revisions')),
554 ('a', 'all', None, _('pull all changesets up to the --branch revisions')),
555 ('p', 'prune', [], _('skip over REV'), _('REV')),
555 ('p', 'prune', [], _('skip over REV'), _('REV')),
556 ('m', 'merge', [], _('merge at REV'), _('REV')),
556 ('m', 'merge', [], _('merge at REV'), _('REV')),
557 ('', 'parent', '',
557 ('', 'parent', '',
558 _('parent to choose when transplanting merge'), _('REV')),
558 _('parent to choose when transplanting merge'), _('REV')),
559 ('e', 'edit', False, _('invoke editor on commit messages')),
559 ('e', 'edit', False, _('invoke editor on commit messages')),
560 ('', 'log', None, _('append transplant info to log message')),
560 ('', 'log', None, _('append transplant info to log message')),
561 ('c', 'continue', None, _('continue last transplant session '
561 ('c', 'continue', None, _('continue last transplant session '
562 'after fixing conflicts')),
562 'after fixing conflicts')),
563 ('', 'filter', '',
563 ('', 'filter', '',
564 _('filter changesets through command'), _('CMD'))],
564 _('filter changesets through command'), _('CMD'))],
565 _('hg transplant [-s REPO] [-b BRANCH [-a]] [-p REV] '
565 _('hg transplant [-s REPO] [-b BRANCH [-a]] [-p REV] '
566 '[-m REV] [REV]...'),
566 '[-m REV] [REV]...'),
567 helpcategory=command.CATEGORY_CHANGE_MANAGEMENT)
567 helpcategory=command.CATEGORY_CHANGE_MANAGEMENT)
568 def transplant(ui, repo, *revs, **opts):
568 def transplant(ui, repo, *revs, **opts):
569 '''transplant changesets from another branch
569 '''transplant changesets from another branch
570
570
571 Selected changesets will be applied on top of the current working
571 Selected changesets will be applied on top of the current working
572 directory with the log of the original changeset. The changesets
572 directory with the log of the original changeset. The changesets
573 are copied and will thus appear twice in the history with different
573 are copied and will thus appear twice in the history with different
574 identities.
574 identities.
575
575
576 Consider using the graft command if everything is inside the same
576 Consider using the graft command if everything is inside the same
577 repository - it will use merges and will usually give a better result.
577 repository - it will use merges and will usually give a better result.
578 Use the rebase extension if the changesets are unpublished and you want
578 Use the rebase extension if the changesets are unpublished and you want
579 to move them instead of copying them.
579 to move them instead of copying them.
580
580
581 If --log is specified, log messages will have a comment appended
581 If --log is specified, log messages will have a comment appended
582 of the form::
582 of the form::
583
583
584 (transplanted from CHANGESETHASH)
584 (transplanted from CHANGESETHASH)
585
585
586 You can rewrite the changelog message with the --filter option.
586 You can rewrite the changelog message with the --filter option.
587 Its argument will be invoked with the current changelog message as
587 Its argument will be invoked with the current changelog message as
588 $1 and the patch as $2.
588 $1 and the patch as $2.
589
589
590 --source/-s specifies another repository to use for selecting changesets,
590 --source/-s specifies another repository to use for selecting changesets,
591 just as if it temporarily had been pulled.
591 just as if it temporarily had been pulled.
592 If --branch/-b is specified, these revisions will be used as
592 If --branch/-b is specified, these revisions will be used as
593 heads when deciding which changesets to transplant, just as if only
593 heads when deciding which changesets to transplant, just as if only
594 these revisions had been pulled.
594 these revisions had been pulled.
595 If --all/-a is specified, all the revisions up to the heads specified
595 If --all/-a is specified, all the revisions up to the heads specified
596 with --branch will be transplanted.
596 with --branch will be transplanted.
597
597
598 Example:
598 Example:
599
599
600 - transplant all changes up to REV on top of your current revision::
600 - transplant all changes up to REV on top of your current revision::
601
601
602 hg transplant --branch REV --all
602 hg transplant --branch REV --all
603
603
604 You can optionally mark selected transplanted changesets as merge
604 You can optionally mark selected transplanted changesets as merge
605 changesets. You will not be prompted to transplant any ancestors
605 changesets. You will not be prompted to transplant any ancestors
606 of a merged transplant, and you can merge descendants of them
606 of a merged transplant, and you can merge descendants of them
607 normally instead of transplanting them.
607 normally instead of transplanting them.
608
608
609 Merge changesets may be transplanted directly by specifying the
609 Merge changesets may be transplanted directly by specifying the
610 proper parent changeset by calling :hg:`transplant --parent`.
610 proper parent changeset by calling :hg:`transplant --parent`.
611
611
612 If no merges or revisions are provided, :hg:`transplant` will
612 If no merges or revisions are provided, :hg:`transplant` will
613 start an interactive changeset browser.
613 start an interactive changeset browser.
614
614
615 If a changeset application fails, you can fix the merge by hand
615 If a changeset application fails, you can fix the merge by hand
616 and then resume where you left off by calling :hg:`transplant
616 and then resume where you left off by calling :hg:`transplant
617 --continue/-c`.
617 --continue/-c`.
618 '''
618 '''
619 with repo.wlock():
619 with repo.wlock():
620 return _dotransplant(ui, repo, *revs, **opts)
620 return _dotransplant(ui, repo, *revs, **opts)
621
621
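
A --filter command (see the help text above and transplanter.filter()) is invoked with the changeset header file as its first argument and the patch file as its second, and may rewrite either in place before the changeset is applied. A hypothetical filter script in Python that tags every transplanted message:

    #!/usr/bin/env python
    # usage sketch: hg transplant --filter 'python rewrite.py' REV
    import sys

    headerfile = sys.argv[1]   # '# HG changeset patch' headers, then the message
    patchfile = sys.argv[2]    # the diff itself (left untouched here)

    with open(headerfile, 'rb') as f:
        header = f.read()
    with open(headerfile, 'wb') as f:
        f.write(header.rstrip(b'\n') + b'\n(rewritten by filter)\n')
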
622 def _dotransplant(ui, repo, *revs, **opts):
622 def _dotransplant(ui, repo, *revs, **opts):
623 def incwalk(repo, csets, match=util.always):
623 def incwalk(repo, csets, match=util.always):
624 for node in csets:
624 for node in csets:
625 if match(node):
625 if match(node):
626 yield node
626 yield node
627
627
628 def transplantwalk(repo, dest, heads, match=util.always):
628 def transplantwalk(repo, dest, heads, match=util.always):
629 '''Yield all nodes that are ancestors of a head but not ancestors
629 '''Yield all nodes that are ancestors of a head but not ancestors
630 of dest.
630 of dest.
631 If no heads are specified, the heads of repo will be used.'''
631 If no heads are specified, the heads of repo will be used.'''
632 if not heads:
632 if not heads:
633 heads = repo.heads()
633 heads = repo.heads()
634 ancestors = []
634 ancestors = []
635 ctx = repo[dest]
635 ctx = repo[dest]
636 for head in heads:
636 for head in heads:
637 ancestors.append(ctx.ancestor(repo[head]).node())
637 ancestors.append(ctx.ancestor(repo[head]).node())
638 for node in repo.changelog.nodesbetween(ancestors, heads)[0]:
638 for node in repo.changelog.nodesbetween(ancestors, heads)[0]:
639 if match(node):
639 if match(node):
640 yield node
640 yield node
641
641
642 def checkopts(opts, revs):
642 def checkopts(opts, revs):
643 if opts.get('continue'):
643 if opts.get('continue'):
644 if opts.get('branch') or opts.get('all') or opts.get('merge'):
644 if opts.get('branch') or opts.get('all') or opts.get('merge'):
645 raise error.Abort(_('--continue is incompatible with '
645 raise error.Abort(_('--continue is incompatible with '
646 '--branch, --all and --merge'))
646 '--branch, --all and --merge'))
647 return
647 return
648 if not (opts.get('source') or revs or
648 if not (opts.get('source') or revs or
649 opts.get('merge') or opts.get('branch')):
649 opts.get('merge') or opts.get('branch')):
650 raise error.Abort(_('no source URL, branch revision, or revision '
650 raise error.Abort(_('no source URL, branch revision, or revision '
651 'list provided'))
651 'list provided'))
652 if opts.get('all'):
652 if opts.get('all'):
653 if not opts.get('branch'):
653 if not opts.get('branch'):
654 raise error.Abort(_('--all requires a branch revision'))
654 raise error.Abort(_('--all requires a branch revision'))
655 if revs:
655 if revs:
656 raise error.Abort(_('--all is incompatible with a '
656 raise error.Abort(_('--all is incompatible with a '
657 'revision list'))
657 'revision list'))
658
658
659 opts = pycompat.byteskwargs(opts)
659 opts = pycompat.byteskwargs(opts)
660 checkopts(opts, revs)
660 checkopts(opts, revs)
661
661
662 if not opts.get('log'):
662 if not opts.get('log'):
663 # deprecated config: transplant.log
663 # deprecated config: transplant.log
664 opts['log'] = ui.config('transplant', 'log')
664 opts['log'] = ui.config('transplant', 'log')
665 if not opts.get('filter'):
665 if not opts.get('filter'):
666 # deprecated config: transplant.filter
666 # deprecated config: transplant.filter
667 opts['filter'] = ui.config('transplant', 'filter')
667 opts['filter'] = ui.config('transplant', 'filter')
668
668
669 tp = transplanter(ui, repo, opts)
669 tp = transplanter(ui, repo, opts)
670
670
671 p1 = repo.dirstate.p1()
671 p1 = repo.dirstate.p1()
672 if len(repo) > 0 and p1 == revlog.nullid:
672 if len(repo) > 0 and p1 == revlog.nullid:
673 raise error.Abort(_('no revision checked out'))
673 raise error.Abort(_('no revision checked out'))
674 if opts.get('continue'):
674 if opts.get('continue'):
675 if not tp.canresume():
675 if not tp.canresume():
676 raise error.Abort(_('no transplant to continue'))
676 raise error.Abort(_('no transplant to continue'))
677 else:
677 else:
678 cmdutil.checkunfinished(repo)
678 cmdutil.checkunfinished(repo)
679 cmdutil.bailifchanged(repo)
679 cmdutil.bailifchanged(repo)
680
680
681 sourcerepo = opts.get('source')
681 sourcerepo = opts.get('source')
682 if sourcerepo:
682 if sourcerepo:
683 peer = hg.peer(repo, opts, ui.expandpath(sourcerepo))
683 peer = hg.peer(repo, opts, ui.expandpath(sourcerepo))
684 heads = pycompat.maplist(peer.lookup, opts.get('branch', ()))
684 heads = pycompat.maplist(peer.lookup, opts.get('branch', ()))
685 target = set(heads)
685 target = set(heads)
686 for r in revs:
686 for r in revs:
687 try:
687 try:
688 target.add(peer.lookup(r))
688 target.add(peer.lookup(r))
689 except error.RepoError:
689 except error.RepoError:
690 pass
690 pass
691 source, csets, cleanupfn = bundlerepo.getremotechanges(ui, repo, peer,
691 source, csets, cleanupfn = bundlerepo.getremotechanges(ui, repo, peer,
692 onlyheads=sorted(target), force=True)
692 onlyheads=sorted(target), force=True)
693 else:
693 else:
694 source = repo
694 source = repo
695 heads = pycompat.maplist(source.lookup, opts.get('branch', ()))
695 heads = pycompat.maplist(source.lookup, opts.get('branch', ()))
696 cleanupfn = None
696 cleanupfn = None
697
697
698 try:
698 try:
699 if opts.get('continue'):
699 if opts.get('continue'):
700 tp.resume(repo, source, opts)
700 tp.resume(repo, source, opts)
701 return
701 return
702
702
703 tf = tp.transplantfilter(repo, source, p1)
703 tf = tp.transplantfilter(repo, source, p1)
704 if opts.get('prune'):
704 if opts.get('prune'):
705 prune = set(source[r].node()
705 prune = set(source[r].node()
706 for r in scmutil.revrange(source, opts.get('prune')))
706 for r in scmutil.revrange(source, opts.get('prune')))
707 matchfn = lambda x: tf(x) and x not in prune
707 matchfn = lambda x: tf(x) and x not in prune
708 else:
708 else:
709 matchfn = tf
709 matchfn = tf
710 merges = pycompat.maplist(source.lookup, opts.get('merge', ()))
710 merges = pycompat.maplist(source.lookup, opts.get('merge', ()))
711 revmap = {}
711 revmap = {}
712 if revs:
712 if revs:
713 for r in scmutil.revrange(source, revs):
713 for r in scmutil.revrange(source, revs):
714 revmap[int(r)] = source[r].node()
714 revmap[int(r)] = source[r].node()
715 elif opts.get('all') or not merges:
715 elif opts.get('all') or not merges:
716 if source != repo:
716 if source != repo:
717 alltransplants = incwalk(source, csets, match=matchfn)
717 alltransplants = incwalk(source, csets, match=matchfn)
718 else:
718 else:
719 alltransplants = transplantwalk(source, p1, heads,
719 alltransplants = transplantwalk(source, p1, heads,
720 match=matchfn)
720 match=matchfn)
721 if opts.get('all'):
721 if opts.get('all'):
722 revs = alltransplants
722 revs = alltransplants
723 else:
723 else:
724 revs, newmerges = browserevs(ui, source, alltransplants, opts)
724 revs, newmerges = browserevs(ui, source, alltransplants, opts)
725 merges.extend(newmerges)
725 merges.extend(newmerges)
726 for r in revs:
726 for r in revs:
727 revmap[source.changelog.rev(r)] = r
727 revmap[source.changelog.rev(r)] = r
728 for r in merges:
728 for r in merges:
729 revmap[source.changelog.rev(r)] = r
729 revmap[source.changelog.rev(r)] = r
730
730
731 tp.apply(repo, source, revmap, merges, opts)
731 tp.apply(repo, source, revmap, merges, opts)
732 finally:
732 finally:
733 if cleanupfn:
733 if cleanupfn:
734 cleanupfn()
734 cleanupfn()
735
735
736 revsetpredicate = registrar.revsetpredicate()
736 revsetpredicate = registrar.revsetpredicate()
737
737
738 @revsetpredicate('transplanted([set])')
738 @revsetpredicate('transplanted([set])')
739 def revsettransplanted(repo, subset, x):
739 def revsettransplanted(repo, subset, x):
740 """Transplanted changesets in set, or all transplanted changesets.
740 """Transplanted changesets in set, or all transplanted changesets.
741 """
741 """
742 if x:
742 if x:
743 s = revset.getset(repo, subset, x)
743 s = revset.getset(repo, subset, x)
744 else:
744 else:
745 s = subset
745 s = subset
746 return smartset.baseset([r for r in s if
746 return smartset.baseset([r for r in s if
747 repo[r].extra().get('transplant_source')])
747 repo[r].extra().get('transplant_source')])
748
748
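
The transplanted() predicate registered above can also be exercised programmatically; a hedged sketch, assuming `repo` is a localrepo object available inside an extension or hook:

    from binascii import hexlify

    # list transplanted revisions together with their source node
    for rev in repo.revs('transplanted()'):
        src = repo[rev].extra().get('transplant_source')
        print(rev, hexlify(src) if src else b'')
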
749 templatekeyword = registrar.templatekeyword()
749 templatekeyword = registrar.templatekeyword()
750
750
751 @templatekeyword('transplanted', requires={'ctx'})
751 @templatekeyword('transplanted', requires={'ctx'})
752 def kwtransplanted(context, mapping):
752 def kwtransplanted(context, mapping):
753 """String. The node identifier of the transplanted
753 """String. The node identifier of the transplanted
754 changeset if any."""
754 changeset if any."""
755 ctx = context.resource(mapping, 'ctx')
755 ctx = context.resource(mapping, 'ctx')
756 n = ctx.extra().get('transplant_source')
756 n = ctx.extra().get('transplant_source')
757 return n and nodemod.hex(n) or ''
757 return n and nodemod.hex(n) or ''
758
758
759 def extsetup(ui):
759 def extsetup(ui):
760 cmdutil.unfinishedstates.append(
760 cmdutil.unfinishedstates.append(
761 ['transplant/journal', True, False, _('transplant in progress'),
761 ['transplant/journal', True, False, _('transplant in progress'),
762 _("use 'hg transplant --continue' or 'hg update' to abort")])
762 _("use 'hg transplant --continue' or 'hg update' to abort")])
763
763
764 # tell hggettext to extract docstrings from these functions:
764 # tell hggettext to extract docstrings from these functions:
765 i18nfunctions = [revsettransplanted, kwtransplanted]
765 i18nfunctions = [revsettransplanted, kwtransplanted]
@@ -1,1418 +1,1418 b''
1 # changegroup.py - Mercurial changegroup manipulation functions
1 # changegroup.py - Mercurial changegroup manipulation functions
2 #
2 #
3 # Copyright 2006 Matt Mackall <mpm@selenic.com>
3 # Copyright 2006 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import os
10 import os
11 import struct
11 import struct
12 import weakref
12 import weakref
13
13
14 from .i18n import _
14 from .i18n import _
15 from .node import (
15 from .node import (
16 hex,
16 hex,
17 nullid,
17 nullid,
18 nullrev,
18 nullrev,
19 short,
19 short,
20 )
20 )
21
21
22 from . import (
22 from . import (
23 error,
23 error,
24 match as matchmod,
24 match as matchmod,
25 mdiff,
25 mdiff,
26 phases,
26 phases,
27 pycompat,
27 pycompat,
28 repository,
28 repository,
29 util,
29 util,
30 )
30 )
31
31
32 _CHANGEGROUPV1_DELTA_HEADER = struct.Struct("20s20s20s20s")
32 _CHANGEGROUPV1_DELTA_HEADER = struct.Struct("20s20s20s20s")
33 _CHANGEGROUPV2_DELTA_HEADER = struct.Struct("20s20s20s20s20s")
33 _CHANGEGROUPV2_DELTA_HEADER = struct.Struct("20s20s20s20s20s")
34 _CHANGEGROUPV3_DELTA_HEADER = struct.Struct(">20s20s20s20s20sH")
34 _CHANGEGROUPV3_DELTA_HEADER = struct.Struct(">20s20s20s20s20sH")
35
35
36 LFS_REQUIREMENT = 'lfs'
36 LFS_REQUIREMENT = 'lfs'
37
37
38 readexactly = util.readexactly
38 readexactly = util.readexactly
39
39
40 def getchunk(stream):
40 def getchunk(stream):
41 """return the next chunk from stream as a string"""
41 """return the next chunk from stream as a string"""
42 d = readexactly(stream, 4)
42 d = readexactly(stream, 4)
43 l = struct.unpack(">l", d)[0]
43 l = struct.unpack(">l", d)[0]
44 if l <= 4:
44 if l <= 4:
45 if l:
45 if l:
46 raise error.Abort(_("invalid chunk length %d") % l)
46 raise error.Abort(_("invalid chunk length %d") % l)
47 return ""
47 return ""
48 return readexactly(stream, l - 4)
48 return readexactly(stream, l - 4)
49
49
50 def chunkheader(length):
50 def chunkheader(length):
51 """return a changegroup chunk header (string)"""
51 """return a changegroup chunk header (string)"""
52 return struct.pack(">l", length + 4)
52 return struct.pack(">l", length + 4)
53
53
54 def closechunk():
54 def closechunk():
55 """return a changegroup chunk header (string) for a zero-length chunk"""
55 """return a changegroup chunk header (string) for a zero-length chunk"""
56 return struct.pack(">l", 0)
56 return struct.pack(">l", 0)
57
57
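
getchunk(), chunkheader() and closechunk() above define the framing used throughout this file: every chunk carries a 4-byte big-endian length that includes the prefix itself, and a length of zero (or anything <= 4) marks an empty, terminating chunk. A small self-contained sketch of writing and reading one framed chunk:

    import struct
    from io import BytesIO

    def frame(data):
        # same layout as chunkheader(): the length counts the 4-byte prefix too
        return struct.pack(">l", len(data) + 4) + data

    stream = BytesIO(frame(b"payload") + struct.pack(">l", 0))

    l = struct.unpack(">l", stream.read(4))[0]
    payload = stream.read(l - 4)                          # b"payload"
    end = struct.unpack(">l", stream.read(4))[0]          # 0 -> terminator
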
58 def _fileheader(path):
58 def _fileheader(path):
59 """Obtain a changegroup chunk header for a named path."""
59 """Obtain a changegroup chunk header for a named path."""
60 return chunkheader(len(path)) + path
60 return chunkheader(len(path)) + path
61
61
62 def writechunks(ui, chunks, filename, vfs=None):
62 def writechunks(ui, chunks, filename, vfs=None):
63 """Write chunks to a file and return its filename.
63 """Write chunks to a file and return its filename.
64
64
65 The stream is assumed to be a bundle file.
65 The stream is assumed to be a bundle file.
66 Existing files will not be overwritten.
66 Existing files will not be overwritten.
67 If no filename is specified, a temporary file is created.
67 If no filename is specified, a temporary file is created.
68 """
68 """
69 fh = None
69 fh = None
70 cleanup = None
70 cleanup = None
71 try:
71 try:
72 if filename:
72 if filename:
73 if vfs:
73 if vfs:
74 fh = vfs.open(filename, "wb")
74 fh = vfs.open(filename, "wb")
75 else:
75 else:
76 # Increase default buffer size because default is usually
76 # Increase default buffer size because default is usually
77 # small (4k is common on Linux).
77 # small (4k is common on Linux).
78 fh = open(filename, "wb", 131072)
78 fh = open(filename, "wb", 131072)
79 else:
79 else:
80 fd, filename = pycompat.mkstemp(prefix="hg-bundle-", suffix=".hg")
80 fd, filename = pycompat.mkstemp(prefix="hg-bundle-", suffix=".hg")
81 fh = os.fdopen(fd, r"wb")
81 fh = os.fdopen(fd, r"wb")
82 cleanup = filename
82 cleanup = filename
83 for c in chunks:
83 for c in chunks:
84 fh.write(c)
84 fh.write(c)
85 cleanup = None
85 cleanup = None
86 return filename
86 return filename
87 finally:
87 finally:
88 if fh is not None:
88 if fh is not None:
89 fh.close()
89 fh.close()
90 if cleanup is not None:
90 if cleanup is not None:
91 if filename and vfs:
91 if filename and vfs:
92 vfs.unlink(cleanup)
92 vfs.unlink(cleanup)
93 else:
93 else:
94 os.unlink(cleanup)
94 os.unlink(cleanup)
95
95
96 class cg1unpacker(object):
96 class cg1unpacker(object):
97 """Unpacker for cg1 changegroup streams.
97 """Unpacker for cg1 changegroup streams.
98
98
99 A changegroup unpacker handles the framing of the revision data in
99 A changegroup unpacker handles the framing of the revision data in
100 the wire format. Most consumers will want to use the apply()
100 the wire format. Most consumers will want to use the apply()
101 method to add the changes from the changegroup to a repository.
101 method to add the changes from the changegroup to a repository.
102
102
103 If you're forwarding a changegroup unmodified to another consumer,
103 If you're forwarding a changegroup unmodified to another consumer,
104 use getchunks(), which returns an iterator of changegroup
104 use getchunks(), which returns an iterator of changegroup
105 chunks. This is mostly useful for cases where you need to know the
105 chunks. This is mostly useful for cases where you need to know the
106 data stream has ended by observing the end of the changegroup.
106 data stream has ended by observing the end of the changegroup.
107
107
108 deltachunk() is useful only if you're applying delta data. Most
108 deltachunk() is useful only if you're applying delta data. Most
109 consumers should prefer apply() instead.
109 consumers should prefer apply() instead.
110
110
111 A few other public methods exist. Those are used only for
111 A few other public methods exist. Those are used only for
112 bundlerepo and some debug commands - their use is discouraged.
112 bundlerepo and some debug commands - their use is discouraged.
113 """
113 """
114 deltaheader = _CHANGEGROUPV1_DELTA_HEADER
114 deltaheader = _CHANGEGROUPV1_DELTA_HEADER
115 deltaheadersize = deltaheader.size
115 deltaheadersize = deltaheader.size
116 version = '01'
116 version = '01'
117 _grouplistcount = 1 # One list of files after the manifests
117 _grouplistcount = 1 # One list of files after the manifests
118
118
119 def __init__(self, fh, alg, extras=None):
119 def __init__(self, fh, alg, extras=None):
120 if alg is None:
120 if alg is None:
121 alg = 'UN'
121 alg = 'UN'
122 if alg not in util.compengines.supportedbundletypes:
122 if alg not in util.compengines.supportedbundletypes:
123 raise error.Abort(_('unknown stream compression type: %s')
123 raise error.Abort(_('unknown stream compression type: %s')
124 % alg)
124 % alg)
125 if alg == 'BZ':
125 if alg == 'BZ':
126 alg = '_truncatedBZ'
126 alg = '_truncatedBZ'
127
127
128 compengine = util.compengines.forbundletype(alg)
128 compengine = util.compengines.forbundletype(alg)
129 self._stream = compengine.decompressorreader(fh)
129 self._stream = compengine.decompressorreader(fh)
130 self._type = alg
130 self._type = alg
131 self.extras = extras or {}
131 self.extras = extras or {}
132 self.callback = None
132 self.callback = None
133
133
134 # These methods (compressed, read, seek, tell) all appear to only
134 # These methods (compressed, read, seek, tell) all appear to only
135 # be used by bundlerepo, but it's a little hard to tell.
135 # be used by bundlerepo, but it's a little hard to tell.
136 def compressed(self):
136 def compressed(self):
137 return self._type is not None and self._type != 'UN'
137 return self._type is not None and self._type != 'UN'
138 def read(self, l):
138 def read(self, l):
139 return self._stream.read(l)
139 return self._stream.read(l)
140 def seek(self, pos):
140 def seek(self, pos):
141 return self._stream.seek(pos)
141 return self._stream.seek(pos)
142 def tell(self):
142 def tell(self):
143 return self._stream.tell()
143 return self._stream.tell()
144 def close(self):
144 def close(self):
145 return self._stream.close()
145 return self._stream.close()
146
146
147 def _chunklength(self):
147 def _chunklength(self):
148 d = readexactly(self._stream, 4)
148 d = readexactly(self._stream, 4)
149 l = struct.unpack(">l", d)[0]
149 l = struct.unpack(">l", d)[0]
150 if l <= 4:
150 if l <= 4:
151 if l:
151 if l:
152 raise error.Abort(_("invalid chunk length %d") % l)
152 raise error.Abort(_("invalid chunk length %d") % l)
153 return 0
153 return 0
154 if self.callback:
154 if self.callback:
155 self.callback()
155 self.callback()
156 return l - 4
156 return l - 4
157
157
158 def changelogheader(self):
158 def changelogheader(self):
159 """v10 does not have a changelog header chunk"""
159 """v10 does not have a changelog header chunk"""
160 return {}
160 return {}
161
161
162 def manifestheader(self):
162 def manifestheader(self):
163 """v10 does not have a manifest header chunk"""
163 """v10 does not have a manifest header chunk"""
164 return {}
164 return {}
165
165
166 def filelogheader(self):
166 def filelogheader(self):
167 """return the header of the filelogs chunk, v10 only has the filename"""
167 """return the header of the filelogs chunk, v10 only has the filename"""
168 l = self._chunklength()
168 l = self._chunklength()
169 if not l:
169 if not l:
170 return {}
170 return {}
171 fname = readexactly(self._stream, l)
171 fname = readexactly(self._stream, l)
172 return {'filename': fname}
172 return {'filename': fname}
173
173
174 def _deltaheader(self, headertuple, prevnode):
174 def _deltaheader(self, headertuple, prevnode):
175 node, p1, p2, cs = headertuple
175 node, p1, p2, cs = headertuple
176 if prevnode is None:
176 if prevnode is None:
177 deltabase = p1
177 deltabase = p1
178 else:
178 else:
179 deltabase = prevnode
179 deltabase = prevnode
180 flags = 0
180 flags = 0
181 return node, p1, p2, deltabase, cs, flags
181 return node, p1, p2, deltabase, cs, flags
182
182
183 def deltachunk(self, prevnode):
183 def deltachunk(self, prevnode):
184 l = self._chunklength()
184 l = self._chunklength()
185 if not l:
185 if not l:
186 return {}
186 return {}
187 headerdata = readexactly(self._stream, self.deltaheadersize)
187 headerdata = readexactly(self._stream, self.deltaheadersize)
188 header = self.deltaheader.unpack(headerdata)
188 header = self.deltaheader.unpack(headerdata)
189 delta = readexactly(self._stream, l - self.deltaheadersize)
189 delta = readexactly(self._stream, l - self.deltaheadersize)
190 node, p1, p2, deltabase, cs, flags = self._deltaheader(header, prevnode)
190 node, p1, p2, deltabase, cs, flags = self._deltaheader(header, prevnode)
191 return (node, p1, p2, cs, deltabase, delta, flags)
191 return (node, p1, p2, cs, deltabase, delta, flags)
192
192
193 def getchunks(self):
193 def getchunks(self):
194 """returns all the chunks contains in the bundle
194 """returns all the chunks contains in the bundle
195
195
196 Used when you need to forward the binary stream to a file or another
196 Used when you need to forward the binary stream to a file or another
197 network API. To do so, it parse the changegroup data, otherwise it will
197 network API. To do so, it parse the changegroup data, otherwise it will
198 block in case of sshrepo because it don't know the end of the stream.
198 block in case of sshrepo because it don't know the end of the stream.
199 """
199 """
200 # For changegroup 1 and 2, we expect 3 parts: changelog, manifestlog,
200 # For changegroup 1 and 2, we expect 3 parts: changelog, manifestlog,
201 # and a list of filelogs. For changegroup 3, we expect 4 parts:
201 # and a list of filelogs. For changegroup 3, we expect 4 parts:
202 # changelog, manifestlog, a list of tree manifestlogs, and a list of
202 # changelog, manifestlog, a list of tree manifestlogs, and a list of
203 # filelogs.
203 # filelogs.
204 #
204 #
205 # Changelog and manifestlog parts are terminated with empty chunks. The
205 # Changelog and manifestlog parts are terminated with empty chunks. The
206 # tree and file parts are a list of entry sections. Each entry section
206 # tree and file parts are a list of entry sections. Each entry section
207 # is a series of chunks terminating in an empty chunk. The list of these
207 # is a series of chunks terminating in an empty chunk. The list of these
208 # entry sections is terminated in yet another empty chunk, so we know
208 # entry sections is terminated in yet another empty chunk, so we know
209 # we've reached the end of the tree/file list when we reach an empty
209 # we've reached the end of the tree/file list when we reach an empty
210 # chunk that was preceded by no non-empty chunks.
210 # chunk that was preceded by no non-empty chunks.
211
211
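# Illustrative layout (for exposition only) of the structure described in the
# comment above, for a version '01'/'02' stream (_grouplistcount == 1):
#
#   <chunk> ... <chunk> <empty>              changelog
#   <chunk> ... <chunk> <empty>              manifests
#   <filename chunk> <chunk> ... <empty>     one section per changed file
#   <filename chunk> <chunk> ... <empty>
#   <empty>                                  end of the file list
#
# where <empty> is the zero-length chunk emitted by closechunk(). Version '03'
# adds one more such list (tree manifests) before the file list.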
212 parts = 0
212 parts = 0
213 while parts < 2 + self._grouplistcount:
213 while parts < 2 + self._grouplistcount:
214 noentries = True
214 noentries = True
215 while True:
215 while True:
216 chunk = getchunk(self)
216 chunk = getchunk(self)
217 if not chunk:
217 if not chunk:
218 # The first two empty chunks represent the end of the
218 # The first two empty chunks represent the end of the
219 # changelog and the manifestlog portions. The remaining
219 # changelog and the manifestlog portions. The remaining
220 # empty chunks represent either A) the end of individual
220 # empty chunks represent either A) the end of individual
221 # tree or file entries in the file list, or B) the end of
221 # tree or file entries in the file list, or B) the end of
222 # the entire list. It's the end of the entire list if there
222 # the entire list. It's the end of the entire list if there
223 # were no entries (i.e. noentries is True).
223 # were no entries (i.e. noentries is True).
224 if parts < 2:
224 if parts < 2:
225 parts += 1
225 parts += 1
226 elif noentries:
226 elif noentries:
227 parts += 1
227 parts += 1
228 break
228 break
229 noentries = False
229 noentries = False
230 yield chunkheader(len(chunk))
230 yield chunkheader(len(chunk))
231 pos = 0
231 pos = 0
232 while pos < len(chunk):
232 while pos < len(chunk):
233 next = pos + 2**20
233 next = pos + 2**20
234 yield chunk[pos:next]
234 yield chunk[pos:next]
235 pos = next
235 pos = next
236 yield closechunk()
236 yield closechunk()
237
237
238 def _unpackmanifests(self, repo, revmap, trp, prog):
238 def _unpackmanifests(self, repo, revmap, trp, prog):
239 self.callback = prog.increment
239 self.callback = prog.increment
240 # no need to check for empty manifest group here:
240 # no need to check for empty manifest group here:
241 # if the result of the merge of 1 and 2 is the same in 3 and 4,
241 # if the result of the merge of 1 and 2 is the same in 3 and 4,
242 # no new manifest will be created and the manifest group will
242 # no new manifest will be created and the manifest group will
243 # be empty during the pull
243 # be empty during the pull
244 self.manifestheader()
244 self.manifestheader()
245 deltas = self.deltaiter()
245 deltas = self.deltaiter()
246 repo.manifestlog.getstorage(b'').addgroup(deltas, revmap, trp)
246 repo.manifestlog.getstorage(b'').addgroup(deltas, revmap, trp)
247 prog.complete()
247 prog.complete()
248 self.callback = None
248 self.callback = None
249
249
250 def apply(self, repo, tr, srctype, url, targetphase=phases.draft,
250 def apply(self, repo, tr, srctype, url, targetphase=phases.draft,
251 expectedtotal=None):
251 expectedtotal=None):
252 """Add the changegroup returned by source.read() to this repo.
252 """Add the changegroup returned by source.read() to this repo.
253 srctype is a string like 'push', 'pull', or 'unbundle'. url is
253 srctype is a string like 'push', 'pull', or 'unbundle'. url is
254 the URL of the repo where this changegroup is coming from.
254 the URL of the repo where this changegroup is coming from.
255
255
256 Return an integer summarizing the change to this repo:
256 Return an integer summarizing the change to this repo:
257 - nothing changed or no source: 0
257 - nothing changed or no source: 0
258 - more heads than before: 1+added heads (2..n)
258 - more heads than before: 1+added heads (2..n)
259 - fewer heads than before: -1-removed heads (-2..-n)
259 - fewer heads than before: -1-removed heads (-2..-n)
260 - number of heads stays the same: 1
260 - number of heads stays the same: 1
261 """
261 """
262 repo = repo.unfiltered()
262 repo = repo.unfiltered()
263 def csmap(x):
263 def csmap(x):
264 repo.ui.debug("add changeset %s\n" % short(x))
264 repo.ui.debug("add changeset %s\n" % short(x))
265 return len(cl)
265 return len(cl)
266
266
267 def revmap(x):
267 def revmap(x):
268 return cl.rev(x)
268 return cl.rev(x)
269
269
270 changesets = files = revisions = 0
270 changesets = files = revisions = 0
271
271
272 try:
272 try:
273 # The transaction may already carry source information. In this
273 # The transaction may already carry source information. In this
274 # case we use the top level data. We overwrite the argument
274 # case we use the top level data. We overwrite the argument
275 # because we need to use the top level value (if they exist)
275 # because we need to use the top level value (if they exist)
276 # in this function.
276 # in this function.
277 srctype = tr.hookargs.setdefault('source', srctype)
277 srctype = tr.hookargs.setdefault('source', srctype)
278 tr.hookargs.setdefault('url', url)
278 tr.hookargs.setdefault('url', url)
279 repo.hook('prechangegroup',
279 repo.hook('prechangegroup',
280 throw=True, **pycompat.strkwargs(tr.hookargs))
280 throw=True, **pycompat.strkwargs(tr.hookargs))
281
281
282 # write changelog data to temp files so concurrent readers
282 # write changelog data to temp files so concurrent readers
283 # will not see an inconsistent view
283 # will not see an inconsistent view
284 cl = repo.changelog
284 cl = repo.changelog
285 cl.delayupdate(tr)
285 cl.delayupdate(tr)
286 oldheads = set(cl.heads())
286 oldheads = set(cl.heads())
287
287
288 trp = weakref.proxy(tr)
288 trp = weakref.proxy(tr)
289 # pull off the changeset group
289 # pull off the changeset group
290 repo.ui.status(_("adding changesets\n"))
290 repo.ui.status(_("adding changesets\n"))
291 clstart = len(cl)
291 clstart = len(cl)
292 progress = repo.ui.makeprogress(_('changesets'), unit=_('chunks'),
292 progress = repo.ui.makeprogress(_('changesets'), unit=_('chunks'),
293 total=expectedtotal)
293 total=expectedtotal)
294 self.callback = progress.increment
294 self.callback = progress.increment
295
295
296 efiles = set()
296 efiles = set()
297 def onchangelog(cl, node):
297 def onchangelog(cl, node):
298 efiles.update(cl.readfiles(node))
298 efiles.update(cl.readfiles(node))
299
299
300 self.changelogheader()
300 self.changelogheader()
301 deltas = self.deltaiter()
301 deltas = self.deltaiter()
302 cgnodes = cl.addgroup(deltas, csmap, trp, addrevisioncb=onchangelog)
302 cgnodes = cl.addgroup(deltas, csmap, trp, addrevisioncb=onchangelog)
303 efiles = len(efiles)
303 efiles = len(efiles)
304
304
305 if not cgnodes:
305 if not cgnodes:
306 repo.ui.develwarn('applied empty changelog from changegroup',
306 repo.ui.develwarn('applied empty changelog from changegroup',
307 config='warn-empty-changegroup')
307 config='warn-empty-changegroup')
308 clend = len(cl)
308 clend = len(cl)
309 changesets = clend - clstart
309 changesets = clend - clstart
310 progress.complete()
310 progress.complete()
311 self.callback = None
311 self.callback = None
312
312
313 # pull off the manifest group
313 # pull off the manifest group
314 repo.ui.status(_("adding manifests\n"))
314 repo.ui.status(_("adding manifests\n"))
315 # We know that we'll never have more manifests than we had
315 # We know that we'll never have more manifests than we had
316 # changesets.
316 # changesets.
317 progress = repo.ui.makeprogress(_('manifests'), unit=_('chunks'),
317 progress = repo.ui.makeprogress(_('manifests'), unit=_('chunks'),
318 total=changesets)
318 total=changesets)
319 self._unpackmanifests(repo, revmap, trp, progress)
319 self._unpackmanifests(repo, revmap, trp, progress)
320
320
321 needfiles = {}
321 needfiles = {}
322 if repo.ui.configbool('server', 'validate'):
322 if repo.ui.configbool('server', 'validate'):
323 cl = repo.changelog
323 cl = repo.changelog
324 ml = repo.manifestlog
324 ml = repo.manifestlog
325 # validate incoming csets have their manifests
325 # validate incoming csets have their manifests
326 for cset in pycompat.xrange(clstart, clend):
326 for cset in pycompat.xrange(clstart, clend):
327 mfnode = cl.changelogrevision(cset).manifest
327 mfnode = cl.changelogrevision(cset).manifest
328 mfest = ml[mfnode].readdelta()
328 mfest = ml[mfnode].readdelta()
329 # store file cgnodes we must see
329 # store file cgnodes we must see
330 for f, n in mfest.iteritems():
330 for f, n in mfest.iteritems():
331 needfiles.setdefault(f, set()).add(n)
331 needfiles.setdefault(f, set()).add(n)
332
332
333 # process the files
333 # process the files
334 repo.ui.status(_("adding file changes\n"))
334 repo.ui.status(_("adding file changes\n"))
335 newrevs, newfiles = _addchangegroupfiles(
335 newrevs, newfiles = _addchangegroupfiles(
336 repo, self, revmap, trp, efiles, needfiles)
336 repo, self, revmap, trp, efiles, needfiles)
337 revisions += newrevs
337 revisions += newrevs
338 files += newfiles
338 files += newfiles
339
339
340 deltaheads = 0
340 deltaheads = 0
341 if oldheads:
341 if oldheads:
342 heads = cl.heads()
342 heads = cl.heads()
343 deltaheads = len(heads) - len(oldheads)
343 deltaheads = len(heads) - len(oldheads)
344 for h in heads:
344 for h in heads:
345 if h not in oldheads and repo[h].closesbranch():
345 if h not in oldheads and repo[h].closesbranch():
346 deltaheads -= 1
346 deltaheads -= 1
347 htext = ""
347 htext = ""
348 if deltaheads:
348 if deltaheads:
349 htext = _(" (%+d heads)") % deltaheads
349 htext = _(" (%+d heads)") % deltaheads
350
350
351 repo.ui.status(_("added %d changesets"
351 repo.ui.status(_("added %d changesets"
352 " with %d changes to %d files%s\n")
352 " with %d changes to %d files%s\n")
353 % (changesets, revisions, files, htext))
353 % (changesets, revisions, files, htext))
354 repo.invalidatevolatilesets()
354 repo.invalidatevolatilesets()
355
355
356 if changesets > 0:
356 if changesets > 0:
357 if 'node' not in tr.hookargs:
357 if 'node' not in tr.hookargs:
358 tr.hookargs['node'] = hex(cl.node(clstart))
358 tr.hookargs['node'] = hex(cl.node(clstart))
359 tr.hookargs['node_last'] = hex(cl.node(clend - 1))
359 tr.hookargs['node_last'] = hex(cl.node(clend - 1))
360 hookargs = dict(tr.hookargs)
360 hookargs = dict(tr.hookargs)
361 else:
361 else:
362 hookargs = dict(tr.hookargs)
362 hookargs = dict(tr.hookargs)
363 hookargs['node'] = hex(cl.node(clstart))
363 hookargs['node'] = hex(cl.node(clstart))
364 hookargs['node_last'] = hex(cl.node(clend - 1))
364 hookargs['node_last'] = hex(cl.node(clend - 1))
365 repo.hook('pretxnchangegroup',
365 repo.hook('pretxnchangegroup',
366 throw=True, **pycompat.strkwargs(hookargs))
366 throw=True, **pycompat.strkwargs(hookargs))
367
367
368 added = [cl.node(r) for r in pycompat.xrange(clstart, clend)]
368 added = [cl.node(r) for r in pycompat.xrange(clstart, clend)]
369 phaseall = None
369 phaseall = None
370 if srctype in ('push', 'serve'):
370 if srctype in ('push', 'serve'):
371 # Old servers can not push the boundary themselves.
371 # Old servers can not push the boundary themselves.
372 # New servers won't push the boundary if changeset already
372 # New servers won't push the boundary if changeset already
373 # exists locally as secret
373 # exists locally as secret
374 #
374 #
375 # We should not use added here but the list of all changes in
375 # We should not use added here but the list of all changes in
376 # the bundle
376 # the bundle
377 if repo.publishing():
377 if repo.publishing():
378 targetphase = phaseall = phases.public
378 targetphase = phaseall = phases.public
379 else:
379 else:
380 # closer target phase computation
380 # closer target phase computation
381
381
382 # Those changesets have been pushed from the
382 # Those changesets have been pushed from the
383 # outside, their phases are going to be pushed
383 # outside, their phases are going to be pushed
384 # alongside. Therefore `targetphase` is
384 # alongside. Therefore `targetphase` is
385 # ignored.
385 # ignored.
386 targetphase = phaseall = phases.draft
386 targetphase = phaseall = phases.draft
387 if added:
387 if added:
388 phases.registernew(repo, tr, targetphase, added)
388 phases.registernew(repo, tr, targetphase, added)
389 if phaseall is not None:
389 if phaseall is not None:
390 phases.advanceboundary(repo, tr, phaseall, cgnodes)
390 phases.advanceboundary(repo, tr, phaseall, cgnodes)
391
391
392 if changesets > 0:
392 if changesets > 0:
393
393
394 def runhooks():
394 def runhooks():
395 # These hooks run when the lock releases, not when the
395 # These hooks run when the lock releases, not when the
396 # transaction closes. So it's possible for the changelog
396 # transaction closes. So it's possible for the changelog
397 # to have changed since we last saw it.
397 # to have changed since we last saw it.
398 if clstart >= len(repo):
398 if clstart >= len(repo):
399 return
399 return
400
400
401 repo.hook("changegroup", **pycompat.strkwargs(hookargs))
401 repo.hook("changegroup", **pycompat.strkwargs(hookargs))
402
402
403 for n in added:
403 for n in added:
404 args = hookargs.copy()
404 args = hookargs.copy()
405 args['node'] = hex(n)
405 args['node'] = hex(n)
406 del args['node_last']
406 del args['node_last']
407 repo.hook("incoming", **pycompat.strkwargs(args))
407 repo.hook("incoming", **pycompat.strkwargs(args))
408
408
409 newheads = [h for h in repo.heads()
409 newheads = [h for h in repo.heads()
410 if h not in oldheads]
410 if h not in oldheads]
411 repo.ui.log("incoming",
411 repo.ui.log("incoming",
412 "%d incoming changes - new heads: %s\n",
412 "%d incoming changes - new heads: %s\n",
413 len(added),
413 len(added),
414 ', '.join([hex(c[:6]) for c in newheads]))
414 ', '.join([hex(c[:6]) for c in newheads]))
415
415
416 tr.addpostclose('changegroup-runhooks-%020i' % clstart,
416 tr.addpostclose('changegroup-runhooks-%020i' % clstart,
417 lambda tr: repo._afterlock(runhooks))
417 lambda tr: repo._afterlock(runhooks))
418 finally:
418 finally:
419 repo.ui.flush()
419 repo.ui.flush()
420 # never return 0 here:
420 # never return 0 here:
421 if deltaheads < 0:
421 if deltaheads < 0:
422 ret = deltaheads - 1
422 ret = deltaheads - 1
423 else:
423 else:
424 ret = deltaheads + 1
424 ret = deltaheads + 1
425 return ret
425 return ret
426
426
427 def deltaiter(self):
427 def deltaiter(self):
428 """
428 """
429 returns an iterator of the deltas in this changegroup
429 returns an iterator of the deltas in this changegroup
430
430
431 Useful for passing to the underlying storage system to be stored.
431 Useful for passing to the underlying storage system to be stored.
432 """
432 """
433 chain = None
433 chain = None
434 for chunkdata in iter(lambda: self.deltachunk(chain), {}):
434 for chunkdata in iter(lambda: self.deltachunk(chain), {}):
435 # Chunkdata: (node, p1, p2, cs, deltabase, delta, flags)
435 # Chunkdata: (node, p1, p2, cs, deltabase, delta, flags)
436 yield chunkdata
436 yield chunkdata
437 chain = chunkdata[0]
437 chain = chunkdata[0]
438
438
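# Illustrative sketch (not part of this file): deltaiter() is normally handed
# straight to a storage object's addgroup(), as apply() and _unpackmanifests()
# do above. Each yielded tuple is
# (node, p1, p2, linknode, deltabase, delta, flags), and for version '01' the
# delta base of chunk N+1 defaults to the node of chunk N via the `chain`
# variable. A minimal consumer, with `unpacker`, `cl`, `csmap` and `trp`
# assumed to exist:
#
#   unpacker.changelogheader()
#   deltas = unpacker.deltaiter()
#   cl.addgroup(deltas, csmap, trp)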
439 class cg2unpacker(cg1unpacker):
439 class cg2unpacker(cg1unpacker):
440 """Unpacker for cg2 streams.
440 """Unpacker for cg2 streams.
441
441
442 cg2 streams add support for generaldelta, so the delta header
442 cg2 streams add support for generaldelta, so the delta header
443 format is slightly different. All other features about the data
443 format is slightly different. All other features about the data
444 remain the same.
444 remain the same.
445 """
445 """
446 deltaheader = _CHANGEGROUPV2_DELTA_HEADER
446 deltaheader = _CHANGEGROUPV2_DELTA_HEADER
447 deltaheadersize = deltaheader.size
447 deltaheadersize = deltaheader.size
448 version = '02'
448 version = '02'
449
449
450 def _deltaheader(self, headertuple, prevnode):
450 def _deltaheader(self, headertuple, prevnode):
451 node, p1, p2, deltabase, cs = headertuple
451 node, p1, p2, deltabase, cs = headertuple
452 flags = 0
452 flags = 0
453 return node, p1, p2, deltabase, cs, flags
453 return node, p1, p2, deltabase, cs, flags
454
454
455 class cg3unpacker(cg2unpacker):
455 class cg3unpacker(cg2unpacker):
456 """Unpacker for cg3 streams.
456 """Unpacker for cg3 streams.
457
457
458 cg3 streams add support for exchanging treemanifests and revlog
458 cg3 streams add support for exchanging treemanifests and revlog
459 flags. It adds the revlog flags to the delta header and an empty chunk
459 flags. It adds the revlog flags to the delta header and an empty chunk
460 separating manifests and files.
460 separating manifests and files.
461 """
461 """
462 deltaheader = _CHANGEGROUPV3_DELTA_HEADER
462 deltaheader = _CHANGEGROUPV3_DELTA_HEADER
463 deltaheadersize = deltaheader.size
463 deltaheadersize = deltaheader.size
464 version = '03'
464 version = '03'
465 _grouplistcount = 2 # One list of manifests and one list of files
465 _grouplistcount = 2 # One list of manifests and one list of files
466
466
467 def _deltaheader(self, headertuple, prevnode):
467 def _deltaheader(self, headertuple, prevnode):
468 node, p1, p2, deltabase, cs, flags = headertuple
468 node, p1, p2, deltabase, cs, flags = headertuple
469 return node, p1, p2, deltabase, cs, flags
469 return node, p1, p2, deltabase, cs, flags
470
470
471 def _unpackmanifests(self, repo, revmap, trp, prog):
471 def _unpackmanifests(self, repo, revmap, trp, prog):
472 super(cg3unpacker, self)._unpackmanifests(repo, revmap, trp, prog)
472 super(cg3unpacker, self)._unpackmanifests(repo, revmap, trp, prog)
473 for chunkdata in iter(self.filelogheader, {}):
473 for chunkdata in iter(self.filelogheader, {}):
474 # If we get here, there are directory manifests in the changegroup
474 # If we get here, there are directory manifests in the changegroup
475 d = chunkdata["filename"]
475 d = chunkdata["filename"]
476 repo.ui.debug("adding %s revisions\n" % d)
476 repo.ui.debug("adding %s revisions\n" % d)
477 deltas = self.deltaiter()
477 deltas = self.deltaiter()
478 if not repo.manifestlog.getstorage(d).addgroup(deltas, revmap, trp):
478 if not repo.manifestlog.getstorage(d).addgroup(deltas, revmap, trp):
479 raise error.Abort(_("received dir revlog group is empty"))
479 raise error.Abort(_("received dir revlog group is empty"))
480
480
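# Summary (for exposition, derived from the _deltaheader() overrides above) of
# how the per-revision delta header differs between changegroup versions:
#
#   version '01': (node, p1, p2, linknode)                    base = p1 or prev
#   version '02': (node, p1, p2, deltabase, linknode)         explicit base
#   version '03': (node, p1, p2, deltabase, linknode, flags)  adds revlog flags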
481 class headerlessfixup(object):
481 class headerlessfixup(object):
482 def __init__(self, fh, h):
482 def __init__(self, fh, h):
483 self._h = h
483 self._h = h
484 self._fh = fh
484 self._fh = fh
485 def read(self, n):
485 def read(self, n):
486 if self._h:
486 if self._h:
487 d, self._h = self._h[:n], self._h[n:]
487 d, self._h = self._h[:n], self._h[n:]
488 if len(d) < n:
488 if len(d) < n:
489 d += readexactly(self._fh, n - len(d))
489 d += readexactly(self._fh, n - len(d))
490 return d
490 return d
491 return readexactly(self._fh, n)
491 return readexactly(self._fh, n)
492
492
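# Illustrative sketch (hypothetical usage, not part of this file):
# headerlessfixup re-attaches bytes that a caller has already consumed while
# identifying a stream, so later reads still see the complete data:
#
#   magic = fh.read(6)               # e.g. peek at a bundle header
#   fh = headerlessfixup(fh, magic)  # the next read(6) returns `magic` again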
493 def _revisiondeltatochunks(delta, headerfn):
493 def _revisiondeltatochunks(delta, headerfn):
494 """Serialize a revisiondelta to changegroup chunks."""
494 """Serialize a revisiondelta to changegroup chunks."""
495
495
496 # The captured revision delta may be encoded as a delta against
496 # The captured revision delta may be encoded as a delta against
497 # a base revision or as a full revision. The changegroup format
497 # a base revision or as a full revision. The changegroup format
498 # requires that everything on the wire be deltas. So for full
498 # requires that everything on the wire be deltas. So for full
499 # revisions, we need to invent a header that says to rewrite
499 # revisions, we need to invent a header that says to rewrite
500 # data.
500 # data.
501
501
502 if delta.delta is not None:
502 if delta.delta is not None:
503 prefix, data = b'', delta.delta
503 prefix, data = b'', delta.delta
504 elif delta.basenode == nullid:
504 elif delta.basenode == nullid:
505 data = delta.revision
505 data = delta.revision
506 prefix = mdiff.trivialdiffheader(len(data))
506 prefix = mdiff.trivialdiffheader(len(data))
507 else:
507 else:
508 data = delta.revision
508 data = delta.revision
509 prefix = mdiff.replacediffheader(delta.baserevisionsize,
509 prefix = mdiff.replacediffheader(delta.baserevisionsize,
510 len(data))
510 len(data))
511
511
512 meta = headerfn(delta)
512 meta = headerfn(delta)
513
513
514 yield chunkheader(len(meta) + len(prefix) + len(data))
514 yield chunkheader(len(meta) + len(prefix) + len(data))
515 yield meta
515 yield meta
516 if prefix:
516 if prefix:
517 yield prefix
517 yield prefix
518 yield data
518 yield data
519
519
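# Illustrative note (the mdiff helpers are assumptions, they are not shown in
# this file): a full revision is wrapped as a pseudo-delta.
# trivialdiffheader(len) is assumed to encode "insert these bytes into an
# empty base", and replacediffheader(baselen, newlen) "replace the whole base
# with the following bytes", so receivers can always apply the payload as a
# delta:
#
#   delta_against_null = mdiff.trivialdiffheader(len(fulltext)) + fulltext
#   delta_against_base = mdiff.replacediffheader(len(basetext),
#                                                len(fulltext)) + fulltext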
520 def _sortnodesellipsis(store, nodes, cl, lookup):
520 def _sortnodesellipsis(store, nodes, cl, lookup):
521 """Sort nodes for changegroup generation."""
521 """Sort nodes for changegroup generation."""
522 # Ellipses serving mode.
522 # Ellipses serving mode.
523 #
523 #
524 # In a perfect world, we'd generate better ellipsis-ified graphs
524 # In a perfect world, we'd generate better ellipsis-ified graphs
525 # for non-changelog revlogs. In practice, we haven't started doing
525 # for non-changelog revlogs. In practice, we haven't started doing
526 # that yet, so the resulting DAGs for the manifestlog and filelogs
526 # that yet, so the resulting DAGs for the manifestlog and filelogs
527 # are actually full of bogus parentage on all the ellipsis
527 # are actually full of bogus parentage on all the ellipsis
528 # nodes. This has the side effect that, while the contents are
528 # nodes. This has the side effect that, while the contents are
529 # correct, the individual DAGs might be completely out of whack in
529 # correct, the individual DAGs might be completely out of whack in
530 # a case like 882681bc3166 and its ancestors (back about 10
530 # a case like 882681bc3166 and its ancestors (back about 10
531 # revisions or so) in the main hg repo.
531 # revisions or so) in the main hg repo.
532 #
532 #
533 # The one invariant we *know* holds is that the new (potentially
533 # The one invariant we *know* holds is that the new (potentially
534 # bogus) DAG shape will be valid if we order the nodes in the
534 # bogus) DAG shape will be valid if we order the nodes in the
535 # order that they're introduced in dramatis personae by the
535 # order that they're introduced in dramatis personae by the
536 # changelog, so what we do is we sort the non-changelog histories
536 # changelog, so what we do is we sort the non-changelog histories
537 # by the order in which they are used by the changelog.
537 # by the order in which they are used by the changelog.
538 key = lambda n: cl.rev(lookup(n))
538 key = lambda n: cl.rev(lookup(n))
539 return sorted(nodes, key=key)
539 return sorted(nodes, key=key)
540
540
541 def _resolvenarrowrevisioninfo(cl, store, ischangelog, rev, linkrev,
541 def _resolvenarrowrevisioninfo(cl, store, ischangelog, rev, linkrev,
542 linknode, clrevtolocalrev, fullclnodes,
542 linknode, clrevtolocalrev, fullclnodes,
543 precomputedellipsis):
543 precomputedellipsis):
544 linkparents = precomputedellipsis[linkrev]
544 linkparents = precomputedellipsis[linkrev]
545 def local(clrev):
545 def local(clrev):
546 """Turn a changelog revnum into a local revnum.
546 """Turn a changelog revnum into a local revnum.
547
547
548 The ellipsis dag is stored as revnums on the changelog,
548 The ellipsis dag is stored as revnums on the changelog,
549 but when we're producing ellipsis entries for
549 but when we're producing ellipsis entries for
550 non-changelog revlogs, we need to turn those numbers into
550 non-changelog revlogs, we need to turn those numbers into
551 something local. This does that for us, and during the
551 something local. This does that for us, and during the
552 changelog sending phase will also expand the stored
552 changelog sending phase will also expand the stored
553 mappings as needed.
553 mappings as needed.
554 """
554 """
555 if clrev == nullrev:
555 if clrev == nullrev:
556 return nullrev
556 return nullrev
557
557
558 if ischangelog:
558 if ischangelog:
559 return clrev
559 return clrev
560
560
561 # Walk the ellipsis-ized changelog breadth-first looking for a
561 # Walk the ellipsis-ized changelog breadth-first looking for a
562 # change that has been linked from the current revlog.
562 # change that has been linked from the current revlog.
563 #
563 #
564 # For a flat manifest revlog only a single step should be necessary
564 # For a flat manifest revlog only a single step should be necessary
565 # as all relevant changelog entries are relevant to the flat
565 # as all relevant changelog entries are relevant to the flat
566 # manifest.
566 # manifest.
567 #
567 #
568 # For a filelog or tree manifest dirlog however not every changelog
568 # For a filelog or tree manifest dirlog however not every changelog
569 # entry will have been relevant, so we need to skip some changelog
569 # entry will have been relevant, so we need to skip some changelog
570 # nodes even after ellipsis-izing.
570 # nodes even after ellipsis-izing.
571 walk = [clrev]
571 walk = [clrev]
572 while walk:
572 while walk:
573 p = walk[0]
573 p = walk[0]
574 walk = walk[1:]
574 walk = walk[1:]
575 if p in clrevtolocalrev:
575 if p in clrevtolocalrev:
576 return clrevtolocalrev[p]
576 return clrevtolocalrev[p]
577 elif p in fullclnodes:
577 elif p in fullclnodes:
578 walk.extend([pp for pp in cl.parentrevs(p)
578 walk.extend([pp for pp in cl.parentrevs(p)
579 if pp != nullrev])
579 if pp != nullrev])
580 elif p in precomputedellipsis:
580 elif p in precomputedellipsis:
581 walk.extend([pp for pp in precomputedellipsis[p]
581 walk.extend([pp for pp in precomputedellipsis[p]
582 if pp != nullrev])
582 if pp != nullrev])
583 else:
583 else:
584 # In this case, we've got an ellipsis with parents
584 # In this case, we've got an ellipsis with parents
585 # outside the current bundle (likely an
585 # outside the current bundle (likely an
586 # incremental pull). We "know" that we can use the
586 # incremental pull). We "know" that we can use the
587 # value of this same revlog at whatever revision
587 # value of this same revlog at whatever revision
588 # is pointed to by linknode. "Know" is in scare
588 # is pointed to by linknode. "Know" is in scare
589 # quotes because I haven't done enough examination
589 # quotes because I haven't done enough examination
590 # of edge cases to convince myself this is really
590 # of edge cases to convince myself this is really
591 # a fact - it works for all the (admittedly
591 # a fact - it works for all the (admittedly
592 # thorough) cases in our testsuite, but I would be
592 # thorough) cases in our testsuite, but I would be
593 # somewhat unsurprised to find a case in the wild
593 # somewhat unsurprised to find a case in the wild
594 # where this breaks down a bit. That said, I don't
594 # where this breaks down a bit. That said, I don't
595 # know if it would hurt anything.
595 # know if it would hurt anything.
596 for i in pycompat.xrange(rev, 0, -1):
596 for i in pycompat.xrange(rev, 0, -1):
597 if store.linkrev(i) == clrev:
597 if store.linkrev(i) == clrev:
598 return i
598 return i
599 # We failed to resolve a parent for this node, so
599 # We failed to resolve a parent for this node, so
600 # we crash the changegroup construction.
600 # we crash the changegroup construction.
601 raise error.Abort(
601 raise error.Abort(
602 'unable to resolve parent while packing %r %r'
602 'unable to resolve parent while packing %r %r'
603 ' for changeset %r' % (store.indexfile, rev, clrev))
603 ' for changeset %r' % (store.indexfile, rev, clrev))
604
604
605 return nullrev
605 return nullrev
606
606
607 if not linkparents or (
607 if not linkparents or (
608 store.parentrevs(rev) == (nullrev, nullrev)):
608 store.parentrevs(rev) == (nullrev, nullrev)):
609 p1, p2 = nullrev, nullrev
609 p1, p2 = nullrev, nullrev
610 elif len(linkparents) == 1:
610 elif len(linkparents) == 1:
611 p1, = sorted(local(p) for p in linkparents)
611 p1, = sorted(local(p) for p in linkparents)
612 p2 = nullrev
612 p2 = nullrev
613 else:
613 else:
614 p1, p2 = sorted(local(p) for p in linkparents)
614 p1, p2 = sorted(local(p) for p in linkparents)
615
615
616 p1node, p2node = store.node(p1), store.node(p2)
616 p1node, p2node = store.node(p1), store.node(p2)
617
617
618 return p1node, p2node, linknode
618 return p1node, p2node, linknode
619
619
620 def deltagroup(repo, store, nodes, ischangelog, lookup, forcedeltaparentprev,
620 def deltagroup(repo, store, nodes, ischangelog, lookup, forcedeltaparentprev,
621 topic=None,
621 topic=None,
622 ellipses=False, clrevtolocalrev=None, fullclnodes=None,
622 ellipses=False, clrevtolocalrev=None, fullclnodes=None,
623 precomputedellipsis=None):
623 precomputedellipsis=None):
624 """Calculate deltas for a set of revisions.
624 """Calculate deltas for a set of revisions.
625
625
626 Is a generator of ``revisiondelta`` instances.
626 Is a generator of ``revisiondelta`` instances.
627
627
628 If topic is not None, progress detail will be generated using this
628 If topic is not None, progress detail will be generated using this
629 topic name (e.g. changesets, manifests, etc).
629 topic name (e.g. changesets, manifests, etc).
630 """
630 """
631 if not nodes:
631 if not nodes:
632 return
632 return
633
633
634 cl = repo.changelog
634 cl = repo.changelog
635
635
636 if ischangelog:
636 if ischangelog:
637 # `hg log` shows changesets in storage order. To preserve order
637 # `hg log` shows changesets in storage order. To preserve order
638 # across clones, send out changesets in storage order.
638 # across clones, send out changesets in storage order.
639 nodesorder = 'storage'
639 nodesorder = 'storage'
640 elif ellipses:
640 elif ellipses:
641 nodes = _sortnodesellipsis(store, nodes, cl, lookup)
641 nodes = _sortnodesellipsis(store, nodes, cl, lookup)
642 nodesorder = 'nodes'
642 nodesorder = 'nodes'
643 else:
643 else:
644 nodesorder = None
644 nodesorder = None
645
645
646 # Perform ellipses filtering and revision massaging. We do this before
646 # Perform ellipses filtering and revision massaging. We do this before
647 # emitrevisions() because a) filtering out revisions creates less work
647 # emitrevisions() because a) filtering out revisions creates less work
648 # for emitrevisions() b) dropping revisions would break emitrevisions()'s
648 # for emitrevisions() b) dropping revisions would break emitrevisions()'s
649 # assumptions about delta choices and we would possibly send a delta
649 # assumptions about delta choices and we would possibly send a delta
650 # referencing a missing base revision.
650 # referencing a missing base revision.
651 #
651 #
652 # Also, calling lookup() has side-effects with regards to populating
652 # Also, calling lookup() has side-effects with regards to populating
653 # data structures. If we don't call lookup() for each node or if we call
653 # data structures. If we don't call lookup() for each node or if we call
654 # lookup() after the first pass through each node, things can break -
654 # lookup() after the first pass through each node, things can break -
655 # possibly intermittently depending on the python hash seed! For that
655 # possibly intermittently depending on the python hash seed! For that
656 # reason, we store a mapping of all linknodes during the initial node
656 # reason, we store a mapping of all linknodes during the initial node
657 # pass rather than use lookup() on the output side.
657 # pass rather than use lookup() on the output side.
658 if ellipses:
658 if ellipses:
659 filtered = []
659 filtered = []
660 adjustedparents = {}
660 adjustedparents = {}
661 linknodes = {}
661 linknodes = {}
662
662
663 for node in nodes:
663 for node in nodes:
664 rev = store.rev(node)
664 rev = store.rev(node)
665 linknode = lookup(node)
665 linknode = lookup(node)
666 linkrev = cl.rev(linknode)
666 linkrev = cl.rev(linknode)
667 clrevtolocalrev[linkrev] = rev
667 clrevtolocalrev[linkrev] = rev
668
668
669 # If linknode is in fullclnodes, it means the corresponding
669 # If linknode is in fullclnodes, it means the corresponding
670 # changeset was a full changeset and is being sent unaltered.
670 # changeset was a full changeset and is being sent unaltered.
671 if linknode in fullclnodes:
671 if linknode in fullclnodes:
672 linknodes[node] = linknode
672 linknodes[node] = linknode
673
673
674 # If the corresponding changeset wasn't in the set computed
674 # If the corresponding changeset wasn't in the set computed
675 # as relevant to us, it should be dropped outright.
675 # as relevant to us, it should be dropped outright.
676 elif linkrev not in precomputedellipsis:
676 elif linkrev not in precomputedellipsis:
677 continue
677 continue
678
678
679 else:
679 else:
680 # We could probably do this later and avoid the dict
680 # We could probably do this later and avoid the dict
681 # holding state. But it likely doesn't matter.
681 # holding state. But it likely doesn't matter.
682 p1node, p2node, linknode = _resolvenarrowrevisioninfo(
682 p1node, p2node, linknode = _resolvenarrowrevisioninfo(
683 cl, store, ischangelog, rev, linkrev, linknode,
683 cl, store, ischangelog, rev, linkrev, linknode,
684 clrevtolocalrev, fullclnodes, precomputedellipsis)
684 clrevtolocalrev, fullclnodes, precomputedellipsis)
685
685
686 adjustedparents[node] = (p1node, p2node)
686 adjustedparents[node] = (p1node, p2node)
687 linknodes[node] = linknode
687 linknodes[node] = linknode
688
688
689 filtered.append(node)
689 filtered.append(node)
690
690
691 nodes = filtered
691 nodes = filtered
692
692
693 # We expect the first pass to be fast, so we only engage the progress
693 # We expect the first pass to be fast, so we only engage the progress
694 # meter for constructing the revision deltas.
694 # meter for constructing the revision deltas.
695 progress = None
695 progress = None
696 if topic is not None:
696 if topic is not None:
697 progress = repo.ui.makeprogress(topic, unit=_('chunks'),
697 progress = repo.ui.makeprogress(topic, unit=_('chunks'),
698 total=len(nodes))
698 total=len(nodes))
699
699
700 configtarget = repo.ui.config('devel', 'bundle.delta')
700 configtarget = repo.ui.config('devel', 'bundle.delta')
701 if configtarget not in ('', 'p1', 'full'):
701 if configtarget not in ('', 'p1', 'full'):
702 msg = _("""config "devel.bundle.delta" as unknown value: %s""")
702 msg = _("""config "devel.bundle.delta" as unknown value: %s""")
703 repo.ui.warn(msg % configtarget)
703 repo.ui.warn(msg % configtarget)
704
704
705 deltamode = repository.CG_DELTAMODE_STD
705 deltamode = repository.CG_DELTAMODE_STD
706 if forcedeltaparentprev:
706 if forcedeltaparentprev:
707 deltamode = repository.CG_DELTAMODE_PREV
707 deltamode = repository.CG_DELTAMODE_PREV
708 elif configtarget == 'p1':
708 elif configtarget == 'p1':
709 deltamode = repository.CG_DELTAMODE_P1
709 deltamode = repository.CG_DELTAMODE_P1
710 elif configtarget == 'full':
710 elif configtarget == 'full':
711 deltamode = repository.CG_DELTAMODE_FULL
711 deltamode = repository.CG_DELTAMODE_FULL
712
712
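# Illustrative configuration (for exposition): the mapping implemented above
# means that an hgrc containing
#
#   [devel]
#   bundle.delta = full
#
# forces CG_DELTAMODE_FULL (full snapshots instead of deltas), 'p1' forces
# deltas against the first parent, and an empty value keeps the standard
# delta selection.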
713 revisions = store.emitrevisions(
713 revisions = store.emitrevisions(
714 nodes,
714 nodes,
715 nodesorder=nodesorder,
715 nodesorder=nodesorder,
716 revisiondata=True,
716 revisiondata=True,
717 assumehaveparentrevisions=not ellipses,
717 assumehaveparentrevisions=not ellipses,
718 deltamode=deltamode)
718 deltamode=deltamode)
719
719
720 for i, revision in enumerate(revisions):
720 for i, revision in enumerate(revisions):
721 if progress:
721 if progress:
722 progress.update(i + 1)
722 progress.update(i + 1)
723
723
724 if ellipses:
724 if ellipses:
725 linknode = linknodes[revision.node]
725 linknode = linknodes[revision.node]
726
726
727 if revision.node in adjustedparents:
727 if revision.node in adjustedparents:
728 p1node, p2node = adjustedparents[revision.node]
728 p1node, p2node = adjustedparents[revision.node]
729 revision.p1node = p1node
729 revision.p1node = p1node
730 revision.p2node = p2node
730 revision.p2node = p2node
731 revision.flags |= repository.REVISION_FLAG_ELLIPSIS
731 revision.flags |= repository.REVISION_FLAG_ELLIPSIS
732
732
733 else:
733 else:
734 linknode = lookup(revision.node)
734 linknode = lookup(revision.node)
735
735
736 revision.linknode = linknode
736 revision.linknode = linknode
737 yield revision
737 yield revision
738
738
739 if progress:
739 if progress:
740 progress.complete()
740 progress.complete()
741
741
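# Illustrative sketch (hypothetical, not part of this file): deltagroup() is
# consumed by the packer below; each yielded revisiondelta is serialized with
# _revisiondeltatochunks(). Assuming `lookupcl`, `builddeltaheader` and `out`
# exist:
#
#   deltas = deltagroup(repo, repo.changelog, nodes, True, lookupcl,
#                       False, topic=_('changesets'))
#   for delta in deltas:
#       for chunk in _revisiondeltatochunks(delta, builddeltaheader):
#           out.write(chunk)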
742 class cgpacker(object):
742 class cgpacker(object):
743 def __init__(self, repo, oldmatcher, matcher, version,
743 def __init__(self, repo, oldmatcher, matcher, version,
744 builddeltaheader, manifestsend,
744 builddeltaheader, manifestsend,
745 forcedeltaparentprev=False,
745 forcedeltaparentprev=False,
746 bundlecaps=None, ellipses=False,
746 bundlecaps=None, ellipses=False,
747 shallow=False, ellipsisroots=None, fullnodes=None):
747 shallow=False, ellipsisroots=None, fullnodes=None):
748 """Given a source repo, construct a bundler.
748 """Given a source repo, construct a bundler.
749
749
750 oldmatcher is a matcher that matches on files the client already has.
750 oldmatcher is a matcher that matches on files the client already has.
751 These will not be included in the changegroup.
751 These will not be included in the changegroup.
752
752
753 matcher is a matcher that matches on files to include in the
753 matcher is a matcher that matches on files to include in the
754 changegroup. Used to facilitate sparse changegroups.
754 changegroup. Used to facilitate sparse changegroups.
755
755
756 forcedeltaparentprev indicates whether delta parents must be against
756 forcedeltaparentprev indicates whether delta parents must be against
757 the previous revision in a delta group. This should only be used for
757 the previous revision in a delta group. This should only be used for
758 compatibility with changegroup version 1.
758 compatibility with changegroup version 1.
759
759
760 builddeltaheader is a callable that constructs the header for a group
760 builddeltaheader is a callable that constructs the header for a group
761 delta.
761 delta.
762
762
763 manifestsend is a chunk to send after manifests have been fully emitted.
763 manifestsend is a chunk to send after manifests have been fully emitted.
764
764
765 ellipses indicates whether ellipsis serving mode is enabled.
765 ellipses indicates whether ellipsis serving mode is enabled.
766
766
767 bundlecaps is optional and can be used to specify the set of
767 bundlecaps is optional and can be used to specify the set of
768 capabilities which can be used to build the bundle. While bundlecaps is
768 capabilities which can be used to build the bundle. While bundlecaps is
769 unused in core Mercurial, extensions rely on this feature to communicate
769 unused in core Mercurial, extensions rely on this feature to communicate
770 capabilities to customize the changegroup packer.
770 capabilities to customize the changegroup packer.
771
771
772 shallow indicates whether shallow data might be sent. The packer may
772 shallow indicates whether shallow data might be sent. The packer may
773 need to pack file contents not introduced by the changes being packed.
773 need to pack file contents not introduced by the changes being packed.
774
774
775 fullnodes is the set of changelog nodes which should not be ellipsis
775 fullnodes is the set of changelog nodes which should not be ellipsis
776 nodes. We store this rather than the set of nodes that should be
776 nodes. We store this rather than the set of nodes that should be
777 ellipsis because for very large histories we expect this to be
777 ellipsis because for very large histories we expect this to be
778 significantly smaller.
778 significantly smaller.
779 """
779 """
780 assert oldmatcher
780 assert oldmatcher
781 assert matcher
781 assert matcher
782 self._oldmatcher = oldmatcher
782 self._oldmatcher = oldmatcher
783 self._matcher = matcher
783 self._matcher = matcher
784
784
785 self.version = version
785 self.version = version
786 self._forcedeltaparentprev = forcedeltaparentprev
786 self._forcedeltaparentprev = forcedeltaparentprev
787 self._builddeltaheader = builddeltaheader
787 self._builddeltaheader = builddeltaheader
788 self._manifestsend = manifestsend
788 self._manifestsend = manifestsend
789 self._ellipses = ellipses
789 self._ellipses = ellipses
790
790
791 # Set of capabilities we can use to build the bundle.
791 # Set of capabilities we can use to build the bundle.
792 if bundlecaps is None:
792 if bundlecaps is None:
793 bundlecaps = set()
793 bundlecaps = set()
794 self._bundlecaps = bundlecaps
794 self._bundlecaps = bundlecaps
795 self._isshallow = shallow
795 self._isshallow = shallow
796 self._fullclnodes = fullnodes
796 self._fullclnodes = fullnodes
797
797
798 # Maps ellipsis revs to their roots at the changelog level.
798 # Maps ellipsis revs to their roots at the changelog level.
799 self._precomputedellipsis = ellipsisroots
799 self._precomputedellipsis = ellipsisroots
800
800
801 self._repo = repo
801 self._repo = repo
802
802
803 if self._repo.ui.verbose and not self._repo.ui.debugflag:
803 if self._repo.ui.verbose and not self._repo.ui.debugflag:
804 self._verbosenote = self._repo.ui.note
804 self._verbosenote = self._repo.ui.note
805 else:
805 else:
806 self._verbosenote = lambda s: None
806 self._verbosenote = lambda s: None
807
807
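# Illustrative sketch (hypothetical construction, not part of this file): how
# a version '02' packer might be wired up. The builddeltaheader callable must
# pack the header that the matching unpacker expects (cg2unpacker above
# decodes (node, p1, p2, deltabase, linknode)):
#
#   packer = cgpacker(
#       repo, oldmatcher, matcher, b'02',
#       builddeltaheader=lambda d: _CHANGEGROUPV2_DELTA_HEADER.pack(
#           d.node, d.p1node, d.p2node, d.basenode, d.linknode),
#       manifestsend=b'',
#       bundlecaps=None)
#   for chunk in packer.generate(commonrevs, clnodes, False, 'pull'):
#       out.write(chunk)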
808 def generate(self, commonrevs, clnodes, fastpathlinkrev, source,
808 def generate(self, commonrevs, clnodes, fastpathlinkrev, source,
809 changelog=True):
809 changelog=True):
810 """Yield a sequence of changegroup byte chunks.
810 """Yield a sequence of changegroup byte chunks.
811 If changelog is False, changelog data won't be added to the changegroup
811 If changelog is False, changelog data won't be added to the changegroup
812 """
812 """
813
813
814 repo = self._repo
814 repo = self._repo
815 cl = repo.changelog
815 cl = repo.changelog
816
816
817 self._verbosenote(_('uncompressed size of bundle content:\n'))
817 self._verbosenote(_('uncompressed size of bundle content:\n'))
818 size = 0
818 size = 0
819
819
820 clstate, deltas = self._generatechangelog(cl, clnodes,
820 clstate, deltas = self._generatechangelog(cl, clnodes,
821 generate=changelog)
821 generate=changelog)
822 for delta in deltas:
822 for delta in deltas:
823 for chunk in _revisiondeltatochunks(delta,
823 for chunk in _revisiondeltatochunks(delta,
824 self._builddeltaheader):
824 self._builddeltaheader):
825 size += len(chunk)
825 size += len(chunk)
826 yield chunk
826 yield chunk
827
827
828 close = closechunk()
828 close = closechunk()
829 size += len(close)
829 size += len(close)
830 yield closechunk()
830 yield closechunk()
831
831
832 self._verbosenote(_('%8.i (changelog)\n') % size)
832 self._verbosenote(_('%8.i (changelog)\n') % size)
833
833
834 clrevorder = clstate['clrevorder']
834 clrevorder = clstate['clrevorder']
835 manifests = clstate['manifests']
835 manifests = clstate['manifests']
836 changedfiles = clstate['changedfiles']
836 changedfiles = clstate['changedfiles']
837
837
838 # We need to make sure that the linkrev in the changegroup refers to
838 # We need to make sure that the linkrev in the changegroup refers to
839 # the first changeset that introduced the manifest or file revision.
839 # the first changeset that introduced the manifest or file revision.
840 # The fastpath is usually safer than the slowpath, because the filelogs
840 # The fastpath is usually safer than the slowpath, because the filelogs
841 # are walked in revlog order.
841 # are walked in revlog order.
842 #
842 #
843 # When taking the slowpath when the manifest revlog uses generaldelta,
843 # When taking the slowpath when the manifest revlog uses generaldelta,
844 # the manifest may be walked in the "wrong" order. Without 'clrevorder',
844 # the manifest may be walked in the "wrong" order. Without 'clrevorder',
845 # we would get an incorrect linkrev (see fix in cc0ff93d0c0c).
845 # we would get an incorrect linkrev (see fix in cc0ff93d0c0c).
846 #
846 #
847 # When taking the fastpath, we are only vulnerable to reordering
847 # When taking the fastpath, we are only vulnerable to reordering
848 # of the changelog itself. The changelog never uses generaldelta and is
848 # of the changelog itself. The changelog never uses generaldelta and is
849 # never reordered. To handle this case, we simply take the slowpath,
849 # never reordered. To handle this case, we simply take the slowpath,
850 # which already has the 'clrevorder' logic. This was also fixed in
850 # which already has the 'clrevorder' logic. This was also fixed in
851 # cc0ff93d0c0c.
851 # cc0ff93d0c0c.
852
852
853 # Treemanifests don't work correctly with fastpathlinkrev
853 # Treemanifests don't work correctly with fastpathlinkrev
854 # either, because we don't discover which directory nodes to
854 # either, because we don't discover which directory nodes to
855 # send along with files. This could probably be fixed.
855 # send along with files. This could probably be fixed.
856 fastpathlinkrev = fastpathlinkrev and (
856 fastpathlinkrev = fastpathlinkrev and (
857 'treemanifest' not in repo.requirements)
857 'treemanifest' not in repo.requirements)
858
858
859 fnodes = {} # needed file nodes
859 fnodes = {} # needed file nodes
860
860
861 size = 0
861 size = 0
862 it = self.generatemanifests(
862 it = self.generatemanifests(
863 commonrevs, clrevorder, fastpathlinkrev, manifests, fnodes, source,
863 commonrevs, clrevorder, fastpathlinkrev, manifests, fnodes, source,
864 clstate['clrevtomanifestrev'])
864 clstate['clrevtomanifestrev'])
865
865
866 for tree, deltas in it:
866 for tree, deltas in it:
867 if tree:
867 if tree:
868 assert self.version == b'03'
868 assert self.version == b'03'
869 chunk = _fileheader(tree)
869 chunk = _fileheader(tree)
870 size += len(chunk)
870 size += len(chunk)
871 yield chunk
871 yield chunk
872
872
873 for delta in deltas:
873 for delta in deltas:
874 chunks = _revisiondeltatochunks(delta, self._builddeltaheader)
874 chunks = _revisiondeltatochunks(delta, self._builddeltaheader)
875 for chunk in chunks:
875 for chunk in chunks:
876 size += len(chunk)
876 size += len(chunk)
877 yield chunk
877 yield chunk
878
878
879 close = closechunk()
879 close = closechunk()
880 size += len(close)
880 size += len(close)
881 yield close
881 yield close
882
882
883 self._verbosenote(_('%8.i (manifests)\n') % size)
883 self._verbosenote(_('%8.i (manifests)\n') % size)
884 yield self._manifestsend
884 yield self._manifestsend
885
885
886 mfdicts = None
886 mfdicts = None
887 if self._ellipses and self._isshallow:
887 if self._ellipses and self._isshallow:
888 mfdicts = [(self._repo.manifestlog[n].read(), lr)
888 mfdicts = [(self._repo.manifestlog[n].read(), lr)
889 for (n, lr) in manifests.iteritems()]
889 for (n, lr) in manifests.iteritems()]
890
890
891 manifests.clear()
891 manifests.clear()
892 clrevs = set(cl.rev(x) for x in clnodes)
892 clrevs = set(cl.rev(x) for x in clnodes)
893
893
894 it = self.generatefiles(changedfiles, commonrevs,
894 it = self.generatefiles(changedfiles, commonrevs,
895 source, mfdicts, fastpathlinkrev,
895 source, mfdicts, fastpathlinkrev,
896 fnodes, clrevs)
896 fnodes, clrevs)
897
897
898 for path, deltas in it:
898 for path, deltas in it:
899 h = _fileheader(path)
899 h = _fileheader(path)
900 size = len(h)
900 size = len(h)
901 yield h
901 yield h
902
902
903 for delta in deltas:
903 for delta in deltas:
904 chunks = _revisiondeltatochunks(delta, self._builddeltaheader)
904 chunks = _revisiondeltatochunks(delta, self._builddeltaheader)
905 for chunk in chunks:
905 for chunk in chunks:
906 size += len(chunk)
906 size += len(chunk)
907 yield chunk
907 yield chunk
908
908
909 close = closechunk()
909 close = closechunk()
910 size += len(close)
910 size += len(close)
911 yield close
911 yield close
912
912
913 self._verbosenote(_('%8.i %s\n') % (size, path))
913 self._verbosenote(_('%8.i %s\n') % (size, path))
914
914
915 yield closechunk()
915 yield closechunk()
916
916
917 if clnodes:
917 if clnodes:
918 repo.hook('outgoing', node=hex(clnodes[0]), source=source)
918 repo.hook('outgoing', node=hex(clnodes[0]), source=source)
919
919
920 def _generatechangelog(self, cl, nodes, generate=True):
920 def _generatechangelog(self, cl, nodes, generate=True):
921 """Generate data for changelog chunks.
921 """Generate data for changelog chunks.
922
922
923 Returns a 2-tuple of a dict containing state and an iterable of
923 Returns a 2-tuple of a dict containing state and an iterable of
924 byte chunks. The state will not be fully populated until the
924 byte chunks. The state will not be fully populated until the
925 chunk stream has been fully consumed.
925 chunk stream has been fully consumed.
926
926
927 If generate is False, the state will be fully populated and no chunk
927 If generate is False, the state will be fully populated and no chunk
928 stream will be yielded.
928 stream will be yielded.
929 """
929 """
930 clrevorder = {}
930 clrevorder = {}
931 manifests = {}
931 manifests = {}
932 mfl = self._repo.manifestlog
932 mfl = self._repo.manifestlog
933 changedfiles = set()
933 changedfiles = set()
934 clrevtomanifestrev = {}
934 clrevtomanifestrev = {}
935
935
936 state = {
936 state = {
937 'clrevorder': clrevorder,
937 'clrevorder': clrevorder,
938 'manifests': manifests,
938 'manifests': manifests,
939 'changedfiles': changedfiles,
939 'changedfiles': changedfiles,
940 'clrevtomanifestrev': clrevtomanifestrev,
940 'clrevtomanifestrev': clrevtomanifestrev,
941 }
941 }
942
942
943 if not (generate or self._ellipses):
943 if not (generate or self._ellipses):
944 # sort the nodes in storage order
944 # sort the nodes in storage order
945 nodes = sorted(nodes, key=cl.rev)
945 nodes = sorted(nodes, key=cl.rev)
946 for node in nodes:
946 for node in nodes:
947 c = cl.changelogrevision(node)
947 c = cl.changelogrevision(node)
948 clrevorder[node] = len(clrevorder)
948 clrevorder[node] = len(clrevorder)
949 # record the first changeset introducing this manifest version
949 # record the first changeset introducing this manifest version
950 manifests.setdefault(c.manifest, node)
950 manifests.setdefault(c.manifest, node)
951 # Record a complete list of potentially-changed files in
951 # Record a complete list of potentially-changed files in
952 # this manifest.
952 # this manifest.
953 changedfiles.update(c.files)
953 changedfiles.update(c.files)
954
954
955 return state, ()
955 return state, ()
956
956
957 # Callback for the changelog, used to collect changed files and
957 # Callback for the changelog, used to collect changed files and
958 # manifest nodes.
958 # manifest nodes.
959 # Returns the linkrev node (identity in the changelog case).
959 # Returns the linkrev node (identity in the changelog case).
960 def lookupcl(x):
960 def lookupcl(x):
961 c = cl.changelogrevision(x)
961 c = cl.changelogrevision(x)
962 clrevorder[x] = len(clrevorder)
962 clrevorder[x] = len(clrevorder)
963
963
964 if self._ellipses:
964 if self._ellipses:
965 # Only update manifests if x is going to be sent. Otherwise we
965 # Only update manifests if x is going to be sent. Otherwise we
966 # end up with bogus linkrevs specified for manifests and
966 # end up with bogus linkrevs specified for manifests and
967 # we skip some manifest nodes that we should otherwise
967 # we skip some manifest nodes that we should otherwise
968 # have sent.
968 # have sent.
969 if (x in self._fullclnodes
969 if (x in self._fullclnodes
970 or cl.rev(x) in self._precomputedellipsis):
970 or cl.rev(x) in self._precomputedellipsis):
971
971
972 manifestnode = c.manifest
972 manifestnode = c.manifest
973 # Record the first changeset introducing this manifest
973 # Record the first changeset introducing this manifest
974 # version.
974 # version.
975 manifests.setdefault(manifestnode, x)
975 manifests.setdefault(manifestnode, x)
976 # Set this narrow-specific dict so we have the lowest
976 # Set this narrow-specific dict so we have the lowest
977 # manifest revnum to look up for this cl revnum. (Part of
977 # manifest revnum to look up for this cl revnum. (Part of
978 # mapping changelog ellipsis parents to manifest ellipsis
978 # mapping changelog ellipsis parents to manifest ellipsis
979 # parents)
979 # parents)
980 clrevtomanifestrev.setdefault(
980 clrevtomanifestrev.setdefault(
981 cl.rev(x), mfl.rev(manifestnode))
981 cl.rev(x), mfl.rev(manifestnode))
982 # We can't trust the changed files list in the changeset if the
982 # We can't trust the changed files list in the changeset if the
983 # client requested a shallow clone.
983 # client requested a shallow clone.
984 if self._isshallow:
984 if self._isshallow:
985 changedfiles.update(mfl[c.manifest].read().keys())
985 changedfiles.update(mfl[c.manifest].read().keys())
986 else:
986 else:
987 changedfiles.update(c.files)
987 changedfiles.update(c.files)
988 else:
988 else:
989 # record the first changeset introducing this manifest version
989 # record the first changeset introducing this manifest version
990 manifests.setdefault(c.manifest, x)
990 manifests.setdefault(c.manifest, x)
991 # Record a complete list of potentially-changed files in
991 # Record a complete list of potentially-changed files in
992 # this manifest.
992 # this manifest.
993 changedfiles.update(c.files)
993 changedfiles.update(c.files)
994
994
995 return x
995 return x
996
996
997 gen = deltagroup(
997 gen = deltagroup(
998 self._repo, cl, nodes, True, lookupcl,
998 self._repo, cl, nodes, True, lookupcl,
999 self._forcedeltaparentprev,
999 self._forcedeltaparentprev,
1000 ellipses=self._ellipses,
1000 ellipses=self._ellipses,
1001 topic=_('changesets'),
1001 topic=_('changesets'),
1002 clrevtolocalrev={},
1002 clrevtolocalrev={},
1003 fullclnodes=self._fullclnodes,
1003 fullclnodes=self._fullclnodes,
1004 precomputedellipsis=self._precomputedellipsis)
1004 precomputedellipsis=self._precomputedellipsis)
1005
1005
1006 return state, gen
1006 return state, gen
1007
1007
1008 def generatemanifests(self, commonrevs, clrevorder, fastpathlinkrev,
1008 def generatemanifests(self, commonrevs, clrevorder, fastpathlinkrev,
1009 manifests, fnodes, source, clrevtolocalrev):
1009 manifests, fnodes, source, clrevtolocalrev):
1010 """Returns an iterator of changegroup chunks containing manifests.
1010 """Returns an iterator of changegroup chunks containing manifests.
1011
1011
1012 `source` is unused here, but is used by extensions like remotefilelog to
1012 `source` is unused here, but is used by extensions like remotefilelog to
1013 change what is sent based on pulls vs pushes, etc.
1013 change what is sent based on pulls vs pushes, etc.
1014 """
1014 """
1015 repo = self._repo
1015 repo = self._repo
1016 mfl = repo.manifestlog
1016 mfl = repo.manifestlog
1017 tmfnodes = {'': manifests}
1017 tmfnodes = {'': manifests}
1018
1018
1019 # Callback for the manifest, used to collect linkrevs for filelog
1019 # Callback for the manifest, used to collect linkrevs for filelog
1020 # revisions.
1020 # revisions.
1021 # Returns the linkrev node (collected in lookupcl).
1021 # Returns the linkrev node (collected in lookupcl).
1022 def makelookupmflinknode(tree, nodes):
1022 def makelookupmflinknode(tree, nodes):
1023 if fastpathlinkrev:
1023 if fastpathlinkrev:
1024 assert not tree
1024 assert not tree
1025 return manifests.__getitem__
1025 return manifests.__getitem__
1026
1026
1027 def lookupmflinknode(x):
1027 def lookupmflinknode(x):
1028 """Callback for looking up the linknode for manifests.
1028 """Callback for looking up the linknode for manifests.
1029
1029
1030 Returns the linkrev node for the specified manifest.
1030 Returns the linkrev node for the specified manifest.
1031
1031
1032 SIDE EFFECT:
1032 SIDE EFFECT:
1033
1033
1034 1) fclnodes gets populated with the list of relevant
1034 1) fclnodes gets populated with the list of relevant
1035 file nodes if we're not using fastpathlinkrev
1035 file nodes if we're not using fastpathlinkrev
1036 2) When treemanifests are in use, collects treemanifest nodes
1036 2) When treemanifests are in use, collects treemanifest nodes
1037 to send
1037 to send
1038
1038
1039 Note that this means manifests must be completely sent to
1039 Note that this means manifests must be completely sent to
1040 the client before you can trust the list of files and
1040 the client before you can trust the list of files and
1041 treemanifests to send.
1041 treemanifests to send.
1042 """
1042 """
1043 clnode = nodes[x]
1043 clnode = nodes[x]
1044 mdata = mfl.get(tree, x).readfast(shallow=True)
1044 mdata = mfl.get(tree, x).readfast(shallow=True)
1045 for p, n, fl in mdata.iterentries():
1045 for p, n, fl in mdata.iterentries():
1046 if fl == 't': # subdirectory manifest
1046 if fl == 't': # subdirectory manifest
1047 subtree = tree + p + '/'
1047 subtree = tree + p + '/'
1048 tmfclnodes = tmfnodes.setdefault(subtree, {})
1048 tmfclnodes = tmfnodes.setdefault(subtree, {})
1049 tmfclnode = tmfclnodes.setdefault(n, clnode)
1049 tmfclnode = tmfclnodes.setdefault(n, clnode)
1050 if clrevorder[clnode] < clrevorder[tmfclnode]:
1050 if clrevorder[clnode] < clrevorder[tmfclnode]:
1051 tmfclnodes[n] = clnode
1051 tmfclnodes[n] = clnode
1052 else:
1052 else:
1053 f = tree + p
1053 f = tree + p
1054 fclnodes = fnodes.setdefault(f, {})
1054 fclnodes = fnodes.setdefault(f, {})
1055 fclnode = fclnodes.setdefault(n, clnode)
1055 fclnode = fclnodes.setdefault(n, clnode)
1056 if clrevorder[clnode] < clrevorder[fclnode]:
1056 if clrevorder[clnode] < clrevorder[fclnode]:
1057 fclnodes[n] = clnode
1057 fclnodes[n] = clnode
1058 return clnode
1058 return clnode
1059 return lookupmflinknode
1059 return lookupmflinknode
1060
1060
1061 while tmfnodes:
1061 while tmfnodes:
1062 tree, nodes = tmfnodes.popitem()
1062 tree, nodes = tmfnodes.popitem()
1063
1063
1064 should_visit = self._matcher.visitdir(tree[:-1] or '.')
1064 should_visit = self._matcher.visitdir(tree[:-1] or '.')
1065 if tree and not should_visit:
1065 if tree and not should_visit:
1066 continue
1066 continue
1067
1067
1068 store = mfl.getstorage(tree)
1068 store = mfl.getstorage(tree)
1069
1069
1070 if not should_visit:
1070 if not should_visit:
1071 # No nodes to send because this directory is out of
1071 # No nodes to send because this directory is out of
1072 # the client's view of the repository (probably
1072 # the client's view of the repository (probably
1073 # because of narrow clones). Do this even for the root
1073 # because of narrow clones). Do this even for the root
1074 # directory (tree=='')
1074 # directory (tree=='')
1075 prunednodes = []
1075 prunednodes = []
1076 else:
1076 else:
1077 # Avoid sending any manifest nodes we can prove the
1077 # Avoid sending any manifest nodes we can prove the
1078 # client already has by checking linkrevs. See the
1078 # client already has by checking linkrevs. See the
1079 # related comment in generatefiles().
1079 # related comment in generatefiles().
1080 prunednodes = self._prunemanifests(store, nodes, commonrevs)
1080 prunednodes = self._prunemanifests(store, nodes, commonrevs)
1081
1081
1082 if tree and not prunednodes:
1082 if tree and not prunednodes:
1083 continue
1083 continue
1084
1084
1085 lookupfn = makelookupmflinknode(tree, nodes)
1085 lookupfn = makelookupmflinknode(tree, nodes)
1086
1086
1087 deltas = deltagroup(
1087 deltas = deltagroup(
1088 self._repo, store, prunednodes, False, lookupfn,
1088 self._repo, store, prunednodes, False, lookupfn,
1089 self._forcedeltaparentprev,
1089 self._forcedeltaparentprev,
1090 ellipses=self._ellipses,
1090 ellipses=self._ellipses,
1091 topic=_('manifests'),
1091 topic=_('manifests'),
1092 clrevtolocalrev=clrevtolocalrev,
1092 clrevtolocalrev=clrevtolocalrev,
1093 fullclnodes=self._fullclnodes,
1093 fullclnodes=self._fullclnodes,
1094 precomputedellipsis=self._precomputedellipsis)
1094 precomputedellipsis=self._precomputedellipsis)
1095
1095
1096 if not self._oldmatcher.visitdir(store.tree[:-1] or '.'):
1096 if not self._oldmatcher.visitdir(store.tree[:-1] or '.'):
1097 yield tree, deltas
1097 yield tree, deltas
1098 else:
1098 else:
1099 # 'deltas' is a generator and we need to consume it even if
1099 # 'deltas' is a generator and we need to consume it even if
1100 # we are not going to send it because a side-effect is that
1100 # we are not going to send it because a side-effect is that
1101 # it updates tmfnodes (via lookupfn)
1101 # it updates tmfnodes (via lookupfn)
1102 for d in deltas:
1102 for d in deltas:
1103 pass
1103 pass
1104 if not tree:
1104 if not tree:
1105 yield tree, []
1105 yield tree, []
1106
1106
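
The setdefault-then-compare pattern above (for both tmfclnodes and fclnodes) keeps, for every manifest or file node, the changeset that introduces it earliest in the changelog order being sent. A minimal stand-alone sketch of that bookkeeping, with purely illustrative names rather than Mercurial objects:

clrevorder = {'c0': 0, 'c1': 1, 'c2': 2}   # changeset -> emission order
fclnodes = {}                              # node -> linknode (introducing changeset)
for filenode, clnode in [('f1', 'c2'), ('f1', 'c0'), ('f2', 'c1')]:
    prev = fclnodes.setdefault(filenode, clnode)
    if clrevorder[clnode] < clrevorder[prev]:
        fclnodes[filenode] = clnode
print(fclnodes)  # {'f1': 'c0', 'f2': 'c1'}
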
1107 def _prunemanifests(self, store, nodes, commonrevs):
1107 def _prunemanifests(self, store, nodes, commonrevs):
1108 # This is split out as a separate method to allow filtering
1108 # This is split out as a separate method to allow filtering
1109 # commonrevs in extension code.
1109 # commonrevs in extension code.
1110 #
1110 #
1111 # TODO(augie): this shouldn't be required, instead we should
1111 # TODO(augie): this shouldn't be required, instead we should
1112 # make filtering of revisions to send delegated to the store
1112 # make filtering of revisions to send delegated to the store
1113 # layer.
1113 # layer.
1114 frev, flr = store.rev, store.linkrev
1114 frev, flr = store.rev, store.linkrev
1115 return [n for n in nodes if flr(frev(n)) not in commonrevs]
1115 return [n for n in nodes if flr(frev(n)) not in commonrevs]
1116
1116
1117 # The 'source' parameter is useful for extensions
1117 # The 'source' parameter is useful for extensions
1118 def generatefiles(self, changedfiles, commonrevs, source,
1118 def generatefiles(self, changedfiles, commonrevs, source,
1119 mfdicts, fastpathlinkrev, fnodes, clrevs):
1119 mfdicts, fastpathlinkrev, fnodes, clrevs):
1120 changedfiles = [f for f in changedfiles
1120 changedfiles = [f for f in changedfiles
1121 if self._matcher(f) and not self._oldmatcher(f)]
1121 if self._matcher(f) and not self._oldmatcher(f)]
1122
1122
1123 if not fastpathlinkrev:
1123 if not fastpathlinkrev:
1124 def normallinknodes(unused, fname):
1124 def normallinknodes(unused, fname):
1125 return fnodes.get(fname, {})
1125 return fnodes.get(fname, {})
1126 else:
1126 else:
1127 cln = self._repo.changelog.node
1127 cln = self._repo.changelog.node
1128
1128
1129 def normallinknodes(store, fname):
1129 def normallinknodes(store, fname):
1130 flinkrev = store.linkrev
1130 flinkrev = store.linkrev
1131 fnode = store.node
1131 fnode = store.node
1132 revs = ((r, flinkrev(r)) for r in store)
1132 revs = ((r, flinkrev(r)) for r in store)
1133 return dict((fnode(r), cln(lr))
1133 return dict((fnode(r), cln(lr))
1134 for r, lr in revs if lr in clrevs)
1134 for r, lr in revs if lr in clrevs)
1135
1135
1136 clrevtolocalrev = {}
1136 clrevtolocalrev = {}
1137
1137
1138 if self._isshallow:
1138 if self._isshallow:
1139 # In a shallow clone, the linknodes callback needs to also include
1139 # In a shallow clone, the linknodes callback needs to also include
1140 # those file nodes that are in the manifests we sent but weren't
1140 # those file nodes that are in the manifests we sent but weren't
1141 # introduced by those manifests.
1141 # introduced by those manifests.
1142 commonctxs = [self._repo[c] for c in commonrevs]
1142 commonctxs = [self._repo[c] for c in commonrevs]
1143 clrev = self._repo.changelog.rev
1143 clrev = self._repo.changelog.rev
1144
1144
1145 def linknodes(flog, fname):
1145 def linknodes(flog, fname):
1146 for c in commonctxs:
1146 for c in commonctxs:
1147 try:
1147 try:
1148 fnode = c.filenode(fname)
1148 fnode = c.filenode(fname)
1149 clrevtolocalrev[c.rev()] = flog.rev(fnode)
1149 clrevtolocalrev[c.rev()] = flog.rev(fnode)
1150 except error.ManifestLookupError:
1150 except error.ManifestLookupError:
1151 pass
1151 pass
1152 links = normallinknodes(flog, fname)
1152 links = normallinknodes(flog, fname)
1153 if len(links) != len(mfdicts):
1153 if len(links) != len(mfdicts):
1154 for mf, lr in mfdicts:
1154 for mf, lr in mfdicts:
1155 fnode = mf.get(fname, None)
1155 fnode = mf.get(fname, None)
1156 if fnode in links:
1156 if fnode in links:
1157 links[fnode] = min(links[fnode], lr, key=clrev)
1157 links[fnode] = min(links[fnode], lr, key=clrev)
1158 elif fnode:
1158 elif fnode:
1159 links[fnode] = lr
1159 links[fnode] = lr
1160 return links
1160 return links
1161 else:
1161 else:
1162 linknodes = normallinknodes
1162 linknodes = normallinknodes
1163
1163
1164 repo = self._repo
1164 repo = self._repo
1165 progress = repo.ui.makeprogress(_('files'), unit=_('files'),
1165 progress = repo.ui.makeprogress(_('files'), unit=_('files'),
1166 total=len(changedfiles))
1166 total=len(changedfiles))
1167 for i, fname in enumerate(sorted(changedfiles)):
1167 for i, fname in enumerate(sorted(changedfiles)):
1168 filerevlog = repo.file(fname)
1168 filerevlog = repo.file(fname)
1169 if not filerevlog:
1169 if not filerevlog:
1170 raise error.Abort(_("empty or missing file data for %s") %
1170 raise error.Abort(_("empty or missing file data for %s") %
1171 fname)
1171 fname)
1172
1172
1173 clrevtolocalrev.clear()
1173 clrevtolocalrev.clear()
1174
1174
1175 linkrevnodes = linknodes(filerevlog, fname)
1175 linkrevnodes = linknodes(filerevlog, fname)
1176 # Lookup for filenodes; we collected the linkrev nodes above in the
1176 # Lookup for filenodes; we collected the linkrev nodes above in the
1177 # fastpath case and with lookupmf in the slowpath case.
1177 # fastpath case and with lookupmf in the slowpath case.
1178 def lookupfilelog(x):
1178 def lookupfilelog(x):
1179 return linkrevnodes[x]
1179 return linkrevnodes[x]
1180
1180
1181 frev, flr = filerevlog.rev, filerevlog.linkrev
1181 frev, flr = filerevlog.rev, filerevlog.linkrev
1182 # Skip sending any filenode we know the client already
1182 # Skip sending any filenode we know the client already
1183 # has. This avoids over-sending files relatively
1183 # has. This avoids over-sending files relatively
1184 # inexpensively, so it's not a problem if we under-filter
1184 # inexpensively, so it's not a problem if we under-filter
1185 # here.
1185 # here.
1186 filenodes = [n for n in linkrevnodes
1186 filenodes = [n for n in linkrevnodes
1187 if flr(frev(n)) not in commonrevs]
1187 if flr(frev(n)) not in commonrevs]
1188
1188
1189 if not filenodes:
1189 if not filenodes:
1190 continue
1190 continue
1191
1191
1192 progress.update(i + 1, item=fname)
1192 progress.update(i + 1, item=fname)
1193
1193
1194 deltas = deltagroup(
1194 deltas = deltagroup(
1195 self._repo, filerevlog, filenodes, False, lookupfilelog,
1195 self._repo, filerevlog, filenodes, False, lookupfilelog,
1196 self._forcedeltaparentprev,
1196 self._forcedeltaparentprev,
1197 ellipses=self._ellipses,
1197 ellipses=self._ellipses,
1198 clrevtolocalrev=clrevtolocalrev,
1198 clrevtolocalrev=clrevtolocalrev,
1199 fullclnodes=self._fullclnodes,
1199 fullclnodes=self._fullclnodes,
1200 precomputedellipsis=self._precomputedellipsis)
1200 precomputedellipsis=self._precomputedellipsis)
1201
1201
1202 yield fname, deltas
1202 yield fname, deltas
1203
1203
1204 progress.complete()
1204 progress.complete()
1205
1205
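
The commonrevs filter above (and the matching one in _prunemanifests) drops every node whose introducing changeset the client already has. A stand-alone sketch with illustrative names, not the real revlog API:

linkrev_of = {'n1': 5, 'n2': 9, 'n3': 12}   # filenode -> linkrev
commonrevs = {5, 9}                          # revisions the client already has
filenodes = [n for n in linkrev_of if linkrev_of[n] not in commonrevs]
print(filenodes)  # ['n3'] -- only revisions new to the client get bundled
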
1206 def _makecg1packer(repo, oldmatcher, matcher, bundlecaps,
1206 def _makecg1packer(repo, oldmatcher, matcher, bundlecaps,
1207 ellipses=False, shallow=False, ellipsisroots=None,
1207 ellipses=False, shallow=False, ellipsisroots=None,
1208 fullnodes=None):
1208 fullnodes=None):
1209 builddeltaheader = lambda d: _CHANGEGROUPV1_DELTA_HEADER.pack(
1209 builddeltaheader = lambda d: _CHANGEGROUPV1_DELTA_HEADER.pack(
1210 d.node, d.p1node, d.p2node, d.linknode)
1210 d.node, d.p1node, d.p2node, d.linknode)
1211
1211
1212 return cgpacker(repo, oldmatcher, matcher, b'01',
1212 return cgpacker(repo, oldmatcher, matcher, b'01',
1213 builddeltaheader=builddeltaheader,
1213 builddeltaheader=builddeltaheader,
1214 manifestsend=b'',
1214 manifestsend=b'',
1215 forcedeltaparentprev=True,
1215 forcedeltaparentprev=True,
1216 bundlecaps=bundlecaps,
1216 bundlecaps=bundlecaps,
1217 ellipses=ellipses,
1217 ellipses=ellipses,
1218 shallow=shallow,
1218 shallow=shallow,
1219 ellipsisroots=ellipsisroots,
1219 ellipsisroots=ellipsisroots,
1220 fullnodes=fullnodes)
1220 fullnodes=fullnodes)
1221
1221
1222 def _makecg2packer(repo, oldmatcher, matcher, bundlecaps,
1222 def _makecg2packer(repo, oldmatcher, matcher, bundlecaps,
1223 ellipses=False, shallow=False, ellipsisroots=None,
1223 ellipses=False, shallow=False, ellipsisroots=None,
1224 fullnodes=None):
1224 fullnodes=None):
1225 builddeltaheader = lambda d: _CHANGEGROUPV2_DELTA_HEADER.pack(
1225 builddeltaheader = lambda d: _CHANGEGROUPV2_DELTA_HEADER.pack(
1226 d.node, d.p1node, d.p2node, d.basenode, d.linknode)
1226 d.node, d.p1node, d.p2node, d.basenode, d.linknode)
1227
1227
1228 return cgpacker(repo, oldmatcher, matcher, b'02',
1228 return cgpacker(repo, oldmatcher, matcher, b'02',
1229 builddeltaheader=builddeltaheader,
1229 builddeltaheader=builddeltaheader,
1230 manifestsend=b'',
1230 manifestsend=b'',
1231 bundlecaps=bundlecaps,
1231 bundlecaps=bundlecaps,
1232 ellipses=ellipses,
1232 ellipses=ellipses,
1233 shallow=shallow,
1233 shallow=shallow,
1234 ellipsisroots=ellipsisroots,
1234 ellipsisroots=ellipsisroots,
1235 fullnodes=fullnodes)
1235 fullnodes=fullnodes)
1236
1236
1237 def _makecg3packer(repo, oldmatcher, matcher, bundlecaps,
1237 def _makecg3packer(repo, oldmatcher, matcher, bundlecaps,
1238 ellipses=False, shallow=False, ellipsisroots=None,
1238 ellipses=False, shallow=False, ellipsisroots=None,
1239 fullnodes=None):
1239 fullnodes=None):
1240 builddeltaheader = lambda d: _CHANGEGROUPV3_DELTA_HEADER.pack(
1240 builddeltaheader = lambda d: _CHANGEGROUPV3_DELTA_HEADER.pack(
1241 d.node, d.p1node, d.p2node, d.basenode, d.linknode, d.flags)
1241 d.node, d.p1node, d.p2node, d.basenode, d.linknode, d.flags)
1242
1242
1243 return cgpacker(repo, oldmatcher, matcher, b'03',
1243 return cgpacker(repo, oldmatcher, matcher, b'03',
1244 builddeltaheader=builddeltaheader,
1244 builddeltaheader=builddeltaheader,
1245 manifestsend=closechunk(),
1245 manifestsend=closechunk(),
1246 bundlecaps=bundlecaps,
1246 bundlecaps=bundlecaps,
1247 ellipses=ellipses,
1247 ellipses=ellipses,
1248 shallow=shallow,
1248 shallow=shallow,
1249 ellipsisroots=ellipsisroots,
1249 ellipsisroots=ellipsisroots,
1250 fullnodes=fullnodes)
1250 fullnodes=fullnodes)
1251
1251
1252 _packermap = {'01': (_makecg1packer, cg1unpacker),
1252 _packermap = {'01': (_makecg1packer, cg1unpacker),
1253 # cg2 adds support for exchanging generaldelta
1253 # cg2 adds support for exchanging generaldelta
1254 '02': (_makecg2packer, cg2unpacker),
1254 '02': (_makecg2packer, cg2unpacker),
1255 # cg3 adds support for exchanging revlog flags and treemanifests
1255 # cg3 adds support for exchanging revlog flags and treemanifests
1256 '03': (_makecg3packer, cg3unpacker),
1256 '03': (_makecg3packer, cg3unpacker),
1257 }
1257 }
1258
1258
1259 def allsupportedversions(repo):
1259 def allsupportedversions(repo):
1260 versions = set(_packermap.keys())
1260 versions = set(_packermap.keys())
1261 if not (repo.ui.configbool('experimental', 'changegroup3') or
1261 if not (repo.ui.configbool('experimental', 'changegroup3') or
1262 repo.ui.configbool('experimental', 'treemanifest') or
1262 repo.ui.configbool('experimental', 'treemanifest') or
1263 'treemanifest' in repo.requirements):
1263 'treemanifest' in repo.requirements):
1264 versions.discard('03')
1264 versions.discard('03')
1265 return versions
1265 return versions
1266
1266
1267 # Changegroup versions that can be applied to the repo
1267 # Changegroup versions that can be applied to the repo
1268 def supportedincomingversions(repo):
1268 def supportedincomingversions(repo):
1269 return allsupportedversions(repo)
1269 return allsupportedversions(repo)
1270
1270
1271 # Changegroup versions that can be created from the repo
1271 # Changegroup versions that can be created from the repo
1272 def supportedoutgoingversions(repo):
1272 def supportedoutgoingversions(repo):
1273 versions = allsupportedversions(repo)
1273 versions = allsupportedversions(repo)
1274 if 'treemanifest' in repo.requirements:
1274 if 'treemanifest' in repo.requirements:
1275 # Versions 01 and 02 support only flat manifests and it's just too
1275 # Versions 01 and 02 support only flat manifests and it's just too
1276 # expensive to convert between the flat manifest and tree manifest on
1276 # expensive to convert between the flat manifest and tree manifest on
1277 # the fly. Since tree manifests are hashed differently, all of history
1277 # the fly. Since tree manifests are hashed differently, all of history
1278 # would have to be converted. Instead, we simply don't even pretend to
1278 # would have to be converted. Instead, we simply don't even pretend to
1279 # support versions 01 and 02.
1279 # support versions 01 and 02.
1280 versions.discard('01')
1280 versions.discard('01')
1281 versions.discard('02')
1281 versions.discard('02')
1282 if repository.NARROW_REQUIREMENT in repo.requirements:
1282 if repository.NARROW_REQUIREMENT in repo.requirements:
1283 # Versions 01 and 02 don't support revlog flags, and we need to
1283 # Versions 01 and 02 don't support revlog flags, and we need to
1284 # support that for stripping and unbundling to work.
1284 # support that for stripping and unbundling to work.
1285 versions.discard('01')
1285 versions.discard('01')
1286 versions.discard('02')
1286 versions.discard('02')
1287 if LFS_REQUIREMENT in repo.requirements:
1287 if LFS_REQUIREMENT in repo.requirements:
1288 # Versions 01 and 02 don't support revlog flags, and we need to
1288 # Versions 01 and 02 don't support revlog flags, and we need to
1289 # mark LFS entries with REVIDX_EXTSTORED.
1289 # mark LFS entries with REVIDX_EXTSTORED.
1290 versions.discard('01')
1290 versions.discard('01')
1291 versions.discard('02')
1291 versions.discard('02')
1292
1292
1293 return versions
1293 return versions
1294
1294
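
A hedged, stand-alone sketch of the narrowing logic above (the requirement names are illustrative and the config gating of '03' done in allsupportedversions is ignored): certain requirements force the richer cg3 format because the older formats cannot carry the needed data.

def outgoingversions(requirements):
    versions = {'01', '02', '03'}                 # assume all formats enabled
    if requirements & {'treemanifest', 'narrow', 'lfs'}:
        versions -= {'01', '02'}                  # only cg3 carries tree manifests/flags
    return versions

print(sorted(outgoingversions(set())))              # ['01', '02', '03']
print(sorted(outgoingversions({'treemanifest'})))   # ['03']
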
1295 def localversion(repo):
1295 def localversion(repo):
1296 # Finds the best version to use for bundles that are meant to be used
1296 # Finds the best version to use for bundles that are meant to be used
1297 # locally, such as those from strip and shelve, and temporary bundles.
1297 # locally, such as those from strip and shelve, and temporary bundles.
1298 return max(supportedoutgoingversions(repo))
1298 return max(supportedoutgoingversions(repo))
1299
1299
1300 def safeversion(repo):
1300 def safeversion(repo):
1301 # Finds the smallest version that it's safe to assume clients of the repo
1301 # Finds the smallest version that it's safe to assume clients of the repo
1302 # will support. For example, all hg versions that support generaldelta also
1302 # will support. For example, all hg versions that support generaldelta also
1303 # support changegroup 02.
1303 # support changegroup 02.
1304 versions = supportedoutgoingversions(repo)
1304 versions = supportedoutgoingversions(repo)
1305 if 'generaldelta' in repo.requirements:
1305 if 'generaldelta' in repo.requirements:
1306 versions.discard('01')
1306 versions.discard('01')
1307 assert versions
1307 assert versions
1308 return min(versions)
1308 return min(versions)
1309
1309
1310 def getbundler(version, repo, bundlecaps=None, oldmatcher=None,
1310 def getbundler(version, repo, bundlecaps=None, oldmatcher=None,
1311 matcher=None, ellipses=False, shallow=False,
1311 matcher=None, ellipses=False, shallow=False,
1312 ellipsisroots=None, fullnodes=None):
1312 ellipsisroots=None, fullnodes=None):
1313 assert version in supportedoutgoingversions(repo)
1313 assert version in supportedoutgoingversions(repo)
1314
1314
1315 if matcher is None:
1315 if matcher is None:
1316 matcher = matchmod.always(repo.root, '')
1316 matcher = matchmod.always()
1317 if oldmatcher is None:
1317 if oldmatcher is None:
1318 oldmatcher = matchmod.never(repo.root, '')
1318 oldmatcher = matchmod.never()
1319
1319
1320 if version == '01' and not matcher.always():
1320 if version == '01' and not matcher.always():
1321 raise error.ProgrammingError('version 01 changegroups do not support '
1321 raise error.ProgrammingError('version 01 changegroups do not support '
1322 'sparse file matchers')
1322 'sparse file matchers')
1323
1323
1324 if ellipses and version in (b'01', b'02'):
1324 if ellipses and version in (b'01', b'02'):
1325 raise error.Abort(
1325 raise error.Abort(
1326 _('ellipsis nodes require at least cg3 on client and server, '
1326 _('ellipsis nodes require at least cg3 on client and server, '
1327 'but negotiated version %s') % version)
1327 'but negotiated version %s') % version)
1328
1328
1329 # Requested files could include files not in the local store. So
1329 # Requested files could include files not in the local store. So
1330 # filter those out.
1330 # filter those out.
1331 matcher = repo.narrowmatch(matcher)
1331 matcher = repo.narrowmatch(matcher)
1332
1332
1333 fn = _packermap[version][0]
1333 fn = _packermap[version][0]
1334 return fn(repo, oldmatcher, matcher, bundlecaps, ellipses=ellipses,
1334 return fn(repo, oldmatcher, matcher, bundlecaps, ellipses=ellipses,
1335 shallow=shallow, ellipsisroots=ellipsisroots,
1335 shallow=shallow, ellipsisroots=ellipsisroots,
1336 fullnodes=fullnodes)
1336 fullnodes=fullnodes)
1337
1337
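
The change above drops the unused root and cwd arguments from always() and never(). A small sketch of the new call form (assumes the mercurial package is importable; matcher objects are callable on repo-relative byte paths):

from mercurial import match as matchmod

matcher = matchmod.always()      # previously: matchmod.always(repo.root, '')
oldmatcher = matchmod.never()    # previously: matchmod.never(repo.root, '')
print(matcher(b'some/file.txt'), oldmatcher(b'some/file.txt'))  # True False
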
1338 def getunbundler(version, fh, alg, extras=None):
1338 def getunbundler(version, fh, alg, extras=None):
1339 return _packermap[version][1](fh, alg, extras=extras)
1339 return _packermap[version][1](fh, alg, extras=extras)
1340
1340
1341 def _changegroupinfo(repo, nodes, source):
1341 def _changegroupinfo(repo, nodes, source):
1342 if repo.ui.verbose or source == 'bundle':
1342 if repo.ui.verbose or source == 'bundle':
1343 repo.ui.status(_("%d changesets found\n") % len(nodes))
1343 repo.ui.status(_("%d changesets found\n") % len(nodes))
1344 if repo.ui.debugflag:
1344 if repo.ui.debugflag:
1345 repo.ui.debug("list of changesets:\n")
1345 repo.ui.debug("list of changesets:\n")
1346 for node in nodes:
1346 for node in nodes:
1347 repo.ui.debug("%s\n" % hex(node))
1347 repo.ui.debug("%s\n" % hex(node))
1348
1348
1349 def makechangegroup(repo, outgoing, version, source, fastpath=False,
1349 def makechangegroup(repo, outgoing, version, source, fastpath=False,
1350 bundlecaps=None):
1350 bundlecaps=None):
1351 cgstream = makestream(repo, outgoing, version, source,
1351 cgstream = makestream(repo, outgoing, version, source,
1352 fastpath=fastpath, bundlecaps=bundlecaps)
1352 fastpath=fastpath, bundlecaps=bundlecaps)
1353 return getunbundler(version, util.chunkbuffer(cgstream), None,
1353 return getunbundler(version, util.chunkbuffer(cgstream), None,
1354 {'clcount': len(outgoing.missing) })
1354 {'clcount': len(outgoing.missing) })
1355
1355
1356 def makestream(repo, outgoing, version, source, fastpath=False,
1356 def makestream(repo, outgoing, version, source, fastpath=False,
1357 bundlecaps=None, matcher=None):
1357 bundlecaps=None, matcher=None):
1358 bundler = getbundler(version, repo, bundlecaps=bundlecaps,
1358 bundler = getbundler(version, repo, bundlecaps=bundlecaps,
1359 matcher=matcher)
1359 matcher=matcher)
1360
1360
1361 repo = repo.unfiltered()
1361 repo = repo.unfiltered()
1362 commonrevs = outgoing.common
1362 commonrevs = outgoing.common
1363 csets = outgoing.missing
1363 csets = outgoing.missing
1364 heads = outgoing.missingheads
1364 heads = outgoing.missingheads
1365 # We go through the fast path if we get told to, or if all unfiltered
1365 # We go through the fast path if we get told to, or if all unfiltered
1366 # heads have been requested (since we then know that all linkrevs will
1366 # heads have been requested (since we then know that all linkrevs will
1367 # be pulled by the client).
1367 # be pulled by the client).
1368 heads.sort()
1368 heads.sort()
1369 fastpathlinkrev = fastpath or (
1369 fastpathlinkrev = fastpath or (
1370 repo.filtername is None and heads == sorted(repo.heads()))
1370 repo.filtername is None and heads == sorted(repo.heads()))
1371
1371
1372 repo.hook('preoutgoing', throw=True, source=source)
1372 repo.hook('preoutgoing', throw=True, source=source)
1373 _changegroupinfo(repo, csets, source)
1373 _changegroupinfo(repo, csets, source)
1374 return bundler.generate(commonrevs, csets, fastpathlinkrev, source)
1374 return bundler.generate(commonrevs, csets, fastpathlinkrev, source)
1375
1375
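
A stand-alone sketch of the fast-path test above, with illustrative values standing in for outgoing.missingheads and repo.heads():

fastpath = False
filtername = None                # repo is unfiltered
heads = ['h2', 'h1']             # requested heads
repoheads = ['h1', 'h2']         # every head of the repository
heads.sort()
fastpathlinkrev = fastpath or (
    filtername is None and heads == sorted(repoheads))
print(fastpathlinkrev)           # True -> linkrevs can be trusted directly
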
1376 def _addchangegroupfiles(repo, source, revmap, trp, expectedfiles, needfiles):
1376 def _addchangegroupfiles(repo, source, revmap, trp, expectedfiles, needfiles):
1377 revisions = 0
1377 revisions = 0
1378 files = 0
1378 files = 0
1379 progress = repo.ui.makeprogress(_('files'), unit=_('files'),
1379 progress = repo.ui.makeprogress(_('files'), unit=_('files'),
1380 total=expectedfiles)
1380 total=expectedfiles)
1381 for chunkdata in iter(source.filelogheader, {}):
1381 for chunkdata in iter(source.filelogheader, {}):
1382 files += 1
1382 files += 1
1383 f = chunkdata["filename"]
1383 f = chunkdata["filename"]
1384 repo.ui.debug("adding %s revisions\n" % f)
1384 repo.ui.debug("adding %s revisions\n" % f)
1385 progress.increment()
1385 progress.increment()
1386 fl = repo.file(f)
1386 fl = repo.file(f)
1387 o = len(fl)
1387 o = len(fl)
1388 try:
1388 try:
1389 deltas = source.deltaiter()
1389 deltas = source.deltaiter()
1390 if not fl.addgroup(deltas, revmap, trp):
1390 if not fl.addgroup(deltas, revmap, trp):
1391 raise error.Abort(_("received file revlog group is empty"))
1391 raise error.Abort(_("received file revlog group is empty"))
1392 except error.CensoredBaseError as e:
1392 except error.CensoredBaseError as e:
1393 raise error.Abort(_("received delta base is censored: %s") % e)
1393 raise error.Abort(_("received delta base is censored: %s") % e)
1394 revisions += len(fl) - o
1394 revisions += len(fl) - o
1395 if f in needfiles:
1395 if f in needfiles:
1396 needs = needfiles[f]
1396 needs = needfiles[f]
1397 for new in pycompat.xrange(o, len(fl)):
1397 for new in pycompat.xrange(o, len(fl)):
1398 n = fl.node(new)
1398 n = fl.node(new)
1399 if n in needs:
1399 if n in needs:
1400 needs.remove(n)
1400 needs.remove(n)
1401 else:
1401 else:
1402 raise error.Abort(
1402 raise error.Abort(
1403 _("received spurious file revlog entry"))
1403 _("received spurious file revlog entry"))
1404 if not needs:
1404 if not needs:
1405 del needfiles[f]
1405 del needfiles[f]
1406 progress.complete()
1406 progress.complete()
1407
1407
1408 for f, needs in needfiles.iteritems():
1408 for f, needs in needfiles.iteritems():
1409 fl = repo.file(f)
1409 fl = repo.file(f)
1410 for n in needs:
1410 for n in needs:
1411 try:
1411 try:
1412 fl.rev(n)
1412 fl.rev(n)
1413 except error.LookupError:
1413 except error.LookupError:
1414 raise error.Abort(
1414 raise error.Abort(
1415 _('missing file data for %s:%s - run hg verify') %
1415 _('missing file data for %s:%s - run hg verify') %
1416 (f, hex(n)))
1416 (f, hex(n)))
1417
1417
1418 return revisions, files
1418 return revisions, files
@@ -1,1508 +1,1508 b''
1 # dirstate.py - working directory tracking for mercurial
1 # dirstate.py - working directory tracking for mercurial
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import collections
10 import collections
11 import contextlib
11 import contextlib
12 import errno
12 import errno
13 import os
13 import os
14 import stat
14 import stat
15
15
16 from .i18n import _
16 from .i18n import _
17 from .node import nullid
17 from .node import nullid
18 from . import (
18 from . import (
19 encoding,
19 encoding,
20 error,
20 error,
21 match as matchmod,
21 match as matchmod,
22 pathutil,
22 pathutil,
23 policy,
23 policy,
24 pycompat,
24 pycompat,
25 scmutil,
25 scmutil,
26 txnutil,
26 txnutil,
27 util,
27 util,
28 )
28 )
29
29
30 parsers = policy.importmod(r'parsers')
30 parsers = policy.importmod(r'parsers')
31
31
32 propertycache = util.propertycache
32 propertycache = util.propertycache
33 filecache = scmutil.filecache
33 filecache = scmutil.filecache
34 _rangemask = 0x7fffffff
34 _rangemask = 0x7fffffff
35
35
36 dirstatetuple = parsers.dirstatetuple
36 dirstatetuple = parsers.dirstatetuple
37
37
38 class repocache(filecache):
38 class repocache(filecache):
39 """filecache for files in .hg/"""
39 """filecache for files in .hg/"""
40 def join(self, obj, fname):
40 def join(self, obj, fname):
41 return obj._opener.join(fname)
41 return obj._opener.join(fname)
42
42
43 class rootcache(filecache):
43 class rootcache(filecache):
44 """filecache for files in the repository root"""
44 """filecache for files in the repository root"""
45 def join(self, obj, fname):
45 def join(self, obj, fname):
46 return obj._join(fname)
46 return obj._join(fname)
47
47
48 def _getfsnow(vfs):
48 def _getfsnow(vfs):
49 '''Get "now" timestamp on filesystem'''
49 '''Get "now" timestamp on filesystem'''
50 tmpfd, tmpname = vfs.mkstemp()
50 tmpfd, tmpname = vfs.mkstemp()
51 try:
51 try:
52 return os.fstat(tmpfd)[stat.ST_MTIME]
52 return os.fstat(tmpfd)[stat.ST_MTIME]
53 finally:
53 finally:
54 os.close(tmpfd)
54 os.close(tmpfd)
55 vfs.unlink(tmpname)
55 vfs.unlink(tmpname)
56
56
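
_getfsnow() asks the filesystem itself what "now" is, so later mtime comparisons use the same clock and granularity as the files being stat'ed. A stand-alone equivalent that takes a plain directory path instead of a vfs (illustrative, not the Mercurial helper):

import os
import stat
import tempfile

def getfsnow(dirpath):
    # create and stat a scratch file to read the filesystem's idea of "now"
    tmpfd, tmpname = tempfile.mkstemp(dir=dirpath)
    try:
        return os.fstat(tmpfd)[stat.ST_MTIME]
    finally:
        os.close(tmpfd)
        os.unlink(tmpname)

print(getfsnow('.'))
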
57 class dirstate(object):
57 class dirstate(object):
58
58
59 def __init__(self, opener, ui, root, validate, sparsematchfn):
59 def __init__(self, opener, ui, root, validate, sparsematchfn):
60 '''Create a new dirstate object.
60 '''Create a new dirstate object.
61
61
62 opener is an open()-like callable that can be used to open the
62 opener is an open()-like callable that can be used to open the
63 dirstate file; root is the root of the directory tracked by
63 dirstate file; root is the root of the directory tracked by
64 the dirstate.
64 the dirstate.
65 '''
65 '''
66 self._opener = opener
66 self._opener = opener
67 self._validate = validate
67 self._validate = validate
68 self._root = root
68 self._root = root
69 self._sparsematchfn = sparsematchfn
69 self._sparsematchfn = sparsematchfn
70 # ntpath.join(root, '') of Python 2.7.9 does not add sep if root is a
70 # ntpath.join(root, '') of Python 2.7.9 does not add sep if root is a
71 # UNC path pointing to a root share (issue4557)
71 # UNC path pointing to a root share (issue4557)
72 self._rootdir = pathutil.normasprefix(root)
72 self._rootdir = pathutil.normasprefix(root)
73 self._dirty = False
73 self._dirty = False
74 self._lastnormaltime = 0
74 self._lastnormaltime = 0
75 self._ui = ui
75 self._ui = ui
76 self._filecache = {}
76 self._filecache = {}
77 self._parentwriters = 0
77 self._parentwriters = 0
78 self._filename = 'dirstate'
78 self._filename = 'dirstate'
79 self._pendingfilename = '%s.pending' % self._filename
79 self._pendingfilename = '%s.pending' % self._filename
80 self._plchangecallbacks = {}
80 self._plchangecallbacks = {}
81 self._origpl = None
81 self._origpl = None
82 self._updatedfiles = set()
82 self._updatedfiles = set()
83 self._mapcls = dirstatemap
83 self._mapcls = dirstatemap
84 # Access and cache cwd early, so we don't access it for the first time
84 # Access and cache cwd early, so we don't access it for the first time
85 # after a working-copy update caused it to not exist (accessing it then
85 # after a working-copy update caused it to not exist (accessing it then
86 # raises an exception).
86 # raises an exception).
87 self._cwd
87 self._cwd
88
88
89 @contextlib.contextmanager
89 @contextlib.contextmanager
90 def parentchange(self):
90 def parentchange(self):
91 '''Context manager for handling dirstate parents.
91 '''Context manager for handling dirstate parents.
92
92
93 If an exception occurs in the scope of the context manager,
93 If an exception occurs in the scope of the context manager,
94 the incoherent dirstate won't be written when wlock is
94 the incoherent dirstate won't be written when wlock is
95 released.
95 released.
96 '''
96 '''
97 self._parentwriters += 1
97 self._parentwriters += 1
98 yield
98 yield
99 # Typically we want the "undo" step of a context manager in a
99 # Typically we want the "undo" step of a context manager in a
100 # finally block so it happens even when an exception
100 # finally block so it happens even when an exception
101 # occurs. In this case, however, we only want to decrement
101 # occurs. In this case, however, we only want to decrement
102 # parentwriters if the code in the with statement exits
102 # parentwriters if the code in the with statement exits
103 # normally, so we don't have a try/finally here on purpose.
103 # normally, so we don't have a try/finally here on purpose.
104 self._parentwriters -= 1
104 self._parentwriters -= 1
105
105
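
A hedged usage sketch: setparents() (further down) refuses to run unless a parentchange() block is open, so callers wrap the parent update like this (`repo` and `newnode` are assumed to exist already; not runnable on its own):

with repo.dirstate.parentchange():
    repo.dirstate.setparents(newnode)   # allowed only inside parentchange()
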
106 def pendingparentchange(self):
106 def pendingparentchange(self):
107 '''Returns true if the dirstate is in the middle of a set of changes
107 '''Returns true if the dirstate is in the middle of a set of changes
108 that modify the dirstate parent.
108 that modify the dirstate parent.
109 '''
109 '''
110 return self._parentwriters > 0
110 return self._parentwriters > 0
111
111
112 @propertycache
112 @propertycache
113 def _map(self):
113 def _map(self):
114 """Return the dirstate contents (see documentation for dirstatemap)."""
114 """Return the dirstate contents (see documentation for dirstatemap)."""
115 self._map = self._mapcls(self._ui, self._opener, self._root)
115 self._map = self._mapcls(self._ui, self._opener, self._root)
116 return self._map
116 return self._map
117
117
118 @property
118 @property
119 def _sparsematcher(self):
119 def _sparsematcher(self):
120 """The matcher for the sparse checkout.
120 """The matcher for the sparse checkout.
121
121
122 The working directory may not include every file from a manifest. The
122 The working directory may not include every file from a manifest. The
123 matcher obtained by this property will match a path if it is to be
123 matcher obtained by this property will match a path if it is to be
124 included in the working directory.
124 included in the working directory.
125 """
125 """
126 # TODO there is potential to cache this property. For now, the matcher
126 # TODO there is potential to cache this property. For now, the matcher
127 # is resolved on every access. (But the called function does use a
127 # is resolved on every access. (But the called function does use a
128 # cache to keep the lookup fast.)
128 # cache to keep the lookup fast.)
129 return self._sparsematchfn()
129 return self._sparsematchfn()
130
130
131 @repocache('branch')
131 @repocache('branch')
132 def _branch(self):
132 def _branch(self):
133 try:
133 try:
134 return self._opener.read("branch").strip() or "default"
134 return self._opener.read("branch").strip() or "default"
135 except IOError as inst:
135 except IOError as inst:
136 if inst.errno != errno.ENOENT:
136 if inst.errno != errno.ENOENT:
137 raise
137 raise
138 return "default"
138 return "default"
139
139
140 @property
140 @property
141 def _pl(self):
141 def _pl(self):
142 return self._map.parents()
142 return self._map.parents()
143
143
144 def hasdir(self, d):
144 def hasdir(self, d):
145 return self._map.hastrackeddir(d)
145 return self._map.hastrackeddir(d)
146
146
147 @rootcache('.hgignore')
147 @rootcache('.hgignore')
148 def _ignore(self):
148 def _ignore(self):
149 files = self._ignorefiles()
149 files = self._ignorefiles()
150 if not files:
150 if not files:
151 return matchmod.never(self._root, '')
151 return matchmod.never()
152
152
153 pats = ['include:%s' % f for f in files]
153 pats = ['include:%s' % f for f in files]
154 return matchmod.match(self._root, '', [], pats, warn=self._ui.warn)
154 return matchmod.match(self._root, '', [], pats, warn=self._ui.warn)
155
155
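
Each ignore file is wrapped in an 'include:FILE' pattern, which tells the matcher to read its rules from that file. A tiny sketch of the pattern list handed to matchmod.match() above (paths are illustrative):

ignorefiles = ['/repo/.hgignore', '/repo/.hg/ignore-extra']   # illustrative paths
pats = ['include:%s' % f for f in ignorefiles]
print(pats)   # ['include:/repo/.hgignore', 'include:/repo/.hg/ignore-extra']
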
156 @propertycache
156 @propertycache
157 def _slash(self):
157 def _slash(self):
158 return self._ui.configbool('ui', 'slash') and pycompat.ossep != '/'
158 return self._ui.configbool('ui', 'slash') and pycompat.ossep != '/'
159
159
160 @propertycache
160 @propertycache
161 def _checklink(self):
161 def _checklink(self):
162 return util.checklink(self._root)
162 return util.checklink(self._root)
163
163
164 @propertycache
164 @propertycache
165 def _checkexec(self):
165 def _checkexec(self):
166 return util.checkexec(self._root)
166 return util.checkexec(self._root)
167
167
168 @propertycache
168 @propertycache
169 def _checkcase(self):
169 def _checkcase(self):
170 return not util.fscasesensitive(self._join('.hg'))
170 return not util.fscasesensitive(self._join('.hg'))
171
171
172 def _join(self, f):
172 def _join(self, f):
173 # much faster than os.path.join()
173 # much faster than os.path.join()
174 # it's safe because f is always a relative path
174 # it's safe because f is always a relative path
175 return self._rootdir + f
175 return self._rootdir + f
176
176
177 def flagfunc(self, buildfallback):
177 def flagfunc(self, buildfallback):
178 if self._checklink and self._checkexec:
178 if self._checklink and self._checkexec:
179 def f(x):
179 def f(x):
180 try:
180 try:
181 st = os.lstat(self._join(x))
181 st = os.lstat(self._join(x))
182 if util.statislink(st):
182 if util.statislink(st):
183 return 'l'
183 return 'l'
184 if util.statisexec(st):
184 if util.statisexec(st):
185 return 'x'
185 return 'x'
186 except OSError:
186 except OSError:
187 pass
187 pass
188 return ''
188 return ''
189 return f
189 return f
190
190
191 fallback = buildfallback()
191 fallback = buildfallback()
192 if self._checklink:
192 if self._checklink:
193 def f(x):
193 def f(x):
194 if os.path.islink(self._join(x)):
194 if os.path.islink(self._join(x)):
195 return 'l'
195 return 'l'
196 if 'x' in fallback(x):
196 if 'x' in fallback(x):
197 return 'x'
197 return 'x'
198 return ''
198 return ''
199 return f
199 return f
200 if self._checkexec:
200 if self._checkexec:
201 def f(x):
201 def f(x):
202 if 'l' in fallback(x):
202 if 'l' in fallback(x):
203 return 'l'
203 return 'l'
204 if util.isexec(self._join(x)):
204 if util.isexec(self._join(x)):
205 return 'x'
205 return 'x'
206 return ''
206 return ''
207 return f
207 return f
208 else:
208 else:
209 return fallback
209 return fallback
210
210
211 @propertycache
211 @propertycache
212 def _cwd(self):
212 def _cwd(self):
213 # internal config: ui.forcecwd
213 # internal config: ui.forcecwd
214 forcecwd = self._ui.config('ui', 'forcecwd')
214 forcecwd = self._ui.config('ui', 'forcecwd')
215 if forcecwd:
215 if forcecwd:
216 return forcecwd
216 return forcecwd
217 return encoding.getcwd()
217 return encoding.getcwd()
218
218
219 def getcwd(self):
219 def getcwd(self):
220 '''Return the path from which a canonical path is calculated.
220 '''Return the path from which a canonical path is calculated.
221
221
222 This path should be used to resolve file patterns or to convert
222 This path should be used to resolve file patterns or to convert
223 canonical paths back to file paths for display. It shouldn't be
223 canonical paths back to file paths for display. It shouldn't be
224 used to get real file paths. Use vfs functions instead.
224 used to get real file paths. Use vfs functions instead.
225 '''
225 '''
226 cwd = self._cwd
226 cwd = self._cwd
227 if cwd == self._root:
227 if cwd == self._root:
228 return ''
228 return ''
229 # self._root ends with a path separator if self._root is '/' or 'C:\'
229 # self._root ends with a path separator if self._root is '/' or 'C:\'
230 rootsep = self._root
230 rootsep = self._root
231 if not util.endswithsep(rootsep):
231 if not util.endswithsep(rootsep):
232 rootsep += pycompat.ossep
232 rootsep += pycompat.ossep
233 if cwd.startswith(rootsep):
233 if cwd.startswith(rootsep):
234 return cwd[len(rootsep):]
234 return cwd[len(rootsep):]
235 else:
235 else:
236 # we're outside the repo. return an absolute path.
236 # we're outside the repo. return an absolute path.
237 return cwd
237 return cwd
238
238
239 def pathto(self, f, cwd=None):
239 def pathto(self, f, cwd=None):
240 if cwd is None:
240 if cwd is None:
241 cwd = self.getcwd()
241 cwd = self.getcwd()
242 path = util.pathto(self._root, cwd, f)
242 path = util.pathto(self._root, cwd, f)
243 if self._slash:
243 if self._slash:
244 return util.pconvert(path)
244 return util.pconvert(path)
245 return path
245 return path
246
246
247 def __getitem__(self, key):
247 def __getitem__(self, key):
248 '''Return the current state of key (a filename) in the dirstate.
248 '''Return the current state of key (a filename) in the dirstate.
249
249
250 States are:
250 States are:
251 n normal
251 n normal
252 m needs merging
252 m needs merging
253 r marked for removal
253 r marked for removal
254 a marked for addition
254 a marked for addition
255 ? not tracked
255 ? not tracked
256 '''
256 '''
257 return self._map.get(key, ("?",))[0]
257 return self._map.get(key, ("?",))[0]
258
258
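
A small hedged sketch of reading those one-letter states (assumes an existing dirstate object `ds`; keys are repo-relative paths, and the returned code is one of the states listed above):

state = ds[b'path/to/file']   # 'n', 'm', 'r', 'a' or '?' ('?' means untracked)
print(state)
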
259 def __contains__(self, key):
259 def __contains__(self, key):
260 return key in self._map
260 return key in self._map
261
261
262 def __iter__(self):
262 def __iter__(self):
263 return iter(sorted(self._map))
263 return iter(sorted(self._map))
264
264
265 def items(self):
265 def items(self):
266 return self._map.iteritems()
266 return self._map.iteritems()
267
267
268 iteritems = items
268 iteritems = items
269
269
270 def parents(self):
270 def parents(self):
271 return [self._validate(p) for p in self._pl]
271 return [self._validate(p) for p in self._pl]
272
272
273 def p1(self):
273 def p1(self):
274 return self._validate(self._pl[0])
274 return self._validate(self._pl[0])
275
275
276 def p2(self):
276 def p2(self):
277 return self._validate(self._pl[1])
277 return self._validate(self._pl[1])
278
278
279 def branch(self):
279 def branch(self):
280 return encoding.tolocal(self._branch)
280 return encoding.tolocal(self._branch)
281
281
282 def setparents(self, p1, p2=nullid):
282 def setparents(self, p1, p2=nullid):
283 """Set dirstate parents to p1 and p2.
283 """Set dirstate parents to p1 and p2.
284
284
285 When moving from two parents to one, 'm' merged entries are
285 When moving from two parents to one, 'm' merged entries are
286 adjusted to normal and previous copy records discarded and
286 adjusted to normal and previous copy records discarded and
287 returned by the call.
287 returned by the call.
288
288
289 See localrepo.setparents()
289 See localrepo.setparents()
290 """
290 """
291 if self._parentwriters == 0:
291 if self._parentwriters == 0:
292 raise ValueError("cannot set dirstate parent without "
292 raise ValueError("cannot set dirstate parent without "
293 "calling dirstate.beginparentchange")
293 "calling dirstate.beginparentchange")
294
294
295 self._dirty = True
295 self._dirty = True
296 oldp2 = self._pl[1]
296 oldp2 = self._pl[1]
297 if self._origpl is None:
297 if self._origpl is None:
298 self._origpl = self._pl
298 self._origpl = self._pl
299 self._map.setparents(p1, p2)
299 self._map.setparents(p1, p2)
300 copies = {}
300 copies = {}
301 if oldp2 != nullid and p2 == nullid:
301 if oldp2 != nullid and p2 == nullid:
302 candidatefiles = self._map.nonnormalset.union(
302 candidatefiles = self._map.nonnormalset.union(
303 self._map.otherparentset)
303 self._map.otherparentset)
304 for f in candidatefiles:
304 for f in candidatefiles:
305 s = self._map.get(f)
305 s = self._map.get(f)
306 if s is None:
306 if s is None:
307 continue
307 continue
308
308
309 # Discard 'm' markers when moving away from a merge state
309 # Discard 'm' markers when moving away from a merge state
310 if s[0] == 'm':
310 if s[0] == 'm':
311 source = self._map.copymap.get(f)
311 source = self._map.copymap.get(f)
312 if source:
312 if source:
313 copies[f] = source
313 copies[f] = source
314 self.normallookup(f)
314 self.normallookup(f)
315 # Also fix up otherparent markers
315 # Also fix up otherparent markers
316 elif s[0] == 'n' and s[2] == -2:
316 elif s[0] == 'n' and s[2] == -2:
317 source = self._map.copymap.get(f)
317 source = self._map.copymap.get(f)
318 if source:
318 if source:
319 copies[f] = source
319 copies[f] = source
320 self.add(f)
320 self.add(f)
321 return copies
321 return copies
322
322
323 def setbranch(self, branch):
323 def setbranch(self, branch):
324 self.__class__._branch.set(self, encoding.fromlocal(branch))
324 self.__class__._branch.set(self, encoding.fromlocal(branch))
325 f = self._opener('branch', 'w', atomictemp=True, checkambig=True)
325 f = self._opener('branch', 'w', atomictemp=True, checkambig=True)
326 try:
326 try:
327 f.write(self._branch + '\n')
327 f.write(self._branch + '\n')
328 f.close()
328 f.close()
329
329
330 # make sure filecache has the correct stat info for _branch after
330 # make sure filecache has the correct stat info for _branch after
331 # replacing the underlying file
331 # replacing the underlying file
332 ce = self._filecache['_branch']
332 ce = self._filecache['_branch']
333 if ce:
333 if ce:
334 ce.refresh()
334 ce.refresh()
335 except: # re-raises
335 except: # re-raises
336 f.discard()
336 f.discard()
337 raise
337 raise
338
338
339 def invalidate(self):
339 def invalidate(self):
340 '''Causes the next access to reread the dirstate.
340 '''Causes the next access to reread the dirstate.
341
341
342 This is different from localrepo.invalidatedirstate() because it always
342 This is different from localrepo.invalidatedirstate() because it always
343 rereads the dirstate. Use localrepo.invalidatedirstate() if you want to
343 rereads the dirstate. Use localrepo.invalidatedirstate() if you want to
344 check whether the dirstate has changed before rereading it.'''
344 check whether the dirstate has changed before rereading it.'''
345
345
346 for a in (r"_map", r"_branch", r"_ignore"):
346 for a in (r"_map", r"_branch", r"_ignore"):
347 if a in self.__dict__:
347 if a in self.__dict__:
348 delattr(self, a)
348 delattr(self, a)
349 self._lastnormaltime = 0
349 self._lastnormaltime = 0
350 self._dirty = False
350 self._dirty = False
351 self._updatedfiles.clear()
351 self._updatedfiles.clear()
352 self._parentwriters = 0
352 self._parentwriters = 0
353 self._origpl = None
353 self._origpl = None
354
354
355 def copy(self, source, dest):
355 def copy(self, source, dest):
356 """Mark dest as a copy of source. Unmark dest if source is None."""
356 """Mark dest as a copy of source. Unmark dest if source is None."""
357 if source == dest:
357 if source == dest:
358 return
358 return
359 self._dirty = True
359 self._dirty = True
360 if source is not None:
360 if source is not None:
361 self._map.copymap[dest] = source
361 self._map.copymap[dest] = source
362 self._updatedfiles.add(source)
362 self._updatedfiles.add(source)
363 self._updatedfiles.add(dest)
363 self._updatedfiles.add(dest)
364 elif self._map.copymap.pop(dest, None):
364 elif self._map.copymap.pop(dest, None):
365 self._updatedfiles.add(dest)
365 self._updatedfiles.add(dest)
366
366
367 def copied(self, file):
367 def copied(self, file):
368 return self._map.copymap.get(file, None)
368 return self._map.copymap.get(file, None)
369
369
370 def copies(self):
370 def copies(self):
371 return self._map.copymap
371 return self._map.copymap
372
372
373 def _addpath(self, f, state, mode, size, mtime):
373 def _addpath(self, f, state, mode, size, mtime):
374 oldstate = self[f]
374 oldstate = self[f]
375 if state == 'a' or oldstate == 'r':
375 if state == 'a' or oldstate == 'r':
376 scmutil.checkfilename(f)
376 scmutil.checkfilename(f)
377 if self._map.hastrackeddir(f):
377 if self._map.hastrackeddir(f):
378 raise error.Abort(_('directory %r already in dirstate') %
378 raise error.Abort(_('directory %r already in dirstate') %
379 pycompat.bytestr(f))
379 pycompat.bytestr(f))
380 # shadows
380 # shadows
381 for d in util.finddirs(f):
381 for d in util.finddirs(f):
382 if self._map.hastrackeddir(d):
382 if self._map.hastrackeddir(d):
383 break
383 break
384 entry = self._map.get(d)
384 entry = self._map.get(d)
385 if entry is not None and entry[0] != 'r':
385 if entry is not None and entry[0] != 'r':
386 raise error.Abort(
386 raise error.Abort(
387 _('file %r in dirstate clashes with %r') %
387 _('file %r in dirstate clashes with %r') %
388 (pycompat.bytestr(d), pycompat.bytestr(f)))
388 (pycompat.bytestr(d), pycompat.bytestr(f)))
389 self._dirty = True
389 self._dirty = True
390 self._updatedfiles.add(f)
390 self._updatedfiles.add(f)
391 self._map.addfile(f, oldstate, state, mode, size, mtime)
391 self._map.addfile(f, oldstate, state, mode, size, mtime)
392
392
393 def normal(self, f):
393 def normal(self, f):
394 '''Mark a file normal and clean.'''
394 '''Mark a file normal and clean.'''
395 s = os.lstat(self._join(f))
395 s = os.lstat(self._join(f))
396 mtime = s[stat.ST_MTIME]
396 mtime = s[stat.ST_MTIME]
397 self._addpath(f, 'n', s.st_mode,
397 self._addpath(f, 'n', s.st_mode,
398 s.st_size & _rangemask, mtime & _rangemask)
398 s.st_size & _rangemask, mtime & _rangemask)
399 self._map.copymap.pop(f, None)
399 self._map.copymap.pop(f, None)
400 if f in self._map.nonnormalset:
400 if f in self._map.nonnormalset:
401 self._map.nonnormalset.remove(f)
401 self._map.nonnormalset.remove(f)
402 if mtime > self._lastnormaltime:
402 if mtime > self._lastnormaltime:
403 # Remember the most recent modification timeslot for status(),
403 # Remember the most recent modification timeslot for status(),
404 # to make sure we won't miss future size-preserving file content
404 # to make sure we won't miss future size-preserving file content
405 # modifications that happen within the same timeslot.
405 # modifications that happen within the same timeslot.
406 self._lastnormaltime = mtime
406 self._lastnormaltime = mtime
407
407
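
The `& _rangemask` above clamps size and mtime to 31 bits so they fit the signed 32-bit fields of the on-disk dirstate entry; a tiny stand-alone illustration:

_rangemask = 0x7fffffff
huge_mtime = 2 ** 33 + 5            # larger than 31 bits
print(huge_mtime & _rangemask)      # 5 -> value is wrapped into range
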
408 def normallookup(self, f):
408 def normallookup(self, f):
409 '''Mark a file normal, but possibly dirty.'''
409 '''Mark a file normal, but possibly dirty.'''
410 if self._pl[1] != nullid:
410 if self._pl[1] != nullid:
411 # if there is a merge going on and the file was either
411 # if there is a merge going on and the file was either
412 # in state 'm' (-1) or coming from other parent (-2) before
412 # in state 'm' (-1) or coming from other parent (-2) before
413 # being removed, restore that state.
413 # being removed, restore that state.
414 entry = self._map.get(f)
414 entry = self._map.get(f)
415 if entry is not None:
415 if entry is not None:
416 if entry[0] == 'r' and entry[2] in (-1, -2):
416 if entry[0] == 'r' and entry[2] in (-1, -2):
417 source = self._map.copymap.get(f)
417 source = self._map.copymap.get(f)
418 if entry[2] == -1:
418 if entry[2] == -1:
419 self.merge(f)
419 self.merge(f)
420 elif entry[2] == -2:
420 elif entry[2] == -2:
421 self.otherparent(f)
421 self.otherparent(f)
422 if source:
422 if source:
423 self.copy(source, f)
423 self.copy(source, f)
424 return
424 return
425 if entry[0] == 'm' or entry[0] == 'n' and entry[2] == -2:
425 if entry[0] == 'm' or entry[0] == 'n' and entry[2] == -2:
426 return
426 return
427 self._addpath(f, 'n', 0, -1, -1)
427 self._addpath(f, 'n', 0, -1, -1)
428 self._map.copymap.pop(f, None)
429
430 def otherparent(self, f):
431 '''Mark as coming from the other parent, always dirty.'''
432 if self._pl[1] == nullid:
433 raise error.Abort(_("setting %r to other parent "
434 "only allowed in merges") % f)
435 if f in self and self[f] == 'n':
436 # merge-like
437 self._addpath(f, 'm', 0, -2, -1)
438 else:
439 # add-like
440 self._addpath(f, 'n', 0, -2, -1)
441 self._map.copymap.pop(f, None)
442
443 def add(self, f):
444 '''Mark a file added.'''
445 self._addpath(f, 'a', 0, -1, -1)
446 self._map.copymap.pop(f, None)
447
448 def remove(self, f):
449 '''Mark a file removed.'''
450 self._dirty = True
451 oldstate = self[f]
452 size = 0
453 if self._pl[1] != nullid:
454 entry = self._map.get(f)
455 if entry is not None:
456 # backup the previous state
457 if entry[0] == 'm': # merge
458 size = -1
459 elif entry[0] == 'n' and entry[2] == -2: # other parent
460 size = -2
461 self._map.otherparentset.add(f)
462 self._updatedfiles.add(f)
463 self._map.removefile(f, oldstate, size)
464 if size == 0:
465 self._map.copymap.pop(f, None)
466
467 def merge(self, f):
468 '''Mark a file merged.'''
469 if self._pl[1] == nullid:
470 return self.normallookup(f)
471 return self.otherparent(f)
472
473 def drop(self, f):
474 '''Drop a file from the dirstate'''
475 oldstate = self[f]
476 if self._map.dropfile(f, oldstate):
477 self._dirty = True
478 self._updatedfiles.add(f)
479 self._map.copymap.pop(f, None)
480
481 def _discoverpath(self, path, normed, ignoremissing, exists, storemap):
482 if exists is None:
483 exists = os.path.lexists(os.path.join(self._root, path))
484 if not exists:
485 # Maybe a path component exists
486 if not ignoremissing and '/' in path:
487 d, f = path.rsplit('/', 1)
488 d = self._normalize(d, False, ignoremissing, None)
489 folded = d + "/" + f
490 else:
491 # No path components, preserve original case
492 folded = path
493 else:
494 # recursively normalize leading directory components
495 # against dirstate
496 if '/' in normed:
497 d, f = normed.rsplit('/', 1)
498 d = self._normalize(d, False, ignoremissing, True)
499 r = self._root + "/" + d
500 folded = d + "/" + util.fspath(f, r)
501 else:
502 folded = util.fspath(normed, self._root)
503 storemap[normed] = folded
504
505 return folded
506
507 def _normalizefile(self, path, isknown, ignoremissing=False, exists=None):
508 normed = util.normcase(path)
509 folded = self._map.filefoldmap.get(normed, None)
510 if folded is None:
511 if isknown:
512 folded = path
513 else:
514 folded = self._discoverpath(path, normed, ignoremissing, exists,
515 self._map.filefoldmap)
516 return folded
517
518 def _normalize(self, path, isknown, ignoremissing=False, exists=None):
519 normed = util.normcase(path)
520 folded = self._map.filefoldmap.get(normed, None)
521 if folded is None:
522 folded = self._map.dirfoldmap.get(normed, None)
523 if folded is None:
524 if isknown:
525 folded = path
526 else:
527 # store discovered result in dirfoldmap so that future
528 # normalizefile calls don't start matching directories
529 folded = self._discoverpath(path, normed, ignoremissing, exists,
530 self._map.dirfoldmap)
531 return folded
532
533 def normalize(self, path, isknown=False, ignoremissing=False):
534 '''
535 normalize the case of a pathname when on a casefolding filesystem
536
537 isknown specifies whether the filename came from walking the
538 disk, to avoid extra filesystem access.
539
540 If ignoremissing is True, missing paths are returned
541 unchanged. Otherwise, we try harder to normalize possibly
542 existing path components.
543
544 The normalized case is determined based on the following precedence:
545
546 - version of name already stored in the dirstate
547 - version of name stored on disk
548 - version provided via command arguments
549 '''
550
551 if self._checkcase:
552 return self._normalize(path, isknown, ignoremissing)
553 return path
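# An illustrative sketch (not from dirstate.py): on a case-folding filesystem,
# normalize() maps a user-typed path to the spelling the dirstate or the disk
# already uses. Assuming `ds` is a dirstate instance and 'README.txt' is
# tracked (both assumptions for the example):
#
#     ds.normalize('readme.TXT')    # -> 'README.txt' (dirstate spelling wins)
#     ds.normalize('untracked.TXT') # -> on-disk spelling if the file exists,
#                                   #    otherwise the argument is returned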
554
555 def clear(self):
556 self._map.clear()
557 self._lastnormaltime = 0
558 self._updatedfiles.clear()
559 self._dirty = True
560
561 def rebuild(self, parent, allfiles, changedfiles=None):
562 if changedfiles is None:
563 # Rebuild entire dirstate
564 changedfiles = allfiles
565 lastnormaltime = self._lastnormaltime
566 self.clear()
567 self._lastnormaltime = lastnormaltime
568
569 if self._origpl is None:
570 self._origpl = self._pl
571 self._map.setparents(parent, nullid)
572 for f in changedfiles:
573 if f in allfiles:
574 self.normallookup(f)
575 else:
576 self.drop(f)
577
578 self._dirty = True
579
580 def identity(self):
581 '''Return identity of dirstate itself to detect changes in storage
582
583 If identity of previous dirstate is equal to this, writing
584 changes based on the former dirstate out can keep consistency.
585 '''
586 return self._map.identity
587
588 def write(self, tr):
589 if not self._dirty:
590 return
591
592 filename = self._filename
593 if tr:
594 # 'dirstate.write()' is not only for writing in-memory
595 # changes out, but also for dropping ambiguous timestamps;
596 # delaying the write would reintroduce the "ambiguous timestamp issue".
597 # See also the wiki page below for detail:
598 # https://www.mercurial-scm.org/wiki/DirstateTransactionPlan
599
600 # emulate dropping timestamp in 'parsers.pack_dirstate'
601 now = _getfsnow(self._opener)
602 self._map.clearambiguoustimes(self._updatedfiles, now)
603
604 # emulate that all 'dirstate.normal' results are written out
605 self._lastnormaltime = 0
606 self._updatedfiles.clear()
607
608 # delay writing in-memory changes out
609 tr.addfilegenerator('dirstate', (self._filename,),
610 self._writedirstate, location='plain')
611 return
612
613 st = self._opener(filename, "w", atomictemp=True, checkambig=True)
614 self._writedirstate(st)
615
616 def addparentchangecallback(self, category, callback):
617 """add a callback to be called when the wd parents are changed
618
619 Callback will be called with the following arguments:
620 dirstate, (oldp1, oldp2), (newp1, newp2)
621
622 Category is a unique identifier to allow overwriting an old callback
623 with a newer callback.
624 """
625 self._plchangecallbacks[category] = callback
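# An illustrative sketch (not from dirstate.py): an extension might register a
# parent-change hook like this; `repo`, `ui`, and '_onparentchange' are names
# assumed for the example:
#
#     def _onparentchange(dirstate, oldparents, newparents):
#         ui.debug('parents moved from %r to %r\n' % (oldparents, newparents))
#
#     repo.dirstate.addparentchangecallback('myext', _onparentchange)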
626
627 def _writedirstate(self, st):
628 # notify callbacks about parents change
629 if self._origpl is not None and self._origpl != self._pl:
630 for c, callback in sorted(self._plchangecallbacks.iteritems()):
631 callback(self, self._origpl, self._pl)
632 self._origpl = None
633 # use the modification time of the newly created temporary file as the
634 # filesystem's notion of 'now'
635 now = util.fstat(st)[stat.ST_MTIME] & _rangemask
636
637 # a large enough 'delaywrite' prevents 'pack_dirstate' from dropping
638 # the timestamp of each entry in the dirstate, because of 'now > mtime'
639 delaywrite = self._ui.configint('debug', 'dirstate.delaywrite')
640 if delaywrite > 0:
641 # do we have any files to delay for?
642 for f, e in self._map.iteritems():
643 if e[0] == 'n' and e[3] == now:
644 import time # to avoid useless import
645 # rather than sleep n seconds, sleep until the next
646 # multiple of n seconds
647 clock = time.time()
648 start = int(clock) - (int(clock) % delaywrite)
649 end = start + delaywrite
650 time.sleep(end - clock)
651 now = end # trust our estimate that the end is near now
652 break
653
654 self._map.write(st, now)
655 self._lastnormaltime = 0
656 self._dirty = False
657
658 def _dirignore(self, f):
659 if f == '.':
660 return False
661 if self._ignore(f):
662 return True
663 for p in util.finddirs(f):
664 if self._ignore(p):
665 return True
666 return False
667
668 def _ignorefiles(self):
669 files = []
670 if os.path.exists(self._join('.hgignore')):
671 files.append(self._join('.hgignore'))
672 for name, path in self._ui.configitems("ui"):
673 if name == 'ignore' or name.startswith('ignore.'):
674 # we need to use os.path.join here rather than self._join
675 # because path is arbitrary and user-specified
676 files.append(os.path.join(self._rootdir, util.expandpath(path)))
677 return files
678
679 def _ignorefileandline(self, f):
680 files = collections.deque(self._ignorefiles())
681 visited = set()
682 while files:
683 i = files.popleft()
684 patterns = matchmod.readpatternfile(i, self._ui.warn,
685 sourceinfo=True)
686 for pattern, lineno, line in patterns:
687 kind, p = matchmod._patsplit(pattern, 'glob')
688 if kind == "subinclude":
689 if p not in visited:
690 files.append(p)
691 continue
692 m = matchmod.match(self._root, '', [], [pattern],
693 warn=self._ui.warn)
694 if m(f):
695 return (i, lineno, line)
696 visited.add(i)
697 return (None, -1, "")
698
699 def _walkexplicit(self, match, subrepos):
700 '''Get stat data about the files explicitly specified by match.
701
702 Return a triple (results, dirsfound, dirsnotfound).
703 - results is a mapping from filename to stat result. It also contains
704 listings mapping subrepos and .hg to None.
705 - dirsfound is a list of files found to be directories.
706 - dirsnotfound is a list of files that the dirstate thinks are
707 directories and that were not found.'''
708
709 def badtype(mode):
710 kind = _('unknown')
711 if stat.S_ISCHR(mode):
712 kind = _('character device')
713 elif stat.S_ISBLK(mode):
714 kind = _('block device')
715 elif stat.S_ISFIFO(mode):
716 kind = _('fifo')
717 elif stat.S_ISSOCK(mode):
718 kind = _('socket')
719 elif stat.S_ISDIR(mode):
720 kind = _('directory')
721 return _('unsupported file type (type is %s)') % kind
722
723 matchedir = match.explicitdir
724 badfn = match.bad
725 dmap = self._map
726 lstat = os.lstat
727 getkind = stat.S_IFMT
728 dirkind = stat.S_IFDIR
729 regkind = stat.S_IFREG
730 lnkkind = stat.S_IFLNK
731 join = self._join
732 dirsfound = []
733 foundadd = dirsfound.append
734 dirsnotfound = []
735 notfoundadd = dirsnotfound.append
736
737 if not match.isexact() and self._checkcase:
738 normalize = self._normalize
739 else:
740 normalize = None
741
742 files = sorted(match.files())
743 subrepos.sort()
744 i, j = 0, 0
745 while i < len(files) and j < len(subrepos):
746 subpath = subrepos[j] + "/"
747 if files[i] < subpath:
748 i += 1
749 continue
750 while i < len(files) and files[i].startswith(subpath):
751 del files[i]
752 j += 1
753
754 if not files or '.' in files:
755 files = ['.']
756 results = dict.fromkeys(subrepos)
757 results['.hg'] = None
758
759 for ff in files:
760 # constructing the foldmap is expensive, so don't do it for the
761 # common case where files is ['.']
762 if normalize and ff != '.':
763 nf = normalize(ff, False, True)
764 else:
765 nf = ff
766 if nf in results:
767 continue
768
769 try:
770 st = lstat(join(nf))
771 kind = getkind(st.st_mode)
772 if kind == dirkind:
773 if nf in dmap:
774 # file replaced by dir on disk but still in dirstate
775 results[nf] = None
776 if matchedir:
777 matchedir(nf)
778 foundadd((nf, ff))
779 elif kind == regkind or kind == lnkkind:
780 results[nf] = st
781 else:
782 badfn(ff, badtype(kind))
783 if nf in dmap:
784 results[nf] = None
785 except OSError as inst: # nf not found on disk - it is dirstate only
786 if nf in dmap: # does it exactly match a missing file?
787 results[nf] = None
788 else: # does it match a missing directory?
789 if self._map.hasdir(nf):
790 if matchedir:
791 matchedir(nf)
792 notfoundadd(nf)
793 else:
794 badfn(ff, encoding.strtolocal(inst.strerror))
795
796 # match.files() may contain explicitly-specified paths that shouldn't
797 # be taken; drop them from the list of files found. dirsfound/notfound
798 # aren't filtered here because they will be tested later.
799 if match.anypats():
800 for f in list(results):
801 if f == '.hg' or f in subrepos:
802 # keep sentinel to disable further out-of-repo walks
803 continue
804 if not match(f):
805 del results[f]
806
807 # Case insensitive filesystems cannot rely on lstat() failing to detect
808 # a case-only rename. Prune the stat object for any file that does not
809 # match the case in the filesystem, if there are multiple files that
810 # normalize to the same path.
811 if match.isexact() and self._checkcase:
812 normed = {}
813
814 for f, st in results.iteritems():
815 if st is None:
816 continue
817
818 nc = util.normcase(f)
819 paths = normed.get(nc)
820
821 if paths is None:
822 paths = set()
823 normed[nc] = paths
824
825 paths.add(f)
826
827 for norm, paths in normed.iteritems():
828 if len(paths) > 1:
829 for path in paths:
830 folded = self._discoverpath(path, norm, True, None,
831 self._map.dirfoldmap)
832 if path != folded:
833 results[path] = None
834
835 return results, dirsfound, dirsnotfound
836
837 def walk(self, match, subrepos, unknown, ignored, full=True):
838 '''
839 Walk recursively through the directory tree, finding all files
840 matched by match.
841
842 If full is False, maybe skip some known-clean files.
843
844 Return a dict mapping filename to stat-like object (either
845 mercurial.osutil.stat instance or return value of os.stat()).
846
847 '''
848 # full is a flag that extensions that hook into walk can use -- this
849 # implementation doesn't use it at all. This satisfies the contract
850 # because we only guarantee a "maybe".
851
852 if ignored:
853 ignore = util.never
854 dirignore = util.never
855 elif unknown:
856 ignore = self._ignore
857 dirignore = self._dirignore
858 else:
859 # if not unknown and not ignored, drop dir recursion and step 2
860 ignore = util.always
861 dirignore = util.always
862
863 matchfn = match.matchfn
864 matchalways = match.always()
865 matchtdir = match.traversedir
866 dmap = self._map
867 listdir = util.listdir
868 lstat = os.lstat
869 dirkind = stat.S_IFDIR
870 regkind = stat.S_IFREG
871 lnkkind = stat.S_IFLNK
872 join = self._join
873
874 exact = skipstep3 = False
875 if match.isexact(): # match.exact
876 exact = True
877 dirignore = util.always # skip step 2
878 elif match.prefix(): # match.match, no patterns
879 skipstep3 = True
880
881 if not exact and self._checkcase:
882 normalize = self._normalize
883 normalizefile = self._normalizefile
884 skipstep3 = False
885 else:
886 normalize = self._normalize
887 normalizefile = None
888
889 # step 1: find all explicit files
890 results, work, dirsnotfound = self._walkexplicit(match, subrepos)
891
892 skipstep3 = skipstep3 and not (work or dirsnotfound)
893 work = [d for d in work if not dirignore(d[0])]
894
895 # step 2: visit subdirectories
896 def traverse(work, alreadynormed):
897 wadd = work.append
898 while work:
899 nd = work.pop()
900 visitentries = match.visitchildrenset(nd)
901 if not visitentries:
902 continue
903 if visitentries == 'this' or visitentries == 'all':
904 visitentries = None
905 skip = None
906 if nd == '.':
907 nd = ''
908 else:
909 skip = '.hg'
910 try:
911 entries = listdir(join(nd), stat=True, skip=skip)
912 except OSError as inst:
913 if inst.errno in (errno.EACCES, errno.ENOENT):
914 match.bad(self.pathto(nd),
915 encoding.strtolocal(inst.strerror))
916 continue
917 raise
918 for f, kind, st in entries:
919 # Some matchers may return files in the visitentries set,
920 # instead of 'this', if the matcher explicitly mentions them
921 # and is not an exactmatcher. This is acceptable; we do not
922 # make any hard assumptions about file-or-directory below
923 # based on the presence of `f` in visitentries. If
924 # visitchildrenset returned a set, we can always skip the
925 # entries *not* in the set it provided regardless of whether
926 # they're actually a file or a directory.
927 if visitentries and f not in visitentries:
928 continue
929 if normalizefile:
930 # even though f might be a directory, we're only
931 # interested in comparing it to files currently in the
932 # dmap -- therefore normalizefile is enough
933 nf = normalizefile(nd and (nd + "/" + f) or f, True,
934 True)
935 else:
936 nf = nd and (nd + "/" + f) or f
937 if nf not in results:
938 if kind == dirkind:
939 if not ignore(nf):
940 if matchtdir:
941 matchtdir(nf)
942 wadd(nf)
943 if nf in dmap and (matchalways or matchfn(nf)):
944 results[nf] = None
945 elif kind == regkind or kind == lnkkind:
946 if nf in dmap:
947 if matchalways or matchfn(nf):
948 results[nf] = st
949 elif ((matchalways or matchfn(nf))
950 and not ignore(nf)):
951 # unknown file -- normalize if necessary
952 if not alreadynormed:
953 nf = normalize(nf, False, True)
954 results[nf] = st
955 elif nf in dmap and (matchalways or matchfn(nf)):
956 results[nf] = None
957
958 for nd, d in work:
959 # alreadynormed means that processwork doesn't have to do any
960 # expensive directory normalization
961 alreadynormed = not normalize or nd == d
962 traverse([d], alreadynormed)
963
964 for s in subrepos:
965 del results[s]
966 del results['.hg']
967
968 # step 3: visit remaining files from dmap
969 if not skipstep3 and not exact:
970 # If a dmap file is not in results yet, it was either
971 # a) not matching matchfn b) ignored, c) missing, or d) under a
972 # symlink directory.
973 if not results and matchalways:
974 visit = [f for f in dmap]
975 else:
976 visit = [f for f in dmap if f not in results and matchfn(f)]
977 visit.sort()
978
979 if unknown:
980 # unknown == True means we walked all dirs under the roots
981 # that weren't ignored, and everything that matched was stat'ed
982 # and is already in results.
983 # The rest must thus be ignored or under a symlink.
984 audit_path = pathutil.pathauditor(self._root, cached=True)
985
986 for nf in iter(visit):
987 # If a stat for the same file was already added with a
988 # different case, don't add one for this, since that would
989 # make it appear as if the file exists under both names
990 # on disk.
991 if (normalizefile and
992 normalizefile(nf, True, True) in results):
993 results[nf] = None
994 # Report ignored items in the dmap as long as they are not
995 # under a symlink directory.
996 elif audit_path.check(nf):
997 try:
998 results[nf] = lstat(join(nf))
999 # file was just ignored, no links, and exists
1000 except OSError:
1001 # file doesn't exist
1002 results[nf] = None
1003 else:
1004 # It's either missing or under a symlink directory
1005 # which we in this case report as missing
1006 results[nf] = None
1007 else:
1008 # We may not have walked the full directory tree above,
1009 # so stat and check everything we missed.
1010 iv = iter(visit)
1011 for st in util.statfiles([join(i) for i in visit]):
1012 results[next(iv)] = st
1013 return results
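# Summary note (editorial, not from dirstate.py): walk() proceeds in the three
# steps marked in its body -- stat the explicitly listed files, recurse into
# directories permitted by the matcher's visitchildrenset(), then sweep any
# dirstate entries never seen on disk, recording either a fresh lstat() result
# or None for each of them.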
1014
1015 def status(self, match, subrepos, ignored, clean, unknown):
1016 '''Determine the status of the working copy relative to the
1017 dirstate and return a pair of (unsure, status), where status is of type
1018 scmutil.status and:
1019
1020 unsure:
1021 files that might have been modified since the dirstate was
1022 written, but need to be read to be sure (size is the same
1023 but mtime differs)
1024 status.modified:
1025 files that have definitely been modified since the dirstate
1026 was written (different size or mode)
1027 status.clean:
1028 files that have definitely not been modified since the
1029 dirstate was written
1030 '''
1031 listignored, listclean, listunknown = ignored, clean, unknown
1032 lookup, modified, added, unknown, ignored = [], [], [], [], []
1033 removed, deleted, clean = [], [], []
1034
1035 dmap = self._map
1036 dmap.preload()
1037 dcontains = dmap.__contains__
1038 dget = dmap.__getitem__
1039 ladd = lookup.append # aka "unsure"
1040 madd = modified.append
1041 aadd = added.append
1042 uadd = unknown.append
1043 iadd = ignored.append
1044 radd = removed.append
1045 dadd = deleted.append
1046 cadd = clean.append
1047 mexact = match.exact
1048 dirignore = self._dirignore
1049 checkexec = self._checkexec
1050 copymap = self._map.copymap
1051 lastnormaltime = self._lastnormaltime
1052
1053 # We need to do full walks when either
1054 # - we're listing all clean files, or
1055 # - match.traversedir does something, because match.traversedir should
1056 # be called for every dir in the working dir
1057 full = listclean or match.traversedir is not None
1058 for fn, st in self.walk(match, subrepos, listunknown, listignored,
1059 full=full).iteritems():
1060 if not dcontains(fn):
1061 if (listignored or mexact(fn)) and dirignore(fn):
1062 if listignored:
1063 iadd(fn)
1064 else:
1065 uadd(fn)
1066 continue
1067
1068 # This is equivalent to 'state, mode, size, time = dmap[fn]' but not
1069 # written like that for performance reasons. dmap[fn] is not a
1070 # Python tuple in compiled builds. The CPython UNPACK_SEQUENCE
1071 # opcode has fast paths when the value to be unpacked is a tuple or
1072 # a list, but falls back to creating a full-fledged iterator in
1073 # general. That is much slower than simply accessing and storing the
1074 # tuple members one by one.
1075 t = dget(fn)
1076 state = t[0]
1077 mode = t[1]
1078 size = t[2]
1079 time = t[3]
1080
1081 if not st and state in "nma":
1082 dadd(fn)
1083 elif state == 'n':
1084 if (size >= 0 and
1085 ((size != st.st_size and size != st.st_size & _rangemask)
1086 or ((mode ^ st.st_mode) & 0o100 and checkexec))
1087 or size == -2 # other parent
1088 or fn in copymap):
1089 madd(fn)
1090 elif (time != st[stat.ST_MTIME]
1091 and time != st[stat.ST_MTIME] & _rangemask):
1092 ladd(fn)
1093 elif st[stat.ST_MTIME] == lastnormaltime:
1094 # fn may have just been marked as normal and it may have
1095 # changed in the same second without changing its size.
1096 # This can happen if we quickly do multiple commits.
1097 # Force lookup, so we don't miss such a racy file change.
1098 ladd(fn)
1099 elif listclean:
1100 cadd(fn)
1101 elif state == 'm':
1102 madd(fn)
1103 elif state == 'a':
1104 aadd(fn)
1105 elif state == 'r':
1106 radd(fn)
1107
1108 return (lookup, scmutil.status(modified, added, removed, deleted,
1109 unknown, ignored, clean))
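# An illustrative sketch (not from dirstate.py): callers usually unpack the
# pair returned by status(); `ds` is assumed to be a dirstate and `m` a
# matcher:
#
#     unsure, st = ds.status(m, [], ignored=False, clean=False, unknown=True)
#     # files in `unsure` match on size but not mtime and still need a content
#     # comparison; st.modified, st.added, st.removed, st.deleted, st.unknown,
#     # st.ignored and st.clean are definite answers.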
1110
1111 def matches(self, match):
1112 '''
1113 return files in the dirstate (in whatever state) filtered by match
1114 '''
1115 dmap = self._map
1116 if match.always():
1117 return dmap.keys()
1118 files = match.files()
1119 if match.isexact():
1120 # fast path -- filter the other way around, since typically files is
1121 # much smaller than dmap
1122 return [f for f in files if f in dmap]
1123 if match.prefix() and all(fn in dmap for fn in files):
1124 # fast path -- all the values are known to be files, so just return
1125 # that
1126 return list(files)
1127 return [f for f in dmap if match(f)]
1128
1129 def _actualfilename(self, tr):
1130 if tr:
1131 return self._pendingfilename
1132 else:
1133 return self._filename
1134
1135 def savebackup(self, tr, backupname):
1136 '''Save current dirstate into backup file'''
1137 filename = self._actualfilename(tr)
1138 assert backupname != filename
1139
1140 # use '_writedirstate' instead of 'write' to make sure changes are written out,
1141 # because the latter omits writing out while a transaction is running.
1142 # output file will be used to create backup of dirstate at this point.
1143 if self._dirty or not self._opener.exists(filename):
1144 self._writedirstate(self._opener(filename, "w", atomictemp=True,
1145 checkambig=True))
1146
1147 if tr:
1148 # ensure that subsequent tr.writepending returns True for
1149 # changes written out above, even if dirstate is never
1150 # changed after this
1151 tr.addfilegenerator('dirstate', (self._filename,),
1152 self._writedirstate, location='plain')
1153
1154 # ensure that pending file written above is unlinked at
1155 # failure, even if tr.writepending isn't invoked until the
1156 # end of this transaction
1157 tr.registertmp(filename, location='plain')
1158
1159 self._opener.tryunlink(backupname)
1160 # hardlink backup is okay because _writedirstate is always called
1161 # with an "atomictemp=True" file.
1162 util.copyfile(self._opener.join(filename),
1163 self._opener.join(backupname), hardlink=True)
1164
1165 def restorebackup(self, tr, backupname):
1166 '''Restore dirstate from backup file'''
1167 # this "invalidate()" prevents "wlock.release()" from writing
1168 # changes of dirstate out after restoring from backup file
1169 self.invalidate()
1170 filename = self._actualfilename(tr)
1171 o = self._opener
1172 if util.samefile(o.join(backupname), o.join(filename)):
1173 o.unlink(backupname)
1174 else:
1175 o.rename(backupname, filename, checkambig=True)
1176
1177 def clearbackup(self, tr, backupname):
1178 '''Clear backup file'''
1179 self._opener.unlink(backupname)
1180
1181 class dirstatemap(object):
1182 """Map encapsulating the dirstate's contents.
1183
1184 The dirstate contains the following state:
1185
1186 - `identity` is the identity of the dirstate file, which can be used to
1187 detect when changes have occurred to the dirstate file.
1188
1189 - `parents` is a pair containing the parents of the working copy. The
1190 parents are updated by calling `setparents`.
1191
1192 - the state map maps filenames to tuples of (state, mode, size, mtime),
1193 where state is a single character representing 'normal', 'added',
1194 'removed', or 'merged'. It is read by treating the dirstate as a
1195 dict. File state is updated by calling the `addfile`, `removefile` and
1196 `dropfile` methods.
1197
1198 - `copymap` maps destination filenames to their source filename.
1199
1200 The dirstate also provides the following views onto the state:
1201
1202 - `nonnormalset` is a set of the filenames that have state other
1203 than 'normal', or are normal but have an mtime of -1 ('normallookup').
1204
1205 - `otherparentset` is a set of the filenames that are marked as coming
1206 from the second parent when the dirstate is currently being merged.
1207
1208 - `filefoldmap` is a dict mapping normalized filenames to the denormalized
1209 form that they appear as in the dirstate.
1210
1211 - `dirfoldmap` is a dict mapping normalized directory names to the
1212 denormalized form that they appear as in the dirstate.
1213 """
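# An illustrative sketch (not from dirstate.py): a plain-dict rendering of the
# state map described above might look like the following (filenames and
# values invented for the example):
#
#     {'README':  ('n', 0o644, 61, 1547840418),  # normal (clean) entry
#      'newfile': ('a', 0, -1, -1),              # added; size/mtime unknown
#      'gone':    ('r', 0, 0, 0)}                # removed
#     copymap = {'renamed.txt': 'original.txt'}  # destination -> source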
1214
1215 def __init__(self, ui, opener, root):
1216 self._ui = ui
1217 self._opener = opener
1218 self._root = root
1219 self._filename = 'dirstate'
1220
1221 self._parents = None
1222 self._dirtyparents = False
1223
1224 # for consistent view between _pl() and _read() invocations
1225 self._pendingmode = None
1226
1227 @propertycache
1228 def _map(self):
1229 self._map = {}
1230 self.read()
1231 return self._map
1232
1233 @propertycache
1234 def copymap(self):
1235 self.copymap = {}
1236 self._map
1237 return self.copymap
1238
1239 def clear(self):
1240 self._map.clear()
1241 self.copymap.clear()
1242 self.setparents(nullid, nullid)
1243 util.clearcachedproperty(self, "_dirs")
1244 util.clearcachedproperty(self, "_alldirs")
1245 util.clearcachedproperty(self, "filefoldmap")
1246 util.clearcachedproperty(self, "dirfoldmap")
1247 util.clearcachedproperty(self, "nonnormalset")
1248 util.clearcachedproperty(self, "otherparentset")
1249
1250 def items(self):
1251 return self._map.iteritems()
1252
1253 # forward for python2,3 compat
1254 iteritems = items
1255
1256 def __len__(self):
1257 return len(self._map)
1258
1259 def __iter__(self):
1260 return iter(self._map)
1261
1262 def get(self, key, default=None):
1263 return self._map.get(key, default)
1264
1265 def __contains__(self, key):
1266 return key in self._map
1267
1268 def __getitem__(self, key):
1269 return self._map[key]
1270
1271 def keys(self):
1272 return self._map.keys()
1273
1274 def preload(self):
1275 """Loads the underlying data, if it's not already loaded"""
1276 self._map
1277
1278 def addfile(self, f, oldstate, state, mode, size, mtime):
1279 """Add a tracked file to the dirstate."""
1280 if oldstate in "?r" and r"_dirs" in self.__dict__:
1281 self._dirs.addpath(f)
1282 if oldstate == "?" and r"_alldirs" in self.__dict__:
1283 self._alldirs.addpath(f)
1284 self._map[f] = dirstatetuple(state, mode, size, mtime)
1285 if state != 'n' or mtime == -1:
1286 self.nonnormalset.add(f)
1287 if size == -2:
1288 self.otherparentset.add(f)
1289
1290 def removefile(self, f, oldstate, size):
1291 """
1292 Mark a file as removed in the dirstate.
1293
1294 The `size` parameter is used to store sentinel values that indicate
1295 the file's previous state. In the future, we should refactor this
1296 to be more explicit about what that state is.
1297 """
1298 if oldstate not in "?r" and r"_dirs" in self.__dict__:
1299 self._dirs.delpath(f)
1300 if oldstate == "?" and r"_alldirs" in self.__dict__:
1301 self._alldirs.addpath(f)
1302 if r"filefoldmap" in self.__dict__:
1303 normed = util.normcase(f)
1304 self.filefoldmap.pop(normed, None)
1305 self._map[f] = dirstatetuple('r', 0, size, 0)
1306 self.nonnormalset.add(f)
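# Editorial note: the `size` sentinels passed to removefile() mirror what
# dirstate.remove() computes above -- -1 for a file that was in merge state,
# -2 for one taken from the other parent, and 0 for an ordinary removal.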
1307
1307
1308 def dropfile(self, f, oldstate):
1308 def dropfile(self, f, oldstate):
1309 """
1309 """
1310 Remove a file from the dirstate. Returns True if the file was
1310 Remove a file from the dirstate. Returns True if the file was
1311 previously recorded.
1311 previously recorded.
1312 """
1312 """
1313 exists = self._map.pop(f, None) is not None
1313 exists = self._map.pop(f, None) is not None
1314 if exists:
1314 if exists:
1315 if oldstate != "r" and r"_dirs" in self.__dict__:
1315 if oldstate != "r" and r"_dirs" in self.__dict__:
1316 self._dirs.delpath(f)
1316 self._dirs.delpath(f)
1317 if r"_alldirs" in self.__dict__:
1317 if r"_alldirs" in self.__dict__:
1318 self._alldirs.delpath(f)
1318 self._alldirs.delpath(f)
1319 if r"filefoldmap" in self.__dict__:
1319 if r"filefoldmap" in self.__dict__:
1320 normed = util.normcase(f)
1320 normed = util.normcase(f)
1321 self.filefoldmap.pop(normed, None)
1321 self.filefoldmap.pop(normed, None)
1322 self.nonnormalset.discard(f)
1322 self.nonnormalset.discard(f)
1323 return exists
1323 return exists
1324
1324
1325 def clearambiguoustimes(self, files, now):
1325 def clearambiguoustimes(self, files, now):
1326 for f in files:
1326 for f in files:
1327 e = self.get(f)
1327 e = self.get(f)
1328 if e is not None and e[0] == 'n' and e[3] == now:
1328 if e is not None and e[0] == 'n' and e[3] == now:
1329 self._map[f] = dirstatetuple(e[0], e[1], e[2], -1)
1329 self._map[f] = dirstatetuple(e[0], e[1], e[2], -1)
1330 self.nonnormalset.add(f)
1330 self.nonnormalset.add(f)
1331
1331
1332 def nonnormalentries(self):
1332 def nonnormalentries(self):
1333 '''Compute the nonnormal dirstate entries from the dmap'''
1333 '''Compute the nonnormal dirstate entries from the dmap'''
1334 try:
1334 try:
1335 return parsers.nonnormalotherparententries(self._map)
1335 return parsers.nonnormalotherparententries(self._map)
1336 except AttributeError:
1336 except AttributeError:
1337 nonnorm = set()
1337 nonnorm = set()
1338 otherparent = set()
1338 otherparent = set()
1339 for fname, e in self._map.iteritems():
1339 for fname, e in self._map.iteritems():
1340 if e[0] != 'n' or e[3] == -1:
1340 if e[0] != 'n' or e[3] == -1:
1341 nonnorm.add(fname)
1341 nonnorm.add(fname)
1342 if e[0] == 'n' and e[2] == -2:
1342 if e[0] == 'n' and e[2] == -2:
1343 otherparent.add(fname)
1343 otherparent.add(fname)
1344 return nonnorm, otherparent
1344 return nonnorm, otherparent
1345
1345
1346 @propertycache
1346 @propertycache
1347 def filefoldmap(self):
1347 def filefoldmap(self):
1348 """Returns a dictionary mapping normalized case paths to their
1348 """Returns a dictionary mapping normalized case paths to their
1349 non-normalized versions.
1349 non-normalized versions.
1350 """
1350 """
1351 try:
1351 try:
1352 makefilefoldmap = parsers.make_file_foldmap
1352 makefilefoldmap = parsers.make_file_foldmap
1353 except AttributeError:
1353 except AttributeError:
1354 pass
1354 pass
1355 else:
1355 else:
1356 return makefilefoldmap(self._map, util.normcasespec,
1356 return makefilefoldmap(self._map, util.normcasespec,
1357 util.normcasefallback)
1357 util.normcasefallback)
1358
1358
1359 f = {}
1359 f = {}
1360 normcase = util.normcase
1360 normcase = util.normcase
1361 for name, s in self._map.iteritems():
1361 for name, s in self._map.iteritems():
1362 if s[0] != 'r':
1362 if s[0] != 'r':
1363 f[normcase(name)] = name
1363 f[normcase(name)] = name
1364 f['.'] = '.' # prevents useless util.fspath() invocation
1364 f['.'] = '.' # prevents useless util.fspath() invocation
1365 return f
1365 return f
1366
1366
1367 def hastrackeddir(self, d):
1367 def hastrackeddir(self, d):
1368 """
1368 """
1369 Returns True if the dirstate contains a tracked (not removed) file
1369 Returns True if the dirstate contains a tracked (not removed) file
1370 in this directory.
1370 in this directory.
1371 """
1371 """
1372 return d in self._dirs
1372 return d in self._dirs
1373
1373
1374 def hasdir(self, d):
1374 def hasdir(self, d):
1375 """
1375 """
1376 Returns True if the dirstate contains a file (tracked or removed)
1376 Returns True if the dirstate contains a file (tracked or removed)
1377 in this directory.
1377 in this directory.
1378 """
1378 """
1379 return d in self._alldirs
1379 return d in self._alldirs
1380
1380
1381 @propertycache
1381 @propertycache
1382 def _dirs(self):
1382 def _dirs(self):
1383 return util.dirs(self._map, 'r')
1383 return util.dirs(self._map, 'r')
1384
1384
1385 @propertycache
1385 @propertycache
1386 def _alldirs(self):
1386 def _alldirs(self):
1387 return util.dirs(self._map)
1387 return util.dirs(self._map)
1388
1388
1389 def _opendirstatefile(self):
1389 def _opendirstatefile(self):
1390 fp, mode = txnutil.trypending(self._root, self._opener, self._filename)
1390 fp, mode = txnutil.trypending(self._root, self._opener, self._filename)
1391 if self._pendingmode is not None and self._pendingmode != mode:
1391 if self._pendingmode is not None and self._pendingmode != mode:
1392 fp.close()
1392 fp.close()
1393 raise error.Abort(_('working directory state may be '
1393 raise error.Abort(_('working directory state may be '
1394 'changed in parallel'))
1394 'changed in parallel'))
1395 self._pendingmode = mode
1395 self._pendingmode = mode
1396 return fp
1396 return fp
1397
1397
1398 def parents(self):
1398 def parents(self):
1399 if not self._parents:
1399 if not self._parents:
1400 try:
1400 try:
1401 fp = self._opendirstatefile()
1401 fp = self._opendirstatefile()
1402 st = fp.read(40)
1402 st = fp.read(40)
1403 fp.close()
1403 fp.close()
1404 except IOError as err:
1404 except IOError as err:
1405 if err.errno != errno.ENOENT:
1405 if err.errno != errno.ENOENT:
1406 raise
1406 raise
1407 # File doesn't exist, so the current state is empty
1407 # File doesn't exist, so the current state is empty
1408 st = ''
1408 st = ''
1409
1409
1410 l = len(st)
1410 l = len(st)
1411 if l == 40:
1411 if l == 40:
1412 self._parents = (st[:20], st[20:40])
1412 self._parents = (st[:20], st[20:40])
1413 elif l == 0:
1413 elif l == 0:
1414 self._parents = (nullid, nullid)
1414 self._parents = (nullid, nullid)
1415 else:
1415 else:
1416 raise error.Abort(_('working directory state appears '
1416 raise error.Abort(_('working directory state appears '
1417 'damaged!'))
1417 'damaged!'))
1418
1418
1419 return self._parents
1419 return self._parents
1420
1420
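As parsed above, the first 40 bytes of the dirstate file are just the two
parent nodes, 20 binary bytes each (the second is nullid outside a merge). A
quick way to peek at them, assuming this is run from the root of a checkout
that already has a dirstate:

    from binascii import hexlify
    with open('.hg/dirstate', 'rb') as fp:
        header = fp.read(40)
    p1, p2 = header[:20], header[20:40]
    print(hexlify(p1), hexlify(p2))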
1421 def setparents(self, p1, p2):
1421 def setparents(self, p1, p2):
1422 self._parents = (p1, p2)
1422 self._parents = (p1, p2)
1423 self._dirtyparents = True
1423 self._dirtyparents = True
1424
1424
1425 def read(self):
1425 def read(self):
1426 # ignore HG_PENDING because identity is used only for writing
1426 # ignore HG_PENDING because identity is used only for writing
1427 self.identity = util.filestat.frompath(
1427 self.identity = util.filestat.frompath(
1428 self._opener.join(self._filename))
1428 self._opener.join(self._filename))
1429
1429
1430 try:
1430 try:
1431 fp = self._opendirstatefile()
1431 fp = self._opendirstatefile()
1432 try:
1432 try:
1433 st = fp.read()
1433 st = fp.read()
1434 finally:
1434 finally:
1435 fp.close()
1435 fp.close()
1436 except IOError as err:
1436 except IOError as err:
1437 if err.errno != errno.ENOENT:
1437 if err.errno != errno.ENOENT:
1438 raise
1438 raise
1439 return
1439 return
1440 if not st:
1440 if not st:
1441 return
1441 return
1442
1442
1443 if util.safehasattr(parsers, 'dict_new_presized'):
1443 if util.safehasattr(parsers, 'dict_new_presized'):
1444 # Make an estimate of the number of files in the dirstate based on
1444 # Make an estimate of the number of files in the dirstate based on
1445 # its size. From a linear regression on a set of real-world repos,
1445 # its size. From a linear regression on a set of real-world repos,
1446 # all over 10,000 files, the size of a dirstate entry is 85
1446 # all over 10,000 files, the size of a dirstate entry is 85
1447 # bytes. The cost of resizing is significantly higher than the cost
1447 # bytes. The cost of resizing is significantly higher than the cost
1448 # of filling in a larger presized dict, so subtract 20% from the
1448 # of filling in a larger presized dict, so subtract 20% from the
1449 # size.
1449 # size.
1450 #
1450 #
1451 # This heuristic is imperfect in many ways, so in a future dirstate
1451 # This heuristic is imperfect in many ways, so in a future dirstate
1452 # format update it makes sense to just record the number of entries
1452 # format update it makes sense to just record the number of entries
1453 # on write.
1453 # on write.
1454 self._map = parsers.dict_new_presized(len(st) // 71)
1454 self._map = parsers.dict_new_presized(len(st) // 71)
1455
1455
1456 # Python's garbage collector triggers a GC each time a certain number
1456 # Python's garbage collector triggers a GC each time a certain number
1457 # of container objects (the number being defined by
1457 # of container objects (the number being defined by
1458 # gc.get_threshold()) are allocated. parse_dirstate creates a tuple
1458 # gc.get_threshold()) are allocated. parse_dirstate creates a tuple
1459 # for each file in the dirstate. The C version then immediately marks
1459 # for each file in the dirstate. The C version then immediately marks
1460 # them as not to be tracked by the collector. However, this has no
1460 # them as not to be tracked by the collector. However, this has no
1461 # effect on when GCs are triggered, only on what objects the GC looks
1461 # effect on when GCs are triggered, only on what objects the GC looks
1462 # into. This means that O(number of files) GCs are unavoidable.
1462 # into. This means that O(number of files) GCs are unavoidable.
1463 # Depending on when in the process's lifetime the dirstate is parsed,
1463 # Depending on when in the process's lifetime the dirstate is parsed,
1464 # this can get very expensive. As a workaround, disable GC while
1464 # this can get very expensive. As a workaround, disable GC while
1465 # parsing the dirstate.
1465 # parsing the dirstate.
1466 #
1466 #
1467 # (we cannot decorate the function directly since it is in a C module)
1467 # (we cannot decorate the function directly since it is in a C module)
1468 parse_dirstate = util.nogc(parsers.parse_dirstate)
1468 parse_dirstate = util.nogc(parsers.parse_dirstate)
1469 p = parse_dirstate(self._map, self.copymap, st)
1469 p = parse_dirstate(self._map, self.copymap, st)
1470 if not self._dirtyparents:
1470 if not self._dirtyparents:
1471 self.setparents(*p)
1471 self.setparents(*p)
1472
1472
1473 # Avoid excess attribute lookups by fast pathing certain checks
1473 # Avoid excess attribute lookups by fast pathing certain checks
1474 self.__contains__ = self._map.__contains__
1474 self.__contains__ = self._map.__contains__
1475 self.__getitem__ = self._map.__getitem__
1475 self.__getitem__ = self._map.__getitem__
1476 self.get = self._map.get
1476 self.get = self._map.get
1477
1477
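Two performance details in read() deserve a note. The presizing heuristic
divides the file size by 71 rather than 85, leaving roughly 20% of headroom:
a 710,000-byte dirstate is presized for 10,000 entries against an expected
~8,350. The GC suppression can be pictured with a small stand-in for
util.nogc (illustrative only; the real helper lives in mercurial.util):

    import functools
    import gc

    def nogc(func):
        """Run func with the cyclic garbage collector paused (sketch)."""
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            wasenabled = gc.isenabled()
            gc.disable()
            try:
                return func(*args, **kwargs)
            finally:
                if wasenabled:
                    gc.enable()
        return wrapper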
1478 def write(self, st, now):
1478 def write(self, st, now):
1479 st.write(parsers.pack_dirstate(self._map, self.copymap,
1479 st.write(parsers.pack_dirstate(self._map, self.copymap,
1480 self.parents(), now))
1480 self.parents(), now))
1481 st.close()
1481 st.close()
1482 self._dirtyparents = False
1482 self._dirtyparents = False
1483 self.nonnormalset, self.otherparentset = self.nonnormalentries()
1483 self.nonnormalset, self.otherparentset = self.nonnormalentries()
1484
1484
1485 @propertycache
1485 @propertycache
1486 def nonnormalset(self):
1486 def nonnormalset(self):
1487 nonnorm, otherparents = self.nonnormalentries()
1487 nonnorm, otherparents = self.nonnormalentries()
1488 self.otherparentset = otherparents
1488 self.otherparentset = otherparents
1489 return nonnorm
1489 return nonnorm
1490
1490
1491 @propertycache
1491 @propertycache
1492 def otherparentset(self):
1492 def otherparentset(self):
1493 nonnorm, otherparents = self.nonnormalentries()
1493 nonnorm, otherparents = self.nonnormalentries()
1494 self.nonnormalset = nonnorm
1494 self.nonnormalset = nonnorm
1495 return otherparents
1495 return otherparents
1496
1496
1497 @propertycache
1497 @propertycache
1498 def identity(self):
1498 def identity(self):
1499 self._map
1499 self._map
1500 return self.identity
1500 return self.identity
1501
1501
1502 @propertycache
1502 @propertycache
1503 def dirfoldmap(self):
1503 def dirfoldmap(self):
1504 f = {}
1504 f = {}
1505 normcase = util.normcase
1505 normcase = util.normcase
1506 for name in self._dirs:
1506 for name in self._dirs:
1507 f[normcase(name)] = name
1507 f[normcase(name)] = name
1508 return f
1508 return f
@@ -1,560 +1,559 b''
1 # fileset.py - file set queries for mercurial
1 # fileset.py - file set queries for mercurial
2 #
2 #
3 # Copyright 2010 Matt Mackall <mpm@selenic.com>
3 # Copyright 2010 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import errno
10 import errno
11 import re
11 import re
12
12
13 from .i18n import _
13 from .i18n import _
14 from . import (
14 from . import (
15 error,
15 error,
16 filesetlang,
16 filesetlang,
17 match as matchmod,
17 match as matchmod,
18 merge,
18 merge,
19 pycompat,
19 pycompat,
20 registrar,
20 registrar,
21 scmutil,
21 scmutil,
22 util,
22 util,
23 )
23 )
24 from .utils import (
24 from .utils import (
25 stringutil,
25 stringutil,
26 )
26 )
27
27
28 # common weight constants
28 # common weight constants
29 _WEIGHT_CHECK_FILENAME = filesetlang.WEIGHT_CHECK_FILENAME
29 _WEIGHT_CHECK_FILENAME = filesetlang.WEIGHT_CHECK_FILENAME
30 _WEIGHT_READ_CONTENTS = filesetlang.WEIGHT_READ_CONTENTS
30 _WEIGHT_READ_CONTENTS = filesetlang.WEIGHT_READ_CONTENTS
31 _WEIGHT_STATUS = filesetlang.WEIGHT_STATUS
31 _WEIGHT_STATUS = filesetlang.WEIGHT_STATUS
32 _WEIGHT_STATUS_THOROUGH = filesetlang.WEIGHT_STATUS_THOROUGH
32 _WEIGHT_STATUS_THOROUGH = filesetlang.WEIGHT_STATUS_THOROUGH
33
33
34 # helpers for processing parsed tree
34 # helpers for processing parsed tree
35 getsymbol = filesetlang.getsymbol
35 getsymbol = filesetlang.getsymbol
36 getstring = filesetlang.getstring
36 getstring = filesetlang.getstring
37 _getkindpat = filesetlang.getkindpat
37 _getkindpat = filesetlang.getkindpat
38 getpattern = filesetlang.getpattern
38 getpattern = filesetlang.getpattern
39 getargs = filesetlang.getargs
39 getargs = filesetlang.getargs
40
40
41 def getmatch(mctx, x):
41 def getmatch(mctx, x):
42 if not x:
42 if not x:
43 raise error.ParseError(_("missing argument"))
43 raise error.ParseError(_("missing argument"))
44 return methods[x[0]](mctx, *x[1:])
44 return methods[x[0]](mctx, *x[1:])
45
45
46 def getmatchwithstatus(mctx, x, hint):
46 def getmatchwithstatus(mctx, x, hint):
47 keys = set(getstring(hint, 'status hint must be a string').split())
47 keys = set(getstring(hint, 'status hint must be a string').split())
48 return getmatch(mctx.withstatus(keys), x)
48 return getmatch(mctx.withstatus(keys), x)
49
49
50 def stringmatch(mctx, x):
50 def stringmatch(mctx, x):
51 return mctx.matcher([x])
51 return mctx.matcher([x])
52
52
53 def kindpatmatch(mctx, x, y):
53 def kindpatmatch(mctx, x, y):
54 return stringmatch(mctx, _getkindpat(x, y, matchmod.allpatternkinds,
54 return stringmatch(mctx, _getkindpat(x, y, matchmod.allpatternkinds,
55 _("pattern must be a string")))
55 _("pattern must be a string")))
56
56
57 def patternsmatch(mctx, *xs):
57 def patternsmatch(mctx, *xs):
58 allkinds = matchmod.allpatternkinds
58 allkinds = matchmod.allpatternkinds
59 patterns = [getpattern(x, allkinds, _("pattern must be a string"))
59 patterns = [getpattern(x, allkinds, _("pattern must be a string"))
60 for x in xs]
60 for x in xs]
61 return mctx.matcher(patterns)
61 return mctx.matcher(patterns)
62
62
63 def andmatch(mctx, x, y):
63 def andmatch(mctx, x, y):
64 xm = getmatch(mctx, x)
64 xm = getmatch(mctx, x)
65 ym = getmatch(mctx.narrowed(xm), y)
65 ym = getmatch(mctx.narrowed(xm), y)
66 return matchmod.intersectmatchers(xm, ym)
66 return matchmod.intersectmatchers(xm, ym)
67
67
68 def ormatch(mctx, *xs):
68 def ormatch(mctx, *xs):
69 ms = [getmatch(mctx, x) for x in xs]
69 ms = [getmatch(mctx, x) for x in xs]
70 return matchmod.unionmatcher(ms)
70 return matchmod.unionmatcher(ms)
71
71
72 def notmatch(mctx, x):
72 def notmatch(mctx, x):
73 m = getmatch(mctx, x)
73 m = getmatch(mctx, x)
74 return mctx.predicate(lambda f: not m(f), predrepr=('<not %r>', m))
74 return mctx.predicate(lambda f: not m(f), predrepr=('<not %r>', m))
75
75
76 def minusmatch(mctx, x, y):
76 def minusmatch(mctx, x, y):
77 xm = getmatch(mctx, x)
77 xm = getmatch(mctx, x)
78 ym = getmatch(mctx.narrowed(xm), y)
78 ym = getmatch(mctx.narrowed(xm), y)
79 return matchmod.differencematcher(xm, ym)
79 return matchmod.differencematcher(xm, ym)
80
80
81 def listmatch(mctx, *xs):
81 def listmatch(mctx, *xs):
82 raise error.ParseError(_("can't use a list in this context"),
82 raise error.ParseError(_("can't use a list in this context"),
83 hint=_('see \'hg help "filesets.x or y"\''))
83 hint=_('see \'hg help "filesets.x or y"\''))
84
84
85 def func(mctx, a, b):
85 def func(mctx, a, b):
86 funcname = getsymbol(a)
86 funcname = getsymbol(a)
87 if funcname in symbols:
87 if funcname in symbols:
88 return symbols[funcname](mctx, b)
88 return symbols[funcname](mctx, b)
89
89
90 keep = lambda fn: getattr(fn, '__doc__', None) is not None
90 keep = lambda fn: getattr(fn, '__doc__', None) is not None
91
91
92 syms = [s for (s, fn) in symbols.items() if keep(fn)]
92 syms = [s for (s, fn) in symbols.items() if keep(fn)]
93 raise error.UnknownIdentifier(funcname, syms)
93 raise error.UnknownIdentifier(funcname, syms)
94
94
95 # symbols are callable like:
95 # symbols are callable like:
96 # fun(mctx, x)
96 # fun(mctx, x)
97 # with:
97 # with:
98 # mctx - current matchctx instance
98 # mctx - current matchctx instance
99 # x - argument in tree form
99 # x - argument in tree form
100 symbols = filesetlang.symbols
100 symbols = filesetlang.symbols
101
101
102 predicate = registrar.filesetpredicate(symbols)
102 predicate = registrar.filesetpredicate(symbols)
103
103
104 @predicate('modified()', callstatus=True, weight=_WEIGHT_STATUS)
104 @predicate('modified()', callstatus=True, weight=_WEIGHT_STATUS)
105 def modified(mctx, x):
105 def modified(mctx, x):
106 """File that is modified according to :hg:`status`.
106 """File that is modified according to :hg:`status`.
107 """
107 """
108 # i18n: "modified" is a keyword
108 # i18n: "modified" is a keyword
109 getargs(x, 0, 0, _("modified takes no arguments"))
109 getargs(x, 0, 0, _("modified takes no arguments"))
110 s = set(mctx.status().modified)
110 s = set(mctx.status().modified)
111 return mctx.predicate(s.__contains__, predrepr='modified')
111 return mctx.predicate(s.__contains__, predrepr='modified')
112
112
113 @predicate('added()', callstatus=True, weight=_WEIGHT_STATUS)
113 @predicate('added()', callstatus=True, weight=_WEIGHT_STATUS)
114 def added(mctx, x):
114 def added(mctx, x):
115 """File that is added according to :hg:`status`.
115 """File that is added according to :hg:`status`.
116 """
116 """
117 # i18n: "added" is a keyword
117 # i18n: "added" is a keyword
118 getargs(x, 0, 0, _("added takes no arguments"))
118 getargs(x, 0, 0, _("added takes no arguments"))
119 s = set(mctx.status().added)
119 s = set(mctx.status().added)
120 return mctx.predicate(s.__contains__, predrepr='added')
120 return mctx.predicate(s.__contains__, predrepr='added')
121
121
122 @predicate('removed()', callstatus=True, weight=_WEIGHT_STATUS)
122 @predicate('removed()', callstatus=True, weight=_WEIGHT_STATUS)
123 def removed(mctx, x):
123 def removed(mctx, x):
124 """File that is removed according to :hg:`status`.
124 """File that is removed according to :hg:`status`.
125 """
125 """
126 # i18n: "removed" is a keyword
126 # i18n: "removed" is a keyword
127 getargs(x, 0, 0, _("removed takes no arguments"))
127 getargs(x, 0, 0, _("removed takes no arguments"))
128 s = set(mctx.status().removed)
128 s = set(mctx.status().removed)
129 return mctx.predicate(s.__contains__, predrepr='removed')
129 return mctx.predicate(s.__contains__, predrepr='removed')
130
130
131 @predicate('deleted()', callstatus=True, weight=_WEIGHT_STATUS)
131 @predicate('deleted()', callstatus=True, weight=_WEIGHT_STATUS)
132 def deleted(mctx, x):
132 def deleted(mctx, x):
133 """Alias for ``missing()``.
133 """Alias for ``missing()``.
134 """
134 """
135 # i18n: "deleted" is a keyword
135 # i18n: "deleted" is a keyword
136 getargs(x, 0, 0, _("deleted takes no arguments"))
136 getargs(x, 0, 0, _("deleted takes no arguments"))
137 s = set(mctx.status().deleted)
137 s = set(mctx.status().deleted)
138 return mctx.predicate(s.__contains__, predrepr='deleted')
138 return mctx.predicate(s.__contains__, predrepr='deleted')
139
139
140 @predicate('missing()', callstatus=True, weight=_WEIGHT_STATUS)
140 @predicate('missing()', callstatus=True, weight=_WEIGHT_STATUS)
141 def missing(mctx, x):
141 def missing(mctx, x):
142 """File that is missing according to :hg:`status`.
142 """File that is missing according to :hg:`status`.
143 """
143 """
144 # i18n: "missing" is a keyword
144 # i18n: "missing" is a keyword
145 getargs(x, 0, 0, _("missing takes no arguments"))
145 getargs(x, 0, 0, _("missing takes no arguments"))
146 s = set(mctx.status().deleted)
146 s = set(mctx.status().deleted)
147 return mctx.predicate(s.__contains__, predrepr='deleted')
147 return mctx.predicate(s.__contains__, predrepr='deleted')
148
148
149 @predicate('unknown()', callstatus=True, weight=_WEIGHT_STATUS_THOROUGH)
149 @predicate('unknown()', callstatus=True, weight=_WEIGHT_STATUS_THOROUGH)
150 def unknown(mctx, x):
150 def unknown(mctx, x):
151 """File that is unknown according to :hg:`status`."""
151 """File that is unknown according to :hg:`status`."""
152 # i18n: "unknown" is a keyword
152 # i18n: "unknown" is a keyword
153 getargs(x, 0, 0, _("unknown takes no arguments"))
153 getargs(x, 0, 0, _("unknown takes no arguments"))
154 s = set(mctx.status().unknown)
154 s = set(mctx.status().unknown)
155 return mctx.predicate(s.__contains__, predrepr='unknown')
155 return mctx.predicate(s.__contains__, predrepr='unknown')
156
156
157 @predicate('ignored()', callstatus=True, weight=_WEIGHT_STATUS_THOROUGH)
157 @predicate('ignored()', callstatus=True, weight=_WEIGHT_STATUS_THOROUGH)
158 def ignored(mctx, x):
158 def ignored(mctx, x):
159 """File that is ignored according to :hg:`status`."""
159 """File that is ignored according to :hg:`status`."""
160 # i18n: "ignored" is a keyword
160 # i18n: "ignored" is a keyword
161 getargs(x, 0, 0, _("ignored takes no arguments"))
161 getargs(x, 0, 0, _("ignored takes no arguments"))
162 s = set(mctx.status().ignored)
162 s = set(mctx.status().ignored)
163 return mctx.predicate(s.__contains__, predrepr='ignored')
163 return mctx.predicate(s.__contains__, predrepr='ignored')
164
164
165 @predicate('clean()', callstatus=True, weight=_WEIGHT_STATUS)
165 @predicate('clean()', callstatus=True, weight=_WEIGHT_STATUS)
166 def clean(mctx, x):
166 def clean(mctx, x):
167 """File that is clean according to :hg:`status`.
167 """File that is clean according to :hg:`status`.
168 """
168 """
169 # i18n: "clean" is a keyword
169 # i18n: "clean" is a keyword
170 getargs(x, 0, 0, _("clean takes no arguments"))
170 getargs(x, 0, 0, _("clean takes no arguments"))
171 s = set(mctx.status().clean)
171 s = set(mctx.status().clean)
172 return mctx.predicate(s.__contains__, predrepr='clean')
172 return mctx.predicate(s.__contains__, predrepr='clean')
173
173
174 @predicate('tracked()')
174 @predicate('tracked()')
175 def tracked(mctx, x):
175 def tracked(mctx, x):
176 """File that is under Mercurial control."""
176 """File that is under Mercurial control."""
177 # i18n: "tracked" is a keyword
177 # i18n: "tracked" is a keyword
178 getargs(x, 0, 0, _("tracked takes no arguments"))
178 getargs(x, 0, 0, _("tracked takes no arguments"))
179 return mctx.predicate(mctx.ctx.__contains__, predrepr='tracked')
179 return mctx.predicate(mctx.ctx.__contains__, predrepr='tracked')
180
180
181 @predicate('binary()', weight=_WEIGHT_READ_CONTENTS)
181 @predicate('binary()', weight=_WEIGHT_READ_CONTENTS)
182 def binary(mctx, x):
182 def binary(mctx, x):
183 """File that appears to be binary (contains NUL bytes).
183 """File that appears to be binary (contains NUL bytes).
184 """
184 """
185 # i18n: "binary" is a keyword
185 # i18n: "binary" is a keyword
186 getargs(x, 0, 0, _("binary takes no arguments"))
186 getargs(x, 0, 0, _("binary takes no arguments"))
187 return mctx.fpredicate(lambda fctx: fctx.isbinary(),
187 return mctx.fpredicate(lambda fctx: fctx.isbinary(),
188 predrepr='binary', cache=True)
188 predrepr='binary', cache=True)
189
189
190 @predicate('exec()')
190 @predicate('exec()')
191 def exec_(mctx, x):
191 def exec_(mctx, x):
192 """File that is marked as executable.
192 """File that is marked as executable.
193 """
193 """
194 # i18n: "exec" is a keyword
194 # i18n: "exec" is a keyword
195 getargs(x, 0, 0, _("exec takes no arguments"))
195 getargs(x, 0, 0, _("exec takes no arguments"))
196 ctx = mctx.ctx
196 ctx = mctx.ctx
197 return mctx.predicate(lambda f: ctx.flags(f) == 'x', predrepr='exec')
197 return mctx.predicate(lambda f: ctx.flags(f) == 'x', predrepr='exec')
198
198
199 @predicate('symlink()')
199 @predicate('symlink()')
200 def symlink(mctx, x):
200 def symlink(mctx, x):
201 """File that is marked as a symlink.
201 """File that is marked as a symlink.
202 """
202 """
203 # i18n: "symlink" is a keyword
203 # i18n: "symlink" is a keyword
204 getargs(x, 0, 0, _("symlink takes no arguments"))
204 getargs(x, 0, 0, _("symlink takes no arguments"))
205 ctx = mctx.ctx
205 ctx = mctx.ctx
206 return mctx.predicate(lambda f: ctx.flags(f) == 'l', predrepr='symlink')
206 return mctx.predicate(lambda f: ctx.flags(f) == 'l', predrepr='symlink')
207
207
208 @predicate('resolved()', weight=_WEIGHT_STATUS)
208 @predicate('resolved()', weight=_WEIGHT_STATUS)
209 def resolved(mctx, x):
209 def resolved(mctx, x):
210 """File that is marked resolved according to :hg:`resolve -l`.
210 """File that is marked resolved according to :hg:`resolve -l`.
211 """
211 """
212 # i18n: "resolved" is a keyword
212 # i18n: "resolved" is a keyword
213 getargs(x, 0, 0, _("resolved takes no arguments"))
213 getargs(x, 0, 0, _("resolved takes no arguments"))
214 if mctx.ctx.rev() is not None:
214 if mctx.ctx.rev() is not None:
215 return mctx.never()
215 return mctx.never()
216 ms = merge.mergestate.read(mctx.ctx.repo())
216 ms = merge.mergestate.read(mctx.ctx.repo())
217 return mctx.predicate(lambda f: f in ms and ms[f] == 'r',
217 return mctx.predicate(lambda f: f in ms and ms[f] == 'r',
218 predrepr='resolved')
218 predrepr='resolved')
219
219
220 @predicate('unresolved()', weight=_WEIGHT_STATUS)
220 @predicate('unresolved()', weight=_WEIGHT_STATUS)
221 def unresolved(mctx, x):
221 def unresolved(mctx, x):
222 """File that is marked unresolved according to :hg:`resolve -l`.
222 """File that is marked unresolved according to :hg:`resolve -l`.
223 """
223 """
224 # i18n: "unresolved" is a keyword
224 # i18n: "unresolved" is a keyword
225 getargs(x, 0, 0, _("unresolved takes no arguments"))
225 getargs(x, 0, 0, _("unresolved takes no arguments"))
226 if mctx.ctx.rev() is not None:
226 if mctx.ctx.rev() is not None:
227 return mctx.never()
227 return mctx.never()
228 ms = merge.mergestate.read(mctx.ctx.repo())
228 ms = merge.mergestate.read(mctx.ctx.repo())
229 return mctx.predicate(lambda f: f in ms and ms[f] == 'u',
229 return mctx.predicate(lambda f: f in ms and ms[f] == 'u',
230 predrepr='unresolved')
230 predrepr='unresolved')
231
231
232 @predicate('hgignore()', weight=_WEIGHT_STATUS)
232 @predicate('hgignore()', weight=_WEIGHT_STATUS)
233 def hgignore(mctx, x):
233 def hgignore(mctx, x):
234 """File that matches the active .hgignore pattern.
234 """File that matches the active .hgignore pattern.
235 """
235 """
236 # i18n: "hgignore" is a keyword
236 # i18n: "hgignore" is a keyword
237 getargs(x, 0, 0, _("hgignore takes no arguments"))
237 getargs(x, 0, 0, _("hgignore takes no arguments"))
238 return mctx.ctx.repo().dirstate._ignore
238 return mctx.ctx.repo().dirstate._ignore
239
239
240 @predicate('portable()', weight=_WEIGHT_CHECK_FILENAME)
240 @predicate('portable()', weight=_WEIGHT_CHECK_FILENAME)
241 def portable(mctx, x):
241 def portable(mctx, x):
242 """File that has a portable name. (This doesn't include filenames with case
242 """File that has a portable name. (This doesn't include filenames with case
243 collisions.)
243 collisions.)
244 """
244 """
245 # i18n: "portable" is a keyword
245 # i18n: "portable" is a keyword
246 getargs(x, 0, 0, _("portable takes no arguments"))
246 getargs(x, 0, 0, _("portable takes no arguments"))
247 return mctx.predicate(lambda f: util.checkwinfilename(f) is None,
247 return mctx.predicate(lambda f: util.checkwinfilename(f) is None,
248 predrepr='portable')
248 predrepr='portable')
249
249
250 @predicate('grep(regex)', weight=_WEIGHT_READ_CONTENTS)
250 @predicate('grep(regex)', weight=_WEIGHT_READ_CONTENTS)
251 def grep(mctx, x):
251 def grep(mctx, x):
252 """File contains the given regular expression.
252 """File contains the given regular expression.
253 """
253 """
254 try:
254 try:
255 # i18n: "grep" is a keyword
255 # i18n: "grep" is a keyword
256 r = re.compile(getstring(x, _("grep requires a pattern")))
256 r = re.compile(getstring(x, _("grep requires a pattern")))
257 except re.error as e:
257 except re.error as e:
258 raise error.ParseError(_('invalid match pattern: %s') %
258 raise error.ParseError(_('invalid match pattern: %s') %
259 stringutil.forcebytestr(e))
259 stringutil.forcebytestr(e))
260 return mctx.fpredicate(lambda fctx: r.search(fctx.data()),
260 return mctx.fpredicate(lambda fctx: r.search(fctx.data()),
261 predrepr=('grep(%r)', r.pattern), cache=True)
261 predrepr=('grep(%r)', r.pattern), cache=True)
262
262
263 def _sizetomax(s):
263 def _sizetomax(s):
264 try:
264 try:
265 s = s.strip().lower()
265 s = s.strip().lower()
266 for k, v in util._sizeunits:
266 for k, v in util._sizeunits:
267 if s.endswith(k):
267 if s.endswith(k):
268 # max(4k) = 5k - 1, max(4.5k) = 4.6k - 1
268 # max(4k) = 5k - 1, max(4.5k) = 4.6k - 1
269 n = s[:-len(k)]
269 n = s[:-len(k)]
270 inc = 1.0
270 inc = 1.0
271 if "." in n:
271 if "." in n:
272 inc /= 10 ** len(n.split(".")[1])
272 inc /= 10 ** len(n.split(".")[1])
273 return int((float(n) + inc) * v) - 1
273 return int((float(n) + inc) * v) - 1
274 # no extension, this is a precise value
274 # no extension, this is a precise value
275 return int(s)
275 return int(s)
276 except ValueError:
276 except ValueError:
277 raise error.ParseError(_("couldn't parse size: %s") % s)
277 raise error.ParseError(_("couldn't parse size: %s") % s)
278
278
279 def sizematcher(expr):
279 def sizematcher(expr):
280 """Return a function(size) -> bool from the ``size()`` expression"""
280 """Return a function(size) -> bool from the ``size()`` expression"""
281 expr = expr.strip()
281 expr = expr.strip()
282 if '-' in expr: # do we have a range?
282 if '-' in expr: # do we have a range?
283 a, b = expr.split('-', 1)
283 a, b = expr.split('-', 1)
284 a = util.sizetoint(a)
284 a = util.sizetoint(a)
285 b = util.sizetoint(b)
285 b = util.sizetoint(b)
286 return lambda x: x >= a and x <= b
286 return lambda x: x >= a and x <= b
287 elif expr.startswith("<="):
287 elif expr.startswith("<="):
288 a = util.sizetoint(expr[2:])
288 a = util.sizetoint(expr[2:])
289 return lambda x: x <= a
289 return lambda x: x <= a
290 elif expr.startswith("<"):
290 elif expr.startswith("<"):
291 a = util.sizetoint(expr[1:])
291 a = util.sizetoint(expr[1:])
292 return lambda x: x < a
292 return lambda x: x < a
293 elif expr.startswith(">="):
293 elif expr.startswith(">="):
294 a = util.sizetoint(expr[2:])
294 a = util.sizetoint(expr[2:])
295 return lambda x: x >= a
295 return lambda x: x >= a
296 elif expr.startswith(">"):
296 elif expr.startswith(">"):
297 a = util.sizetoint(expr[1:])
297 a = util.sizetoint(expr[1:])
298 return lambda x: x > a
298 return lambda x: x > a
299 else:
299 else:
300 a = util.sizetoint(expr)
300 a = util.sizetoint(expr)
301 b = _sizetomax(expr)
301 b = _sizetomax(expr)
302 return lambda x: x >= a and x <= b
302 return lambda x: x >= a and x <= b
303
303
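Spelled out, the helpers above map size expressions to inclusive byte ranges
(a sketch mirroring the rules rather than calling Mercurial internals):

    KB = 1024
    # size('1k')       -> 1024 <= x <= 2047       (bare value: up to max('1k') = 2k - 1)
    # size('4.5k')     -> 4608 <= x <= 4709       (max('4.5k') = int(4.6 * 1024) - 1)
    # size('< 20k')    -> x < 20480
    # size('>= .5MB')  -> x >= 524288
    # size('4k - 1MB') -> 4096 <= x <= 1048576    (ranges include both endpoints)
    assert 1 * KB == 1024 and 2 * KB - 1 == 2047
    assert int(4.6 * KB) - 1 == 4709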
304 @predicate('size(expression)', weight=_WEIGHT_STATUS)
304 @predicate('size(expression)', weight=_WEIGHT_STATUS)
305 def size(mctx, x):
305 def size(mctx, x):
306 """File size matches the given expression. Examples:
306 """File size matches the given expression. Examples:
307
307
308 - size('1k') - files from 1024 to 2047 bytes
308 - size('1k') - files from 1024 to 2047 bytes
309 - size('< 20k') - files less than 20480 bytes
309 - size('< 20k') - files less than 20480 bytes
310 - size('>= .5MB') - files at least 524288 bytes
310 - size('>= .5MB') - files at least 524288 bytes
311 - size('4k - 1MB') - files from 4096 bytes to 1048576 bytes
311 - size('4k - 1MB') - files from 4096 bytes to 1048576 bytes
312 """
312 """
313 # i18n: "size" is a keyword
313 # i18n: "size" is a keyword
314 expr = getstring(x, _("size requires an expression"))
314 expr = getstring(x, _("size requires an expression"))
315 m = sizematcher(expr)
315 m = sizematcher(expr)
316 return mctx.fpredicate(lambda fctx: m(fctx.size()),
316 return mctx.fpredicate(lambda fctx: m(fctx.size()),
317 predrepr=('size(%r)', expr), cache=True)
317 predrepr=('size(%r)', expr), cache=True)
318
318
319 @predicate('encoding(name)', weight=_WEIGHT_READ_CONTENTS)
319 @predicate('encoding(name)', weight=_WEIGHT_READ_CONTENTS)
320 def encoding(mctx, x):
320 def encoding(mctx, x):
321 """File can be successfully decoded with the given character
321 """File can be successfully decoded with the given character
322 encoding. May not be useful for encodings other than ASCII and
322 encoding. May not be useful for encodings other than ASCII and
323 UTF-8.
323 UTF-8.
324 """
324 """
325
325
326 # i18n: "encoding" is a keyword
326 # i18n: "encoding" is a keyword
327 enc = getstring(x, _("encoding requires an encoding name"))
327 enc = getstring(x, _("encoding requires an encoding name"))
328
328
329 def encp(fctx):
329 def encp(fctx):
330 d = fctx.data()
330 d = fctx.data()
331 try:
331 try:
332 d.decode(pycompat.sysstr(enc))
332 d.decode(pycompat.sysstr(enc))
333 return True
333 return True
334 except LookupError:
334 except LookupError:
335 raise error.Abort(_("unknown encoding '%s'") % enc)
335 raise error.Abort(_("unknown encoding '%s'") % enc)
336 except UnicodeDecodeError:
336 except UnicodeDecodeError:
337 return False
337 return False
338
338
339 return mctx.fpredicate(encp, predrepr=('encoding(%r)', enc), cache=True)
339 return mctx.fpredicate(encp, predrepr=('encoding(%r)', enc), cache=True)
340
340
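The predicate above is nothing more than a decode attempt; the same check in
isolation:

    b'caf\xc3\xa9'.decode('utf-8')        # decodes -> would match encoding("utf-8")
    try:
        b'caf\xe9'.decode('utf-8')        # latin-1 bytes -> UnicodeDecodeError, no match
    except UnicodeDecodeError:
        pass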
341 @predicate('eol(style)', weight=_WEIGHT_READ_CONTENTS)
341 @predicate('eol(style)', weight=_WEIGHT_READ_CONTENTS)
342 def eol(mctx, x):
342 def eol(mctx, x):
343 """File contains newlines of the given style (dos, unix, mac). Binary
343 """File contains newlines of the given style (dos, unix, mac). Binary
344 files are excluded; files with mixed line endings match multiple
344 files are excluded; files with mixed line endings match multiple
345 styles.
345 styles.
346 """
346 """
347
347
348 # i18n: "eol" is a keyword
348 # i18n: "eol" is a keyword
349 enc = getstring(x, _("eol requires a style name"))
349 enc = getstring(x, _("eol requires a style name"))
350
350
351 def eolp(fctx):
351 def eolp(fctx):
352 if fctx.isbinary():
352 if fctx.isbinary():
353 return False
353 return False
354 d = fctx.data()
354 d = fctx.data()
355 if (enc == 'dos' or enc == 'win') and '\r\n' in d:
355 if (enc == 'dos' or enc == 'win') and '\r\n' in d:
356 return True
356 return True
357 elif enc == 'unix' and re.search('(?<!\r)\n', d):
357 elif enc == 'unix' and re.search('(?<!\r)\n', d):
358 return True
358 return True
359 elif enc == 'mac' and re.search('\r(?!\n)', d):
359 elif enc == 'mac' and re.search('\r(?!\n)', d):
360 return True
360 return True
361 return False
361 return False
362 return mctx.fpredicate(eolp, predrepr=('eol(%r)', enc), cache=True)
362 return mctx.fpredicate(eolp, predrepr=('eol(%r)', enc), cache=True)
363
363
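The two lookaround patterns above carry the whole eol() trick; a standalone
check of what they accept:

    import re
    assert re.search(r'(?<!\r)\n', 'one\ntwo')          # bare LF           -> unix
    assert not re.search(r'(?<!\r)\n', 'one\r\ntwo')    # CRLF only         -> not unix
    assert re.search(r'\r(?!\n)', 'legacy\rmac')        # CR not before LF  -> mac
    assert '\r\n' in 'one\r\ntwo'                       # dos/win is a plain substring test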
364 @predicate('copied()')
364 @predicate('copied()')
365 def copied(mctx, x):
365 def copied(mctx, x):
366 """File that is recorded as being copied.
366 """File that is recorded as being copied.
367 """
367 """
368 # i18n: "copied" is a keyword
368 # i18n: "copied" is a keyword
369 getargs(x, 0, 0, _("copied takes no arguments"))
369 getargs(x, 0, 0, _("copied takes no arguments"))
370 def copiedp(fctx):
370 def copiedp(fctx):
371 p = fctx.parents()
371 p = fctx.parents()
372 return p and p[0].path() != fctx.path()
372 return p and p[0].path() != fctx.path()
373 return mctx.fpredicate(copiedp, predrepr='copied', cache=True)
373 return mctx.fpredicate(copiedp, predrepr='copied', cache=True)
374
374
375 @predicate('revs(revs, pattern)', weight=_WEIGHT_STATUS)
375 @predicate('revs(revs, pattern)', weight=_WEIGHT_STATUS)
376 def revs(mctx, x):
376 def revs(mctx, x):
377 """Evaluate set in the specified revisions. If the revset match multiple
377 """Evaluate set in the specified revisions. If the revset match multiple
378 revs, this will return file matching pattern in any of the revision.
378 revs, this will return file matching pattern in any of the revision.
379 """
379 """
380 # i18n: "revs" is a keyword
380 # i18n: "revs" is a keyword
381 r, x = getargs(x, 2, 2, _("revs takes two arguments"))
381 r, x = getargs(x, 2, 2, _("revs takes two arguments"))
382 # i18n: "revs" is a keyword
382 # i18n: "revs" is a keyword
383 revspec = getstring(r, _("first argument to revs must be a revision"))
383 revspec = getstring(r, _("first argument to revs must be a revision"))
384 repo = mctx.ctx.repo()
384 repo = mctx.ctx.repo()
385 revs = scmutil.revrange(repo, [revspec])
385 revs = scmutil.revrange(repo, [revspec])
386
386
387 matchers = []
387 matchers = []
388 for r in revs:
388 for r in revs:
389 ctx = repo[r]
389 ctx = repo[r]
390 mc = mctx.switch(ctx.p1(), ctx)
390 mc = mctx.switch(ctx.p1(), ctx)
391 matchers.append(getmatch(mc, x))
391 matchers.append(getmatch(mc, x))
392 if not matchers:
392 if not matchers:
393 return mctx.never()
393 return mctx.never()
394 if len(matchers) == 1:
394 if len(matchers) == 1:
395 return matchers[0]
395 return matchers[0]
396 return matchmod.unionmatcher(matchers)
396 return matchmod.unionmatcher(matchers)
397
397
398 @predicate('status(base, rev, pattern)', weight=_WEIGHT_STATUS)
398 @predicate('status(base, rev, pattern)', weight=_WEIGHT_STATUS)
399 def status(mctx, x):
399 def status(mctx, x):
400 """Evaluate predicate using status change between ``base`` and
400 """Evaluate predicate using status change between ``base`` and
401 ``rev``. Examples:
401 ``rev``. Examples:
402
402
403 - ``status(3, 7, added())`` - matches files added from "3" to "7"
403 - ``status(3, 7, added())`` - matches files added from "3" to "7"
404 """
404 """
405 repo = mctx.ctx.repo()
405 repo = mctx.ctx.repo()
406 # i18n: "status" is a keyword
406 # i18n: "status" is a keyword
407 b, r, x = getargs(x, 3, 3, _("status takes three arguments"))
407 b, r, x = getargs(x, 3, 3, _("status takes three arguments"))
408 # i18n: "status" is a keyword
408 # i18n: "status" is a keyword
409 baseerr = _("first argument to status must be a revision")
409 baseerr = _("first argument to status must be a revision")
410 baserevspec = getstring(b, baseerr)
410 baserevspec = getstring(b, baseerr)
411 if not baserevspec:
411 if not baserevspec:
412 raise error.ParseError(baseerr)
412 raise error.ParseError(baseerr)
413 reverr = _("second argument to status must be a revision")
413 reverr = _("second argument to status must be a revision")
414 revspec = getstring(r, reverr)
414 revspec = getstring(r, reverr)
415 if not revspec:
415 if not revspec:
416 raise error.ParseError(reverr)
416 raise error.ParseError(reverr)
417 basectx, ctx = scmutil.revpair(repo, [baserevspec, revspec])
417 basectx, ctx = scmutil.revpair(repo, [baserevspec, revspec])
418 mc = mctx.switch(basectx, ctx)
418 mc = mctx.switch(basectx, ctx)
419 return getmatch(mc, x)
419 return getmatch(mc, x)
420
420
421 @predicate('subrepo([pattern])')
421 @predicate('subrepo([pattern])')
422 def subrepo(mctx, x):
422 def subrepo(mctx, x):
423 """Subrepositories whose paths match the given pattern.
423 """Subrepositories whose paths match the given pattern.
424 """
424 """
425 # i18n: "subrepo" is a keyword
425 # i18n: "subrepo" is a keyword
426 getargs(x, 0, 1, _("subrepo takes at most one argument"))
426 getargs(x, 0, 1, _("subrepo takes at most one argument"))
427 ctx = mctx.ctx
427 ctx = mctx.ctx
428 sstate = ctx.substate
428 sstate = ctx.substate
429 if x:
429 if x:
430 pat = getpattern(x, matchmod.allpatternkinds,
430 pat = getpattern(x, matchmod.allpatternkinds,
431 # i18n: "subrepo" is a keyword
431 # i18n: "subrepo" is a keyword
432 _("subrepo requires a pattern or no arguments"))
432 _("subrepo requires a pattern or no arguments"))
433 fast = not matchmod.patkind(pat)
433 fast = not matchmod.patkind(pat)
434 if fast:
434 if fast:
435 def m(s):
435 def m(s):
436 return (s == pat)
436 return (s == pat)
437 else:
437 else:
438 m = matchmod.match(ctx.repo().root, '', [pat], ctx=ctx)
438 m = matchmod.match(ctx.repo().root, '', [pat], ctx=ctx)
439 return mctx.predicate(lambda f: f in sstate and m(f),
439 return mctx.predicate(lambda f: f in sstate and m(f),
440 predrepr=('subrepo(%r)', pat))
440 predrepr=('subrepo(%r)', pat))
441 else:
441 else:
442 return mctx.predicate(sstate.__contains__, predrepr='subrepo')
442 return mctx.predicate(sstate.__contains__, predrepr='subrepo')
443
443
444 methods = {
444 methods = {
445 'withstatus': getmatchwithstatus,
445 'withstatus': getmatchwithstatus,
446 'string': stringmatch,
446 'string': stringmatch,
447 'symbol': stringmatch,
447 'symbol': stringmatch,
448 'kindpat': kindpatmatch,
448 'kindpat': kindpatmatch,
449 'patterns': patternsmatch,
449 'patterns': patternsmatch,
450 'and': andmatch,
450 'and': andmatch,
451 'or': ormatch,
451 'or': ormatch,
452 'minus': minusmatch,
452 'minus': minusmatch,
453 'list': listmatch,
453 'list': listmatch,
454 'not': notmatch,
454 'not': notmatch,
455 'func': func,
455 'func': func,
456 }
456 }
457
457
458 class matchctx(object):
458 class matchctx(object):
459 def __init__(self, basectx, ctx, badfn=None):
459 def __init__(self, basectx, ctx, badfn=None):
460 self._basectx = basectx
460 self._basectx = basectx
461 self.ctx = ctx
461 self.ctx = ctx
462 self._badfn = badfn
462 self._badfn = badfn
463 self._match = None
463 self._match = None
464 self._status = None
464 self._status = None
465
465
466 def narrowed(self, match):
466 def narrowed(self, match):
467 """Create matchctx for a sub-tree narrowed by the given matcher"""
467 """Create matchctx for a sub-tree narrowed by the given matcher"""
468 mctx = matchctx(self._basectx, self.ctx, self._badfn)
468 mctx = matchctx(self._basectx, self.ctx, self._badfn)
469 mctx._match = match
469 mctx._match = match
470 # leave the wider status, which we don't have to care about
470 # leave the wider status, which we don't have to care about
471 mctx._status = self._status
471 mctx._status = self._status
472 return mctx
472 return mctx
473
473
474 def switch(self, basectx, ctx):
474 def switch(self, basectx, ctx):
475 mctx = matchctx(basectx, ctx, self._badfn)
475 mctx = matchctx(basectx, ctx, self._badfn)
476 mctx._match = self._match
476 mctx._match = self._match
477 return mctx
477 return mctx
478
478
479 def withstatus(self, keys):
479 def withstatus(self, keys):
480 """Create matchctx which has precomputed status specified by the keys"""
480 """Create matchctx which has precomputed status specified by the keys"""
481 mctx = matchctx(self._basectx, self.ctx, self._badfn)
481 mctx = matchctx(self._basectx, self.ctx, self._badfn)
482 mctx._match = self._match
482 mctx._match = self._match
483 mctx._buildstatus(keys)
483 mctx._buildstatus(keys)
484 return mctx
484 return mctx
485
485
486 def _buildstatus(self, keys):
486 def _buildstatus(self, keys):
487 self._status = self._basectx.status(self.ctx, self._match,
487 self._status = self._basectx.status(self.ctx, self._match,
488 listignored='ignored' in keys,
488 listignored='ignored' in keys,
489 listclean='clean' in keys,
489 listclean='clean' in keys,
490 listunknown='unknown' in keys)
490 listunknown='unknown' in keys)
491
491
492 def status(self):
492 def status(self):
493 return self._status
493 return self._status
494
494
495 def matcher(self, patterns):
495 def matcher(self, patterns):
496 return self.ctx.match(patterns, badfn=self._badfn)
496 return self.ctx.match(patterns, badfn=self._badfn)
497
497
498 def predicate(self, predfn, predrepr=None, cache=False):
498 def predicate(self, predfn, predrepr=None, cache=False):
499 """Create a matcher to select files by predfn(filename)"""
499 """Create a matcher to select files by predfn(filename)"""
500 if cache:
500 if cache:
501 predfn = util.cachefunc(predfn)
501 predfn = util.cachefunc(predfn)
502 return matchmod.predicatematcher(predfn, predrepr=predrepr,
502 return matchmod.predicatematcher(predfn, predrepr=predrepr,
503 badfn=self._badfn)
503 badfn=self._badfn)
504
504
505 def fpredicate(self, predfn, predrepr=None, cache=False):
505 def fpredicate(self, predfn, predrepr=None, cache=False):
506 """Create a matcher to select files by predfn(fctx) at the current
506 """Create a matcher to select files by predfn(fctx) at the current
507 revision
507 revision
508
508
509 Missing files are ignored.
509 Missing files are ignored.
510 """
510 """
511 ctx = self.ctx
511 ctx = self.ctx
512 if ctx.rev() is None:
512 if ctx.rev() is None:
513 def fctxpredfn(f):
513 def fctxpredfn(f):
514 try:
514 try:
515 fctx = ctx[f]
515 fctx = ctx[f]
516 except error.LookupError:
516 except error.LookupError:
517 return False
517 return False
518 try:
518 try:
519 fctx.audit()
519 fctx.audit()
520 except error.Abort:
520 except error.Abort:
521 return False
521 return False
522 try:
522 try:
523 return predfn(fctx)
523 return predfn(fctx)
524 except (IOError, OSError) as e:
524 except (IOError, OSError) as e:
525 # open()-ing a directory fails with EACCES on Windows
525 # open()-ing a directory fails with EACCES on Windows
526 if e.errno in (errno.ENOENT, errno.EACCES, errno.ENOTDIR,
526 if e.errno in (errno.ENOENT, errno.EACCES, errno.ENOTDIR,
527 errno.EISDIR):
527 errno.EISDIR):
528 return False
528 return False
529 raise
529 raise
530 else:
530 else:
531 def fctxpredfn(f):
531 def fctxpredfn(f):
532 try:
532 try:
533 fctx = ctx[f]
533 fctx = ctx[f]
534 except error.LookupError:
534 except error.LookupError:
535 return False
535 return False
536 return predfn(fctx)
536 return predfn(fctx)
537 return self.predicate(fctxpredfn, predrepr=predrepr, cache=cache)
537 return self.predicate(fctxpredfn, predrepr=predrepr, cache=cache)
538
538
539 def never(self):
539 def never(self):
540 """Create a matcher to select nothing"""
540 """Create a matcher to select nothing"""
541 repo = self.ctx.repo()
541 return matchmod.never(badfn=self._badfn)
542 return matchmod.never(repo.root, repo.getcwd(), badfn=self._badfn)
543
542
544 def match(ctx, expr, badfn=None):
543 def match(ctx, expr, badfn=None):
545 """Create a matcher for a single fileset expression"""
544 """Create a matcher for a single fileset expression"""
546 tree = filesetlang.parse(expr)
545 tree = filesetlang.parse(expr)
547 tree = filesetlang.analyze(tree)
546 tree = filesetlang.analyze(tree)
548 tree = filesetlang.optimize(tree)
547 tree = filesetlang.optimize(tree)
549 mctx = matchctx(ctx.p1(), ctx, badfn=badfn)
548 mctx = matchctx(ctx.p1(), ctx, badfn=badfn)
550 return getmatch(mctx, tree)
549 return getmatch(mctx, tree)
551
550
552
551
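A usage sketch for match() above; the repository path and the expression are
made up, and bytes literals are used because Mercurial's internal strings are
bytes:

    from mercurial import hg, fileset
    from mercurial import ui as uimod

    repo = hg.repository(uimod.ui.load(), b'/path/to/repo')
    wctx = repo[None]
    m = fileset.match(wctx, b'size(">100k") and not binary()')
    big = [f for f in wctx.manifest() if m(f)]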
553 def loadpredicate(ui, extname, registrarobj):
552 def loadpredicate(ui, extname, registrarobj):
554 """Load fileset predicates from specified registrarobj
553 """Load fileset predicates from specified registrarobj
555 """
554 """
556 for name, func in registrarobj._table.iteritems():
555 for name, func in registrarobj._table.iteritems():
557 symbols[name] = func
556 symbols[name] = func
558
557
559 # tell hggettext to extract docstrings from these functions:
558 # tell hggettext to extract docstrings from these functions:
560 i18nfunctions = symbols.values()
559 i18nfunctions = symbols.values()
@@ -1,813 +1,813 b''
1 # hgweb/webutil.py - utility library for the web interface.
1 # hgweb/webutil.py - utility library for the web interface.
2 #
2 #
3 # Copyright 21 May 2005 - (c) 2005 Jake Edge <jake@edge2.net>
3 # Copyright 21 May 2005 - (c) 2005 Jake Edge <jake@edge2.net>
4 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8
8
9 from __future__ import absolute_import
9 from __future__ import absolute_import
10
10
11 import copy
11 import copy
12 import difflib
12 import difflib
13 import os
13 import os
14 import re
14 import re
15
15
16 from ..i18n import _
16 from ..i18n import _
17 from ..node import hex, nullid, short
17 from ..node import hex, nullid, short
18
18
19 from .common import (
19 from .common import (
20 ErrorResponse,
20 ErrorResponse,
21 HTTP_BAD_REQUEST,
21 HTTP_BAD_REQUEST,
22 HTTP_NOT_FOUND,
22 HTTP_NOT_FOUND,
23 paritygen,
23 paritygen,
24 )
24 )
25
25
26 from .. import (
26 from .. import (
27 context,
27 context,
28 diffutil,
28 diffutil,
29 error,
29 error,
30 match,
30 match,
31 mdiff,
31 mdiff,
32 obsutil,
32 obsutil,
33 patch,
33 patch,
34 pathutil,
34 pathutil,
35 pycompat,
35 pycompat,
36 scmutil,
36 scmutil,
37 templatefilters,
37 templatefilters,
38 templatekw,
38 templatekw,
39 templateutil,
39 templateutil,
40 ui as uimod,
40 ui as uimod,
41 util,
41 util,
42 )
42 )
43
43
44 from ..utils import (
44 from ..utils import (
45 stringutil,
45 stringutil,
46 )
46 )
47
47
48 archivespecs = util.sortdict((
48 archivespecs = util.sortdict((
49 ('zip', ('application/zip', 'zip', '.zip', None)),
49 ('zip', ('application/zip', 'zip', '.zip', None)),
50 ('gz', ('application/x-gzip', 'tgz', '.tar.gz', None)),
50 ('gz', ('application/x-gzip', 'tgz', '.tar.gz', None)),
51 ('bz2', ('application/x-bzip2', 'tbz2', '.tar.bz2', None)),
51 ('bz2', ('application/x-bzip2', 'tbz2', '.tar.bz2', None)),
52 ))
52 ))
53
53
54 def archivelist(ui, nodeid, url=None):
54 def archivelist(ui, nodeid, url=None):
55 allowed = ui.configlist('web', 'allow-archive', untrusted=True)
55 allowed = ui.configlist('web', 'allow-archive', untrusted=True)
56 archives = []
56 archives = []
57
57
58 for typ, spec in archivespecs.iteritems():
58 for typ, spec in archivespecs.iteritems():
59 if typ in allowed or ui.configbool('web', 'allow' + typ,
59 if typ in allowed or ui.configbool('web', 'allow' + typ,
60 untrusted=True):
60 untrusted=True):
61 archives.append({
61 archives.append({
62 'type': typ,
62 'type': typ,
63 'extension': spec[2],
63 'extension': spec[2],
64 'node': nodeid,
64 'node': nodeid,
65 'url': url,
65 'url': url,
66 })
66 })
67
67
68 return templateutil.mappinglist(archives)
68 return templateutil.mappinglist(archives)
69
69
70 def up(p):
70 def up(p):
71 if p[0:1] != "/":
71 if p[0:1] != "/":
72 p = "/" + p
72 p = "/" + p
73 if p[-1:] == "/":
73 if p[-1:] == "/":
74 p = p[:-1]
74 p = p[:-1]
75 up = os.path.dirname(p)
75 up = os.path.dirname(p)
76 if up == "/":
76 if up == "/":
77 return "/"
77 return "/"
78 return up + "/"
78 return up + "/"
79
79
80 def _navseq(step, firststep=None):
80 def _navseq(step, firststep=None):
81 if firststep:
81 if firststep:
82 yield firststep
82 yield firststep
83 if firststep >= 20 and firststep <= 40:
83 if firststep >= 20 and firststep <= 40:
84 firststep = 50
84 firststep = 50
85 yield firststep
85 yield firststep
86 assert step > 0
86 assert step > 0
87 assert firststep > 0
87 assert firststep > 0
88 while step <= firststep:
88 while step <= firststep:
89 step *= 10
89 step *= 10
90 while True:
90 while True:
91 yield 1 * step
91 yield 1 * step
92 yield 3 * step
92 yield 3 * step
93 step *= 10
93 step *= 10
94
94
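To get a feel for the sequence, the generator above (re-declared verbatim so
it runs standalone) yields the page length first and then climbs in 1-3-10
steps:

    from itertools import islice

    def _navseq(step, firststep=None):
        if firststep:
            yield firststep
            if firststep >= 20 and firststep <= 40:
                firststep = 50
                yield firststep
            assert step > 0
            assert firststep > 0
            while step <= firststep:
                step *= 10
        while True:
            yield 1 * step
            yield 3 * step
            step *= 10

    print(list(islice(_navseq(1, 60), 6)))   # [60, 100, 300, 1000, 3000, 10000]
    print(list(islice(_navseq(1, 30), 6)))   # [30, 50, 100, 300, 1000, 3000]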
95 class revnav(object):
95 class revnav(object):
96
96
97 def __init__(self, repo):
97 def __init__(self, repo):
98 """Navigation generation object
98 """Navigation generation object
99
99
100 :repo: repo object we generate nav for
100 :repo: repo object we generate nav for
101 """
101 """
102 # used for hex generation
102 # used for hex generation
103 self._revlog = repo.changelog
103 self._revlog = repo.changelog
104
104
105 def __nonzero__(self):
105 def __nonzero__(self):
106 """return True if any revision to navigate over"""
106 """return True if any revision to navigate over"""
107 return self._first() is not None
107 return self._first() is not None
108
108
109 __bool__ = __nonzero__
109 __bool__ = __nonzero__
110
110
111 def _first(self):
111 def _first(self):
112 """return the minimum non-filtered changeset or None"""
112 """return the minimum non-filtered changeset or None"""
113 try:
113 try:
114 return next(iter(self._revlog))
114 return next(iter(self._revlog))
115 except StopIteration:
115 except StopIteration:
116 return None
116 return None
117
117
118 def hex(self, rev):
118 def hex(self, rev):
119 return hex(self._revlog.node(rev))
119 return hex(self._revlog.node(rev))
120
120
121 def gen(self, pos, pagelen, limit):
121 def gen(self, pos, pagelen, limit):
122 """computes label and revision id for navigation link
122 """computes label and revision id for navigation link
123
123
124 :pos: is the revision relative to which we generate navigation.
124 :pos: is the revision relative to which we generate navigation.
125 :pagelen: the size of each navigation page
125 :pagelen: the size of each navigation page
126 :limit: how far shall we link
126 :limit: how far shall we link
127
127
128 The return is:
128 The return is:
129 - a single element mappinglist
129 - a single element mappinglist
130 - containing a dictionary with a `before` and `after` key
130 - containing a dictionary with a `before` and `after` key
131 - values are dictionaries with `label` and `node` keys
131 - values are dictionaries with `label` and `node` keys
132 """
132 """
133 if not self:
133 if not self:
134 # empty repo
134 # empty repo
135 return templateutil.mappinglist([
135 return templateutil.mappinglist([
136 {'before': templateutil.mappinglist([]),
136 {'before': templateutil.mappinglist([]),
137 'after': templateutil.mappinglist([])},
137 'after': templateutil.mappinglist([])},
138 ])
138 ])
139
139
140 targets = []
140 targets = []
141 for f in _navseq(1, pagelen):
141 for f in _navseq(1, pagelen):
142 if f > limit:
142 if f > limit:
143 break
143 break
144 targets.append(pos + f)
144 targets.append(pos + f)
145 targets.append(pos - f)
145 targets.append(pos - f)
146 targets.sort()
146 targets.sort()
147
147
148 first = self._first()
148 first = self._first()
149 navbefore = [{'label': '(%i)' % first, 'node': self.hex(first)}]
149 navbefore = [{'label': '(%i)' % first, 'node': self.hex(first)}]
150 navafter = []
150 navafter = []
151 for rev in targets:
151 for rev in targets:
152 if rev not in self._revlog:
152 if rev not in self._revlog:
153 continue
153 continue
154 if pos < rev < limit:
154 if pos < rev < limit:
155 navafter.append({'label': '+%d' % abs(rev - pos),
155 navafter.append({'label': '+%d' % abs(rev - pos),
156 'node': self.hex(rev)})
156 'node': self.hex(rev)})
157 if 0 < rev < pos:
157 if 0 < rev < pos:
158 navbefore.append({'label': '-%d' % abs(rev - pos),
158 navbefore.append({'label': '-%d' % abs(rev - pos),
159 'node': self.hex(rev)})
159 'node': self.hex(rev)})
160
160
161 navafter.append({'label': 'tip', 'node': 'tip'})
161 navafter.append({'label': 'tip', 'node': 'tip'})
162
162
163 # TODO: maybe this can be a scalar object supporting tomap()
163 # TODO: maybe this can be a scalar object supporting tomap()
164 return templateutil.mappinglist([
164 return templateutil.mappinglist([
165 {'before': templateutil.mappinglist(navbefore),
165 {'before': templateutil.mappinglist(navbefore),
166 'after': templateutil.mappinglist(navafter)},
166 'after': templateutil.mappinglist(navafter)},
167 ])
167 ])
168
168
169 class filerevnav(revnav):
169 class filerevnav(revnav):
170
170
171 def __init__(self, repo, path):
171 def __init__(self, repo, path):
172 """Navigation generation object
172 """Navigation generation object
173
173
174 :repo: repo object we generate nav for
174 :repo: repo object we generate nav for
175 :path: path of the file we generate nav for
175 :path: path of the file we generate nav for
176 """
176 """
177 # used for iteration
177 # used for iteration
178 self._changelog = repo.unfiltered().changelog
178 self._changelog = repo.unfiltered().changelog
179 # used for hex generation
179 # used for hex generation
180 self._revlog = repo.file(path)
180 self._revlog = repo.file(path)
181
181
182 def hex(self, rev):
182 def hex(self, rev):
183 return hex(self._changelog.node(self._revlog.linkrev(rev)))
183 return hex(self._changelog.node(self._revlog.linkrev(rev)))
184
184
185 # TODO: maybe this can be a wrapper class for changectx/filectx list, which
185 # TODO: maybe this can be a wrapper class for changectx/filectx list, which
186 # yields {'ctx': ctx}
186 # yields {'ctx': ctx}
187 def _ctxsgen(context, ctxs):
187 def _ctxsgen(context, ctxs):
188 for s in ctxs:
188 for s in ctxs:
189 d = {
189 d = {
190 'node': s.hex(),
190 'node': s.hex(),
191 'rev': s.rev(),
191 'rev': s.rev(),
192 'user': s.user(),
192 'user': s.user(),
193 'date': s.date(),
193 'date': s.date(),
194 'description': s.description(),
194 'description': s.description(),
195 'branch': s.branch(),
195 'branch': s.branch(),
196 }
196 }
197 if util.safehasattr(s, 'path'):
197 if util.safehasattr(s, 'path'):
198 d['file'] = s.path()
198 d['file'] = s.path()
199 yield d
199 yield d
200
200
201 def _siblings(siblings=None, hiderev=None):
201 def _siblings(siblings=None, hiderev=None):
202 if siblings is None:
202 if siblings is None:
203 siblings = []
203 siblings = []
204 siblings = [s for s in siblings if s.node() != nullid]
204 siblings = [s for s in siblings if s.node() != nullid]
205 if len(siblings) == 1 and siblings[0].rev() == hiderev:
205 if len(siblings) == 1 and siblings[0].rev() == hiderev:
206 siblings = []
206 siblings = []
207 return templateutil.mappinggenerator(_ctxsgen, args=(siblings,))
207 return templateutil.mappinggenerator(_ctxsgen, args=(siblings,))
208
208
209 def difffeatureopts(req, ui, section):
209 def difffeatureopts(req, ui, section):
210 diffopts = diffutil.difffeatureopts(ui, untrusted=True,
210 diffopts = diffutil.difffeatureopts(ui, untrusted=True,
211 section=section, whitespace=True)
211 section=section, whitespace=True)
212
212
213 for k in ('ignorews', 'ignorewsamount', 'ignorewseol', 'ignoreblanklines'):
213 for k in ('ignorews', 'ignorewsamount', 'ignorewseol', 'ignoreblanklines'):
214 v = req.qsparams.get(k)
214 v = req.qsparams.get(k)
215 if v is not None:
215 if v is not None:
216 v = stringutil.parsebool(v)
216 v = stringutil.parsebool(v)
217 setattr(diffopts, k, v if v is not None else True)
217 setattr(diffopts, k, v if v is not None else True)
218
218
219 return diffopts
219 return diffopts
220
220
221 def annotate(req, fctx, ui):
221 def annotate(req, fctx, ui):
222 diffopts = difffeatureopts(req, ui, 'annotate')
222 diffopts = difffeatureopts(req, ui, 'annotate')
223 return fctx.annotate(follow=True, diffopts=diffopts)
223 return fctx.annotate(follow=True, diffopts=diffopts)
224
224
225 def parents(ctx, hide=None):
225 def parents(ctx, hide=None):
226 if isinstance(ctx, context.basefilectx):
226 if isinstance(ctx, context.basefilectx):
227 introrev = ctx.introrev()
227 introrev = ctx.introrev()
228 if ctx.changectx().rev() != introrev:
228 if ctx.changectx().rev() != introrev:
229 return _siblings([ctx.repo()[introrev]], hide)
229 return _siblings([ctx.repo()[introrev]], hide)
230 return _siblings(ctx.parents(), hide)
230 return _siblings(ctx.parents(), hide)
231
231
232 def children(ctx, hide=None):
232 def children(ctx, hide=None):
233 return _siblings(ctx.children(), hide)
233 return _siblings(ctx.children(), hide)
234
234
235 def renamelink(fctx):
235 def renamelink(fctx):
236 r = fctx.renamed()
236 r = fctx.renamed()
237 if r:
237 if r:
238 return templateutil.mappinglist([{'file': r[0], 'node': hex(r[1])}])
238 return templateutil.mappinglist([{'file': r[0], 'node': hex(r[1])}])
239 return templateutil.mappinglist([])
239 return templateutil.mappinglist([])
240
240
241 def nodetagsdict(repo, node):
241 def nodetagsdict(repo, node):
242 return templateutil.hybridlist(repo.nodetags(node), name='name')
242 return templateutil.hybridlist(repo.nodetags(node), name='name')
243
243
244 def nodebookmarksdict(repo, node):
244 def nodebookmarksdict(repo, node):
245 return templateutil.hybridlist(repo.nodebookmarks(node), name='name')
245 return templateutil.hybridlist(repo.nodebookmarks(node), name='name')
246
246
247 def nodebranchdict(repo, ctx):
247 def nodebranchdict(repo, ctx):
248 branches = []
248 branches = []
249 branch = ctx.branch()
249 branch = ctx.branch()
250 # If this is an empty repo, ctx.node() == nullid,
250 # If this is an empty repo, ctx.node() == nullid,
251 # ctx.branch() == 'default'.
251 # ctx.branch() == 'default'.
252 try:
252 try:
253 branchnode = repo.branchtip(branch)
253 branchnode = repo.branchtip(branch)
254 except error.RepoLookupError:
254 except error.RepoLookupError:
255 branchnode = None
255 branchnode = None
256 if branchnode == ctx.node():
256 if branchnode == ctx.node():
257 branches.append(branch)
257 branches.append(branch)
258 return templateutil.hybridlist(branches, name='name')
258 return templateutil.hybridlist(branches, name='name')
259
259
260 def nodeinbranch(repo, ctx):
260 def nodeinbranch(repo, ctx):
261 branches = []
261 branches = []
262 branch = ctx.branch()
262 branch = ctx.branch()
263 try:
263 try:
264 branchnode = repo.branchtip(branch)
264 branchnode = repo.branchtip(branch)
265 except error.RepoLookupError:
265 except error.RepoLookupError:
266 branchnode = None
266 branchnode = None
267 if branch != 'default' and branchnode != ctx.node():
267 if branch != 'default' and branchnode != ctx.node():
268 branches.append(branch)
268 branches.append(branch)
269 return templateutil.hybridlist(branches, name='name')
269 return templateutil.hybridlist(branches, name='name')
270
270
271 def nodebranchnodefault(ctx):
271 def nodebranchnodefault(ctx):
272 branches = []
272 branches = []
273 branch = ctx.branch()
273 branch = ctx.branch()
274 if branch != 'default':
274 if branch != 'default':
275 branches.append(branch)
275 branches.append(branch)
276 return templateutil.hybridlist(branches, name='name')
276 return templateutil.hybridlist(branches, name='name')
277
277
278 def _nodenamesgen(context, f, node, name):
278 def _nodenamesgen(context, f, node, name):
279 for t in f(node):
279 for t in f(node):
280 yield {name: t}
280 yield {name: t}
281
281
282 def showtag(repo, t1, node=nullid):
282 def showtag(repo, t1, node=nullid):
283 args = (repo.nodetags, node, 'tag')
283 args = (repo.nodetags, node, 'tag')
284 return templateutil.mappinggenerator(_nodenamesgen, args=args, name=t1)
284 return templateutil.mappinggenerator(_nodenamesgen, args=args, name=t1)
285
285
286 def showbookmark(repo, t1, node=nullid):
286 def showbookmark(repo, t1, node=nullid):
287 args = (repo.nodebookmarks, node, 'bookmark')
287 args = (repo.nodebookmarks, node, 'bookmark')
288 return templateutil.mappinggenerator(_nodenamesgen, args=args, name=t1)
288 return templateutil.mappinggenerator(_nodenamesgen, args=args, name=t1)
289
289
290 def branchentries(repo, stripecount, limit=0):
290 def branchentries(repo, stripecount, limit=0):
291 tips = []
291 tips = []
292 heads = repo.heads()
292 heads = repo.heads()
293 parity = paritygen(stripecount)
293 parity = paritygen(stripecount)
294 sortkey = lambda item: (not item[1], item[0].rev())
294 sortkey = lambda item: (not item[1], item[0].rev())
295
295
296 def entries(context):
296 def entries(context):
297 count = 0
297 count = 0
298 if not tips:
298 if not tips:
299 for tag, hs, tip, closed in repo.branchmap().iterbranches():
299 for tag, hs, tip, closed in repo.branchmap().iterbranches():
300 tips.append((repo[tip], closed))
300 tips.append((repo[tip], closed))
301 for ctx, closed in sorted(tips, key=sortkey, reverse=True):
301 for ctx, closed in sorted(tips, key=sortkey, reverse=True):
302 if limit > 0 and count >= limit:
302 if limit > 0 and count >= limit:
303 return
303 return
304 count += 1
304 count += 1
305 if closed:
305 if closed:
306 status = 'closed'
306 status = 'closed'
307 elif ctx.node() not in heads:
307 elif ctx.node() not in heads:
308 status = 'inactive'
308 status = 'inactive'
309 else:
309 else:
310 status = 'open'
310 status = 'open'
311 yield {
311 yield {
312 'parity': next(parity),
312 'parity': next(parity),
313 'branch': ctx.branch(),
313 'branch': ctx.branch(),
314 'status': status,
314 'status': status,
315 'node': ctx.hex(),
315 'node': ctx.hex(),
316 'date': ctx.date()
316 'date': ctx.date()
317 }
317 }
318
318
319 return templateutil.mappinggenerator(entries)
319 return templateutil.mappinggenerator(entries)
320
320
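branchentries() lists every branch tip, marking it open, inactive (the tip is not a repository head) or closed, and sorts open branches before closed ones with the newest tip first inside each group. A self-contained sketch of that sort order, using made-up (name, rev, closed) tuples in place of (ctx, closed):

    tips = [('old-open', 10, False), ('was-closed', 20, True), ('new-open', 30, False)]
    sortkey = lambda item: (not item[2], item[1])    # (not closed, rev), as above
    print([name for name, rev, closed in sorted(tips, key=sortkey, reverse=True)])
    # ['new-open', 'old-open', 'was-closed']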
321 def cleanpath(repo, path):
321 def cleanpath(repo, path):
322 path = path.lstrip('/')
322 path = path.lstrip('/')
323 auditor = pathutil.pathauditor(repo.root, realfs=False)
323 auditor = pathutil.pathauditor(repo.root, realfs=False)
324 return pathutil.canonpath(repo.root, '', path, auditor=auditor)
324 return pathutil.canonpath(repo.root, '', path, auditor=auditor)
325
325
326 def changectx(repo, req):
326 def changectx(repo, req):
327 changeid = "tip"
327 changeid = "tip"
328 if 'node' in req.qsparams:
328 if 'node' in req.qsparams:
329 changeid = req.qsparams['node']
329 changeid = req.qsparams['node']
330 ipos = changeid.find(':')
330 ipos = changeid.find(':')
331 if ipos != -1:
331 if ipos != -1:
332 changeid = changeid[(ipos + 1):]
332 changeid = changeid[(ipos + 1):]
333
333
334 return scmutil.revsymbol(repo, changeid)
334 return scmutil.revsymbol(repo, changeid)
335
335
336 def basechangectx(repo, req):
336 def basechangectx(repo, req):
337 if 'node' in req.qsparams:
337 if 'node' in req.qsparams:
338 changeid = req.qsparams['node']
338 changeid = req.qsparams['node']
339 ipos = changeid.find(':')
339 ipos = changeid.find(':')
340 if ipos != -1:
340 if ipos != -1:
341 changeid = changeid[:ipos]
341 changeid = changeid[:ipos]
342 return scmutil.revsymbol(repo, changeid)
342 return scmutil.revsymbol(repo, changeid)
343
343
344 return None
344 return None
345
345
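changectx() and basechangectx() share one convention for the node query parameter: a plain value names the changeset to show, while a value of the form base:target also carries the comparison base for the changeset view. A short sketch of the split, with hypothetical values:

    changeid = "1a2b3c:tip"
    ipos = changeid.find(':')
    if ipos != -1:
        base, target = changeid[:ipos], changeid[ipos + 1:]
    else:
        base, target = None, changeid
    print(base, target)   # 1a2b3c tip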
346 def filectx(repo, req):
346 def filectx(repo, req):
347 if 'file' not in req.qsparams:
347 if 'file' not in req.qsparams:
348 raise ErrorResponse(HTTP_NOT_FOUND, 'file not given')
348 raise ErrorResponse(HTTP_NOT_FOUND, 'file not given')
349 path = cleanpath(repo, req.qsparams['file'])
349 path = cleanpath(repo, req.qsparams['file'])
350 if 'node' in req.qsparams:
350 if 'node' in req.qsparams:
351 changeid = req.qsparams['node']
351 changeid = req.qsparams['node']
352 elif 'filenode' in req.qsparams:
352 elif 'filenode' in req.qsparams:
353 changeid = req.qsparams['filenode']
353 changeid = req.qsparams['filenode']
354 else:
354 else:
355 raise ErrorResponse(HTTP_NOT_FOUND, 'node or filenode not given')
355 raise ErrorResponse(HTTP_NOT_FOUND, 'node or filenode not given')
356 try:
356 try:
357 fctx = scmutil.revsymbol(repo, changeid)[path]
357 fctx = scmutil.revsymbol(repo, changeid)[path]
358 except error.RepoError:
358 except error.RepoError:
359 fctx = repo.filectx(path, fileid=changeid)
359 fctx = repo.filectx(path, fileid=changeid)
360
360
361 return fctx
361 return fctx
362
362
363 def linerange(req):
363 def linerange(req):
364 linerange = req.qsparams.getall('linerange')
364 linerange = req.qsparams.getall('linerange')
365 if not linerange:
365 if not linerange:
366 return None
366 return None
367 if len(linerange) > 1:
367 if len(linerange) > 1:
368 raise ErrorResponse(HTTP_BAD_REQUEST,
368 raise ErrorResponse(HTTP_BAD_REQUEST,
369 'redundant linerange parameter')
369 'redundant linerange parameter')
370 try:
370 try:
371 fromline, toline = map(int, linerange[0].split(':', 1))
371 fromline, toline = map(int, linerange[0].split(':', 1))
372 except ValueError:
372 except ValueError:
373 raise ErrorResponse(HTTP_BAD_REQUEST,
373 raise ErrorResponse(HTTP_BAD_REQUEST,
374 'invalid linerange parameter')
374 'invalid linerange parameter')
375 try:
375 try:
376 return util.processlinerange(fromline, toline)
376 return util.processlinerange(fromline, toline)
377 except error.ParseError as exc:
377 except error.ParseError as exc:
378 raise ErrorResponse(HTTP_BAD_REQUEST, pycompat.bytestr(exc))
378 raise ErrorResponse(HTTP_BAD_REQUEST, pycompat.bytestr(exc))
379
379
380 def formatlinerange(fromline, toline):
380 def formatlinerange(fromline, toline):
381 return '%d:%d' % (fromline + 1, toline)
381 return '%d:%d' % (fromline + 1, toline)
382
382
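linerange() accepts a single fromline:toline query parameter, rejects duplicate or non-numeric values, and passes the pair to util.processlinerange for validation; formatlinerange() turns the internal 0-based start back into the 1-based form used in URLs. A small round-trip sketch, assuming processlinerange keeps the 0-based-start convention that formatlinerange implies:

    from mercurial import util
    fromline, toline = map(int, "10:20".split(':', 1))
    start, stop = util.processlinerange(fromline, toline)   # expected: (9, 20)
    formatlinerange(start, stop)                            # expected: '10:20'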
383 def _succsandmarkersgen(context, mapping):
383 def _succsandmarkersgen(context, mapping):
384 repo = context.resource(mapping, 'repo')
384 repo = context.resource(mapping, 'repo')
385 itemmappings = templatekw.showsuccsandmarkers(context, mapping)
385 itemmappings = templatekw.showsuccsandmarkers(context, mapping)
386 for item in itemmappings.tovalue(context, mapping):
386 for item in itemmappings.tovalue(context, mapping):
387 item['successors'] = _siblings(repo[successor]
387 item['successors'] = _siblings(repo[successor]
388 for successor in item['successors'])
388 for successor in item['successors'])
389 yield item
389 yield item
390
390
391 def succsandmarkers(context, mapping):
391 def succsandmarkers(context, mapping):
392 return templateutil.mappinggenerator(_succsandmarkersgen, args=(mapping,))
392 return templateutil.mappinggenerator(_succsandmarkersgen, args=(mapping,))
393
393
394 # teach the templater that succsandmarkers has switched to the (context, mapping) API
394 # teach the templater that succsandmarkers has switched to the (context, mapping) API
395 succsandmarkers._requires = {'repo', 'ctx'}
395 succsandmarkers._requires = {'repo', 'ctx'}
396
396
397 def _whyunstablegen(context, mapping):
397 def _whyunstablegen(context, mapping):
398 repo = context.resource(mapping, 'repo')
398 repo = context.resource(mapping, 'repo')
399 ctx = context.resource(mapping, 'ctx')
399 ctx = context.resource(mapping, 'ctx')
400
400
401 entries = obsutil.whyunstable(repo, ctx)
401 entries = obsutil.whyunstable(repo, ctx)
402 for entry in entries:
402 for entry in entries:
403 if entry.get('divergentnodes'):
403 if entry.get('divergentnodes'):
404 entry['divergentnodes'] = _siblings(entry['divergentnodes'])
404 entry['divergentnodes'] = _siblings(entry['divergentnodes'])
405 yield entry
405 yield entry
406
406
407 def whyunstable(context, mapping):
407 def whyunstable(context, mapping):
408 return templateutil.mappinggenerator(_whyunstablegen, args=(mapping,))
408 return templateutil.mappinggenerator(_whyunstablegen, args=(mapping,))
409
409
410 whyunstable._requires = {'repo', 'ctx'}
410 whyunstable._requires = {'repo', 'ctx'}
411
411
412 # helper to mark a function as a new-style template keyword; can be removed
412 # helper to mark a function as a new-style template keyword; can be removed
413 # once old-style functions are no longer supported and new-style becomes the default
413 # once old-style functions are no longer supported and new-style becomes the default
414 def _kwfunc(f):
414 def _kwfunc(f):
415 f._requires = ()
415 f._requires = ()
416 return f
416 return f
417
417
418 def commonentry(repo, ctx):
418 def commonentry(repo, ctx):
419 node = scmutil.binnode(ctx)
419 node = scmutil.binnode(ctx)
420 return {
420 return {
421 # TODO: perhaps ctx.changectx() should be assigned if ctx is a
421 # TODO: perhaps ctx.changectx() should be assigned if ctx is a
422 # filectx, but I'm not entirely sure that would always work because
422 # filectx, but I'm not entirely sure that would always work because
423 # fctx.parents() != fctx.changectx.parents() for example.
423 # fctx.parents() != fctx.changectx.parents() for example.
424 'ctx': ctx,
424 'ctx': ctx,
425 'rev': ctx.rev(),
425 'rev': ctx.rev(),
426 'node': hex(node),
426 'node': hex(node),
427 'author': ctx.user(),
427 'author': ctx.user(),
428 'desc': ctx.description(),
428 'desc': ctx.description(),
429 'date': ctx.date(),
429 'date': ctx.date(),
430 'extra': ctx.extra(),
430 'extra': ctx.extra(),
431 'phase': ctx.phasestr(),
431 'phase': ctx.phasestr(),
432 'obsolete': ctx.obsolete(),
432 'obsolete': ctx.obsolete(),
433 'succsandmarkers': succsandmarkers,
433 'succsandmarkers': succsandmarkers,
434 'instabilities': templateutil.hybridlist(ctx.instabilities(),
434 'instabilities': templateutil.hybridlist(ctx.instabilities(),
435 name='instability'),
435 name='instability'),
436 'whyunstable': whyunstable,
436 'whyunstable': whyunstable,
437 'branch': nodebranchnodefault(ctx),
437 'branch': nodebranchnodefault(ctx),
438 'inbranch': nodeinbranch(repo, ctx),
438 'inbranch': nodeinbranch(repo, ctx),
439 'branches': nodebranchdict(repo, ctx),
439 'branches': nodebranchdict(repo, ctx),
440 'tags': nodetagsdict(repo, node),
440 'tags': nodetagsdict(repo, node),
441 'bookmarks': nodebookmarksdict(repo, node),
441 'bookmarks': nodebookmarksdict(repo, node),
442 'parent': _kwfunc(lambda context, mapping: parents(ctx)),
442 'parent': _kwfunc(lambda context, mapping: parents(ctx)),
443 'child': _kwfunc(lambda context, mapping: children(ctx)),
443 'child': _kwfunc(lambda context, mapping: children(ctx)),
444 }
444 }
445
445
446 def changelistentry(web, ctx):
446 def changelistentry(web, ctx):
447 '''Obtain a dictionary to be used for entries in a changelist.
447 '''Obtain a dictionary to be used for entries in a changelist.
448
448
449 This function is called when producing items for the "entries" list passed
449 This function is called when producing items for the "entries" list passed
450 to the "shortlog" and "changelog" templates.
450 to the "shortlog" and "changelog" templates.
451 '''
451 '''
452 repo = web.repo
452 repo = web.repo
453 rev = ctx.rev()
453 rev = ctx.rev()
454 n = scmutil.binnode(ctx)
454 n = scmutil.binnode(ctx)
455 showtags = showtag(repo, 'changelogtag', n)
455 showtags = showtag(repo, 'changelogtag', n)
456 files = listfilediffs(ctx.files(), n, web.maxfiles)
456 files = listfilediffs(ctx.files(), n, web.maxfiles)
457
457
458 entry = commonentry(repo, ctx)
458 entry = commonentry(repo, ctx)
459 entry.update({
459 entry.update({
460 'allparents': _kwfunc(lambda context, mapping: parents(ctx)),
460 'allparents': _kwfunc(lambda context, mapping: parents(ctx)),
461 'parent': _kwfunc(lambda context, mapping: parents(ctx, rev - 1)),
461 'parent': _kwfunc(lambda context, mapping: parents(ctx, rev - 1)),
462 'child': _kwfunc(lambda context, mapping: children(ctx, rev + 1)),
462 'child': _kwfunc(lambda context, mapping: children(ctx, rev + 1)),
463 'changelogtag': showtags,
463 'changelogtag': showtags,
464 'files': files,
464 'files': files,
465 })
465 })
466 return entry
466 return entry
467
467
468 def changelistentries(web, revs, maxcount, parityfn):
468 def changelistentries(web, revs, maxcount, parityfn):
469 """Emit up to N records for an iterable of revisions."""
469 """Emit up to N records for an iterable of revisions."""
470 repo = web.repo
470 repo = web.repo
471
471
472 count = 0
472 count = 0
473 for rev in revs:
473 for rev in revs:
474 if count >= maxcount:
474 if count >= maxcount:
475 break
475 break
476
476
477 count += 1
477 count += 1
478
478
479 entry = changelistentry(web, repo[rev])
479 entry = changelistentry(web, repo[rev])
480 entry['parity'] = next(parityfn)
480 entry['parity'] = next(parityfn)
481
481
482 yield entry
482 yield entry
483
483
484 def symrevorshortnode(req, ctx):
484 def symrevorshortnode(req, ctx):
485 if 'node' in req.qsparams:
485 if 'node' in req.qsparams:
486 return templatefilters.revescape(req.qsparams['node'])
486 return templatefilters.revescape(req.qsparams['node'])
487 else:
487 else:
488 return short(scmutil.binnode(ctx))
488 return short(scmutil.binnode(ctx))
489
489
490 def _listfilesgen(context, ctx, stripecount):
490 def _listfilesgen(context, ctx, stripecount):
491 parity = paritygen(stripecount)
491 parity = paritygen(stripecount)
492 for blockno, f in enumerate(ctx.files()):
492 for blockno, f in enumerate(ctx.files()):
493 template = 'filenodelink' if f in ctx else 'filenolink'
493 template = 'filenodelink' if f in ctx else 'filenolink'
494 yield context.process(template, {
494 yield context.process(template, {
495 'node': ctx.hex(),
495 'node': ctx.hex(),
496 'file': f,
496 'file': f,
497 'blockno': blockno + 1,
497 'blockno': blockno + 1,
498 'parity': next(parity),
498 'parity': next(parity),
499 })
499 })
500
500
501 def changesetentry(web, ctx):
501 def changesetentry(web, ctx):
502 '''Obtain a dictionary to be used to render the "changeset" template.'''
502 '''Obtain a dictionary to be used to render the "changeset" template.'''
503
503
504 showtags = showtag(web.repo, 'changesettag', scmutil.binnode(ctx))
504 showtags = showtag(web.repo, 'changesettag', scmutil.binnode(ctx))
505 showbookmarks = showbookmark(web.repo, 'changesetbookmark',
505 showbookmarks = showbookmark(web.repo, 'changesetbookmark',
506 scmutil.binnode(ctx))
506 scmutil.binnode(ctx))
507 showbranch = nodebranchnodefault(ctx)
507 showbranch = nodebranchnodefault(ctx)
508
508
509 basectx = basechangectx(web.repo, web.req)
509 basectx = basechangectx(web.repo, web.req)
510 if basectx is None:
510 if basectx is None:
511 basectx = ctx.p1()
511 basectx = ctx.p1()
512
512
513 style = web.config('web', 'style')
513 style = web.config('web', 'style')
514 if 'style' in web.req.qsparams:
514 if 'style' in web.req.qsparams:
515 style = web.req.qsparams['style']
515 style = web.req.qsparams['style']
516
516
517 diff = diffs(web, ctx, basectx, None, style)
517 diff = diffs(web, ctx, basectx, None, style)
518
518
519 parity = paritygen(web.stripecount)
519 parity = paritygen(web.stripecount)
520 diffstatsgen = diffstatgen(web.repo.ui, ctx, basectx)
520 diffstatsgen = diffstatgen(web.repo.ui, ctx, basectx)
521 diffstats = diffstat(ctx, diffstatsgen, parity)
521 diffstats = diffstat(ctx, diffstatsgen, parity)
522
522
523 return dict(
523 return dict(
524 diff=diff,
524 diff=diff,
525 symrev=symrevorshortnode(web.req, ctx),
525 symrev=symrevorshortnode(web.req, ctx),
526 basenode=basectx.hex(),
526 basenode=basectx.hex(),
527 changesettag=showtags,
527 changesettag=showtags,
528 changesetbookmark=showbookmarks,
528 changesetbookmark=showbookmarks,
529 changesetbranch=showbranch,
529 changesetbranch=showbranch,
530 files=templateutil.mappedgenerator(_listfilesgen,
530 files=templateutil.mappedgenerator(_listfilesgen,
531 args=(ctx, web.stripecount)),
531 args=(ctx, web.stripecount)),
532 diffsummary=_kwfunc(lambda context, mapping: diffsummary(diffstatsgen)),
532 diffsummary=_kwfunc(lambda context, mapping: diffsummary(diffstatsgen)),
533 diffstat=diffstats,
533 diffstat=diffstats,
534 archives=web.archivelist(ctx.hex()),
534 archives=web.archivelist(ctx.hex()),
535 **pycompat.strkwargs(commonentry(web.repo, ctx)))
535 **pycompat.strkwargs(commonentry(web.repo, ctx)))
536
536
537 def _listfilediffsgen(context, files, node, max):
537 def _listfilediffsgen(context, files, node, max):
538 for f in files[:max]:
538 for f in files[:max]:
539 yield context.process('filedifflink', {'node': hex(node), 'file': f})
539 yield context.process('filedifflink', {'node': hex(node), 'file': f})
540 if len(files) > max:
540 if len(files) > max:
541 yield context.process('fileellipses', {})
541 yield context.process('fileellipses', {})
542
542
543 def listfilediffs(files, node, max):
543 def listfilediffs(files, node, max):
544 return templateutil.mappedgenerator(_listfilediffsgen,
544 return templateutil.mappedgenerator(_listfilediffsgen,
545 args=(files, node, max))
545 args=(files, node, max))
546
546
547 def _prettyprintdifflines(context, lines, blockno, lineidprefix):
547 def _prettyprintdifflines(context, lines, blockno, lineidprefix):
548 for lineno, l in enumerate(lines, 1):
548 for lineno, l in enumerate(lines, 1):
549 difflineno = "%d.%d" % (blockno, lineno)
549 difflineno = "%d.%d" % (blockno, lineno)
550 if l.startswith('+'):
550 if l.startswith('+'):
551 ltype = "difflineplus"
551 ltype = "difflineplus"
552 elif l.startswith('-'):
552 elif l.startswith('-'):
553 ltype = "difflineminus"
553 ltype = "difflineminus"
554 elif l.startswith('@'):
554 elif l.startswith('@'):
555 ltype = "difflineat"
555 ltype = "difflineat"
556 else:
556 else:
557 ltype = "diffline"
557 ltype = "diffline"
558 yield context.process(ltype, {
558 yield context.process(ltype, {
559 'line': l,
559 'line': l,
560 'lineno': lineno,
560 'lineno': lineno,
561 'lineid': lineidprefix + "l%s" % difflineno,
561 'lineid': lineidprefix + "l%s" % difflineno,
562 'linenumber': "% 8s" % difflineno,
562 'linenumber': "% 8s" % difflineno,
563 })
563 })
564
564
565 def _diffsgen(context, repo, ctx, basectx, files, style, stripecount,
565 def _diffsgen(context, repo, ctx, basectx, files, style, stripecount,
566 linerange, lineidprefix):
566 linerange, lineidprefix):
567 if files:
567 if files:
568 m = match.exact(repo.root, repo.getcwd(), files)
568 m = match.exact(files)
569 else:
569 else:
570 m = match.always(repo.root, repo.getcwd())
570 m = match.always()
571
571
572 diffopts = patch.diffopts(repo.ui, untrusted=True)
572 diffopts = patch.diffopts(repo.ui, untrusted=True)
573 parity = paritygen(stripecount)
573 parity = paritygen(stripecount)
574
574
575 diffhunks = patch.diffhunks(repo, basectx, ctx, m, opts=diffopts)
575 diffhunks = patch.diffhunks(repo, basectx, ctx, m, opts=diffopts)
576 for blockno, (fctx1, fctx2, header, hunks) in enumerate(diffhunks, 1):
576 for blockno, (fctx1, fctx2, header, hunks) in enumerate(diffhunks, 1):
577 if style != 'raw':
577 if style != 'raw':
578 header = header[1:]
578 header = header[1:]
579 lines = [h + '\n' for h in header]
579 lines = [h + '\n' for h in header]
580 for hunkrange, hunklines in hunks:
580 for hunkrange, hunklines in hunks:
581 if linerange is not None and hunkrange is not None:
581 if linerange is not None and hunkrange is not None:
582 s1, l1, s2, l2 = hunkrange
582 s1, l1, s2, l2 = hunkrange
583 if not mdiff.hunkinrange((s2, l2), linerange):
583 if not mdiff.hunkinrange((s2, l2), linerange):
584 continue
584 continue
585 lines.extend(hunklines)
585 lines.extend(hunklines)
586 if lines:
586 if lines:
587 l = templateutil.mappedgenerator(_prettyprintdifflines,
587 l = templateutil.mappedgenerator(_prettyprintdifflines,
588 args=(lines, blockno,
588 args=(lines, blockno,
589 lineidprefix))
589 lineidprefix))
590 yield {
590 yield {
591 'parity': next(parity),
591 'parity': next(parity),
592 'blockno': blockno,
592 'blockno': blockno,
593 'lines': l,
593 'lines': l,
594 }
594 }
595
595
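The two matcher calls at the top of _diffsgen() are the lines this changeset rewrites in the hunk above: match.exact() now takes just the list of files and match.always() takes no arguments, the former root and cwd parameters having been unused. A minimal sketch of the updated calls, assuming mercurial.match from this revision onward:

    from mercurial import match as matchmod
    m_all = matchmod.always()                                 # matches every path
    m_few = matchmod.exact([b'mercurial/hgweb/webutil.py'])   # only the listed files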
596 def diffs(web, ctx, basectx, files, style, linerange=None, lineidprefix=''):
596 def diffs(web, ctx, basectx, files, style, linerange=None, lineidprefix=''):
597 args = (web.repo, ctx, basectx, files, style, web.stripecount,
597 args = (web.repo, ctx, basectx, files, style, web.stripecount,
598 linerange, lineidprefix)
598 linerange, lineidprefix)
599 return templateutil.mappinggenerator(_diffsgen, args=args, name='diffblock')
599 return templateutil.mappinggenerator(_diffsgen, args=args, name='diffblock')
600
600
601 def _compline(type, leftlineno, leftline, rightlineno, rightline):
601 def _compline(type, leftlineno, leftline, rightlineno, rightline):
602 lineid = leftlineno and ("l%d" % leftlineno) or ''
602 lineid = leftlineno and ("l%d" % leftlineno) or ''
603 lineid += rightlineno and ("r%d" % rightlineno) or ''
603 lineid += rightlineno and ("r%d" % rightlineno) or ''
604 llno = '%d' % leftlineno if leftlineno else ''
604 llno = '%d' % leftlineno if leftlineno else ''
605 rlno = '%d' % rightlineno if rightlineno else ''
605 rlno = '%d' % rightlineno if rightlineno else ''
606 return {
606 return {
607 'type': type,
607 'type': type,
608 'lineid': lineid,
608 'lineid': lineid,
609 'leftlineno': leftlineno,
609 'leftlineno': leftlineno,
610 'leftlinenumber': "% 6s" % llno,
610 'leftlinenumber': "% 6s" % llno,
611 'leftline': leftline or '',
611 'leftline': leftline or '',
612 'rightlineno': rightlineno,
612 'rightlineno': rightlineno,
613 'rightlinenumber': "% 6s" % rlno,
613 'rightlinenumber': "% 6s" % rlno,
614 'rightline': rightline or '',
614 'rightline': rightline or '',
615 }
615 }
616
616
617 def _getcompblockgen(context, leftlines, rightlines, opcodes):
617 def _getcompblockgen(context, leftlines, rightlines, opcodes):
618 for type, llo, lhi, rlo, rhi in opcodes:
618 for type, llo, lhi, rlo, rhi in opcodes:
619 type = pycompat.sysbytes(type)
619 type = pycompat.sysbytes(type)
620 len1 = lhi - llo
620 len1 = lhi - llo
621 len2 = rhi - rlo
621 len2 = rhi - rlo
622 count = min(len1, len2)
622 count = min(len1, len2)
623 for i in pycompat.xrange(count):
623 for i in pycompat.xrange(count):
624 yield _compline(type=type,
624 yield _compline(type=type,
625 leftlineno=llo + i + 1,
625 leftlineno=llo + i + 1,
626 leftline=leftlines[llo + i],
626 leftline=leftlines[llo + i],
627 rightlineno=rlo + i + 1,
627 rightlineno=rlo + i + 1,
628 rightline=rightlines[rlo + i])
628 rightline=rightlines[rlo + i])
629 if len1 > len2:
629 if len1 > len2:
630 for i in pycompat.xrange(llo + count, lhi):
630 for i in pycompat.xrange(llo + count, lhi):
631 yield _compline(type=type,
631 yield _compline(type=type,
632 leftlineno=i + 1,
632 leftlineno=i + 1,
633 leftline=leftlines[i],
633 leftline=leftlines[i],
634 rightlineno=None,
634 rightlineno=None,
635 rightline=None)
635 rightline=None)
636 elif len2 > len1:
636 elif len2 > len1:
637 for i in pycompat.xrange(rlo + count, rhi):
637 for i in pycompat.xrange(rlo + count, rhi):
638 yield _compline(type=type,
638 yield _compline(type=type,
639 leftlineno=None,
639 leftlineno=None,
640 leftline=None,
640 leftline=None,
641 rightlineno=i + 1,
641 rightlineno=i + 1,
642 rightline=rightlines[i])
642 rightline=rightlines[i])
643
643
644 def _getcompblock(leftlines, rightlines, opcodes):
644 def _getcompblock(leftlines, rightlines, opcodes):
645 args = (leftlines, rightlines, opcodes)
645 args = (leftlines, rightlines, opcodes)
646 return templateutil.mappinggenerator(_getcompblockgen, args=args,
646 return templateutil.mappinggenerator(_getcompblockgen, args=args,
647 name='comparisonline')
647 name='comparisonline')
648
648
649 def _comparegen(context, contextnum, leftlines, rightlines):
649 def _comparegen(context, contextnum, leftlines, rightlines):
650 '''Generator function that provides side-by-side comparison data.'''
650 '''Generator function that provides side-by-side comparison data.'''
651 s = difflib.SequenceMatcher(None, leftlines, rightlines)
651 s = difflib.SequenceMatcher(None, leftlines, rightlines)
652 if contextnum < 0:
652 if contextnum < 0:
653 l = _getcompblock(leftlines, rightlines, s.get_opcodes())
653 l = _getcompblock(leftlines, rightlines, s.get_opcodes())
654 yield {'lines': l}
654 yield {'lines': l}
655 else:
655 else:
656 for oc in s.get_grouped_opcodes(n=contextnum):
656 for oc in s.get_grouped_opcodes(n=contextnum):
657 l = _getcompblock(leftlines, rightlines, oc)
657 l = _getcompblock(leftlines, rightlines, oc)
658 yield {'lines': l}
658 yield {'lines': l}
659
659
660 def compare(contextnum, leftlines, rightlines):
660 def compare(contextnum, leftlines, rightlines):
661 args = (contextnum, leftlines, rightlines)
661 args = (contextnum, leftlines, rightlines)
662 return templateutil.mappinggenerator(_comparegen, args=args,
662 return templateutil.mappinggenerator(_comparegen, args=args,
663 name='comparisonblock')
663 name='comparisonblock')
664
664
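compare() drives the side-by-side comparison view from difflib: a negative context value renders every opcode in a single block, otherwise get_grouped_opcodes(n=contextnum) yields hunks with that much surrounding context, and _getcompblockgen pads the shorter side with empty cells. A self-contained sketch of the opcode stream being consumed:

    import difflib
    left = ["a\n", "b\n", "c\n"]
    right = ["a\n", "B\n", "c\n"]
    sm = difflib.SequenceMatcher(None, left, right)
    for tag, llo, lhi, rlo, rhi in sm.get_opcodes():
        print(tag, left[llo:lhi], right[rlo:rhi])
    # equal   ['a\n'] ['a\n']
    # replace ['b\n'] ['B\n']
    # equal   ['c\n'] ['c\n']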
665 def diffstatgen(ui, ctx, basectx):
665 def diffstatgen(ui, ctx, basectx):
666 '''Generator function that provides the diffstat data.'''
666 '''Generator function that provides the diffstat data.'''
667
667
668 diffopts = patch.diffopts(ui, {'noprefix': False})
668 diffopts = patch.diffopts(ui, {'noprefix': False})
669 stats = patch.diffstatdata(
669 stats = patch.diffstatdata(
670 util.iterlines(ctx.diff(basectx, opts=diffopts)))
670 util.iterlines(ctx.diff(basectx, opts=diffopts)))
671 maxname, maxtotal, addtotal, removetotal, binary = patch.diffstatsum(stats)
671 maxname, maxtotal, addtotal, removetotal, binary = patch.diffstatsum(stats)
672 while True:
672 while True:
673 yield stats, maxname, maxtotal, addtotal, removetotal, binary
673 yield stats, maxname, maxtotal, addtotal, removetotal, binary
674
674
675 def diffsummary(statgen):
675 def diffsummary(statgen):
676 '''Return a short summary of the diff.'''
676 '''Return a short summary of the diff.'''
677
677
678 stats, maxname, maxtotal, addtotal, removetotal, binary = next(statgen)
678 stats, maxname, maxtotal, addtotal, removetotal, binary = next(statgen)
679 return _(' %d files changed, %d insertions(+), %d deletions(-)\n') % (
679 return _(' %d files changed, %d insertions(+), %d deletions(-)\n') % (
680 len(stats), addtotal, removetotal)
680 len(stats), addtotal, removetotal)
681
681
682 def _diffstattmplgen(context, ctx, statgen, parity):
682 def _diffstattmplgen(context, ctx, statgen, parity):
683 stats, maxname, maxtotal, addtotal, removetotal, binary = next(statgen)
683 stats, maxname, maxtotal, addtotal, removetotal, binary = next(statgen)
684 files = ctx.files()
684 files = ctx.files()
685
685
686 def pct(i):
686 def pct(i):
687 if maxtotal == 0:
687 if maxtotal == 0:
688 return 0
688 return 0
689 return (float(i) / maxtotal) * 100
689 return (float(i) / maxtotal) * 100
690
690
691 fileno = 0
691 fileno = 0
692 for filename, adds, removes, isbinary in stats:
692 for filename, adds, removes, isbinary in stats:
693 template = 'diffstatlink' if filename in files else 'diffstatnolink'
693 template = 'diffstatlink' if filename in files else 'diffstatnolink'
694 total = adds + removes
694 total = adds + removes
695 fileno += 1
695 fileno += 1
696 yield context.process(template, {
696 yield context.process(template, {
697 'node': ctx.hex(),
697 'node': ctx.hex(),
698 'file': filename,
698 'file': filename,
699 'fileno': fileno,
699 'fileno': fileno,
700 'total': total,
700 'total': total,
701 'addpct': pct(adds),
701 'addpct': pct(adds),
702 'removepct': pct(removes),
702 'removepct': pct(removes),
703 'parity': next(parity),
703 'parity': next(parity),
704 })
704 })
705
705
706 def diffstat(ctx, statgen, parity):
706 def diffstat(ctx, statgen, parity):
707 '''Return a diffstat template for each file in the diff.'''
707 '''Return a diffstat template for each file in the diff.'''
708 args = (ctx, statgen, parity)
708 args = (ctx, statgen, parity)
709 return templateutil.mappedgenerator(_diffstattmplgen, args=args)
709 return templateutil.mappedgenerator(_diffstattmplgen, args=args)
710
710
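diffstatgen() computes the diffstat once and then yields the same tuple forever, so diffsummary() and the per-file diffstat table can each call next() on the shared generator without redoing the work. The same pattern in isolation, with a stand-in tuple:

    def sharedstat():
        result = ('stats', 'maxname', 'maxtotal')   # stands in for the real data
        while True:
            yield result

    statgen = sharedstat()
    assert next(statgen) is next(statgen)   # same object, computed only once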
711 class sessionvars(templateutil.wrapped):
711 class sessionvars(templateutil.wrapped):
712 def __init__(self, vars, start='?'):
712 def __init__(self, vars, start='?'):
713 self._start = start
713 self._start = start
714 self._vars = vars
714 self._vars = vars
715
715
716 def __getitem__(self, key):
716 def __getitem__(self, key):
717 return self._vars[key]
717 return self._vars[key]
718
718
719 def __setitem__(self, key, value):
719 def __setitem__(self, key, value):
720 self._vars[key] = value
720 self._vars[key] = value
721
721
722 def __copy__(self):
722 def __copy__(self):
723 return sessionvars(copy.copy(self._vars), self._start)
723 return sessionvars(copy.copy(self._vars), self._start)
724
724
725 def contains(self, context, mapping, item):
725 def contains(self, context, mapping, item):
726 item = templateutil.unwrapvalue(context, mapping, item)
726 item = templateutil.unwrapvalue(context, mapping, item)
727 return item in self._vars
727 return item in self._vars
728
728
729 def getmember(self, context, mapping, key):
729 def getmember(self, context, mapping, key):
730 key = templateutil.unwrapvalue(context, mapping, key)
730 key = templateutil.unwrapvalue(context, mapping, key)
731 return self._vars.get(key)
731 return self._vars.get(key)
732
732
733 def getmin(self, context, mapping):
733 def getmin(self, context, mapping):
734 raise error.ParseError(_('not comparable'))
734 raise error.ParseError(_('not comparable'))
735
735
736 def getmax(self, context, mapping):
736 def getmax(self, context, mapping):
737 raise error.ParseError(_('not comparable'))
737 raise error.ParseError(_('not comparable'))
738
738
739 def filter(self, context, mapping, select):
739 def filter(self, context, mapping, select):
740 # implement if necessary
740 # implement if necessary
741 raise error.ParseError(_('not filterable'))
741 raise error.ParseError(_('not filterable'))
742
742
743 def itermaps(self, context):
743 def itermaps(self, context):
744 separator = self._start
744 separator = self._start
745 for key, value in sorted(self._vars.iteritems()):
745 for key, value in sorted(self._vars.iteritems()):
746 yield {'name': key,
746 yield {'name': key,
747 'value': pycompat.bytestr(value),
747 'value': pycompat.bytestr(value),
748 'separator': separator,
748 'separator': separator,
749 }
749 }
750 separator = '&'
750 separator = '&'
751
751
752 def join(self, context, mapping, sep):
752 def join(self, context, mapping, sep):
753 # could be '{separator}{name}={value|urlescape}'
753 # could be '{separator}{name}={value|urlescape}'
754 raise error.ParseError(_('not displayable without template'))
754 raise error.ParseError(_('not displayable without template'))
755
755
756 def show(self, context, mapping):
756 def show(self, context, mapping):
757 return self.join(context, mapping, '')
757 return self.join(context, mapping, '')
758
758
759 def tobool(self, context, mapping):
759 def tobool(self, context, mapping):
760 return bool(self._vars)
760 return bool(self._vars)
761
761
762 def tovalue(self, context, mapping):
762 def tovalue(self, context, mapping):
763 return self._vars
763 return self._vars
764
764
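sessionvars wraps the sticky query parameters (for example the style override) so templates can re-emit them as a query string: itermaps() prefixes the first variable with '?' and every following one with '&'. The separator logic, reduced to plain Python with illustrative values:

    def qs(vars, start='?'):
        parts, sep = [], start
        for name, value in sorted(vars.items()):
            parts.append('%s%s=%s' % (sep, name, value))
            sep = '&'
        return ''.join(parts)

    qs({'style': 'gitweb', 'revcount': '30'})   # '?revcount=30&style=gitweb'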
765 class wsgiui(uimod.ui):
765 class wsgiui(uimod.ui):
766 # default termwidth breaks under mod_wsgi
766 # default termwidth breaks under mod_wsgi
767 def termwidth(self):
767 def termwidth(self):
768 return 80
768 return 80
769
769
770 def getwebsubs(repo):
770 def getwebsubs(repo):
771 websubtable = []
771 websubtable = []
772 websubdefs = repo.ui.configitems('websub')
772 websubdefs = repo.ui.configitems('websub')
773 # we must maintain interhg backwards compatibility
773 # we must maintain interhg backwards compatibility
774 websubdefs += repo.ui.configitems('interhg')
774 websubdefs += repo.ui.configitems('interhg')
775 for key, pattern in websubdefs:
775 for key, pattern in websubdefs:
776 # grab the delimiter from the character after the "s"
776 # grab the delimiter from the character after the "s"
777 unesc = pattern[1:2]
777 unesc = pattern[1:2]
778 delim = stringutil.reescape(unesc)
778 delim = stringutil.reescape(unesc)
779
779
780 # identify portions of the pattern, taking care to avoid escaped
780 # identify portions of the pattern, taking care to avoid escaped
781 # delimiters. the replace format and flags are optional, but
781 # delimiters. the replace format and flags are optional, but
782 # delimiters are required.
782 # delimiters are required.
783 match = re.match(
783 match = re.match(
784 br'^s%s(.+)(?:(?<=\\\\)|(?<!\\))%s(.*)%s([ilmsux])*$'
784 br'^s%s(.+)(?:(?<=\\\\)|(?<!\\))%s(.*)%s([ilmsux])*$'
785 % (delim, delim, delim), pattern)
785 % (delim, delim, delim), pattern)
786 if not match:
786 if not match:
787 repo.ui.warn(_("websub: invalid pattern for %s: %s\n")
787 repo.ui.warn(_("websub: invalid pattern for %s: %s\n")
788 % (key, pattern))
788 % (key, pattern))
789 continue
789 continue
790
790
791 # we need to unescape the delimiter for regexp and format
791 # we need to unescape the delimiter for regexp and format
792 delim_re = re.compile(br'(?<!\\)\\%s' % delim)
792 delim_re = re.compile(br'(?<!\\)\\%s' % delim)
793 regexp = delim_re.sub(unesc, match.group(1))
793 regexp = delim_re.sub(unesc, match.group(1))
794 format = delim_re.sub(unesc, match.group(2))
794 format = delim_re.sub(unesc, match.group(2))
795
795
796 # the pattern allows for 6 regexp flags, so set them if necessary
796 # the pattern allows for 6 regexp flags, so set them if necessary
797 flagin = match.group(3)
797 flagin = match.group(3)
798 flags = 0
798 flags = 0
799 if flagin:
799 if flagin:
800 for flag in flagin.upper():
800 for flag in flagin.upper():
801 flags |= re.__dict__[flag]
801 flags |= re.__dict__[flag]
802
802
803 try:
803 try:
804 regexp = re.compile(regexp, flags)
804 regexp = re.compile(regexp, flags)
805 websubtable.append((regexp, format))
805 websubtable.append((regexp, format))
806 except re.error:
806 except re.error:
807 repo.ui.warn(_("websub: invalid regexp for %s: %s\n")
807 repo.ui.warn(_("websub: invalid regexp for %s: %s\n")
808 % (key, regexp))
808 % (key, regexp))
809 return websubtable
809 return websubtable
810
810
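getwebsubs() reads sed-style substitution rules from the [websub] section (and the legacy [interhg] section) and compiles them into (regexp, replacement) pairs that hgweb applies to changeset descriptions. An illustrative entry in the expected s<delim>pattern<delim>replacement<delim>flags shape, with a made-up bug-tracker URL:

    [websub]
    issues = s|issue(\d+)|<a href="https://bts.example.org/issue\1">issue \1</a>|i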
811 def getgraphnode(repo, ctx):
811 def getgraphnode(repo, ctx):
812 return (templatekw.getgraphnodecurrent(repo, ctx) +
812 return (templatekw.getgraphnodecurrent(repo, ctx) +
813 templatekw.getgraphnodesymbol(ctx))
813 templatekw.getgraphnodesymbol(ctx))
(The remaining modified files in this changeset are too big and their content was truncated; no content is shown for them.)