fileset: rewrite predicates to return matcher not closed to subset (API) (BC)...
Yuya Nishihara
r38948:ff5b6fca default
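This changeset rewrites the fileset predicates so that each one returns a matcher (a callable asked about individual files) instead of a list filtered from a pre-materialized ``mctx.subset``. The following is a minimal, hypothetical sketch of the before/after shape of a predicate such as ``added()``; ``OldMatchctx`` and ``NewMatchctx`` are simplified stand-ins, not Mercurial's real ``matchctx``:

    # Schematic illustration of the API change; the classes below are toy
    # stand-ins, not Mercurial's actual matchctx implementation.

    class OldMatchctx(object):
        def __init__(self, subset):
            self.subset = subset          # every candidate file, precomputed

    class NewMatchctx(object):
        def predicate(self, fn, predrepr=None):
            # The real method wraps fn in a predicate matcher object; returning
            # the callable itself is enough to show the shape of the API.
            return fn

    def added_old(mctx, status_added):
        s = set(status_added)
        return [f for f in mctx.subset if f in s]   # closed to the subset

    def added_new(mctx, status_added):
        s = set(status_added)
        return mctx.predicate(s.__contains__, predrepr='added')  # a matcher

    old = added_old(OldMatchctx(['a.txt', 'b.py']), ['b.py'])
    new = added_new(NewMatchctx(), ['b.py'])
    print(old)           # ['b.py']
    print(new('b.py'))   # True -- evaluated lazily, one file at a time

Because the new style never builds intermediate file lists, the boolean operators in the second hunk below can combine predicates with real matcher objects instead of repeatedly filtering lists.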
@@ -1,399 +1,401 b''
1 # lfs - hash-preserving large file support using Git-LFS protocol
1 # lfs - hash-preserving large file support using Git-LFS protocol
2 #
2 #
3 # Copyright 2017 Facebook, Inc.
3 # Copyright 2017 Facebook, Inc.
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 """lfs - large file support (EXPERIMENTAL)
8 """lfs - large file support (EXPERIMENTAL)
9
9
10 This extension allows large files to be tracked outside of the normal
10 This extension allows large files to be tracked outside of the normal
11 repository storage and stored on a centralized server, similar to the
11 repository storage and stored on a centralized server, similar to the
12 ``largefiles`` extension. The ``git-lfs`` protocol is used when
12 ``largefiles`` extension. The ``git-lfs`` protocol is used when
13 communicating with the server, so existing git infrastructure can be
13 communicating with the server, so existing git infrastructure can be
14 harnessed. Even though the files are stored outside of the repository,
14 harnessed. Even though the files are stored outside of the repository,
15 they are still integrity checked in the same manner as normal files.
15 they are still integrity checked in the same manner as normal files.
16
16
17 The files stored outside of the repository are downloaded on demand,
17 The files stored outside of the repository are downloaded on demand,
18 which reduces the time to clone, and possibly the local disk usage.
18 which reduces the time to clone, and possibly the local disk usage.
19 This changes fundamental workflows in a DVCS, so careful thought
19 This changes fundamental workflows in a DVCS, so careful thought
20 should be given before deploying it. :hg:`convert` can be used to
20 should be given before deploying it. :hg:`convert` can be used to
21 convert LFS repositories to normal repositories that no longer
21 convert LFS repositories to normal repositories that no longer
22 require this extension, and do so without changing the commit hashes.
22 require this extension, and do so without changing the commit hashes.
23 This allows the extension to be disabled if the centralized workflow
23 This allows the extension to be disabled if the centralized workflow
24 becomes burdensome. However, the pre and post convert clones will
24 becomes burdensome. However, the pre and post convert clones will
25 not be able to communicate with each other unless the extension is
25 not be able to communicate with each other unless the extension is
26 enabled on both.
26 enabled on both.
27
27
28 To start a new repository, or to add LFS files to an existing one, just
28 To start a new repository, or to add LFS files to an existing one, just
29 create an ``.hglfs`` file as described below in the root directory of
29 create an ``.hglfs`` file as described below in the root directory of
30 the repository. Typically, this file should be put under version
30 the repository. Typically, this file should be put under version
31 control, so that the settings will propagate to other repositories with
31 control, so that the settings will propagate to other repositories with
32 push and pull. During any commit, Mercurial will consult this file to
32 push and pull. During any commit, Mercurial will consult this file to
33 determine if an added or modified file should be stored externally. The
33 determine if an added or modified file should be stored externally. The
34 type of storage depends on the characteristics of the file at each
34 type of storage depends on the characteristics of the file at each
35 commit. A file that is near a size threshold may switch back and forth
35 commit. A file that is near a size threshold may switch back and forth
36 between LFS and normal storage, as needed.
36 between LFS and normal storage, as needed.
37
37
38 Alternately, both normal repositories and largefile controlled
38 Alternately, both normal repositories and largefile controlled
39 repositories can be converted to LFS by using :hg:`convert` and the
39 repositories can be converted to LFS by using :hg:`convert` and the
40 ``lfs.track`` config option described below. The ``.hglfs`` file
40 ``lfs.track`` config option described below. The ``.hglfs`` file
41 should then be created and added, to control subsequent LFS selection.
41 should then be created and added, to control subsequent LFS selection.
42 The hashes are also unchanged in this case. The LFS and non-LFS
42 The hashes are also unchanged in this case. The LFS and non-LFS
43 repositories can be distinguished because the LFS repository will
43 repositories can be distinguished because the LFS repository will
44 abort any command if this extension is disabled.
44 abort any command if this extension is disabled.
45
45
46 Committed LFS files are held locally, until the repository is pushed.
46 Committed LFS files are held locally, until the repository is pushed.
47 Prior to pushing the normal repository data, the LFS files that are
47 Prior to pushing the normal repository data, the LFS files that are
48 tracked by the outgoing commits are automatically uploaded to the
48 tracked by the outgoing commits are automatically uploaded to the
49 configured central server. No LFS files are transferred on
49 configured central server. No LFS files are transferred on
50 :hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
50 :hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
51 demand as they need to be read, if a cached copy cannot be found
51 demand as they need to be read, if a cached copy cannot be found
52 locally. Both committing and downloading an LFS file will link the
52 locally. Both committing and downloading an LFS file will link the
53 file to a usercache, to speed up future access. See the `usercache`
53 file to a usercache, to speed up future access. See the `usercache`
54 config setting described below.
54 config setting described below.
55
55
56 .hglfs::
56 .hglfs::
57
57
58 The extension reads its configuration from a versioned ``.hglfs``
58 The extension reads its configuration from a versioned ``.hglfs``
59 configuration file found in the root of the working directory. The
59 configuration file found in the root of the working directory. The
60 ``.hglfs`` file uses the same syntax as all other Mercurial
60 ``.hglfs`` file uses the same syntax as all other Mercurial
61 configuration files. It uses a single section, ``[track]``.
61 configuration files. It uses a single section, ``[track]``.
62
62
63 The ``[track]`` section specifies which files are stored as LFS (or
63 The ``[track]`` section specifies which files are stored as LFS (or
64 not). Each line is keyed by a file pattern, with a predicate value.
64 not). Each line is keyed by a file pattern, with a predicate value.
65 The first file pattern match is used, so put more specific patterns
65 The first file pattern match is used, so put more specific patterns
66 first. The available predicates are ``all()``, ``none()``, and
66 first. The available predicates are ``all()``, ``none()``, and
67 ``size()``. See "hg help filesets.size" for the latter.
67 ``size()``. See "hg help filesets.size" for the latter.
68
68
69 Example versioned ``.hglfs`` file::
69 Example versioned ``.hglfs`` file::
70
70
71 [track]
71 [track]
72 # No Makefile or python file, anywhere, will be LFS
72 # No Makefile or python file, anywhere, will be LFS
73 **Makefile = none()
73 **Makefile = none()
74 **.py = none()
74 **.py = none()
75
75
76 **.zip = all()
76 **.zip = all()
77 **.exe = size(">1MB")
77 **.exe = size(">1MB")
78
78
79 # Catchall for everything not matched above
79 # Catchall for everything not matched above
80 ** = size(">10MB")
80 ** = size(">10MB")
81
81
82 Configs::
82 Configs::
83
83
84 [lfs]
84 [lfs]
85 # Remote endpoint. Multiple protocols are supported:
85 # Remote endpoint. Multiple protocols are supported:
86 # - http(s)://user:pass@example.com/path
86 # - http(s)://user:pass@example.com/path
87 # git-lfs endpoint
87 # git-lfs endpoint
88 # - file:///tmp/path
88 # - file:///tmp/path
89 # local filesystem, usually for testing
89 # local filesystem, usually for testing
90 # if unset, lfs will assume the remote repository also handles blob storage
90 # if unset, lfs will assume the remote repository also handles blob storage
91 # for http(s) URLs. Otherwise, lfs will prompt to set this when it must
91 # for http(s) URLs. Otherwise, lfs will prompt to set this when it must
92 # use this value.
92 # use this value.
93 # (default: unset)
93 # (default: unset)
94 url = https://example.com/repo.git/info/lfs
94 url = https://example.com/repo.git/info/lfs
95
95
96 # Which files to track in LFS. Path tests are "**.extname" for file
96 # Which files to track in LFS. Path tests are "**.extname" for file
97 # extensions, and "path:under/some/directory" for path prefix. Both
97 # extensions, and "path:under/some/directory" for path prefix. Both
98 # are relative to the repository root.
98 # are relative to the repository root.
99 # File size can be tested with the "size()" fileset, and tests can be
99 # File size can be tested with the "size()" fileset, and tests can be
100 # joined with fileset operators. (See "hg help filesets.operators".)
100 # joined with fileset operators. (See "hg help filesets.operators".)
101 #
101 #
102 # Some examples:
102 # Some examples:
103 # - all() # everything
103 # - all() # everything
104 # - none() # nothing
104 # - none() # nothing
105 # - size(">20MB") # larger than 20MB
105 # - size(">20MB") # larger than 20MB
106 # - !**.txt # anything not a *.txt file
106 # - !**.txt # anything not a *.txt file
107 # - **.zip | **.tar.gz | **.7z # some types of compressed files
107 # - **.zip | **.tar.gz | **.7z # some types of compressed files
108 # - path:bin # files under "bin" in the project root
108 # - path:bin # files under "bin" in the project root
109 # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
109 # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
110 # | (path:bin & !path:/bin/README) | size(">1GB")
110 # | (path:bin & !path:/bin/README) | size(">1GB")
111 # (default: none())
111 # (default: none())
112 #
112 #
113 # This is ignored if there is a tracked '.hglfs' file, and this setting
113 # This is ignored if there is a tracked '.hglfs' file, and this setting
114 # will eventually be deprecated and removed.
114 # will eventually be deprecated and removed.
115 track = size(">10M")
115 track = size(">10M")
116
116
117 # how many times to retry before giving up on transferring an object
117 # how many times to retry before giving up on transferring an object
118 retry = 5
118 retry = 5
119
119
120 # the local directory to store lfs files for sharing across local clones.
120 # the local directory to store lfs files for sharing across local clones.
121 # If not set, the cache is located in an OS specific cache location.
121 # If not set, the cache is located in an OS specific cache location.
122 usercache = /path/to/global/cache
122 usercache = /path/to/global/cache
123 """
123 """
124
124
125 from __future__ import absolute_import
125 from __future__ import absolute_import
126
126
127 from mercurial.i18n import _
127 from mercurial.i18n import _
128
128
129 from mercurial import (
129 from mercurial import (
130 bundle2,
130 bundle2,
131 changegroup,
131 changegroup,
132 cmdutil,
132 cmdutil,
133 config,
133 config,
134 context,
134 context,
135 error,
135 error,
136 exchange,
136 exchange,
137 extensions,
137 extensions,
138 filelog,
138 filelog,
139 fileset,
139 fileset,
140 hg,
140 hg,
141 localrepo,
141 localrepo,
142 minifileset,
142 minifileset,
143 node,
143 node,
144 pycompat,
144 pycompat,
145 registrar,
145 registrar,
146 revlog,
146 revlog,
147 scmutil,
147 scmutil,
148 templateutil,
148 templateutil,
149 upgrade,
149 upgrade,
150 util,
150 util,
151 vfs as vfsmod,
151 vfs as vfsmod,
152 wireprotoserver,
152 wireprotoserver,
153 wireprotov1server,
153 wireprotov1server,
154 )
154 )
155
155
156 from . import (
156 from . import (
157 blobstore,
157 blobstore,
158 wireprotolfsserver,
158 wireprotolfsserver,
159 wrapper,
159 wrapper,
160 )
160 )
161
161
162 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
162 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
163 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
163 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
164 # be specifying the version(s) of Mercurial they are tested with, or
164 # be specifying the version(s) of Mercurial they are tested with, or
165 # leave the attribute unspecified.
165 # leave the attribute unspecified.
166 testedwith = 'ships-with-hg-core'
166 testedwith = 'ships-with-hg-core'
167
167
168 configtable = {}
168 configtable = {}
169 configitem = registrar.configitem(configtable)
169 configitem = registrar.configitem(configtable)
170
170
171 configitem('experimental', 'lfs.serve',
171 configitem('experimental', 'lfs.serve',
172 default=True,
172 default=True,
173 )
173 )
174 configitem('experimental', 'lfs.user-agent',
174 configitem('experimental', 'lfs.user-agent',
175 default=None,
175 default=None,
176 )
176 )
177 configitem('experimental', 'lfs.disableusercache',
177 configitem('experimental', 'lfs.disableusercache',
178 default=False,
178 default=False,
179 )
179 )
180 configitem('experimental', 'lfs.worker-enable',
180 configitem('experimental', 'lfs.worker-enable',
181 default=False,
181 default=False,
182 )
182 )
183
183
184 configitem('lfs', 'url',
184 configitem('lfs', 'url',
185 default=None,
185 default=None,
186 )
186 )
187 configitem('lfs', 'usercache',
187 configitem('lfs', 'usercache',
188 default=None,
188 default=None,
189 )
189 )
190 # Deprecated
190 # Deprecated
191 configitem('lfs', 'threshold',
191 configitem('lfs', 'threshold',
192 default=None,
192 default=None,
193 )
193 )
194 configitem('lfs', 'track',
194 configitem('lfs', 'track',
195 default='none()',
195 default='none()',
196 )
196 )
197 configitem('lfs', 'retry',
197 configitem('lfs', 'retry',
198 default=5,
198 default=5,
199 )
199 )
200
200
201 cmdtable = {}
201 cmdtable = {}
202 command = registrar.command(cmdtable)
202 command = registrar.command(cmdtable)
203
203
204 templatekeyword = registrar.templatekeyword()
204 templatekeyword = registrar.templatekeyword()
205 filesetpredicate = registrar.filesetpredicate()
205 filesetpredicate = registrar.filesetpredicate()
206
206
207 def featuresetup(ui, supported):
207 def featuresetup(ui, supported):
208 # don't die on seeing a repo with the lfs requirement
208 # don't die on seeing a repo with the lfs requirement
209 supported |= {'lfs'}
209 supported |= {'lfs'}
210
210
211 def uisetup(ui):
211 def uisetup(ui):
212 localrepo.featuresetupfuncs.add(featuresetup)
212 localrepo.featuresetupfuncs.add(featuresetup)
213
213
214 def reposetup(ui, repo):
214 def reposetup(ui, repo):
215 # Nothing to do with a remote repo
215 # Nothing to do with a remote repo
216 if not repo.local():
216 if not repo.local():
217 return
217 return
218
218
219 repo.svfs.lfslocalblobstore = blobstore.local(repo)
219 repo.svfs.lfslocalblobstore = blobstore.local(repo)
220 repo.svfs.lfsremoteblobstore = blobstore.remote(repo)
220 repo.svfs.lfsremoteblobstore = blobstore.remote(repo)
221
221
222 class lfsrepo(repo.__class__):
222 class lfsrepo(repo.__class__):
223 @localrepo.unfilteredmethod
223 @localrepo.unfilteredmethod
224 def commitctx(self, ctx, error=False):
224 def commitctx(self, ctx, error=False):
225 repo.svfs.options['lfstrack'] = _trackedmatcher(self)
225 repo.svfs.options['lfstrack'] = _trackedmatcher(self)
226 return super(lfsrepo, self).commitctx(ctx, error)
226 return super(lfsrepo, self).commitctx(ctx, error)
227
227
228 repo.__class__ = lfsrepo
228 repo.__class__ = lfsrepo
229
229
230 if 'lfs' not in repo.requirements:
230 if 'lfs' not in repo.requirements:
231 def checkrequireslfs(ui, repo, **kwargs):
231 def checkrequireslfs(ui, repo, **kwargs):
232 if 'lfs' not in repo.requirements:
232 if 'lfs' not in repo.requirements:
233 last = kwargs.get(r'node_last')
233 last = kwargs.get(r'node_last')
234 _bin = node.bin
234 _bin = node.bin
235 if last:
235 if last:
236 s = repo.set('%n:%n', _bin(kwargs[r'node']), _bin(last))
236 s = repo.set('%n:%n', _bin(kwargs[r'node']), _bin(last))
237 else:
237 else:
238 s = repo.set('%n', _bin(kwargs[r'node']))
238 s = repo.set('%n', _bin(kwargs[r'node']))
239 match = repo.narrowmatch()
239 match = repo.narrowmatch()
240 for ctx in s:
240 for ctx in s:
241 # TODO: is there a way to just walk the files in the commit?
241 # TODO: is there a way to just walk the files in the commit?
242 if any(ctx[f].islfs() for f in ctx.files()
242 if any(ctx[f].islfs() for f in ctx.files()
243 if f in ctx and match(f)):
243 if f in ctx and match(f)):
244 repo.requirements.add('lfs')
244 repo.requirements.add('lfs')
245 repo._writerequirements()
245 repo._writerequirements()
246 repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
246 repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
247 break
247 break
248
248
249 ui.setconfig('hooks', 'commit.lfs', checkrequireslfs, 'lfs')
249 ui.setconfig('hooks', 'commit.lfs', checkrequireslfs, 'lfs')
250 ui.setconfig('hooks', 'pretxnchangegroup.lfs', checkrequireslfs, 'lfs')
250 ui.setconfig('hooks', 'pretxnchangegroup.lfs', checkrequireslfs, 'lfs')
251 else:
251 else:
252 repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
252 repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
253
253
254 def _trackedmatcher(repo):
254 def _trackedmatcher(repo):
255 """Return a function (path, size) -> bool indicating whether or not to
255 """Return a function (path, size) -> bool indicating whether or not to
256 track a given file with lfs."""
256 track a given file with lfs."""
257 if not repo.wvfs.exists('.hglfs'):
257 if not repo.wvfs.exists('.hglfs'):
258 # No '.hglfs' in wdir. Fallback to config for now.
258 # No '.hglfs' in wdir. Fallback to config for now.
259 trackspec = repo.ui.config('lfs', 'track')
259 trackspec = repo.ui.config('lfs', 'track')
260
260
261 # deprecated config: lfs.threshold
261 # deprecated config: lfs.threshold
262 threshold = repo.ui.configbytes('lfs', 'threshold')
262 threshold = repo.ui.configbytes('lfs', 'threshold')
263 if threshold:
263 if threshold:
264 fileset.parse(trackspec) # make sure syntax errors are confined
264 fileset.parse(trackspec) # make sure syntax errors are confined
265 trackspec = "(%s) | size('>%d')" % (trackspec, threshold)
265 trackspec = "(%s) | size('>%d')" % (trackspec, threshold)
266
266
267 return minifileset.compile(trackspec)
267 return minifileset.compile(trackspec)
268
268
269 data = repo.wvfs.tryread('.hglfs')
269 data = repo.wvfs.tryread('.hglfs')
270 if not data:
270 if not data:
271 return lambda p, s: False
271 return lambda p, s: False
272
272
273 # Parse errors here will abort with a message that points to the .hglfs file
273 # Parse errors here will abort with a message that points to the .hglfs file
274 # and line number.
274 # and line number.
275 cfg = config.config()
275 cfg = config.config()
276 cfg.parse('.hglfs', data)
276 cfg.parse('.hglfs', data)
277
277
278 try:
278 try:
279 rules = [(minifileset.compile(pattern), minifileset.compile(rule))
279 rules = [(minifileset.compile(pattern), minifileset.compile(rule))
280 for pattern, rule in cfg.items('track')]
280 for pattern, rule in cfg.items('track')]
281 except error.ParseError as e:
281 except error.ParseError as e:
282 # The original exception gives no indicator that the error is in the
282 # The original exception gives no indicator that the error is in the
283 # .hglfs file, so add that.
283 # .hglfs file, so add that.
284
284
285 # TODO: See if the line number of the file can be made available.
285 # TODO: See if the line number of the file can be made available.
286 raise error.Abort(_('parse error in .hglfs: %s') % e)
286 raise error.Abort(_('parse error in .hglfs: %s') % e)
287
287
288 def _match(path, size):
288 def _match(path, size):
289 for pat, rule in rules:
289 for pat, rule in rules:
290 if pat(path, size):
290 if pat(path, size):
291 return rule(path, size)
291 return rule(path, size)
292
292
293 return False
293 return False
294
294
295 return _match
295 return _match
296
296
297 def wrapfilelog(filelog):
297 def wrapfilelog(filelog):
298 wrapfunction = extensions.wrapfunction
298 wrapfunction = extensions.wrapfunction
299
299
300 wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
300 wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
301 wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
301 wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
302 wrapfunction(filelog, 'size', wrapper.filelogsize)
302 wrapfunction(filelog, 'size', wrapper.filelogsize)
303
303
304 def extsetup(ui):
304 def extsetup(ui):
305 wrapfilelog(filelog.filelog)
305 wrapfilelog(filelog.filelog)
306
306
307 wrapfunction = extensions.wrapfunction
307 wrapfunction = extensions.wrapfunction
308
308
309 wrapfunction(cmdutil, '_updatecatformatter', wrapper._updatecatformatter)
309 wrapfunction(cmdutil, '_updatecatformatter', wrapper._updatecatformatter)
310 wrapfunction(scmutil, 'wrapconvertsink', wrapper.convertsink)
310 wrapfunction(scmutil, 'wrapconvertsink', wrapper.convertsink)
311
311
312 wrapfunction(upgrade, '_finishdatamigration',
312 wrapfunction(upgrade, '_finishdatamigration',
313 wrapper.upgradefinishdatamigration)
313 wrapper.upgradefinishdatamigration)
314
314
315 wrapfunction(upgrade, 'preservedrequirements',
315 wrapfunction(upgrade, 'preservedrequirements',
316 wrapper.upgraderequirements)
316 wrapper.upgraderequirements)
317
317
318 wrapfunction(upgrade, 'supporteddestrequirements',
318 wrapfunction(upgrade, 'supporteddestrequirements',
319 wrapper.upgraderequirements)
319 wrapper.upgraderequirements)
320
320
321 wrapfunction(changegroup,
321 wrapfunction(changegroup,
322 'allsupportedversions',
322 'allsupportedversions',
323 wrapper.allsupportedversions)
323 wrapper.allsupportedversions)
324
324
325 wrapfunction(exchange, 'push', wrapper.push)
325 wrapfunction(exchange, 'push', wrapper.push)
326 wrapfunction(wireprotov1server, '_capabilities', wrapper._capabilities)
326 wrapfunction(wireprotov1server, '_capabilities', wrapper._capabilities)
327 wrapfunction(wireprotoserver, 'handlewsgirequest',
327 wrapfunction(wireprotoserver, 'handlewsgirequest',
328 wireprotolfsserver.handlewsgirequest)
328 wireprotolfsserver.handlewsgirequest)
329
329
330 wrapfunction(context.basefilectx, 'cmp', wrapper.filectxcmp)
330 wrapfunction(context.basefilectx, 'cmp', wrapper.filectxcmp)
331 wrapfunction(context.basefilectx, 'isbinary', wrapper.filectxisbinary)
331 wrapfunction(context.basefilectx, 'isbinary', wrapper.filectxisbinary)
332 context.basefilectx.islfs = wrapper.filectxislfs
332 context.basefilectx.islfs = wrapper.filectxislfs
333
333
334 revlog.addflagprocessor(
334 revlog.addflagprocessor(
335 revlog.REVIDX_EXTSTORED,
335 revlog.REVIDX_EXTSTORED,
336 (
336 (
337 wrapper.readfromstore,
337 wrapper.readfromstore,
338 wrapper.writetostore,
338 wrapper.writetostore,
339 wrapper.bypasscheckhash,
339 wrapper.bypasscheckhash,
340 ),
340 ),
341 )
341 )
342
342
343 wrapfunction(hg, 'clone', wrapper.hgclone)
343 wrapfunction(hg, 'clone', wrapper.hgclone)
344 wrapfunction(hg, 'postshare', wrapper.hgpostshare)
344 wrapfunction(hg, 'postshare', wrapper.hgpostshare)
345
345
346 scmutil.fileprefetchhooks.add('lfs', wrapper._prefetchfiles)
346 scmutil.fileprefetchhooks.add('lfs', wrapper._prefetchfiles)
347
347
348 # Make bundle choose changegroup3 instead of changegroup2. This affects
348 # Make bundle choose changegroup3 instead of changegroup2. This affects
349 # "hg bundle" command. Note: it does not cover all bundle formats like
349 # "hg bundle" command. Note: it does not cover all bundle formats like
350 # "packed1". Using "packed1" with lfs will likely cause trouble.
350 # "packed1". Using "packed1" with lfs will likely cause trouble.
351 exchange._bundlespeccontentopts["v2"]["cg.version"] = "03"
351 exchange._bundlespeccontentopts["v2"]["cg.version"] = "03"
352
352
353 # bundlerepo uses "vfsmod.readonlyvfs(othervfs)", we need to make sure lfs
353 # bundlerepo uses "vfsmod.readonlyvfs(othervfs)", we need to make sure lfs
354 # options and blob stores are passed from othervfs to the new readonlyvfs.
354 # options and blob stores are passed from othervfs to the new readonlyvfs.
355 wrapfunction(vfsmod.readonlyvfs, '__init__', wrapper.vfsinit)
355 wrapfunction(vfsmod.readonlyvfs, '__init__', wrapper.vfsinit)
356
356
357 # when writing a bundle via "hg bundle" command, upload related LFS blobs
357 # when writing a bundle via "hg bundle" command, upload related LFS blobs
358 wrapfunction(bundle2, 'writenewbundle', wrapper.writenewbundle)
358 wrapfunction(bundle2, 'writenewbundle', wrapper.writenewbundle)
359
359
360 @filesetpredicate('lfs()', callstatus=True)
360 @filesetpredicate('lfs()', callstatus=True)
361 def lfsfileset(mctx, x):
361 def lfsfileset(mctx, x):
362 """File that uses LFS storage."""
362 """File that uses LFS storage."""
363 # i18n: "lfs" is a keyword
363 # i18n: "lfs" is a keyword
364 fileset.getargs(x, 0, 0, _("lfs takes no arguments"))
364 fileset.getargs(x, 0, 0, _("lfs takes no arguments"))
365 return [f for f in mctx.subset
365 ctx = mctx.ctx
366 if wrapper.pointerfromctx(mctx.ctx, f, removed=True) is not None]
366 def lfsfilep(f):
367 return wrapper.pointerfromctx(ctx, f, removed=True) is not None
368 return mctx.predicate(lfsfilep, predrepr='<lfs>')
367
369
368 @templatekeyword('lfs_files', requires={'ctx'})
370 @templatekeyword('lfs_files', requires={'ctx'})
369 def lfsfiles(context, mapping):
371 def lfsfiles(context, mapping):
370 """List of strings. All files modified, added, or removed by this
372 """List of strings. All files modified, added, or removed by this
371 changeset."""
373 changeset."""
372 ctx = context.resource(mapping, 'ctx')
374 ctx = context.resource(mapping, 'ctx')
373
375
374 pointers = wrapper.pointersfromctx(ctx, removed=True) # {path: pointer}
376 pointers = wrapper.pointersfromctx(ctx, removed=True) # {path: pointer}
375 files = sorted(pointers.keys())
377 files = sorted(pointers.keys())
376
378
377 def pointer(v):
379 def pointer(v):
378 # In the file spec, version is first and the other keys are sorted.
380 # In the file spec, version is first and the other keys are sorted.
379 sortkeyfunc = lambda x: (x[0] != 'version', x)
381 sortkeyfunc = lambda x: (x[0] != 'version', x)
380 items = sorted(pointers[v].iteritems(), key=sortkeyfunc)
382 items = sorted(pointers[v].iteritems(), key=sortkeyfunc)
381 return util.sortdict(items)
383 return util.sortdict(items)
382
384
383 makemap = lambda v: {
385 makemap = lambda v: {
384 'file': v,
386 'file': v,
385 'lfsoid': pointers[v].oid() if pointers[v] else None,
387 'lfsoid': pointers[v].oid() if pointers[v] else None,
386 'lfspointer': templateutil.hybriddict(pointer(v)),
388 'lfspointer': templateutil.hybriddict(pointer(v)),
387 }
389 }
388
390
389 # TODO: make the separator ', '?
391 # TODO: make the separator ', '?
390 f = templateutil._showcompatlist(context, mapping, 'lfs_file', files)
392 f = templateutil._showcompatlist(context, mapping, 'lfs_file', files)
391 return templateutil.hybrid(f, files, makemap, pycompat.identity)
393 return templateutil.hybrid(f, files, makemap, pycompat.identity)
392
394
393 @command('debuglfsupload',
395 @command('debuglfsupload',
394 [('r', 'rev', [], _('upload large files introduced by REV'))])
396 [('r', 'rev', [], _('upload large files introduced by REV'))])
395 def debuglfsupload(ui, repo, **opts):
397 def debuglfsupload(ui, repo, **opts):
396 """upload lfs blobs added by the working copy parent or given revisions"""
398 """upload lfs blobs added by the working copy parent or given revisions"""
397 revs = opts.get(r'rev', [])
399 revs = opts.get(r'rev', [])
398 pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
400 pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
399 wrapper.uploadblobs(repo, pointers)
401 wrapper.uploadblobs(repo, pointers)
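``_trackedmatcher`` above compiles every ``[track]`` line of ``.hglfs`` into a ``(pattern, rule)`` pair and lets the first matching pattern decide, so more specific patterns must come first. Here is a standalone sketch of that first-match evaluation, with hand-written lambdas standing in for compiled ``minifileset`` expressions and rules mirroring the example ``.hglfs`` from the docstring:

    # First-match evaluation in the style of _match() above; the lambdas are
    # illustrative stand-ins for minifileset.compile() results.
    MB = 1024 * 1024

    rules = [
        (lambda path, size: path.endswith('Makefile'), lambda path, size: False),           # **Makefile = none()
        (lambda path, size: path.endswith('.py'),      lambda path, size: False),           # **.py = none()
        (lambda path, size: path.endswith('.zip'),     lambda path, size: True),            # **.zip = all()
        (lambda path, size: path.endswith('.exe'),     lambda path, size: size > 1 * MB),   # **.exe = size(">1MB")
        (lambda path, size: True,                      lambda path, size: size > 10 * MB),  # ** = size(">10MB")
    ]

    def track(path, size):
        for pat, rule in rules:
            if pat(path, size):
                return rule(path, size)   # first matching pattern decides
        return False

    print(track('src/big.py', 50 * MB))   # False: **.py wins before the catchall
    print(track('bin/tool.exe', 2 * MB))  # True: matched **.exe and exceeds 1MB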
@@ -1,728 +1,721 b''
1 # fileset.py - file set queries for mercurial
1 # fileset.py - file set queries for mercurial
2 #
2 #
3 # Copyright 2010 Matt Mackall <mpm@selenic.com>
3 # Copyright 2010 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import errno
10 import errno
11 import re
11 import re
12
12
13 from .i18n import _
13 from .i18n import _
14 from . import (
14 from . import (
15 error,
15 error,
16 match as matchmod,
16 match as matchmod,
17 merge,
17 merge,
18 parser,
18 parser,
19 pycompat,
19 pycompat,
20 registrar,
20 registrar,
21 scmutil,
21 scmutil,
22 util,
22 util,
23 )
23 )
24 from .utils import (
24 from .utils import (
25 stringutil,
25 stringutil,
26 )
26 )
27
27
28 elements = {
28 elements = {
29 # token-type: binding-strength, primary, prefix, infix, suffix
29 # token-type: binding-strength, primary, prefix, infix, suffix
30 "(": (20, None, ("group", 1, ")"), ("func", 1, ")"), None),
30 "(": (20, None, ("group", 1, ")"), ("func", 1, ")"), None),
31 ":": (15, None, None, ("kindpat", 15), None),
31 ":": (15, None, None, ("kindpat", 15), None),
32 "-": (5, None, ("negate", 19), ("minus", 5), None),
32 "-": (5, None, ("negate", 19), ("minus", 5), None),
33 "not": (10, None, ("not", 10), None, None),
33 "not": (10, None, ("not", 10), None, None),
34 "!": (10, None, ("not", 10), None, None),
34 "!": (10, None, ("not", 10), None, None),
35 "and": (5, None, None, ("and", 5), None),
35 "and": (5, None, None, ("and", 5), None),
36 "&": (5, None, None, ("and", 5), None),
36 "&": (5, None, None, ("and", 5), None),
37 "or": (4, None, None, ("or", 4), None),
37 "or": (4, None, None, ("or", 4), None),
38 "|": (4, None, None, ("or", 4), None),
38 "|": (4, None, None, ("or", 4), None),
39 "+": (4, None, None, ("or", 4), None),
39 "+": (4, None, None, ("or", 4), None),
40 ",": (2, None, None, ("list", 2), None),
40 ",": (2, None, None, ("list", 2), None),
41 ")": (0, None, None, None, None),
41 ")": (0, None, None, None, None),
42 "symbol": (0, "symbol", None, None, None),
42 "symbol": (0, "symbol", None, None, None),
43 "string": (0, "string", None, None, None),
43 "string": (0, "string", None, None, None),
44 "end": (0, None, None, None, None),
44 "end": (0, None, None, None, None),
45 }
45 }
46
46
47 keywords = {'and', 'or', 'not'}
47 keywords = {'and', 'or', 'not'}
48
48
49 globchars = ".*{}[]?/\\_"
49 globchars = ".*{}[]?/\\_"
50
50
51 def tokenize(program):
51 def tokenize(program):
52 pos, l = 0, len(program)
52 pos, l = 0, len(program)
53 program = pycompat.bytestr(program)
53 program = pycompat.bytestr(program)
54 while pos < l:
54 while pos < l:
55 c = program[pos]
55 c = program[pos]
56 if c.isspace(): # skip inter-token whitespace
56 if c.isspace(): # skip inter-token whitespace
57 pass
57 pass
58 elif c in "(),-:|&+!": # handle simple operators
58 elif c in "(),-:|&+!": # handle simple operators
59 yield (c, None, pos)
59 yield (c, None, pos)
60 elif (c in '"\'' or c == 'r' and
60 elif (c in '"\'' or c == 'r' and
61 program[pos:pos + 2] in ("r'", 'r"')): # handle quoted strings
61 program[pos:pos + 2] in ("r'", 'r"')): # handle quoted strings
62 if c == 'r':
62 if c == 'r':
63 pos += 1
63 pos += 1
64 c = program[pos]
64 c = program[pos]
65 decode = lambda x: x
65 decode = lambda x: x
66 else:
66 else:
67 decode = parser.unescapestr
67 decode = parser.unescapestr
68 pos += 1
68 pos += 1
69 s = pos
69 s = pos
70 while pos < l: # find closing quote
70 while pos < l: # find closing quote
71 d = program[pos]
71 d = program[pos]
72 if d == '\\': # skip over escaped characters
72 if d == '\\': # skip over escaped characters
73 pos += 2
73 pos += 2
74 continue
74 continue
75 if d == c:
75 if d == c:
76 yield ('string', decode(program[s:pos]), s)
76 yield ('string', decode(program[s:pos]), s)
77 break
77 break
78 pos += 1
78 pos += 1
79 else:
79 else:
80 raise error.ParseError(_("unterminated string"), s)
80 raise error.ParseError(_("unterminated string"), s)
81 elif c.isalnum() or c in globchars or ord(c) > 127:
81 elif c.isalnum() or c in globchars or ord(c) > 127:
82 # gather up a symbol/keyword
82 # gather up a symbol/keyword
83 s = pos
83 s = pos
84 pos += 1
84 pos += 1
85 while pos < l: # find end of symbol
85 while pos < l: # find end of symbol
86 d = program[pos]
86 d = program[pos]
87 if not (d.isalnum() or d in globchars or ord(d) > 127):
87 if not (d.isalnum() or d in globchars or ord(d) > 127):
88 break
88 break
89 pos += 1
89 pos += 1
90 sym = program[s:pos]
90 sym = program[s:pos]
91 if sym in keywords: # operator keywords
91 if sym in keywords: # operator keywords
92 yield (sym, None, s)
92 yield (sym, None, s)
93 else:
93 else:
94 yield ('symbol', sym, s)
94 yield ('symbol', sym, s)
95 pos -= 1
95 pos -= 1
96 else:
96 else:
97 raise error.ParseError(_("syntax error"), pos)
97 raise error.ParseError(_("syntax error"), pos)
98 pos += 1
98 pos += 1
99 yield ('end', None, pos)
99 yield ('end', None, pos)
100
100
101 def parse(expr):
101 def parse(expr):
102 p = parser.parser(elements)
102 p = parser.parser(elements)
103 tree, pos = p.parse(tokenize(expr))
103 tree, pos = p.parse(tokenize(expr))
104 if pos != len(expr):
104 if pos != len(expr):
105 raise error.ParseError(_("invalid token"), pos)
105 raise error.ParseError(_("invalid token"), pos)
106 return tree
106 return tree
107
107
108 def getsymbol(x):
108 def getsymbol(x):
109 if x and x[0] == 'symbol':
109 if x and x[0] == 'symbol':
110 return x[1]
110 return x[1]
111 raise error.ParseError(_('not a symbol'))
111 raise error.ParseError(_('not a symbol'))
112
112
113 def getstring(x, err):
113 def getstring(x, err):
114 if x and (x[0] == 'string' or x[0] == 'symbol'):
114 if x and (x[0] == 'string' or x[0] == 'symbol'):
115 return x[1]
115 return x[1]
116 raise error.ParseError(err)
116 raise error.ParseError(err)
117
117
118 def _getkindpat(x, y, allkinds, err):
118 def _getkindpat(x, y, allkinds, err):
119 kind = getsymbol(x)
119 kind = getsymbol(x)
120 pat = getstring(y, err)
120 pat = getstring(y, err)
121 if kind not in allkinds:
121 if kind not in allkinds:
122 raise error.ParseError(_("invalid pattern kind: %s") % kind)
122 raise error.ParseError(_("invalid pattern kind: %s") % kind)
123 return '%s:%s' % (kind, pat)
123 return '%s:%s' % (kind, pat)
124
124
125 def getpattern(x, allkinds, err):
125 def getpattern(x, allkinds, err):
126 if x and x[0] == 'kindpat':
126 if x and x[0] == 'kindpat':
127 return _getkindpat(x[1], x[2], allkinds, err)
127 return _getkindpat(x[1], x[2], allkinds, err)
128 return getstring(x, err)
128 return getstring(x, err)
129
129
130 def getlist(x):
130 def getlist(x):
131 if not x:
131 if not x:
132 return []
132 return []
133 if x[0] == 'list':
133 if x[0] == 'list':
134 return getlist(x[1]) + [x[2]]
134 return getlist(x[1]) + [x[2]]
135 return [x]
135 return [x]
136
136
137 def getargs(x, min, max, err):
137 def getargs(x, min, max, err):
138 l = getlist(x)
138 l = getlist(x)
139 if len(l) < min or len(l) > max:
139 if len(l) < min or len(l) > max:
140 raise error.ParseError(err)
140 raise error.ParseError(err)
141 return l
141 return l
142
142
143 def getset(mctx, x):
143 def getmatch(mctx, x):
144 if not x:
144 if not x:
145 raise error.ParseError(_("missing argument"))
145 raise error.ParseError(_("missing argument"))
146 return methods[x[0]](mctx, *x[1:])
146 return methods[x[0]](mctx, *x[1:])
147
147
148 def stringset(mctx, x):
148 def stringmatch(mctx, x):
149 m = mctx.matcher([x])
149 return mctx.matcher([x])
150 return [f for f in mctx.subset if m(f)]
151
150
152 def kindpatset(mctx, x, y):
151 def kindpatmatch(mctx, x, y):
153 return stringset(mctx, _getkindpat(x, y, matchmod.allpatternkinds,
152 return stringmatch(mctx, _getkindpat(x, y, matchmod.allpatternkinds,
154 _("pattern must be a string")))
153 _("pattern must be a string")))
155
154
156 def andset(mctx, x, y):
155 def andmatch(mctx, x, y):
157 xl = set(getset(mctx, x))
156 xm = getmatch(mctx, x)
158 yl = getset(mctx, y)
157 ym = getmatch(mctx, y)
159 return [f for f in yl if f in xl]
158 return matchmod.intersectmatchers(xm, ym)
160
159
161 def orset(mctx, x, y):
160 def ormatch(mctx, x, y):
162 # needs optimizing
161 xm = getmatch(mctx, x)
163 xl = getset(mctx, x)
162 ym = getmatch(mctx, y)
164 yl = getset(mctx, y)
163 return matchmod.unionmatcher([xm, ym])
165 return xl + [f for f in yl if f not in xl]
166
164
167 def notset(mctx, x):
165 def notmatch(mctx, x):
168 s = set(getset(mctx, x))
166 m = getmatch(mctx, x)
169 return [r for r in mctx.subset if r not in s]
167 return mctx.predicate(lambda f: not m(f), predrepr=('<not %r>', m))
170
168
171 def minusset(mctx, x, y):
169 def minusmatch(mctx, x, y):
172 xl = getset(mctx, x)
170 xm = getmatch(mctx, x)
173 yl = set(getset(mctx, y))
171 ym = getmatch(mctx, y)
174 return [f for f in xl if f not in yl]
172 return matchmod.differencematcher(xm, ym)
175
173
176 def negateset(mctx, x):
174 def negatematch(mctx, x):
177 raise error.ParseError(_("can't use negate operator in this context"))
175 raise error.ParseError(_("can't use negate operator in this context"))
178
176
179 def listset(mctx, a, b):
177 def listmatch(mctx, x, y):
180 raise error.ParseError(_("can't use a list in this context"),
178 raise error.ParseError(_("can't use a list in this context"),
181 hint=_('see hg help "filesets.x or y"'))
179 hint=_('see hg help "filesets.x or y"'))
182
180
183 def func(mctx, a, b):
181 def func(mctx, a, b):
184 funcname = getsymbol(a)
182 funcname = getsymbol(a)
185 if funcname in symbols:
183 if funcname in symbols:
186 enabled = mctx._existingenabled
184 enabled = mctx._existingenabled
187 mctx._existingenabled = funcname in _existingcallers
185 mctx._existingenabled = funcname in _existingcallers
188 try:
186 try:
189 return symbols[funcname](mctx, b)
187 return symbols[funcname](mctx, b)
190 finally:
188 finally:
191 mctx._existingenabled = enabled
189 mctx._existingenabled = enabled
192
190
193 keep = lambda fn: getattr(fn, '__doc__', None) is not None
191 keep = lambda fn: getattr(fn, '__doc__', None) is not None
194
192
195 syms = [s for (s, fn) in symbols.items() if keep(fn)]
193 syms = [s for (s, fn) in symbols.items() if keep(fn)]
196 raise error.UnknownIdentifier(funcname, syms)
194 raise error.UnknownIdentifier(funcname, syms)
197
195
198 # symbols are callable like:
196 # symbols are callable like:
199 # fun(mctx, x)
197 # fun(mctx, x)
200 # with:
198 # with:
201 # mctx - current matchctx instance
199 # mctx - current matchctx instance
202 # x - argument in tree form
200 # x - argument in tree form
203 symbols = {}
201 symbols = {}
204
202
205 # filesets using matchctx.status()
203 # filesets using matchctx.status()
206 _statuscallers = set()
204 _statuscallers = set()
207
205
208 # filesets using matchctx.existing()
206 # filesets using matchctx.existing()
209 _existingcallers = set()
207 _existingcallers = set()
210
208
211 predicate = registrar.filesetpredicate()
209 predicate = registrar.filesetpredicate()
212
210
213 @predicate('modified()', callstatus=True)
211 @predicate('modified()', callstatus=True)
214 def modified(mctx, x):
212 def modified(mctx, x):
215 """File that is modified according to :hg:`status`.
213 """File that is modified according to :hg:`status`.
216 """
214 """
217 # i18n: "modified" is a keyword
215 # i18n: "modified" is a keyword
218 getargs(x, 0, 0, _("modified takes no arguments"))
216 getargs(x, 0, 0, _("modified takes no arguments"))
219 s = set(mctx.status().modified)
217 s = set(mctx.status().modified)
220 return [f for f in mctx.subset if f in s]
218 return mctx.predicate(s.__contains__, predrepr='modified')
221
219
222 @predicate('added()', callstatus=True)
220 @predicate('added()', callstatus=True)
223 def added(mctx, x):
221 def added(mctx, x):
224 """File that is added according to :hg:`status`.
222 """File that is added according to :hg:`status`.
225 """
223 """
226 # i18n: "added" is a keyword
224 # i18n: "added" is a keyword
227 getargs(x, 0, 0, _("added takes no arguments"))
225 getargs(x, 0, 0, _("added takes no arguments"))
228 s = set(mctx.status().added)
226 s = set(mctx.status().added)
229 return [f for f in mctx.subset if f in s]
227 return mctx.predicate(s.__contains__, predrepr='added')
230
228
231 @predicate('removed()', callstatus=True)
229 @predicate('removed()', callstatus=True)
232 def removed(mctx, x):
230 def removed(mctx, x):
233 """File that is removed according to :hg:`status`.
231 """File that is removed according to :hg:`status`.
234 """
232 """
235 # i18n: "removed" is a keyword
233 # i18n: "removed" is a keyword
236 getargs(x, 0, 0, _("removed takes no arguments"))
234 getargs(x, 0, 0, _("removed takes no arguments"))
237 s = set(mctx.status().removed)
235 s = set(mctx.status().removed)
238 return [f for f in mctx.subset if f in s]
236 return mctx.predicate(s.__contains__, predrepr='removed')
239
237
240 @predicate('deleted()', callstatus=True)
238 @predicate('deleted()', callstatus=True)
241 def deleted(mctx, x):
239 def deleted(mctx, x):
242 """Alias for ``missing()``.
240 """Alias for ``missing()``.
243 """
241 """
244 # i18n: "deleted" is a keyword
242 # i18n: "deleted" is a keyword
245 getargs(x, 0, 0, _("deleted takes no arguments"))
243 getargs(x, 0, 0, _("deleted takes no arguments"))
246 s = set(mctx.status().deleted)
244 s = set(mctx.status().deleted)
247 return [f for f in mctx.subset if f in s]
245 return mctx.predicate(s.__contains__, predrepr='deleted')
248
246
249 @predicate('missing()', callstatus=True)
247 @predicate('missing()', callstatus=True)
250 def missing(mctx, x):
248 def missing(mctx, x):
251 """File that is missing according to :hg:`status`.
249 """File that is missing according to :hg:`status`.
252 """
250 """
253 # i18n: "missing" is a keyword
251 # i18n: "missing" is a keyword
254 getargs(x, 0, 0, _("missing takes no arguments"))
252 getargs(x, 0, 0, _("missing takes no arguments"))
255 s = set(mctx.status().deleted)
253 s = set(mctx.status().deleted)
256 return [f for f in mctx.subset if f in s]
254 return mctx.predicate(s.__contains__, predrepr='deleted')
257
255
258 @predicate('unknown()', callstatus=True)
256 @predicate('unknown()', callstatus=True)
259 def unknown(mctx, x):
257 def unknown(mctx, x):
260 """File that is unknown according to :hg:`status`. These files will only be
258 """File that is unknown according to :hg:`status`."""
261 considered if this predicate is used.
262 """
263 # i18n: "unknown" is a keyword
259 # i18n: "unknown" is a keyword
264 getargs(x, 0, 0, _("unknown takes no arguments"))
260 getargs(x, 0, 0, _("unknown takes no arguments"))
265 s = set(mctx.status().unknown)
261 s = set(mctx.status().unknown)
266 return [f for f in mctx.subset if f in s]
262 return mctx.predicate(s.__contains__, predrepr='unknown')
267
263
268 @predicate('ignored()', callstatus=True)
264 @predicate('ignored()', callstatus=True)
269 def ignored(mctx, x):
265 def ignored(mctx, x):
270 """File that is ignored according to :hg:`status`. These files will only be
266 """File that is ignored according to :hg:`status`."""
271 considered if this predicate is used.
272 """
273 # i18n: "ignored" is a keyword
267 # i18n: "ignored" is a keyword
274 getargs(x, 0, 0, _("ignored takes no arguments"))
268 getargs(x, 0, 0, _("ignored takes no arguments"))
275 s = set(mctx.status().ignored)
269 s = set(mctx.status().ignored)
276 return [f for f in mctx.subset if f in s]
270 return mctx.predicate(s.__contains__, predrepr='ignored')
277
271
278 @predicate('clean()', callstatus=True)
272 @predicate('clean()', callstatus=True)
279 def clean(mctx, x):
273 def clean(mctx, x):
280 """File that is clean according to :hg:`status`.
274 """File that is clean according to :hg:`status`.
281 """
275 """
282 # i18n: "clean" is a keyword
276 # i18n: "clean" is a keyword
283 getargs(x, 0, 0, _("clean takes no arguments"))
277 getargs(x, 0, 0, _("clean takes no arguments"))
284 s = set(mctx.status().clean)
278 s = set(mctx.status().clean)
285 return [f for f in mctx.subset if f in s]
279 return mctx.predicate(s.__contains__, predrepr='clean')
286
280
287 @predicate('tracked()')
281 @predicate('tracked()')
288 def tracked(mctx, x):
282 def tracked(mctx, x):
289 """File that is under Mercurial control."""
283 """File that is under Mercurial control."""
290 # i18n: "tracked" is a keyword
284 # i18n: "tracked" is a keyword
291 getargs(x, 0, 0, _("tracked takes no arguments"))
285 getargs(x, 0, 0, _("tracked takes no arguments"))
292 return [f for f in mctx.subset if f in mctx.ctx]
286 return mctx.predicate(mctx.ctx.__contains__, predrepr='tracked')
293
287
294 @predicate('binary()', callexisting=True)
288 @predicate('binary()', callexisting=True)
295 def binary(mctx, x):
289 def binary(mctx, x):
296 """File that appears to be binary (contains NUL bytes).
290 """File that appears to be binary (contains NUL bytes).
297 """
291 """
298 # i18n: "binary" is a keyword
292 # i18n: "binary" is a keyword
299 getargs(x, 0, 0, _("binary takes no arguments"))
293 getargs(x, 0, 0, _("binary takes no arguments"))
300 return [f for f in mctx.existing() if mctx.ctx[f].isbinary()]
294 return mctx.fpredicate(lambda fctx: fctx.isbinary(),
295 predrepr='binary', cache=True)
301
296
302 @predicate('exec()', callexisting=True)
297 @predicate('exec()', callexisting=True)
303 def exec_(mctx, x):
298 def exec_(mctx, x):
304 """File that is marked as executable.
299 """File that is marked as executable.
305 """
300 """
306 # i18n: "exec" is a keyword
301 # i18n: "exec" is a keyword
307 getargs(x, 0, 0, _("exec takes no arguments"))
302 getargs(x, 0, 0, _("exec takes no arguments"))
308 return [f for f in mctx.existing() if mctx.ctx.flags(f) == 'x']
303 ctx = mctx.ctx
304 return mctx.predicate(lambda f: ctx.flags(f) == 'x', predrepr='exec')
309
305
310 @predicate('symlink()', callexisting=True)
306 @predicate('symlink()', callexisting=True)
311 def symlink(mctx, x):
307 def symlink(mctx, x):
312 """File that is marked as a symlink.
308 """File that is marked as a symlink.
313 """
309 """
314 # i18n: "symlink" is a keyword
310 # i18n: "symlink" is a keyword
315 getargs(x, 0, 0, _("symlink takes no arguments"))
311 getargs(x, 0, 0, _("symlink takes no arguments"))
316 return [f for f in mctx.existing() if mctx.ctx.flags(f) == 'l']
312 ctx = mctx.ctx
313 return mctx.predicate(lambda f: ctx.flags(f) == 'l', predrepr='symlink')
317
314
318 @predicate('resolved()')
315 @predicate('resolved()')
319 def resolved(mctx, x):
316 def resolved(mctx, x):
320 """File that is marked resolved according to :hg:`resolve -l`.
317 """File that is marked resolved according to :hg:`resolve -l`.
321 """
318 """
322 # i18n: "resolved" is a keyword
319 # i18n: "resolved" is a keyword
323 getargs(x, 0, 0, _("resolved takes no arguments"))
320 getargs(x, 0, 0, _("resolved takes no arguments"))
324 if mctx.ctx.rev() is not None:
321 if mctx.ctx.rev() is not None:
325 return []
322 return mctx.never()
326 ms = merge.mergestate.read(mctx.ctx.repo())
323 ms = merge.mergestate.read(mctx.ctx.repo())
327 return [f for f in mctx.subset if f in ms and ms[f] == 'r']
324 return mctx.predicate(lambda f: f in ms and ms[f] == 'r',
325 predrepr='resolved')
328
326
329 @predicate('unresolved()')
327 @predicate('unresolved()')
330 def unresolved(mctx, x):
328 def unresolved(mctx, x):
331 """File that is marked unresolved according to :hg:`resolve -l`.
329 """File that is marked unresolved according to :hg:`resolve -l`.
332 """
330 """
333 # i18n: "unresolved" is a keyword
331 # i18n: "unresolved" is a keyword
334 getargs(x, 0, 0, _("unresolved takes no arguments"))
332 getargs(x, 0, 0, _("unresolved takes no arguments"))
335 if mctx.ctx.rev() is not None:
333 if mctx.ctx.rev() is not None:
336 return []
334 return mctx.never()
337 ms = merge.mergestate.read(mctx.ctx.repo())
335 ms = merge.mergestate.read(mctx.ctx.repo())
338 return [f for f in mctx.subset if f in ms and ms[f] == 'u']
336 return mctx.predicate(lambda f: f in ms and ms[f] == 'u',
337 predrepr='unresolved')
339
338
340 @predicate('hgignore()')
339 @predicate('hgignore()')
341 def hgignore(mctx, x):
340 def hgignore(mctx, x):
342 """File that matches the active .hgignore pattern.
341 """File that matches the active .hgignore pattern.
343 """
342 """
344 # i18n: "hgignore" is a keyword
343 # i18n: "hgignore" is a keyword
345 getargs(x, 0, 0, _("hgignore takes no arguments"))
344 getargs(x, 0, 0, _("hgignore takes no arguments"))
346 ignore = mctx.ctx.repo().dirstate._ignore
345 return mctx.ctx.repo().dirstate._ignore
347 return [f for f in mctx.subset if ignore(f)]
348
346
349 @predicate('portable()')
347 @predicate('portable()')
350 def portable(mctx, x):
348 def portable(mctx, x):
351 """File that has a portable name. (This doesn't include filenames with case
349 """File that has a portable name. (This doesn't include filenames with case
352 collisions.)
350 collisions.)
353 """
351 """
354 # i18n: "portable" is a keyword
352 # i18n: "portable" is a keyword
355 getargs(x, 0, 0, _("portable takes no arguments"))
353 getargs(x, 0, 0, _("portable takes no arguments"))
356 checkwinfilename = util.checkwinfilename
354 return mctx.predicate(lambda f: util.checkwinfilename(f) is None,
357 return [f for f in mctx.subset if checkwinfilename(f) is None]
355 predrepr='portable')
358
356
359 @predicate('grep(regex)', callexisting=True)
357 @predicate('grep(regex)', callexisting=True)
360 def grep(mctx, x):
358 def grep(mctx, x):
361 """File contains the given regular expression.
359 """File contains the given regular expression.
362 """
360 """
363 try:
361 try:
364 # i18n: "grep" is a keyword
362 # i18n: "grep" is a keyword
365 r = re.compile(getstring(x, _("grep requires a pattern")))
363 r = re.compile(getstring(x, _("grep requires a pattern")))
366 except re.error as e:
364 except re.error as e:
367 raise error.ParseError(_('invalid match pattern: %s') %
365 raise error.ParseError(_('invalid match pattern: %s') %
368 stringutil.forcebytestr(e))
366 stringutil.forcebytestr(e))
369 return [f for f in mctx.existing() if r.search(mctx.ctx[f].data())]
367 return mctx.fpredicate(lambda fctx: r.search(fctx.data()),
368 predrepr=('grep(%r)', r.pattern), cache=True)
370
369
371 def _sizetomax(s):
370 def _sizetomax(s):
372 try:
371 try:
373 s = s.strip().lower()
372 s = s.strip().lower()
374 for k, v in util._sizeunits:
373 for k, v in util._sizeunits:
375 if s.endswith(k):
374 if s.endswith(k):
376 # max(4k) = 5k - 1, max(4.5k) = 4.6k - 1
375 # max(4k) = 5k - 1, max(4.5k) = 4.6k - 1
377 n = s[:-len(k)]
376 n = s[:-len(k)]
378 inc = 1.0
377 inc = 1.0
379 if "." in n:
378 if "." in n:
380 inc /= 10 ** len(n.split(".")[1])
379 inc /= 10 ** len(n.split(".")[1])
381 return int((float(n) + inc) * v) - 1
380 return int((float(n) + inc) * v) - 1
382 # no extension, this is a precise value
381 # no extension, this is a precise value
383 return int(s)
382 return int(s)
384 except ValueError:
383 except ValueError:
385 raise error.ParseError(_("couldn't parse size: %s") % s)
384 raise error.ParseError(_("couldn't parse size: %s") % s)
386
385
387 def sizematcher(expr):
386 def sizematcher(expr):
388 """Return a function(size) -> bool from the ``size()`` expression"""
387 """Return a function(size) -> bool from the ``size()`` expression"""
389 expr = expr.strip()
388 expr = expr.strip()
390 if '-' in expr: # do we have a range?
389 if '-' in expr: # do we have a range?
391 a, b = expr.split('-', 1)
390 a, b = expr.split('-', 1)
392 a = util.sizetoint(a)
391 a = util.sizetoint(a)
393 b = util.sizetoint(b)
392 b = util.sizetoint(b)
394 return lambda x: x >= a and x <= b
393 return lambda x: x >= a and x <= b
395 elif expr.startswith("<="):
394 elif expr.startswith("<="):
396 a = util.sizetoint(expr[2:])
395 a = util.sizetoint(expr[2:])
397 return lambda x: x <= a
396 return lambda x: x <= a
398 elif expr.startswith("<"):
397 elif expr.startswith("<"):
399 a = util.sizetoint(expr[1:])
398 a = util.sizetoint(expr[1:])
400 return lambda x: x < a
399 return lambda x: x < a
401 elif expr.startswith(">="):
400 elif expr.startswith(">="):
402 a = util.sizetoint(expr[2:])
401 a = util.sizetoint(expr[2:])
403 return lambda x: x >= a
402 return lambda x: x >= a
404 elif expr.startswith(">"):
403 elif expr.startswith(">"):
405 a = util.sizetoint(expr[1:])
404 a = util.sizetoint(expr[1:])
406 return lambda x: x > a
405 return lambda x: x > a
407 else:
406 else:
408 a = util.sizetoint(expr)
407 a = util.sizetoint(expr)
409 b = _sizetomax(expr)
408 b = _sizetomax(expr)
410 return lambda x: x >= a and x <= b
409 return lambda x: x >= a and x <= b
411
410
412 @predicate('size(expression)', callexisting=True)
411 @predicate('size(expression)', callexisting=True)
413 def size(mctx, x):
412 def size(mctx, x):
414 """File size matches the given expression. Examples:
413 """File size matches the given expression. Examples:
415
414
416 - size('1k') - files from 1024 to 2047 bytes
415 - size('1k') - files from 1024 to 2047 bytes
417 - size('< 20k') - files less than 20480 bytes
416 - size('< 20k') - files less than 20480 bytes
418 - size('>= .5MB') - files at least 524288 bytes
417 - size('>= .5MB') - files at least 524288 bytes
419 - size('4k - 1MB') - files from 4096 bytes to 1048576 bytes
418 - size('4k - 1MB') - files from 4096 bytes to 1048576 bytes
420 """
419 """
421 # i18n: "size" is a keyword
420 # i18n: "size" is a keyword
422 expr = getstring(x, _("size requires an expression"))
421 expr = getstring(x, _("size requires an expression"))
423 m = sizematcher(expr)
422 m = sizematcher(expr)
424 return [f for f in mctx.existing() if m(mctx.ctx[f].size())]
423 return mctx.fpredicate(lambda fctx: m(fctx.size()),
424 predrepr=('size(%r)', expr), cache=True)
425
425
426 @predicate('encoding(name)', callexisting=True)
426 @predicate('encoding(name)', callexisting=True)
427 def encoding(mctx, x):
427 def encoding(mctx, x):
428 """File can be successfully decoded with the given character
428 """File can be successfully decoded with the given character
429 encoding. May not be useful for encodings other than ASCII and
429 encoding. May not be useful for encodings other than ASCII and
430 UTF-8.
430 UTF-8.
431 """
431 """
432
432
433 # i18n: "encoding" is a keyword
433 # i18n: "encoding" is a keyword
434 enc = getstring(x, _("encoding requires an encoding name"))
434 enc = getstring(x, _("encoding requires an encoding name"))
435
435
436 s = []
436 def encp(fctx):
437 for f in mctx.existing():
437 d = fctx.data()
438 d = mctx.ctx[f].data()
439 try:
438 try:
440 d.decode(pycompat.sysstr(enc))
439 d.decode(pycompat.sysstr(enc))
440 return True
441 except LookupError:
441 except LookupError:
442 raise error.Abort(_("unknown encoding '%s'") % enc)
442 raise error.Abort(_("unknown encoding '%s'") % enc)
443 except UnicodeDecodeError:
443 except UnicodeDecodeError:
444 continue
444 return False
445 s.append(f)
446
445
447 return s
446 return mctx.fpredicate(encp, predrepr=('encoding(%r)', enc), cache=True)
448
447
449 @predicate('eol(style)', callexisting=True)
448 @predicate('eol(style)', callexisting=True)
450 def eol(mctx, x):
449 def eol(mctx, x):
451 """File contains newlines of the given style (dos, unix, mac). Binary
450 """File contains newlines of the given style (dos, unix, mac). Binary
452 files are excluded, files with mixed line endings match multiple
451 files are excluded, files with mixed line endings match multiple
453 styles.
452 styles.
454 """
453 """
455
454
456 # i18n: "eol" is a keyword
455 # i18n: "eol" is a keyword
457 enc = getstring(x, _("eol requires a style name"))
456 enc = getstring(x, _("eol requires a style name"))
458
457
459 s = []
458 def eolp(fctx):
460 for f in mctx.existing():
461 fctx = mctx.ctx[f]
462 if fctx.isbinary():
459 if fctx.isbinary():
463 continue
460 return False
464 d = fctx.data()
461 d = fctx.data()
465 if (enc == 'dos' or enc == 'win') and '\r\n' in d:
462 if (enc == 'dos' or enc == 'win') and '\r\n' in d:
466 s.append(f)
463 return True
467 elif enc == 'unix' and re.search('(?<!\r)\n', d):
464 elif enc == 'unix' and re.search('(?<!\r)\n', d):
468 s.append(f)
465 return True
469 elif enc == 'mac' and re.search('\r(?!\n)', d):
466 elif enc == 'mac' and re.search('\r(?!\n)', d):
470 s.append(f)
467 return True
471 return s
468 return False
469 return mctx.fpredicate(eolp, predrepr=('eol(%r)', enc), cache=True)
472
470
473 @predicate('copied()')
471 @predicate('copied()')
474 def copied(mctx, x):
472 def copied(mctx, x):
475 """File that is recorded as being copied.
473 """File that is recorded as being copied.
476 """
474 """
477 # i18n: "copied" is a keyword
475 # i18n: "copied" is a keyword
478 getargs(x, 0, 0, _("copied takes no arguments"))
476 getargs(x, 0, 0, _("copied takes no arguments"))
479 s = []
477 def copiedp(fctx):
480 for f in mctx.subset:
478 p = fctx.parents()
481 if f in mctx.ctx:
479 return p and p[0].path() != fctx.path()
482 p = mctx.ctx[f].parents()
480 return mctx.fpredicate(copiedp, predrepr='copied', cache=True)
483 if p and p[0].path() != f:
484 s.append(f)
485 return s
486
481
487 @predicate('revs(revs, pattern)')
482 @predicate('revs(revs, pattern)')
488 def revs(mctx, x):
483 def revs(mctx, x):
489 """Evaluate set in the specified revisions. If the revset match multiple
484 """Evaluate set in the specified revisions. If the revset match multiple
490 revs, this will return file matching pattern in any of the revision.
485 revs, this will return file matching pattern in any of the revision.
491 """
486 """
492 # i18n: "revs" is a keyword
487 # i18n: "revs" is a keyword
493 r, x = getargs(x, 2, 2, _("revs takes two arguments"))
488 r, x = getargs(x, 2, 2, _("revs takes two arguments"))
494 # i18n: "revs" is a keyword
489 # i18n: "revs" is a keyword
495 revspec = getstring(r, _("first argument to revs must be a revision"))
490 revspec = getstring(r, _("first argument to revs must be a revision"))
496 repo = mctx.ctx.repo()
491 repo = mctx.ctx.repo()
497 revs = scmutil.revrange(repo, [revspec])
492 revs = scmutil.revrange(repo, [revspec])
498
493
499 found = set()
494 matchers = []
500 result = []
501 for r in revs:
495 for r in revs:
502 ctx = repo[r]
496 ctx = repo[r]
503 for f in getset(mctx.switch(ctx, _buildstatus(ctx, x)), x):
497 matchers.append(getmatch(mctx.switch(ctx, _buildstatus(ctx, x)), x))
504 if f not in found:
498 if not matchers:
505 found.add(f)
499 return mctx.never()
506 result.append(f)
500 if len(matchers) == 1:
507 return result
501 return matchers[0]
502 return matchmod.unionmatcher(matchers)
508
503
509 @predicate('status(base, rev, pattern)')
504 @predicate('status(base, rev, pattern)')
510 def status(mctx, x):
505 def status(mctx, x):
511 """Evaluate predicate using status change between ``base`` and
506 """Evaluate predicate using status change between ``base`` and
512 ``rev``. Examples:
507 ``rev``. Examples:
513
508
514 - ``status(3, 7, added())`` - matches files added from "3" to "7"
509 - ``status(3, 7, added())`` - matches files added from "3" to "7"
515 """
510 """
516 repo = mctx.ctx.repo()
511 repo = mctx.ctx.repo()
517 # i18n: "status" is a keyword
512 # i18n: "status" is a keyword
518 b, r, x = getargs(x, 3, 3, _("status takes three arguments"))
513 b, r, x = getargs(x, 3, 3, _("status takes three arguments"))
519 # i18n: "status" is a keyword
514 # i18n: "status" is a keyword
520 baseerr = _("first argument to status must be a revision")
515 baseerr = _("first argument to status must be a revision")
521 baserevspec = getstring(b, baseerr)
516 baserevspec = getstring(b, baseerr)
522 if not baserevspec:
517 if not baserevspec:
523 raise error.ParseError(baseerr)
518 raise error.ParseError(baseerr)
524 reverr = _("second argument to status must be a revision")
519 reverr = _("second argument to status must be a revision")
525 revspec = getstring(r, reverr)
520 revspec = getstring(r, reverr)
526 if not revspec:
521 if not revspec:
527 raise error.ParseError(reverr)
522 raise error.ParseError(reverr)
528 basectx, ctx = scmutil.revpair(repo, [baserevspec, revspec])
523 basectx, ctx = scmutil.revpair(repo, [baserevspec, revspec])
529 return getset(mctx.switch(ctx, _buildstatus(ctx, x, basectx=basectx)), x)
524 return getmatch(mctx.switch(ctx, _buildstatus(ctx, x, basectx=basectx)), x)
530
525
531 @predicate('subrepo([pattern])')
526 @predicate('subrepo([pattern])')
532 def subrepo(mctx, x):
527 def subrepo(mctx, x):
533 """Subrepositories whose paths match the given pattern.
528 """Subrepositories whose paths match the given pattern.
534 """
529 """
535 # i18n: "subrepo" is a keyword
530 # i18n: "subrepo" is a keyword
536 getargs(x, 0, 1, _("subrepo takes at most one argument"))
531 getargs(x, 0, 1, _("subrepo takes at most one argument"))
537 ctx = mctx.ctx
532 ctx = mctx.ctx
538 sstate = sorted(ctx.substate)
533 sstate = ctx.substate
539 if x:
534 if x:
540 pat = getpattern(x, matchmod.allpatternkinds,
535 pat = getpattern(x, matchmod.allpatternkinds,
541 # i18n: "subrepo" is a keyword
536 # i18n: "subrepo" is a keyword
542 _("subrepo requires a pattern or no arguments"))
537 _("subrepo requires a pattern or no arguments"))
543 fast = not matchmod.patkind(pat)
538 fast = not matchmod.patkind(pat)
544 if fast:
539 if fast:
545 def m(s):
540 def m(s):
546 return (s == pat)
541 return (s == pat)
547 else:
542 else:
548 m = matchmod.match(ctx.repo().root, '', [pat], ctx=ctx)
543 m = matchmod.match(ctx.repo().root, '', [pat], ctx=ctx)
549 return [sub for sub in sstate if m(sub)]
544 return mctx.predicate(lambda f: f in sstate and m(f),
545 predrepr=('subrepo(%r)', pat))
550 else:
546 else:
551 return [sub for sub in sstate]
547 return mctx.predicate(sstate.__contains__, predrepr='subrepo')
552
548
553 methods = {
549 methods = {
554 'string': stringset,
550 'string': stringmatch,
555 'symbol': stringset,
551 'symbol': stringmatch,
556 'kindpat': kindpatset,
552 'kindpat': kindpatmatch,
557 'and': andset,
553 'and': andmatch,
558 'or': orset,
554 'or': ormatch,
559 'minus': minusset,
555 'minus': minusmatch,
560 'negate': negateset,
556 'negate': negatematch,
561 'list': listset,
557 'list': listmatch,
562 'group': getset,
558 'group': getmatch,
563 'not': notset,
559 'not': notmatch,
564 'func': func,
560 'func': func,
565 }
561 }
566
562
567 class matchctx(object):
563 class matchctx(object):
568 def __init__(self, ctx, subset, status=None, badfn=None):
564 def __init__(self, ctx, subset, status=None, badfn=None):
569 self.ctx = ctx
565 self.ctx = ctx
570 self.subset = subset
566 self.subset = subset
571 self._status = status
567 self._status = status
572 self._badfn = badfn
568 self._badfn = badfn
573 self._existingenabled = False
569 self._existingenabled = False
574 def status(self):
570 def status(self):
575 return self._status
571 return self._status
576
572
577 def matcher(self, patterns):
573 def matcher(self, patterns):
578 return self.ctx.match(patterns, badfn=self._badfn)
574 return self.ctx.match(patterns, badfn=self._badfn)
579
575
580 def predicate(self, predfn, predrepr=None, cache=False):
576 def predicate(self, predfn, predrepr=None, cache=False):
581 """Create a matcher to select files by predfn(filename)"""
577 """Create a matcher to select files by predfn(filename)"""
582 if cache:
578 if cache:
583 predfn = util.cachefunc(predfn)
579 predfn = util.cachefunc(predfn)
584 repo = self.ctx.repo()
580 repo = self.ctx.repo()
585 return matchmod.predicatematcher(repo.root, repo.getcwd(), predfn,
581 return matchmod.predicatematcher(repo.root, repo.getcwd(), predfn,
586 predrepr=predrepr, badfn=self._badfn)
582 predrepr=predrepr, badfn=self._badfn)
587
583
588 def fpredicate(self, predfn, predrepr=None, cache=False):
584 def fpredicate(self, predfn, predrepr=None, cache=False):
589 """Create a matcher to select files by predfn(fctx) at the current
585 """Create a matcher to select files by predfn(fctx) at the current
590 revision
586 revision
591
587
592 Missing files are ignored.
588 Missing files are ignored.
593 """
589 """
594 ctx = self.ctx
590 ctx = self.ctx
595 if ctx.rev() is None:
591 if ctx.rev() is None:
596 def fctxpredfn(f):
592 def fctxpredfn(f):
597 try:
593 try:
598 fctx = ctx[f]
594 fctx = ctx[f]
599 except error.LookupError:
595 except error.LookupError:
600 return False
596 return False
601 try:
597 try:
602 fctx.audit()
598 fctx.audit()
603 except error.Abort:
599 except error.Abort:
604 return False
600 return False
605 try:
601 try:
606 return predfn(fctx)
602 return predfn(fctx)
607 except (IOError, OSError) as e:
603 except (IOError, OSError) as e:
608 if e.errno in (errno.ENOENT, errno.ENOTDIR, errno.EISDIR):
604 if e.errno in (errno.ENOENT, errno.ENOTDIR, errno.EISDIR):
609 return False
605 return False
610 raise
606 raise
611 else:
607 else:
612 def fctxpredfn(f):
608 def fctxpredfn(f):
613 try:
609 try:
614 fctx = ctx[f]
610 fctx = ctx[f]
615 except error.LookupError:
611 except error.LookupError:
616 return False
612 return False
617 return predfn(fctx)
613 return predfn(fctx)
618 return self.predicate(fctxpredfn, predrepr=predrepr, cache=cache)
614 return self.predicate(fctxpredfn, predrepr=predrepr, cache=cache)
619
615
620 def never(self):
616 def never(self):
621 """Create a matcher to select nothing"""
617 """Create a matcher to select nothing"""
622 repo = self.ctx.repo()
618 repo = self.ctx.repo()
623 return matchmod.nevermatcher(repo.root, repo.getcwd(),
619 return matchmod.nevermatcher(repo.root, repo.getcwd(),
624 badfn=self._badfn)
620 badfn=self._badfn)
625
621
626 def filter(self, files):
622 def filter(self, files):
627 return [f for f in files if f in self.subset]
623 return [f for f in files if f in self.subset]
628 def existing(self):
624 def existing(self):
629 if not self._existingenabled:
625 if not self._existingenabled:
630 raise error.ProgrammingError('unexpected existing() invocation')
626 raise error.ProgrammingError('unexpected existing() invocation')
631 if self._status is not None:
627 if self._status is not None:
632 removed = set(self._status[3])
628 removed = set(self._status[3])
633 unknown = set(self._status[4] + self._status[5])
629 unknown = set(self._status[4] + self._status[5])
634 else:
630 else:
635 removed = set()
631 removed = set()
636 unknown = set()
632 unknown = set()
637 return (f for f in self.subset
633 return (f for f in self.subset
638 if (f in self.ctx and f not in removed) or f in unknown)
634 if (f in self.ctx and f not in removed) or f in unknown)
639
635
640 def switch(self, ctx, status=None):
636 def switch(self, ctx, status=None):
641 subset = self.filter(_buildsubset(ctx, status))
637 subset = self.filter(_buildsubset(ctx, status))
642 return matchctx(ctx, subset, status, self._badfn)
638 return matchctx(ctx, subset, status, self._badfn)
643
639
644 class fullmatchctx(matchctx):
640 class fullmatchctx(matchctx):
645 """A match context where any files in any revisions should be valid"""
641 """A match context where any files in any revisions should be valid"""
646
642
647 def __init__(self, ctx, status=None, badfn=None):
643 def __init__(self, ctx, status=None, badfn=None):
648 subset = _buildsubset(ctx, status)
644 subset = _buildsubset(ctx, status)
649 super(fullmatchctx, self).__init__(ctx, subset, status, badfn)
645 super(fullmatchctx, self).__init__(ctx, subset, status, badfn)
650 def switch(self, ctx, status=None):
646 def switch(self, ctx, status=None):
651 return fullmatchctx(ctx, status, self._badfn)
647 return fullmatchctx(ctx, status, self._badfn)
652
648
653 # filesets using matchctx.switch()
649 # filesets using matchctx.switch()
654 _switchcallers = [
650 _switchcallers = [
655 'revs',
651 'revs',
656 'status',
652 'status',
657 ]
653 ]
658
654
659 def _intree(funcs, tree):
655 def _intree(funcs, tree):
660 if isinstance(tree, tuple):
656 if isinstance(tree, tuple):
661 if tree[0] == 'func' and tree[1][0] == 'symbol':
657 if tree[0] == 'func' and tree[1][0] == 'symbol':
662 if tree[1][1] in funcs:
658 if tree[1][1] in funcs:
663 return True
659 return True
664 if tree[1][1] in _switchcallers:
660 if tree[1][1] in _switchcallers:
665 # arguments won't be evaluated in the current context
661 # arguments won't be evaluated in the current context
666 return False
662 return False
667 for s in tree[1:]:
663 for s in tree[1:]:
668 if _intree(funcs, s):
664 if _intree(funcs, s):
669 return True
665 return True
670 return False
666 return False
671
667
672 def _buildsubset(ctx, status):
668 def _buildsubset(ctx, status):
673 if status:
669 if status:
674 subset = []
670 subset = []
675 for c in status:
671 for c in status:
676 subset.extend(c)
672 subset.extend(c)
677 return subset
673 return subset
678 else:
674 else:
679 return list(ctx.walk(ctx.match([])))
675 return list(ctx.walk(ctx.match([])))
680
676
681 def match(ctx, expr, badfn=None):
677 def match(ctx, expr, badfn=None):
682 """Create a matcher for a single fileset expression"""
678 """Create a matcher for a single fileset expression"""
683 repo = ctx.repo()
684 tree = parse(expr)
679 tree = parse(expr)
685 fset = getset(fullmatchctx(ctx, _buildstatus(ctx, tree), badfn=badfn), tree)
680 mctx = fullmatchctx(ctx, _buildstatus(ctx, tree), badfn=badfn)
686 return matchmod.predicatematcher(repo.root, repo.getcwd(),
681 return getmatch(mctx, tree)
687 fset.__contains__,
688 predrepr='fileset', badfn=badfn)
689
682
690 def _buildstatus(ctx, tree, basectx=None):
683 def _buildstatus(ctx, tree, basectx=None):
691 # do we need status info?
684 # do we need status info?
692
685
693 # temporary boolean to simplify the next conditional
686 # temporary boolean to simplify the next conditional
694 purewdir = ctx.rev() is None and basectx is None
687 purewdir = ctx.rev() is None and basectx is None
695
688
696 if (_intree(_statuscallers, tree) or
689 if (_intree(_statuscallers, tree) or
697 # Using matchctx.existing() on a workingctx requires us to check
690 # Using matchctx.existing() on a workingctx requires us to check
698 # for deleted files.
691 # for deleted files.
699 (purewdir and _intree(_existingcallers, tree))):
692 (purewdir and _intree(_existingcallers, tree))):
700 unknown = _intree(['unknown'], tree)
693 unknown = _intree(['unknown'], tree)
701 ignored = _intree(['ignored'], tree)
694 ignored = _intree(['ignored'], tree)
702
695
703 r = ctx.repo()
696 r = ctx.repo()
704 if basectx is None:
697 if basectx is None:
705 basectx = ctx.p1()
698 basectx = ctx.p1()
706 return r.status(basectx, ctx,
699 return r.status(basectx, ctx,
707 unknown=unknown, ignored=ignored, clean=True)
700 unknown=unknown, ignored=ignored, clean=True)
708 else:
701 else:
709 return None
702 return None
710
703
711 def prettyformat(tree):
704 def prettyformat(tree):
712 return parser.prettyformat(tree, ('string', 'symbol'))
705 return parser.prettyformat(tree, ('string', 'symbol'))
713
706
714 def loadpredicate(ui, extname, registrarobj):
707 def loadpredicate(ui, extname, registrarobj):
715 """Load fileset predicates from specified registrarobj
708 """Load fileset predicates from specified registrarobj
716 """
709 """
717 for name, func in registrarobj._table.iteritems():
710 for name, func in registrarobj._table.iteritems():
718 symbols[name] = func
711 symbols[name] = func
719 if func._callstatus:
712 if func._callstatus:
720 _statuscallers.add(name)
713 _statuscallers.add(name)
721 if func._callexisting:
714 if func._callexisting:
722 _existingcallers.add(name)
715 _existingcallers.add(name)
723
716
724 # load built-in predicates explicitly to setup _statuscallers/_existingcallers
717 # load built-in predicates explicitly to setup _statuscallers/_existingcallers
725 loadpredicate(None, None, predicate)
718 loadpredicate(None, None, predicate)
726
719
727 # tell hggettext to extract docstrings from these functions:
720 # tell hggettext to extract docstrings from these functions:
728 i18nfunctions = symbols.values()
721 i18nfunctions = symbols.values()
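
The hunks above are the heart of the rewrite: predicates such as size(), encoding(), eol() and copied() no longer build a list of files drawn from mctx.subset; they hand a predicate function to mctx.predicate()/mctx.fpredicate() and return the resulting matcher, while revs() and status() forward the matcher produced by getmatch(), combining several of them with matchmod.unionmatcher when the revset names more than one revision. For a third-party predicate, the shape after this change looks roughly like the sketch below. It follows the registrar pattern used by the existingcaller() test extension later in this diff; the predicate name haslongname() and its 40-byte threshold are invented purely for illustration, and argument validation is omitted.

# Hypothetical extension predicate written against the new matcher-returning
# API. The registrar.filesetpredicate and mctx.predicate() calls mirror code
# visible in this diff; only the predicate itself is made up.
from mercurial import registrar

filesetpredicate = registrar.filesetpredicate()

@filesetpredicate(b'haslongname()')
def haslongname(mctx, x):
    """Files whose path is longer than 40 bytes (illustrative only)."""
    # mctx.predicate() wraps a plain filename predicate into a matcher, so
    # the result composes with and/or/not without materializing a file list.
    return mctx.predicate(lambda f: len(f) > 40, predrepr='haslongname')

Because the return value is a matcher rather than a list closed to mctx.subset, the set operators now combine these results lazily instead of intersecting precomputed file lists.
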
@@ -1,676 +1,697 b''
1 $ fileset() {
1 $ fileset() {
2 > hg debugfileset --all-files "$@"
2 > hg debugfileset --all-files "$@"
3 > }
3 > }
4
4
5 $ hg init repo
5 $ hg init repo
6 $ cd repo
6 $ cd repo
7 $ echo a > a1
7 $ echo a > a1
8 $ echo a > a2
8 $ echo a > a2
9 $ echo b > b1
9 $ echo b > b1
10 $ echo b > b2
10 $ echo b > b2
11 $ hg ci -Am addfiles
11 $ hg ci -Am addfiles
12 adding a1
12 adding a1
13 adding a2
13 adding a2
14 adding b1
14 adding b1
15 adding b2
15 adding b2
16
16
17 Test operators and basic patterns
17 Test operators and basic patterns
18
18
19 $ fileset -v a1
19 $ fileset -v a1
20 (symbol 'a1')
20 (symbol 'a1')
21 a1
21 a1
22 $ fileset -v 'a*'
22 $ fileset -v 'a*'
23 (symbol 'a*')
23 (symbol 'a*')
24 a1
24 a1
25 a2
25 a2
26 $ fileset -v '"re:a\d"'
26 $ fileset -v '"re:a\d"'
27 (string 're:a\\d')
27 (string 're:a\\d')
28 a1
28 a1
29 a2
29 a2
30 $ fileset -v '!re:"a\d"'
30 $ fileset -v '!re:"a\d"'
31 (not
31 (not
32 (kindpat
32 (kindpat
33 (symbol 're')
33 (symbol 're')
34 (string 'a\\d')))
34 (string 'a\\d')))
35 b1
35 b1
36 b2
36 b2
37 $ fileset -v 'path:a1 or glob:b?'
37 $ fileset -v 'path:a1 or glob:b?'
38 (or
38 (or
39 (kindpat
39 (kindpat
40 (symbol 'path')
40 (symbol 'path')
41 (symbol 'a1'))
41 (symbol 'a1'))
42 (kindpat
42 (kindpat
43 (symbol 'glob')
43 (symbol 'glob')
44 (symbol 'b?')))
44 (symbol 'b?')))
45 a1
45 a1
46 b1
46 b1
47 b2
47 b2
48 $ fileset -v 'a1 or a2'
48 $ fileset -v 'a1 or a2'
49 (or
49 (or
50 (symbol 'a1')
50 (symbol 'a1')
51 (symbol 'a2'))
51 (symbol 'a2'))
52 a1
52 a1
53 a2
53 a2
54 $ fileset 'a1 | a2'
54 $ fileset 'a1 | a2'
55 a1
55 a1
56 a2
56 a2
57 $ fileset 'a* and "*1"'
57 $ fileset 'a* and "*1"'
58 a1
58 a1
59 $ fileset 'a* & "*1"'
59 $ fileset 'a* & "*1"'
60 a1
60 a1
61 $ fileset 'not (r"a*")'
61 $ fileset 'not (r"a*")'
62 b1
62 b1
63 b2
63 b2
64 $ fileset '! ("a*")'
64 $ fileset '! ("a*")'
65 b1
65 b1
66 b2
66 b2
67 $ fileset 'a* - a1'
67 $ fileset 'a* - a1'
68 a2
68 a2
69 $ fileset 'a_b'
69 $ fileset 'a_b'
70 $ fileset '"\xy"'
70 $ fileset '"\xy"'
71 hg: parse error: invalid \x escape* (glob)
71 hg: parse error: invalid \x escape* (glob)
72 [255]
72 [255]
73
73
74 Test invalid syntax
74 Test invalid syntax
75
75
76 $ fileset -v '"added"()'
76 $ fileset -v '"added"()'
77 (func
77 (func
78 (string 'added')
78 (string 'added')
79 None)
79 None)
80 hg: parse error: not a symbol
80 hg: parse error: not a symbol
81 [255]
81 [255]
82 $ fileset -v '()()'
82 $ fileset -v '()()'
83 (func
83 (func
84 (group
84 (group
85 None)
85 None)
86 None)
86 None)
87 hg: parse error: not a symbol
87 hg: parse error: not a symbol
88 [255]
88 [255]
89 $ fileset -v -- '-x'
89 $ fileset -v -- '-x'
90 (negate
90 (negate
91 (symbol 'x'))
91 (symbol 'x'))
92 hg: parse error: can't use negate operator in this context
92 hg: parse error: can't use negate operator in this context
93 [255]
93 [255]
94 $ fileset -v -- '-()'
94 $ fileset -v -- '-()'
95 (negate
95 (negate
96 (group
96 (group
97 None))
97 None))
98 hg: parse error: can't use negate operator in this context
98 hg: parse error: can't use negate operator in this context
99 [255]
99 [255]
100
100
101 $ fileset '"path":.'
101 $ fileset '"path":.'
102 hg: parse error: not a symbol
102 hg: parse error: not a symbol
103 [255]
103 [255]
104 $ fileset 'path:foo bar'
104 $ fileset 'path:foo bar'
105 hg: parse error at 9: invalid token
105 hg: parse error at 9: invalid token
106 [255]
106 [255]
107 $ fileset 'foo:bar:baz'
107 $ fileset 'foo:bar:baz'
108 hg: parse error: not a symbol
108 hg: parse error: not a symbol
109 [255]
109 [255]
110 $ fileset 'foo:bar()'
110 $ fileset 'foo:bar()'
111 hg: parse error: pattern must be a string
111 hg: parse error: pattern must be a string
112 [255]
112 [255]
113 $ fileset 'foo:bar'
113 $ fileset 'foo:bar'
114 hg: parse error: invalid pattern kind: foo
114 hg: parse error: invalid pattern kind: foo
115 [255]
115 [255]
116
116
117 Test files status
117 Test files status
118
118
119 $ rm a1
119 $ rm a1
120 $ hg rm a2
120 $ hg rm a2
121 $ echo b >> b2
121 $ echo b >> b2
122 $ hg cp b1 c1
122 $ hg cp b1 c1
123 $ echo c > c2
123 $ echo c > c2
124 $ echo c > c3
124 $ echo c > c3
125 $ cat > .hgignore <<EOF
125 $ cat > .hgignore <<EOF
126 > \.hgignore
126 > \.hgignore
127 > 2$
127 > 2$
128 > EOF
128 > EOF
129 $ fileset 'modified()'
129 $ fileset 'modified()'
130 b2
130 b2
131 $ fileset 'added()'
131 $ fileset 'added()'
132 c1
132 c1
133 $ fileset 'removed()'
133 $ fileset 'removed()'
134 a2
134 a2
135 $ fileset 'deleted()'
135 $ fileset 'deleted()'
136 a1
136 a1
137 $ fileset 'missing()'
137 $ fileset 'missing()'
138 a1
138 a1
139 $ fileset 'unknown()'
139 $ fileset 'unknown()'
140 c3
140 c3
141 $ fileset 'ignored()'
141 $ fileset 'ignored()'
142 .hgignore
142 .hgignore
143 c2
143 c2
144 $ fileset 'hgignore()'
144 $ fileset 'hgignore()'
145 .hgignore
145 a2
146 a2
146 b2
147 b2
148 c2
147 $ fileset 'clean()'
149 $ fileset 'clean()'
148 b1
150 b1
149 $ fileset 'copied()'
151 $ fileset 'copied()'
150 c1
152 c1
151
153
152 Test files status in different revisions
154 Test files status in different revisions
153
155
154 $ hg status -m
156 $ hg status -m
155 M b2
157 M b2
156 $ fileset -r0 'revs("wdir()", modified())' --traceback
158 $ fileset -r0 'revs("wdir()", modified())' --traceback
157 b2
159 b2
158 $ hg status -a
160 $ hg status -a
159 A c1
161 A c1
160 $ fileset -r0 'revs("wdir()", added())'
162 $ fileset -r0 'revs("wdir()", added())'
161 c1
163 c1
162 $ hg status --change 0 -a
164 $ hg status --change 0 -a
163 A a1
165 A a1
164 A a2
166 A a2
165 A b1
167 A b1
166 A b2
168 A b2
167 $ hg status -mru
169 $ hg status -mru
168 M b2
170 M b2
169 R a2
171 R a2
170 ? c3
172 ? c3
171 $ fileset -r0 'added() and revs("wdir()", modified() or removed() or unknown())'
173 $ fileset -r0 'added() and revs("wdir()", modified() or removed() or unknown())'
172 a2
174 a2
173 b2
175 b2
174 $ fileset -r0 'added() or revs("wdir()", added())'
176 $ fileset -r0 'added() or revs("wdir()", added())'
175 a1
177 a1
176 a2
178 a2
177 b1
179 b1
178 b2
180 b2
179 c1
181 c1
180
182
181 Test files properties
183 Test files properties
182
184
183 >>> open('bin', 'wb').write(b'\0a') and None
185 >>> open('bin', 'wb').write(b'\0a') and None
184 $ fileset 'binary()'
186 $ fileset 'binary()'
187 bin
185 $ fileset 'binary() and unknown()'
188 $ fileset 'binary() and unknown()'
186 bin
189 bin
187 $ echo '^bin$' >> .hgignore
190 $ echo '^bin$' >> .hgignore
188 $ fileset 'binary() and ignored()'
191 $ fileset 'binary() and ignored()'
189 bin
192 bin
190 $ hg add bin
193 $ hg add bin
191 $ fileset 'binary()'
194 $ fileset 'binary()'
192 bin
195 bin
193
196
194 $ fileset 'grep("b{1}")'
197 $ fileset 'grep("b{1}")'
198 .hgignore
195 b1
199 b1
196 b2
200 b2
197 c1
201 c1
198 $ fileset 'grep("missingparens(")'
202 $ fileset 'grep("missingparens(")'
199 hg: parse error: invalid match pattern: (unbalanced parenthesis|missing \)).* (re)
203 hg: parse error: invalid match pattern: (unbalanced parenthesis|missing \)).* (re)
200 [255]
204 [255]
201
205
202 #if execbit
206 #if execbit
203 $ chmod +x b2
207 $ chmod +x b2
204 $ fileset 'exec()'
208 $ fileset 'exec()'
205 b2
209 b2
206 #endif
210 #endif
207
211
208 #if symlink
212 #if symlink
209 $ ln -s b2 b2link
213 $ ln -s b2 b2link
210 $ fileset 'symlink() and unknown()'
214 $ fileset 'symlink() and unknown()'
211 b2link
215 b2link
212 $ hg add b2link
216 $ hg add b2link
213 #endif
217 #endif
214
218
215 #if no-windows
219 #if no-windows
216 $ echo foo > con.xml
220 $ echo foo > con.xml
217 $ fileset 'not portable()'
221 $ fileset 'not portable()'
218 con.xml
222 con.xml
219 $ hg --config ui.portablefilenames=ignore add con.xml
223 $ hg --config ui.portablefilenames=ignore add con.xml
220 #endif
224 #endif
221
225
222 >>> open('1k', 'wb').write(b' '*1024) and None
226 >>> open('1k', 'wb').write(b' '*1024) and None
223 >>> open('2k', 'wb').write(b' '*2048) and None
227 >>> open('2k', 'wb').write(b' '*2048) and None
224 $ hg add 1k 2k
228 $ hg add 1k 2k
225 $ fileset 'size("bar")'
229 $ fileset 'size("bar")'
226 hg: parse error: couldn't parse size: bar
230 hg: parse error: couldn't parse size: bar
227 [255]
231 [255]
228 $ fileset '(1k, 2k)'
232 $ fileset '(1k, 2k)'
229 hg: parse error: can't use a list in this context
233 hg: parse error: can't use a list in this context
230 (see hg help "filesets.x or y")
234 (see hg help "filesets.x or y")
231 [255]
235 [255]
232 $ fileset 'size(1k)'
236 $ fileset 'size(1k)'
233 1k
237 1k
234 $ fileset '(1k or 2k) and size("< 2k")'
238 $ fileset '(1k or 2k) and size("< 2k")'
235 1k
239 1k
236 $ fileset '(1k or 2k) and size("<=2k")'
240 $ fileset '(1k or 2k) and size("<=2k")'
237 1k
241 1k
238 2k
242 2k
239 $ fileset '(1k or 2k) and size("> 1k")'
243 $ fileset '(1k or 2k) and size("> 1k")'
240 2k
244 2k
241 $ fileset '(1k or 2k) and size(">=1K")'
245 $ fileset '(1k or 2k) and size(">=1K")'
242 1k
246 1k
243 2k
247 2k
244 $ fileset '(1k or 2k) and size(".5KB - 1.5kB")'
248 $ fileset '(1k or 2k) and size(".5KB - 1.5kB")'
245 1k
249 1k
246 $ fileset 'size("1M")'
250 $ fileset 'size("1M")'
247 $ fileset 'size("1 GB")'
251 $ fileset 'size("1 GB")'
248
252
249 Test merge states
253 Test merge states
250
254
251 $ hg ci -m manychanges
255 $ hg ci -m manychanges
252 $ hg file -r . 'set:copied() & modified()'
256 $ hg file -r . 'set:copied() & modified()'
253 [1]
257 [1]
254 $ hg up -C 0
258 $ hg up -C 0
255 * files updated, 0 files merged, * files removed, 0 files unresolved (glob)
259 * files updated, 0 files merged, * files removed, 0 files unresolved (glob)
256 $ echo c >> b2
260 $ echo c >> b2
257 $ hg ci -m diverging b2
261 $ hg ci -m diverging b2
258 created new head
262 created new head
259 $ fileset 'resolved()'
263 $ fileset 'resolved()'
260 $ fileset 'unresolved()'
264 $ fileset 'unresolved()'
261 $ hg merge
265 $ hg merge
262 merging b2
266 merging b2
263 warning: conflicts while merging b2! (edit, then use 'hg resolve --mark')
267 warning: conflicts while merging b2! (edit, then use 'hg resolve --mark')
264 * files updated, 0 files merged, 1 files removed, 1 files unresolved (glob)
268 * files updated, 0 files merged, 1 files removed, 1 files unresolved (glob)
265 use 'hg resolve' to retry unresolved file merges or 'hg merge --abort' to abandon
269 use 'hg resolve' to retry unresolved file merges or 'hg merge --abort' to abandon
266 [1]
270 [1]
267 $ fileset 'resolved()'
271 $ fileset 'resolved()'
268 $ fileset 'unresolved()'
272 $ fileset 'unresolved()'
269 b2
273 b2
270 $ echo e > b2
274 $ echo e > b2
271 $ hg resolve -m b2
275 $ hg resolve -m b2
272 (no more unresolved files)
276 (no more unresolved files)
273 $ fileset 'resolved()'
277 $ fileset 'resolved()'
274 b2
278 b2
275 $ fileset 'unresolved()'
279 $ fileset 'unresolved()'
276 $ hg ci -m merge
280 $ hg ci -m merge
277
281
278 Test subrepo predicate
282 Test subrepo predicate
279
283
280 $ hg init sub
284 $ hg init sub
281 $ echo a > sub/suba
285 $ echo a > sub/suba
282 $ hg -R sub add sub/suba
286 $ hg -R sub add sub/suba
283 $ hg -R sub ci -m sub
287 $ hg -R sub ci -m sub
284 $ echo 'sub = sub' > .hgsub
288 $ echo 'sub = sub' > .hgsub
285 $ hg init sub2
289 $ hg init sub2
286 $ echo b > sub2/b
290 $ echo b > sub2/b
287 $ hg -R sub2 ci -Am sub2
291 $ hg -R sub2 ci -Am sub2
288 adding b
292 adding b
289 $ echo 'sub2 = sub2' >> .hgsub
293 $ echo 'sub2 = sub2' >> .hgsub
290 $ fileset 'subrepo()'
294 $ fileset 'subrepo()'
291 $ hg add .hgsub
295 $ hg add .hgsub
292 $ fileset 'subrepo()'
296 $ fileset 'subrepo()'
293 sub
297 sub
294 sub2
298 sub2
295 $ fileset 'subrepo("sub")'
299 $ fileset 'subrepo("sub")'
296 sub
300 sub
297 $ fileset 'subrepo("glob:*")'
301 $ fileset 'subrepo("glob:*")'
298 sub
302 sub
299 sub2
303 sub2
300 $ hg ci -m subrepo
304 $ hg ci -m subrepo
301
305
302 Test that .hgsubstate is updated as appropriate during a conversion. The
306 Test that .hgsubstate is updated as appropriate during a conversion. The
303 saverev property is enough to alter the hashes of the subrepo.
307 saverev property is enough to alter the hashes of the subrepo.
304
308
305 $ hg init ../converted
309 $ hg init ../converted
306 $ hg --config extensions.convert= convert --config convert.hg.saverev=True \
310 $ hg --config extensions.convert= convert --config convert.hg.saverev=True \
307 > sub ../converted/sub
311 > sub ../converted/sub
308 initializing destination ../converted/sub repository
312 initializing destination ../converted/sub repository
309 scanning source...
313 scanning source...
310 sorting...
314 sorting...
311 converting...
315 converting...
312 0 sub
316 0 sub
313 $ hg clone -U sub2 ../converted/sub2
317 $ hg clone -U sub2 ../converted/sub2
314 $ hg --config extensions.convert= convert --config convert.hg.saverev=True \
318 $ hg --config extensions.convert= convert --config convert.hg.saverev=True \
315 > . ../converted
319 > . ../converted
316 scanning source...
320 scanning source...
317 sorting...
321 sorting...
318 converting...
322 converting...
319 4 addfiles
323 4 addfiles
320 3 manychanges
324 3 manychanges
321 2 diverging
325 2 diverging
322 1 merge
326 1 merge
323 0 subrepo
327 0 subrepo
324 no ".hgsubstate" updates will be made for "sub2"
328 no ".hgsubstate" updates will be made for "sub2"
325 $ hg up -q -R ../converted -r tip
329 $ hg up -q -R ../converted -r tip
326 $ hg --cwd ../converted cat sub/suba sub2/b -r tip
330 $ hg --cwd ../converted cat sub/suba sub2/b -r tip
327 a
331 a
328 b
332 b
329 $ oldnode=`hg log -r tip -T "{node}\n"`
333 $ oldnode=`hg log -r tip -T "{node}\n"`
330 $ newnode=`hg log -R ../converted -r tip -T "{node}\n"`
334 $ newnode=`hg log -R ../converted -r tip -T "{node}\n"`
331 $ [ "$oldnode" != "$newnode" ] || echo "nothing changed"
335 $ [ "$oldnode" != "$newnode" ] || echo "nothing changed"
332
336
333 Test with a revision
337 Test with a revision
334
338
335 $ hg log -G --template '{rev} {desc}\n'
339 $ hg log -G --template '{rev} {desc}\n'
336 @ 4 subrepo
340 @ 4 subrepo
337 |
341 |
338 o 3 merge
342 o 3 merge
339 |\
343 |\
340 | o 2 diverging
344 | o 2 diverging
341 | |
345 | |
342 o | 1 manychanges
346 o | 1 manychanges
343 |/
347 |/
344 o 0 addfiles
348 o 0 addfiles
345
349
346 $ echo unknown > unknown
350 $ echo unknown > unknown
347 $ fileset -r1 'modified()'
351 $ fileset -r1 'modified()'
348 b2
352 b2
349 $ fileset -r1 'added() and c1'
353 $ fileset -r1 'added() and c1'
350 c1
354 c1
351 $ fileset -r1 'removed()'
355 $ fileset -r1 'removed()'
352 a2
356 a2
353 $ fileset -r1 'deleted()'
357 $ fileset -r1 'deleted()'
354 $ fileset -r1 'unknown()'
358 $ fileset -r1 'unknown()'
355 $ fileset -r1 'ignored()'
359 $ fileset -r1 'ignored()'
356 $ fileset -r1 'hgignore()'
360 $ fileset -r1 'hgignore()'
361 .hgignore
362 a2
357 b2
363 b2
358 bin
364 bin
365 c2
366 sub2
359 $ fileset -r1 'binary()'
367 $ fileset -r1 'binary()'
360 bin
368 bin
361 $ fileset -r1 'size(1k)'
369 $ fileset -r1 'size(1k)'
362 1k
370 1k
363 $ fileset -r3 'resolved()'
371 $ fileset -r3 'resolved()'
364 $ fileset -r3 'unresolved()'
372 $ fileset -r3 'unresolved()'
365
373
366 #if execbit
374 #if execbit
367 $ fileset -r1 'exec()'
375 $ fileset -r1 'exec()'
368 b2
376 b2
369 #endif
377 #endif
370
378
371 #if symlink
379 #if symlink
372 $ fileset -r1 'symlink()'
380 $ fileset -r1 'symlink()'
373 b2link
381 b2link
374 #endif
382 #endif
375
383
376 #if no-windows
384 #if no-windows
377 $ fileset -r1 'not portable()'
385 $ fileset -r1 'not portable()'
378 con.xml
386 con.xml
379 $ hg forget 'con.xml'
387 $ hg forget 'con.xml'
380 #endif
388 #endif
381
389
382 $ fileset -r4 'subrepo("re:su.*")'
390 $ fileset -r4 'subrepo("re:su.*")'
383 sub
391 sub
384 sub2
392 sub2
385 $ fileset -r4 'subrepo(re:su.*)'
393 $ fileset -r4 'subrepo(re:su.*)'
386 sub
394 sub
387 sub2
395 sub2
388 $ fileset -r4 'subrepo("sub")'
396 $ fileset -r4 'subrepo("sub")'
389 sub
397 sub
390 $ fileset -r4 'b2 or c1'
398 $ fileset -r4 'b2 or c1'
391 b2
399 b2
392 c1
400 c1
393
401
394 >>> open('dos', 'wb').write(b"dos\r\n") and None
402 >>> open('dos', 'wb').write(b"dos\r\n") and None
395 >>> open('mixed', 'wb').write(b"dos\r\nunix\n") and None
403 >>> open('mixed', 'wb').write(b"dos\r\nunix\n") and None
396 >>> open('mac', 'wb').write(b"mac\r") and None
404 >>> open('mac', 'wb').write(b"mac\r") and None
397 $ hg add dos mixed mac
405 $ hg add dos mixed mac
398
406
399 (remove a1, to examine safety of 'eol' on removed files)
407 (remove a1, to examine safety of 'eol' on removed files)
400 $ rm a1
408 $ rm a1
401
409
402 $ fileset 'eol(dos)'
410 $ fileset 'eol(dos)'
403 dos
411 dos
404 mixed
412 mixed
405 $ fileset 'eol(unix)'
413 $ fileset 'eol(unix)'
414 .hgignore
406 .hgsub
415 .hgsub
407 .hgsubstate
416 .hgsubstate
408 b1
417 b1
409 b2
418 b2
419 b2.orig
410 c1
420 c1
421 c2
422 c3
423 con.xml
411 mixed
424 mixed
425 unknown
412 $ fileset 'eol(mac)'
426 $ fileset 'eol(mac)'
413 mac
427 mac
414
428
415 Test safety of 'encoding' on removed files
429 Test safety of 'encoding' on removed files
416
430
417 $ fileset 'encoding("ascii")'
431 $ fileset 'encoding("ascii")'
432 .hgignore
418 .hgsub
433 .hgsub
419 .hgsubstate
434 .hgsubstate
420 1k
435 1k
421 2k
436 2k
422 b1
437 b1
423 b2
438 b2
439 b2.orig
424 b2link (symlink !)
440 b2link (symlink !)
425 bin
441 bin
426 c1
442 c1
443 c2
444 c3
445 con.xml
427 dos
446 dos
428 mac
447 mac
429 mixed
448 mixed
449 unknown
430
450
431 Test detection of unintentional 'matchctx.existing()' invocation
451 Test detection of unintentional 'matchctx.existing()' invocation
432
452
433 $ cat > $TESTTMP/existingcaller.py <<EOF
453 $ cat > $TESTTMP/existingcaller.py <<EOF
434 > from mercurial import registrar
454 > from mercurial import registrar
435 >
455 >
436 > filesetpredicate = registrar.filesetpredicate()
456 > filesetpredicate = registrar.filesetpredicate()
437 > @filesetpredicate(b'existingcaller()', callexisting=False)
457 > @filesetpredicate(b'existingcaller()', callexisting=False)
438 > def existingcaller(mctx, x):
458 > def existingcaller(mctx, x):
439 > # this 'mctx.existing()' invocation is unintentional
459 > # this 'mctx.existing()' invocation is unintentional
440 > return [f for f in mctx.existing()]
460 > existing = set(mctx.existing())
461 > return mctx.predicate(existing.__contains__, cache=False)
441 > EOF
462 > EOF
442
463
443 $ cat >> .hg/hgrc <<EOF
464 $ cat >> .hg/hgrc <<EOF
444 > [extensions]
465 > [extensions]
445 > existingcaller = $TESTTMP/existingcaller.py
466 > existingcaller = $TESTTMP/existingcaller.py
446 > EOF
467 > EOF
447
468
448 $ fileset 'existingcaller()' 2>&1 | tail -1
469 $ fileset 'existingcaller()' 2>&1 | tail -1
449 *ProgrammingError: *unexpected existing() invocation* (glob)
470 *ProgrammingError: *unexpected existing() invocation* (glob)
450
471
451 Test 'revs(...)'
472 Test 'revs(...)'
452 ================
473 ================
453
474
454 small reminder of the repository state
475 small reminder of the repository state
455
476
456 $ hg log -G
477 $ hg log -G
457 @ changeset: 4:* (glob)
478 @ changeset: 4:* (glob)
458 | tag: tip
479 | tag: tip
459 | user: test
480 | user: test
460 | date: Thu Jan 01 00:00:00 1970 +0000
481 | date: Thu Jan 01 00:00:00 1970 +0000
461 | summary: subrepo
482 | summary: subrepo
462 |
483 |
463 o changeset: 3:* (glob)
484 o changeset: 3:* (glob)
464 |\ parent: 2:55b05bdebf36
485 |\ parent: 2:55b05bdebf36
465 | | parent: 1:* (glob)
486 | | parent: 1:* (glob)
466 | | user: test
487 | | user: test
467 | | date: Thu Jan 01 00:00:00 1970 +0000
488 | | date: Thu Jan 01 00:00:00 1970 +0000
468 | | summary: merge
489 | | summary: merge
469 | |
490 | |
470 | o changeset: 2:55b05bdebf36
491 | o changeset: 2:55b05bdebf36
471 | | parent: 0:8a9576c51c1f
492 | | parent: 0:8a9576c51c1f
472 | | user: test
493 | | user: test
473 | | date: Thu Jan 01 00:00:00 1970 +0000
494 | | date: Thu Jan 01 00:00:00 1970 +0000
474 | | summary: diverging
495 | | summary: diverging
475 | |
496 | |
476 o | changeset: 1:* (glob)
497 o | changeset: 1:* (glob)
477 |/ user: test
498 |/ user: test
478 | date: Thu Jan 01 00:00:00 1970 +0000
499 | date: Thu Jan 01 00:00:00 1970 +0000
479 | summary: manychanges
500 | summary: manychanges
480 |
501 |
481 o changeset: 0:8a9576c51c1f
502 o changeset: 0:8a9576c51c1f
482 user: test
503 user: test
483 date: Thu Jan 01 00:00:00 1970 +0000
504 date: Thu Jan 01 00:00:00 1970 +0000
484 summary: addfiles
505 summary: addfiles
485
506
486 $ hg status --change 0
507 $ hg status --change 0
487 A a1
508 A a1
488 A a2
509 A a2
489 A b1
510 A b1
490 A b2
511 A b2
491 $ hg status --change 1
512 $ hg status --change 1
492 M b2
513 M b2
493 A 1k
514 A 1k
494 A 2k
515 A 2k
495 A b2link (no-windows !)
516 A b2link (no-windows !)
496 A bin
517 A bin
497 A c1
518 A c1
498 A con.xml (no-windows !)
519 A con.xml (no-windows !)
499 R a2
520 R a2
500 $ hg status --change 2
521 $ hg status --change 2
501 M b2
522 M b2
502 $ hg status --change 3
523 $ hg status --change 3
503 M b2
524 M b2
504 A 1k
525 A 1k
505 A 2k
526 A 2k
506 A b2link (no-windows !)
527 A b2link (no-windows !)
507 A bin
528 A bin
508 A c1
529 A c1
509 A con.xml (no-windows !)
530 A con.xml (no-windows !)
510 R a2
531 R a2
511 $ hg status --change 4
532 $ hg status --change 4
512 A .hgsub
533 A .hgsub
513 A .hgsubstate
534 A .hgsubstate
514 $ hg status
535 $ hg status
515 A dos
536 A dos
516 A mac
537 A mac
517 A mixed
538 A mixed
518 R con.xml (no-windows !)
539 R con.xml (no-windows !)
519 ! a1
540 ! a1
520 ? b2.orig
541 ? b2.orig
521 ? c3
542 ? c3
522 ? unknown
543 ? unknown
523
544
524 Test that files at -r0 are filtered by files at wdir
545 Test that files at -r0 are filtered by files at wdir
525 -----------------------------------------------------
546 -----------------------------------------------------
526
547
527 $ fileset -r0 'tracked() and revs("wdir()", tracked())'
548 $ fileset -r0 'tracked() and revs("wdir()", tracked())'
528 a1
549 a1
529 b1
550 b1
530 b2
551 b2
531
552
532 Test that "revs()" works at all
553 Test that "revs()" works at all
533 ------------------------------
554 ------------------------------
534
555
535 $ fileset "revs('2', modified())"
556 $ fileset "revs('2', modified())"
536 b2
557 b2
537
558
538 Test that "revs()" works for files missing in the working copy/current context
559 Test that "revs()" works for files missing in the working copy/current context
539 ----------------------------------------------------------------------------
560 ----------------------------------------------------------------------------
540
561
541 (a2 not in working copy)
562 (a2 not in working copy)
542
563
543 $ fileset "revs('0', added())"
564 $ fileset "revs('0', added())"
544 a1
565 a1
545 a2
566 a2
546 b1
567 b1
547 b2
568 b2
548
569
549 (none of the files exist in "0")
570 (none of the files exist in "0")
550
571
551 $ fileset -r 0 "revs('4', added())"
572 $ fileset -r 0 "revs('4', added())"
552 .hgsub
573 .hgsub
553 .hgsubstate
574 .hgsubstate
554
575
555 Call with empty revset
576 Call with empty revset
556 --------------------------
577 --------------------------
557
578
558 $ fileset "revs('2-2', modified())"
579 $ fileset "revs('2-2', modified())"
559
580
560 Call with revset matching multiple revs
581 Call with revset matching multiple revs
561 ---------------------------------------
582 ---------------------------------------
562
583
563 $ fileset "revs('0+4', added())"
584 $ fileset "revs('0+4', added())"
564 .hgsub
585 .hgsub
565 .hgsubstate
586 .hgsubstate
566 a1
587 a1
567 a2
588 a2
568 b1
589 b1
569 b2
590 b2
570
591
571 overlapping set
592 overlapping set
572
593
573 $ fileset "revs('1+2', modified())"
594 $ fileset "revs('1+2', modified())"
574 b2
595 b2
575
596
576 test 'status(...)'
597 test 'status(...)'
577 =================
598 =================
578
599
579 Simple case
600 Simple case
580 -----------
601 -----------
581
602
582 $ fileset "status(3, 4, added())"
603 $ fileset "status(3, 4, added())"
583 .hgsub
604 .hgsub
584 .hgsubstate
605 .hgsubstate
585
606
586 use rev to restrict matched file
607 use rev to restrict matched file
587 -----------------------------------------
608 -----------------------------------------
588
609
589 $ hg status --removed --rev 0 --rev 1
610 $ hg status --removed --rev 0 --rev 1
590 R a2
611 R a2
591 $ fileset "status(0, 1, removed())"
612 $ fileset "status(0, 1, removed())"
592 a2
613 a2
593 $ fileset "tracked() and status(0, 1, removed())"
614 $ fileset "tracked() and status(0, 1, removed())"
594 $ fileset -r 4 "status(0, 1, removed())"
615 $ fileset -r 4 "status(0, 1, removed())"
595 a2
616 a2
596 $ fileset -r 4 "tracked() and status(0, 1, removed())"
617 $ fileset -r 4 "tracked() and status(0, 1, removed())"
597 $ fileset "revs('4', tracked() and status(0, 1, removed()))"
618 $ fileset "revs('4', tracked() and status(0, 1, removed()))"
598 $ fileset "revs('0', tracked() and status(0, 1, removed()))"
619 $ fileset "revs('0', tracked() and status(0, 1, removed()))"
599 a2
620 a2
600
621
601 check wdir()
622 check wdir()
602 ------------
623 ------------
603
624
604 $ hg status --removed --rev 4
625 $ hg status --removed --rev 4
605 R con.xml (no-windows !)
626 R con.xml (no-windows !)
606 $ fileset "status(4, 'wdir()', removed())"
627 $ fileset "status(4, 'wdir()', removed())"
607 con.xml (no-windows !)
628 con.xml (no-windows !)
608
629
609 $ hg status --removed --rev 2
630 $ hg status --removed --rev 2
610 R a2
631 R a2
611 $ fileset "status('2', 'wdir()', removed())"
632 $ fileset "status('2', 'wdir()', removed())"
612 a2
633 a2
613
634
614 test backward status
635 test backward status
615 --------------------
636 --------------------
616
637
617 $ hg status --removed --rev 0 --rev 4
638 $ hg status --removed --rev 0 --rev 4
618 R a2
639 R a2
619 $ hg status --added --rev 4 --rev 0
640 $ hg status --added --rev 4 --rev 0
620 A a2
641 A a2
621 $ fileset "status(4, 0, added())"
642 $ fileset "status(4, 0, added())"
622 a2
643 a2
623
644
624 test cross branch status
645 test cross branch status
625 ------------------------
646 ------------------------
626
647
627 $ hg status --added --rev 1 --rev 2
648 $ hg status --added --rev 1 --rev 2
628 A a2
649 A a2
629 $ fileset "status(1, 2, added())"
650 $ fileset "status(1, 2, added())"
630 a2
651 a2
631
652
632 test with multi revs revset
653 test with multi revs revset
633 ---------------------------
654 ---------------------------
634 $ hg status --added --rev 0:1 --rev 3:4
655 $ hg status --added --rev 0:1 --rev 3:4
635 A .hgsub
656 A .hgsub
636 A .hgsubstate
657 A .hgsubstate
637 A 1k
658 A 1k
638 A 2k
659 A 2k
639 A b2link (no-windows !)
660 A b2link (no-windows !)
640 A bin
661 A bin
641 A c1
662 A c1
642 A con.xml (no-windows !)
663 A con.xml (no-windows !)
643 $ fileset "status('0:1', '3:4', added())"
664 $ fileset "status('0:1', '3:4', added())"
644 .hgsub
665 .hgsub
645 .hgsubstate
666 .hgsubstate
646 1k
667 1k
647 2k
668 2k
648 b2link (no-windows !)
669 b2link (no-windows !)
649 bin
670 bin
650 c1
671 c1
651 con.xml (no-windows !)
672 con.xml (no-windows !)
652
673
653 tests with empty value
674 tests with empty value
654 ----------------------
675 ----------------------
655
676
656 Fully empty revset
677 Fully empty revset
657
678
658 $ fileset "status('', '4', added())"
679 $ fileset "status('', '4', added())"
659 hg: parse error: first argument to status must be a revision
680 hg: parse error: first argument to status must be a revision
660 [255]
681 [255]
661 $ fileset "status('2', '', added())"
682 $ fileset "status('2', '', added())"
662 hg: parse error: second argument to status must be a revision
683 hg: parse error: second argument to status must be a revision
663 [255]
684 [255]
664
685
665 Empty revset will error at the revset layer
686 Empty revset will error at the revset layer
666
687
667 $ fileset "status(' ', '4', added())"
688 $ fileset "status(' ', '4', added())"
668 hg: parse error at 1: not a prefix: end
689 hg: parse error at 1: not a prefix: end
669 (
690 (
670 ^ here)
691 ^ here)
671 [255]
692 [255]
672 $ fileset "status('2', ' ', added())"
693 $ fileset "status('2', ' ', added())"
673 hg: parse error at 1: not a prefix: end
694 hg: parse error at 1: not a prefix: end
674 (
695 (
675 ^ here)
696 ^ here)
676 [255]
697 [255]
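
The transcript above exercises the rewritten predicates through hg debugfileset. In-process callers instead go through fileset.match(), which, per the hunk earlier in this diff, now returns the matcher built by getmatch() directly rather than wrapping a precomputed set in a predicatematcher. A rough sketch of consuming that entry point from Python follows; the repository location ('.') and the example expression are assumptions made for illustration.

# Hedged sketch of using the rewritten fileset.match() as a matcher.
# fileset.match() and ctx.walk() appear in the diff above; nothing here is
# specific to this change beyond the fact that a matcher is returned.
from mercurial import hg, ui as uimod, fileset

repo = hg.repository(uimod.ui.load(), b'.')   # assumes cwd is an hg repository
ctx = repo[b'.']                              # working-directory parent
m = fileset.match(ctx, b'size(">1k") and not binary()')

# m is an ordinary matcher: it can be called on individual paths or handed
# to walk(), so the expression is evaluated per file instead of up front.
for f in ctx.walk(m):
    print(f)
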
@@ -1,1119 +1,1119 b''
1 #require no-reposimplestore no-chg
1 #require no-reposimplestore no-chg
2
2
3 # Initial setup
3 # Initial setup
4
4
5 $ cat >> $HGRCPATH << EOF
5 $ cat >> $HGRCPATH << EOF
6 > [extensions]
6 > [extensions]
7 > lfs=
7 > lfs=
8 > [lfs]
8 > [lfs]
9 > # Test deprecated config
9 > # Test deprecated config
10 > threshold=1000B
10 > threshold=1000B
11 > EOF
11 > EOF
12
12
13 $ LONG=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
13 $ LONG=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
14
14
15 # Prepare server and enable extension
15 # Prepare server and enable extension
16 $ hg init server
16 $ hg init server
17 $ hg clone -q server client
17 $ hg clone -q server client
18 $ cd client
18 $ cd client
19
19
20 # Commit small file
20 # Commit small file
21 $ echo s > smallfile
21 $ echo s > smallfile
22 $ echo '**.py = LF' > .hgeol
22 $ echo '**.py = LF' > .hgeol
23 $ hg --config lfs.track='"size(\">1000B\")"' commit -Aqm "add small file"
23 $ hg --config lfs.track='"size(\">1000B\")"' commit -Aqm "add small file"
24 hg: parse error: unsupported file pattern: size(">1000B")
24 hg: parse error: unsupported file pattern: size(">1000B")
25 (paths must be prefixed with "path:")
25 (paths must be prefixed with "path:")
26 [255]
26 [255]
27 $ hg --config lfs.track='size(">1000B")' commit -Aqm "add small file"
27 $ hg --config lfs.track='size(">1000B")' commit -Aqm "add small file"
28
28
29 # Commit large file
29 # Commit large file
30 $ echo $LONG > largefile
30 $ echo $LONG > largefile
31 $ grep lfs .hg/requires
31 $ grep lfs .hg/requires
32 [1]
32 [1]
33 $ hg commit --traceback -Aqm "add large file"
33 $ hg commit --traceback -Aqm "add large file"
34 $ grep lfs .hg/requires
34 $ grep lfs .hg/requires
35 lfs
35 lfs
36
36
37 # Ensure metadata is stored
37 # Ensure metadata is stored
38 $ hg debugdata largefile 0
38 $ hg debugdata largefile 0
39 version https://git-lfs.github.com/spec/v1
39 version https://git-lfs.github.com/spec/v1
40 oid sha256:f11e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b
40 oid sha256:f11e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b
41 size 1501
41 size 1501
42 x-is-binary 0
42 x-is-binary 0
43
43
44 # Check the blobstore is populated
44 # Check the blobstore is populated
45 $ find .hg/store/lfs/objects | sort
45 $ find .hg/store/lfs/objects | sort
46 .hg/store/lfs/objects
46 .hg/store/lfs/objects
47 .hg/store/lfs/objects/f1
47 .hg/store/lfs/objects/f1
48 .hg/store/lfs/objects/f1/1e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b
48 .hg/store/lfs/objects/f1/1e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b
49
49
50 # Check the blob stored contains the actual contents of the file
50 # Check the blob stored contains the actual contents of the file
51 $ cat .hg/store/lfs/objects/f1/1e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b
51 $ cat .hg/store/lfs/objects/f1/1e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b
52 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
52 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
53
53
54 # Push changes to the server
54 # Push changes to the server
55
55
56 $ hg push
56 $ hg push
57 pushing to $TESTTMP/server
57 pushing to $TESTTMP/server
58 searching for changes
58 searching for changes
59 abort: lfs.url needs to be configured
59 abort: lfs.url needs to be configured
60 [255]
60 [255]
61
61
62 $ cat >> $HGRCPATH << EOF
62 $ cat >> $HGRCPATH << EOF
63 > [lfs]
63 > [lfs]
64 > url=file:$TESTTMP/dummy-remote/
64 > url=file:$TESTTMP/dummy-remote/
65 > EOF
65 > EOF
66
66
67 Pushing to a local non-lfs repo with the extension enabled adds the
67 Pushing to a local non-lfs repo with the extension enabled adds the
68 lfs requirement
68 lfs requirement
69
69
70 $ grep lfs $TESTTMP/server/.hg/requires
70 $ grep lfs $TESTTMP/server/.hg/requires
71 [1]
71 [1]
72 $ hg push -v | egrep -v '^(uncompressed| )'
72 $ hg push -v | egrep -v '^(uncompressed| )'
73 pushing to $TESTTMP/server
73 pushing to $TESTTMP/server
74 searching for changes
74 searching for changes
75 lfs: found f11e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b in the local lfs store
75 lfs: found f11e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b in the local lfs store
76 2 changesets found
76 2 changesets found
77 adding changesets
77 adding changesets
78 adding manifests
78 adding manifests
79 adding file changes
79 adding file changes
80 added 2 changesets with 3 changes to 3 files
80 added 2 changesets with 3 changes to 3 files
81 calling hook pretxnchangegroup.lfs: hgext.lfs.checkrequireslfs
81 calling hook pretxnchangegroup.lfs: hgext.lfs.checkrequireslfs
82 $ grep lfs $TESTTMP/server/.hg/requires
82 $ grep lfs $TESTTMP/server/.hg/requires
83 lfs
83 lfs
84
84
85 # Unknown URL scheme
85 # Unknown URL scheme
86
86
87 $ hg push --config lfs.url=ftp://foobar
87 $ hg push --config lfs.url=ftp://foobar
88 abort: lfs: unknown url scheme: ftp
88 abort: lfs: unknown url scheme: ftp
89 [255]
89 [255]
90
90
91 $ cd ../
91 $ cd ../
92
92
93 # Initialize new client (not cloning) and setup extension
93 # Initialize new client (not cloning) and setup extension
94 $ hg init client2
94 $ hg init client2
95 $ cd client2
95 $ cd client2
96 $ cat >> .hg/hgrc <<EOF
96 $ cat >> .hg/hgrc <<EOF
97 > [paths]
97 > [paths]
98 > default = $TESTTMP/server
98 > default = $TESTTMP/server
99 > EOF
99 > EOF
100
100
101 # Pull from server
101 # Pull from server
102
102
103 Pulling a local lfs repo into a local non-lfs repo with the extension
103 Pulling a local lfs repo into a local non-lfs repo with the extension
104 enabled adds the lfs requirement
104 enabled adds the lfs requirement
105
105
106 $ grep lfs .hg/requires $TESTTMP/server/.hg/requires
106 $ grep lfs .hg/requires $TESTTMP/server/.hg/requires
107 $TESTTMP/server/.hg/requires:lfs
107 $TESTTMP/server/.hg/requires:lfs
108 $ hg pull default
108 $ hg pull default
109 pulling from $TESTTMP/server
109 pulling from $TESTTMP/server
110 requesting all changes
110 requesting all changes
111 adding changesets
111 adding changesets
112 adding manifests
112 adding manifests
113 adding file changes
113 adding file changes
114 added 2 changesets with 3 changes to 3 files
114 added 2 changesets with 3 changes to 3 files
115 new changesets 0ead593177f7:b88141481348
115 new changesets 0ead593177f7:b88141481348
116 (run 'hg update' to get a working copy)
116 (run 'hg update' to get a working copy)
117 $ grep lfs .hg/requires $TESTTMP/server/.hg/requires
117 $ grep lfs .hg/requires $TESTTMP/server/.hg/requires
118 .hg/requires:lfs
118 .hg/requires:lfs
119 $TESTTMP/server/.hg/requires:lfs
119 $TESTTMP/server/.hg/requires:lfs
120
120
121 # Check the blobstore is not yet populated
121 # Check the blobstore is not yet populated
122 $ [ -d .hg/store/lfs/objects ]
122 $ [ -d .hg/store/lfs/objects ]
123 [1]
123 [1]
124
124
125 # Update to the last revision containing the large file
125 # Update to the last revision containing the large file
126 $ hg update
126 $ hg update
127 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
127 3 files updated, 0 files merged, 0 files removed, 0 files unresolved
128
128
129 # Check the blobstore has been populated on update
129 # Check the blobstore has been populated on update
130 $ find .hg/store/lfs/objects | sort
130 $ find .hg/store/lfs/objects | sort
131 .hg/store/lfs/objects
131 .hg/store/lfs/objects
132 .hg/store/lfs/objects/f1
132 .hg/store/lfs/objects/f1
133 .hg/store/lfs/objects/f1/1e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b
133 .hg/store/lfs/objects/f1/1e77c257047a398492d8d6cb9f6acf3aa7c4384bb23080b43546053e183e4b
134
134
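The two-level layout above follows from the blob's oid: the sha256 digest of the
blob supplies both the directory (first two hex digits) and the file name (the
rest). A minimal Python sketch, assuming only that the oid is the sha256 of the
blob contents; the helper name is invented for illustration:

    import hashlib

    def lfs_store_path(data):
        # Hypothetical helper: map blob contents to the path used under
        # .hg/store/lfs/objects, e.g. a digest starting with "f11e77..."
        # lands in objects/f1/1e77...
        oid = hashlib.sha256(data).hexdigest()
        return 'objects/%s/%s' % (oid[:2], oid[2:])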
135 # Check the contents of the file are fetched from blobstore when requested
135 # Check the contents of the file are fetched from blobstore when requested
136 $ hg cat -r . largefile
136 $ hg cat -r . largefile
137 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
137 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
138
138
139 # Check the file has been copied in the working copy
139 # Check the file has been copied in the working copy
140 $ cat largefile
140 $ cat largefile
141 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
141 AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
142
142
143 $ cd ..
143 $ cd ..
144
144
145 # Check rename, and switch between large and small files
145 # Check rename, and switch between large and small files
146
146
147 $ hg init repo3
147 $ hg init repo3
148 $ cd repo3
148 $ cd repo3
149 $ cat >> .hg/hgrc << EOF
149 $ cat >> .hg/hgrc << EOF
150 > [lfs]
150 > [lfs]
151 > track=size(">10B")
151 > track=size(">10B")
152 > EOF
152 > EOF
153
153
154 $ echo LONGER-THAN-TEN-BYTES-WILL-TRIGGER-LFS > large
154 $ echo LONGER-THAN-TEN-BYTES-WILL-TRIGGER-LFS > large
155 $ echo SHORTER > small
155 $ echo SHORTER > small
156 $ hg add . -q
156 $ hg add . -q
157 $ hg commit -m 'commit with lfs content'
157 $ hg commit -m 'commit with lfs content'
158
158
159 $ hg files -r . 'set:added()'
159 $ hg files -r . 'set:added()'
160 large
160 large
161 small
161 small
162 $ hg files -r . 'set:added() & lfs()'
162 $ hg files -r . 'set:added() & lfs()'
163 large
163 large
164
164
165 $ hg mv large l
165 $ hg mv large l
166 $ hg mv small s
166 $ hg mv small s
167 $ hg status 'set:removed()'
167 $ hg status 'set:removed()'
168 R large
168 R large
169 R small
169 R small
170 $ hg status 'set:removed() & lfs()'
170 $ hg status 'set:removed() & lfs()'
171 R large
171 R large
172 $ hg commit -m 'renames'
172 $ hg commit -m 'renames'
173
173
174 $ hg files -r . 'set:copied()'
174 $ hg files -r . 'set:copied()'
175 l
175 l
176 s
176 s
177 $ hg files -r . 'set:copied() & lfs()'
177 $ hg files -r . 'set:copied() & lfs()'
178 l
178 l
179 $ hg status --change . 'set:removed()'
179 $ hg status --change . 'set:removed()'
180 R large
180 R large
181 R small
181 R small
182 $ hg status --change . 'set:removed() & lfs()'
182 $ hg status --change . 'set:removed() & lfs()'
183 R large
183 R large
184
184
185 $ echo SHORT > l
185 $ echo SHORT > l
186 $ echo BECOME-LARGER-FROM-SHORTER > s
186 $ echo BECOME-LARGER-FROM-SHORTER > s
187 $ hg commit -m 'large to small, small to large'
187 $ hg commit -m 'large to small, small to large'
188
188
189 $ echo 1 >> l
189 $ echo 1 >> l
190 $ echo 2 >> s
190 $ echo 2 >> s
191 $ hg commit -m 'random modifications'
191 $ hg commit -m 'random modifications'
192
192
193 $ echo RESTORE-TO-BE-LARGE > l
193 $ echo RESTORE-TO-BE-LARGE > l
194 $ echo SHORTER > s
194 $ echo SHORTER > s
195 $ hg commit -m 'switch large and small again'
195 $ hg commit -m 'switch large and small again'
196
196
197 # Test lfs_files template
197 # Test lfs_files template
198
198
199 $ hg log -r 'all()' -T '{rev} {join(lfs_files, ", ")}\n'
199 $ hg log -r 'all()' -T '{rev} {join(lfs_files, ", ")}\n'
200 0 large
200 0 large
201 1 l, large
201 1 l, large
202 2 s
202 2 s
203 3 s
203 3 s
204 4 l
204 4 l
205
205
206 # Push and pull the above repo
206 # Push and pull the above repo
207
207
208 $ hg --cwd .. init repo4
208 $ hg --cwd .. init repo4
209 $ hg push ../repo4
209 $ hg push ../repo4
210 pushing to ../repo4
210 pushing to ../repo4
211 searching for changes
211 searching for changes
212 adding changesets
212 adding changesets
213 adding manifests
213 adding manifests
214 adding file changes
214 adding file changes
215 added 5 changesets with 10 changes to 4 files
215 added 5 changesets with 10 changes to 4 files
216
216
217 $ hg --cwd .. init repo5
217 $ hg --cwd .. init repo5
218 $ hg --cwd ../repo5 pull ../repo3
218 $ hg --cwd ../repo5 pull ../repo3
219 pulling from ../repo3
219 pulling from ../repo3
220 requesting all changes
220 requesting all changes
221 adding changesets
221 adding changesets
222 adding manifests
222 adding manifests
223 adding file changes
223 adding file changes
224 added 5 changesets with 10 changes to 4 files
224 added 5 changesets with 10 changes to 4 files
225 new changesets fd47a419c4f7:5adf850972b9
225 new changesets fd47a419c4f7:5adf850972b9
226 (run 'hg update' to get a working copy)
226 (run 'hg update' to get a working copy)
227
227
228 $ cd ..
228 $ cd ..
229
229
230 # Test clone
230 # Test clone
231
231
232 $ hg init repo6
232 $ hg init repo6
233 $ cd repo6
233 $ cd repo6
234 $ cat >> .hg/hgrc << EOF
234 $ cat >> .hg/hgrc << EOF
235 > [lfs]
235 > [lfs]
236 > track=size(">30B")
236 > track=size(">30B")
237 > EOF
237 > EOF
238
238
239 $ echo LARGE-BECAUSE-IT-IS-MORE-THAN-30-BYTES > large
239 $ echo LARGE-BECAUSE-IT-IS-MORE-THAN-30-BYTES > large
240 $ echo SMALL > small
240 $ echo SMALL > small
241 $ hg commit -Aqm 'create a lfs file' large small
241 $ hg commit -Aqm 'create a lfs file' large small
242 $ hg debuglfsupload -r 'all()' -v
242 $ hg debuglfsupload -r 'all()' -v
243 lfs: found 8e92251415339ae9b148c8da89ed5ec665905166a1ab11b09dca8fad83344738 in the local lfs store
243 lfs: found 8e92251415339ae9b148c8da89ed5ec665905166a1ab11b09dca8fad83344738 in the local lfs store
244
244
245 $ cd ..
245 $ cd ..
246
246
247 $ hg clone repo6 repo7
247 $ hg clone repo6 repo7
248 updating to branch default
248 updating to branch default
249 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
249 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
250 $ cd repo7
250 $ cd repo7
251 $ hg config extensions --debug | grep lfs
251 $ hg config extensions --debug | grep lfs
252 $TESTTMP/repo7/.hg/hgrc:*: extensions.lfs= (glob)
252 $TESTTMP/repo7/.hg/hgrc:*: extensions.lfs= (glob)
253 $ cat large
253 $ cat large
254 LARGE-BECAUSE-IT-IS-MORE-THAN-30-BYTES
254 LARGE-BECAUSE-IT-IS-MORE-THAN-30-BYTES
255 $ cat small
255 $ cat small
256 SMALL
256 SMALL
257
257
258 $ cd ..
258 $ cd ..
259
259
260 $ hg --config extensions.share= share repo7 sharedrepo
260 $ hg --config extensions.share= share repo7 sharedrepo
261 updating working directory
261 updating working directory
262 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
262 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
263 $ hg -R sharedrepo config extensions --debug | grep lfs
263 $ hg -R sharedrepo config extensions --debug | grep lfs
264 $TESTTMP/sharedrepo/.hg/hgrc:*: extensions.lfs= (glob)
264 $TESTTMP/sharedrepo/.hg/hgrc:*: extensions.lfs= (glob)
265
265
266 # Test rename and status
266 # Test rename and status
267
267
268 $ hg init repo8
268 $ hg init repo8
269 $ cd repo8
269 $ cd repo8
270 $ cat >> .hg/hgrc << EOF
270 $ cat >> .hg/hgrc << EOF
271 > [lfs]
271 > [lfs]
272 > track=size(">10B")
272 > track=size(">10B")
273 > EOF
273 > EOF
274
274
275 $ echo THIS-IS-LFS-BECAUSE-10-BYTES > a1
275 $ echo THIS-IS-LFS-BECAUSE-10-BYTES > a1
276 $ echo SMALL > a2
276 $ echo SMALL > a2
277 $ hg commit -m a -A a1 a2
277 $ hg commit -m a -A a1 a2
278 $ hg status
278 $ hg status
279 $ hg mv a1 b1
279 $ hg mv a1 b1
280 $ hg mv a2 a1
280 $ hg mv a2 a1
281 $ hg mv b1 a2
281 $ hg mv b1 a2
282 $ hg commit -m b
282 $ hg commit -m b
283 $ hg status
283 $ hg status
284 >>> with open('a2', 'wb') as f:
284 >>> with open('a2', 'wb') as f:
285 ... f.write(b'\1\nSTART-WITH-HG-FILELOG-METADATA')
285 ... f.write(b'\1\nSTART-WITH-HG-FILELOG-METADATA')
286 >>> with open('a1', 'wb') as f:
286 >>> with open('a1', 'wb') as f:
287 ... f.write(b'\1\nMETA\n')
287 ... f.write(b'\1\nMETA\n')
288 $ hg commit -m meta
288 $ hg commit -m meta
289 $ hg status
289 $ hg status
290 $ hg log -T '{rev}: {file_copies} | {file_dels} | {file_adds}\n'
290 $ hg log -T '{rev}: {file_copies} | {file_dels} | {file_adds}\n'
291 2: | |
291 2: | |
292 1: a1 (a2)a2 (a1) | |
292 1: a1 (a2)a2 (a1) | |
293 0: | | a1 a2
293 0: | | a1 a2
294
294
295 $ for n in a1 a2; do
295 $ for n in a1 a2; do
296 > for r in 0 1 2; do
296 > for r in 0 1 2; do
297 > printf '\n%s @ %s\n' $n $r
297 > printf '\n%s @ %s\n' $n $r
298 > hg debugdata $n $r
298 > hg debugdata $n $r
299 > done
299 > done
300 > done
300 > done
301
301
302 a1 @ 0
302 a1 @ 0
303 version https://git-lfs.github.com/spec/v1
303 version https://git-lfs.github.com/spec/v1
304 oid sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
304 oid sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
305 size 29
305 size 29
306 x-is-binary 0
306 x-is-binary 0
307
307
308 a1 @ 1
308 a1 @ 1
309 \x01 (esc)
309 \x01 (esc)
310 copy: a2
310 copy: a2
311 copyrev: 50470ad23cf937b1f4b9f80bfe54df38e65b50d9
311 copyrev: 50470ad23cf937b1f4b9f80bfe54df38e65b50d9
312 \x01 (esc)
312 \x01 (esc)
313 SMALL
313 SMALL
314
314
315 a1 @ 2
315 a1 @ 2
316 \x01 (esc)
316 \x01 (esc)
317 \x01 (esc)
317 \x01 (esc)
318 \x01 (esc)
318 \x01 (esc)
319 META
319 META
320
320
321 a2 @ 0
321 a2 @ 0
322 SMALL
322 SMALL
323
323
324 a2 @ 1
324 a2 @ 1
325 version https://git-lfs.github.com/spec/v1
325 version https://git-lfs.github.com/spec/v1
326 oid sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
326 oid sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
327 size 29
327 size 29
328 x-hg-copy a1
328 x-hg-copy a1
329 x-hg-copyrev be23af27908a582af43e5cda209a5a9b319de8d4
329 x-hg-copyrev be23af27908a582af43e5cda209a5a9b319de8d4
330 x-is-binary 0
330 x-is-binary 0
331
331
332 a2 @ 2
332 a2 @ 2
333 version https://git-lfs.github.com/spec/v1
333 version https://git-lfs.github.com/spec/v1
334 oid sha256:876dadc86a8542f9798048f2c47f51dbf8e4359aed883e8ec80c5db825f0d943
334 oid sha256:876dadc86a8542f9798048f2c47f51dbf8e4359aed883e8ec80c5db825f0d943
335 size 32
335 size 32
336 x-is-binary 0
336 x-is-binary 0
337
337
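The raw revisions above are git-lfs pointer files: one '<key> <value>' pair per
line (version, oid, size, plus the x-hg-* and x-is-binary extensions). A small
illustrative parser, not the extension's own code:

    def parse_pointer(text):
        # Split each non-empty line on the first space, as in the
        # debugdata output above.
        fields = {}
        for line in text.splitlines():
            if line:
                key, value = line.split(' ', 1)
                fields[key] = value
        return fields

    ptr = parse_pointer(
        'version https://git-lfs.github.com/spec/v1\n'
        'oid sha256:876dadc86a8542f9798048f2c47f51dbf8e4359aed883e8ec80c5db825f0d943\n'
        'size 32\n'
        'x-is-binary 0\n')
    assert ptr['size'] == '32'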
338 # Verify commit hashes include rename metadata
338 # Verify commit hashes include rename metadata
339
339
340 $ hg log -T '{rev}:{node|short} {desc}\n'
340 $ hg log -T '{rev}:{node|short} {desc}\n'
341 2:0fae949de7fa meta
341 2:0fae949de7fa meta
342 1:9cd6bdffdac0 b
342 1:9cd6bdffdac0 b
343 0:7f96794915f7 a
343 0:7f96794915f7 a
344
344
345 $ cd ..
345 $ cd ..
346
346
347 # Test bundle
347 # Test bundle
348
348
349 $ hg init repo9
349 $ hg init repo9
350 $ cd repo9
350 $ cd repo9
351 $ cat >> .hg/hgrc << EOF
351 $ cat >> .hg/hgrc << EOF
352 > [lfs]
352 > [lfs]
353 > track=size(">10B")
353 > track=size(">10B")
354 > [diff]
354 > [diff]
355 > git=1
355 > git=1
356 > EOF
356 > EOF
357
357
358 $ for i in 0 single two three 4; do
358 $ for i in 0 single two three 4; do
359 > echo 'THIS-IS-LFS-'$i > a
359 > echo 'THIS-IS-LFS-'$i > a
360 > hg commit -m a-$i -A a
360 > hg commit -m a-$i -A a
361 > done
361 > done
362
362
363 $ hg update 2 -q
363 $ hg update 2 -q
364 $ echo 'THIS-IS-LFS-2-CHILD' > a
364 $ echo 'THIS-IS-LFS-2-CHILD' > a
365 $ hg commit -m branching -q
365 $ hg commit -m branching -q
366
366
367 $ hg bundle --base 1 bundle.hg -v
367 $ hg bundle --base 1 bundle.hg -v
368 lfs: found 5ab7a3739a5feec94a562d070a14f36dba7cad17e5484a4a89eea8e5f3166888 in the local lfs store
368 lfs: found 5ab7a3739a5feec94a562d070a14f36dba7cad17e5484a4a89eea8e5f3166888 in the local lfs store
369 lfs: found a9c7d1cd6ce2b9bbdf46ed9a862845228717b921c089d0d42e3bcaed29eb612e in the local lfs store
369 lfs: found a9c7d1cd6ce2b9bbdf46ed9a862845228717b921c089d0d42e3bcaed29eb612e in the local lfs store
370 lfs: found f693890c49c409ec33673b71e53f297681f76c1166daf33b2ad7ebf8b1d3237e in the local lfs store
370 lfs: found f693890c49c409ec33673b71e53f297681f76c1166daf33b2ad7ebf8b1d3237e in the local lfs store
371 lfs: found fda198fea753eb66a252e9856915e1f5cddbe41723bd4b695ece2604ad3c9f75 in the local lfs store
371 lfs: found fda198fea753eb66a252e9856915e1f5cddbe41723bd4b695ece2604ad3c9f75 in the local lfs store
372 4 changesets found
372 4 changesets found
373 uncompressed size of bundle content:
373 uncompressed size of bundle content:
374 * (changelog) (glob)
374 * (changelog) (glob)
375 * (manifests) (glob)
375 * (manifests) (glob)
376 * a (glob)
376 * a (glob)
377 $ hg --config extensions.strip= strip -r 2 --no-backup --force -q
377 $ hg --config extensions.strip= strip -r 2 --no-backup --force -q
378 $ hg -R bundle.hg log -p -T '{rev} {desc}\n' a
378 $ hg -R bundle.hg log -p -T '{rev} {desc}\n' a
379 5 branching
379 5 branching
380 diff --git a/a b/a
380 diff --git a/a b/a
381 --- a/a
381 --- a/a
382 +++ b/a
382 +++ b/a
383 @@ -1,1 +1,1 @@
383 @@ -1,1 +1,1 @@
384 -THIS-IS-LFS-two
384 -THIS-IS-LFS-two
385 +THIS-IS-LFS-2-CHILD
385 +THIS-IS-LFS-2-CHILD
386
386
387 4 a-4
387 4 a-4
388 diff --git a/a b/a
388 diff --git a/a b/a
389 --- a/a
389 --- a/a
390 +++ b/a
390 +++ b/a
391 @@ -1,1 +1,1 @@
391 @@ -1,1 +1,1 @@
392 -THIS-IS-LFS-three
392 -THIS-IS-LFS-three
393 +THIS-IS-LFS-4
393 +THIS-IS-LFS-4
394
394
395 3 a-three
395 3 a-three
396 diff --git a/a b/a
396 diff --git a/a b/a
397 --- a/a
397 --- a/a
398 +++ b/a
398 +++ b/a
399 @@ -1,1 +1,1 @@
399 @@ -1,1 +1,1 @@
400 -THIS-IS-LFS-two
400 -THIS-IS-LFS-two
401 +THIS-IS-LFS-three
401 +THIS-IS-LFS-three
402
402
403 2 a-two
403 2 a-two
404 diff --git a/a b/a
404 diff --git a/a b/a
405 --- a/a
405 --- a/a
406 +++ b/a
406 +++ b/a
407 @@ -1,1 +1,1 @@
407 @@ -1,1 +1,1 @@
408 -THIS-IS-LFS-single
408 -THIS-IS-LFS-single
409 +THIS-IS-LFS-two
409 +THIS-IS-LFS-two
410
410
411 1 a-single
411 1 a-single
412 diff --git a/a b/a
412 diff --git a/a b/a
413 --- a/a
413 --- a/a
414 +++ b/a
414 +++ b/a
415 @@ -1,1 +1,1 @@
415 @@ -1,1 +1,1 @@
416 -THIS-IS-LFS-0
416 -THIS-IS-LFS-0
417 +THIS-IS-LFS-single
417 +THIS-IS-LFS-single
418
418
419 0 a-0
419 0 a-0
420 diff --git a/a b/a
420 diff --git a/a b/a
421 new file mode 100644
421 new file mode 100644
422 --- /dev/null
422 --- /dev/null
423 +++ b/a
423 +++ b/a
424 @@ -0,0 +1,1 @@
424 @@ -0,0 +1,1 @@
425 +THIS-IS-LFS-0
425 +THIS-IS-LFS-0
426
426
427 $ hg bundle -R bundle.hg --base 1 bundle-again.hg -q
427 $ hg bundle -R bundle.hg --base 1 bundle-again.hg -q
428 $ hg -R bundle-again.hg log -p -T '{rev} {desc}\n' a
428 $ hg -R bundle-again.hg log -p -T '{rev} {desc}\n' a
429 5 branching
429 5 branching
430 diff --git a/a b/a
430 diff --git a/a b/a
431 --- a/a
431 --- a/a
432 +++ b/a
432 +++ b/a
433 @@ -1,1 +1,1 @@
433 @@ -1,1 +1,1 @@
434 -THIS-IS-LFS-two
434 -THIS-IS-LFS-two
435 +THIS-IS-LFS-2-CHILD
435 +THIS-IS-LFS-2-CHILD
436
436
437 4 a-4
437 4 a-4
438 diff --git a/a b/a
438 diff --git a/a b/a
439 --- a/a
439 --- a/a
440 +++ b/a
440 +++ b/a
441 @@ -1,1 +1,1 @@
441 @@ -1,1 +1,1 @@
442 -THIS-IS-LFS-three
442 -THIS-IS-LFS-three
443 +THIS-IS-LFS-4
443 +THIS-IS-LFS-4
444
444
445 3 a-three
445 3 a-three
446 diff --git a/a b/a
446 diff --git a/a b/a
447 --- a/a
447 --- a/a
448 +++ b/a
448 +++ b/a
449 @@ -1,1 +1,1 @@
449 @@ -1,1 +1,1 @@
450 -THIS-IS-LFS-two
450 -THIS-IS-LFS-two
451 +THIS-IS-LFS-three
451 +THIS-IS-LFS-three
452
452
453 2 a-two
453 2 a-two
454 diff --git a/a b/a
454 diff --git a/a b/a
455 --- a/a
455 --- a/a
456 +++ b/a
456 +++ b/a
457 @@ -1,1 +1,1 @@
457 @@ -1,1 +1,1 @@
458 -THIS-IS-LFS-single
458 -THIS-IS-LFS-single
459 +THIS-IS-LFS-two
459 +THIS-IS-LFS-two
460
460
461 1 a-single
461 1 a-single
462 diff --git a/a b/a
462 diff --git a/a b/a
463 --- a/a
463 --- a/a
464 +++ b/a
464 +++ b/a
465 @@ -1,1 +1,1 @@
465 @@ -1,1 +1,1 @@
466 -THIS-IS-LFS-0
466 -THIS-IS-LFS-0
467 +THIS-IS-LFS-single
467 +THIS-IS-LFS-single
468
468
469 0 a-0
469 0 a-0
470 diff --git a/a b/a
470 diff --git a/a b/a
471 new file mode 100644
471 new file mode 100644
472 --- /dev/null
472 --- /dev/null
473 +++ b/a
473 +++ b/a
474 @@ -0,0 +1,1 @@
474 @@ -0,0 +1,1 @@
475 +THIS-IS-LFS-0
475 +THIS-IS-LFS-0
476
476
477 $ cd ..
477 $ cd ..
478
478
479 # Test isbinary
479 # Test isbinary
480
480
481 $ hg init repo10
481 $ hg init repo10
482 $ cd repo10
482 $ cd repo10
483 $ cat >> .hg/hgrc << EOF
483 $ cat >> .hg/hgrc << EOF
484 > [extensions]
484 > [extensions]
485 > lfs=
485 > lfs=
486 > [lfs]
486 > [lfs]
487 > track=all()
487 > track=all()
488 > EOF
488 > EOF
489 $ $PYTHON <<'EOF'
489 $ $PYTHON <<'EOF'
490 > def write(path, content):
490 > def write(path, content):
491 > with open(path, 'wb') as f:
491 > with open(path, 'wb') as f:
492 > f.write(content)
492 > f.write(content)
493 > write('a', b'\0\0')
493 > write('a', b'\0\0')
494 > write('b', b'\1\n')
494 > write('b', b'\1\n')
495 > write('c', b'\1\n\0')
495 > write('c', b'\1\n\0')
496 > write('d', b'xx')
496 > write('d', b'xx')
497 > EOF
497 > EOF
498 $ hg add a b c d
498 $ hg add a b c d
499 $ hg diff --stat
499 $ hg diff --stat
500 a | Bin
500 a | Bin
501 b | 1 +
501 b | 1 +
502 c | Bin
502 c | Bin
503 d | 1 +
503 d | 1 +
504 4 files changed, 2 insertions(+), 0 deletions(-)
504 4 files changed, 2 insertions(+), 0 deletions(-)
505 $ hg commit -m binarytest
505 $ hg commit -m binarytest
506 $ cat > $TESTTMP/dumpbinary.py << EOF
506 $ cat > $TESTTMP/dumpbinary.py << EOF
507 > def reposetup(ui, repo):
507 > def reposetup(ui, repo):
508 > for n in 'abcd':
508 > for n in 'abcd':
509 > ui.write(('%s: binary=%s\n') % (n, repo['.'][n].isbinary()))
509 > ui.write(('%s: binary=%s\n') % (n, repo['.'][n].isbinary()))
510 > EOF
510 > EOF
511 $ hg --config extensions.dumpbinary=$TESTTMP/dumpbinary.py id --trace
511 $ hg --config extensions.dumpbinary=$TESTTMP/dumpbinary.py id --trace
512 a: binary=True
512 a: binary=True
513 b: binary=False
513 b: binary=False
514 c: binary=True
514 c: binary=True
515 d: binary=False
515 d: binary=False
516 b55353847f02 tip
516 b55353847f02 tip
517
517
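The classification above is consistent with a simple nul-byte check: 'a' and 'c'
contain a \0 byte and report binary=True, while 'b' and 'd' do not. A sketch of
that heuristic (an assumption about the check being applied, not a copy of
Mercurial's code):

    def isbinary(data):
        # Files containing a nul byte are treated as binary.
        return b'\0' in data

    for name, data in [('a', b'\0\0'), ('b', b'\1\n'),
                       ('c', b'\1\n\0'), ('d', b'xx')]:
        print('%s: binary=%s' % (name, isbinary(data)))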
518 Binary blobs don't need to be present to be skipped in filesets. (And their
518 Binary blobs don't need to be present to be skipped in filesets. (And their
519 absence doesn't cause an abort.)
519 absence doesn't cause an abort.)
520
520
521 $ rm .hg/store/lfs/objects/96/a296d224f285c67bee93c30f8a309157f0daa35dc5b87e410b78630a09cfc7
521 $ rm .hg/store/lfs/objects/96/a296d224f285c67bee93c30f8a309157f0daa35dc5b87e410b78630a09cfc7
522 $ rm .hg/store/lfs/objects/92/f76135a4baf4faccb8586a60faf830c2bdfce147cefa188aaf4b790bd01b7e
522 $ rm .hg/store/lfs/objects/92/f76135a4baf4faccb8586a60faf830c2bdfce147cefa188aaf4b790bd01b7e
523
523
524 $ hg files --debug -r . 'set:eol("unix")' --config 'experimental.lfs.disableusercache=True'
524 $ hg files --debug -r . 'set:eol("unix")' --config 'experimental.lfs.disableusercache=True'
525 lfs: found c04b5bb1a5b2eb3e9cd4805420dba5a9d133da5b7adeeafb5474c4adae9faa80 in the local lfs store
525 lfs: found c04b5bb1a5b2eb3e9cd4805420dba5a9d133da5b7adeeafb5474c4adae9faa80 in the local lfs store
526 2 b
526 lfs: found 5dde896887f6754c9b15bfe3a441ae4806df2fde94001311e08bf110622e0bbe in the local lfs store
527 lfs: found 5dde896887f6754c9b15bfe3a441ae4806df2fde94001311e08bf110622e0bbe in the local lfs store
527 2 b
528
528
529 $ hg files --debug -r . 'set:binary()' --config 'experimental.lfs.disableusercache=True'
529 $ hg files --debug -r . 'set:binary()' --config 'experimental.lfs.disableusercache=True'
530 2 a
530 2 a
531 3 c
531 3 c
532
532
533 $ cd ..
533 $ cd ..
534
534
535 # Test fctx.cmp fastpath - diff without LFS blobs
535 # Test fctx.cmp fastpath - diff without LFS blobs
536
536
537 $ hg init repo12
537 $ hg init repo12
538 $ cd repo12
538 $ cd repo12
539 $ cat >> .hg/hgrc <<EOF
539 $ cat >> .hg/hgrc <<EOF
540 > [lfs]
540 > [lfs]
541 > threshold=1
541 > threshold=1
542 > EOF
542 > EOF
543 $ cat > ../patch.diff <<EOF
543 $ cat > ../patch.diff <<EOF
544 > # HG changeset patch
544 > # HG changeset patch
545 > 2
545 > 2
546 >
546 >
547 > diff --git a/a b/a
547 > diff --git a/a b/a
548 > old mode 100644
548 > old mode 100644
549 > new mode 100755
549 > new mode 100755
550 > EOF
550 > EOF
551
551
552 $ for i in 1 2 3; do
552 $ for i in 1 2 3; do
553 > cp ../repo10/a a
553 > cp ../repo10/a a
554 > if [ $i = 3 ]; then
554 > if [ $i = 3 ]; then
555 > # make a content-only change
555 > # make a content-only change
556 > hg import -q --bypass ../patch.diff
556 > hg import -q --bypass ../patch.diff
557 > hg update -q
557 > hg update -q
558 > rm ../patch.diff
558 > rm ../patch.diff
559 > else
559 > else
560 > echo $i >> a
560 > echo $i >> a
561 > hg commit -m $i -A a
561 > hg commit -m $i -A a
562 > fi
562 > fi
563 > done
563 > done
564 $ [ -d .hg/store/lfs/objects ]
564 $ [ -d .hg/store/lfs/objects ]
565
565
566 $ cd ..
566 $ cd ..
567
567
568 $ hg clone repo12 repo13 --noupdate
568 $ hg clone repo12 repo13 --noupdate
569 $ cd repo13
569 $ cd repo13
570 $ hg log --removed -p a -T '{desc}\n' --config diff.nobinary=1 --git
570 $ hg log --removed -p a -T '{desc}\n' --config diff.nobinary=1 --git
571 2
571 2
572 diff --git a/a b/a
572 diff --git a/a b/a
573 old mode 100644
573 old mode 100644
574 new mode 100755
574 new mode 100755
575
575
576 2
576 2
577 diff --git a/a b/a
577 diff --git a/a b/a
578 Binary file a has changed
578 Binary file a has changed
579
579
580 1
580 1
581 diff --git a/a b/a
581 diff --git a/a b/a
582 new file mode 100644
582 new file mode 100644
583 Binary file a has changed
583 Binary file a has changed
584
584
585 $ [ -d .hg/store/lfs/objects ]
585 $ [ -d .hg/store/lfs/objects ]
586 [1]
586 [1]
587
587
588 $ cd ..
588 $ cd ..
589
589
590 # Test filter
590 # Test filter
591
591
592 $ hg init repo11
592 $ hg init repo11
593 $ cd repo11
593 $ cd repo11
594 $ cat >> .hg/hgrc << EOF
594 $ cat >> .hg/hgrc << EOF
595 > [lfs]
595 > [lfs]
596 > track=(**.a & size(">5B")) | (**.b & !size(">5B"))
596 > track=(**.a & size(">5B")) | (**.b & !size(">5B"))
597 > | (**.c & "path:d" & !"path:d/c.c") | size(">10B")
597 > | (**.c & "path:d" & !"path:d/c.c") | size(">10B")
598 > EOF
598 > EOF
599
599
600 $ mkdir a
600 $ mkdir a
601 $ echo aaaaaa > a/1.a
601 $ echo aaaaaa > a/1.a
602 $ echo a > a/2.a
602 $ echo a > a/2.a
603 $ echo aaaaaa > 1.b
603 $ echo aaaaaa > 1.b
604 $ echo a > 2.b
604 $ echo a > 2.b
605 $ echo a > 1.c
605 $ echo a > 1.c
606 $ mkdir d
606 $ mkdir d
607 $ echo a > d/c.c
607 $ echo a > d/c.c
608 $ echo a > d/d.c
608 $ echo a > d/d.c
609 $ echo aaaaaaaaaaaa > x
609 $ echo aaaaaaaaaaaa > x
610 $ hg add . -q
610 $ hg add . -q
611 $ hg commit -m files
611 $ hg commit -m files
612
612
613 $ for p in a/1.a a/2.a 1.b 2.b 1.c d/c.c d/d.c x; do
613 $ for p in a/1.a a/2.a 1.b 2.b 1.c d/c.c d/d.c x; do
614 > if hg debugdata $p 0 2>&1 | grep git-lfs >/dev/null; then
614 > if hg debugdata $p 0 2>&1 | grep git-lfs >/dev/null; then
615 > echo "${p}: is lfs"
615 > echo "${p}: is lfs"
616 > else
616 > else
617 > echo "${p}: not lfs"
617 > echo "${p}: not lfs"
618 > fi
618 > fi
619 > done
619 > done
620 a/1.a: is lfs
620 a/1.a: is lfs
621 a/2.a: not lfs
621 a/2.a: not lfs
622 1.b: not lfs
622 1.b: not lfs
623 2.b: is lfs
623 2.b: is lfs
624 1.c: not lfs
624 1.c: not lfs
625 d/c.c: not lfs
625 d/c.c: not lfs
626 d/d.c: is lfs
626 d/d.c: is lfs
627 x: is lfs
627 x: is lfs
628
628
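The results above can be sanity-checked by evaluating the track expression by
hand. A rough Python emulation, for illustration only (it uses fnmatch rather
than Mercurial's fileset engine, and the sizes include the newline that echo
appends):

    import fnmatch

    def tracked(path, size):
        return ((fnmatch.fnmatch(path, '*.a') and size > 5)
                or (fnmatch.fnmatch(path, '*.b') and not size > 5)
                or (fnmatch.fnmatch(path, '*.c') and path.startswith('d/')
                    and path != 'd/c.c')
                or size > 10)

    sizes = {'a/1.a': 7, 'a/2.a': 2, '1.b': 7, '2.b': 2, '1.c': 2,
             'd/c.c': 2, 'd/d.c': 2, 'x': 13}
    for p in ['a/1.a', 'a/2.a', '1.b', '2.b', '1.c', 'd/c.c', 'd/d.c', 'x']:
        print('%s: %s' % (p, 'is lfs' if tracked(p, sizes[p]) else 'not lfs'))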
629 $ cd ..
629 $ cd ..
630
630
631 # Verify the repos
631 # Verify the repos
632
632
633 $ cat > $TESTTMP/dumpflog.py << EOF
633 $ cat > $TESTTMP/dumpflog.py << EOF
634 > # print raw revision sizes, flags, and hashes for certain files
634 > # print raw revision sizes, flags, and hashes for certain files
635 > import hashlib
635 > import hashlib
636 > from mercurial.node import short
636 > from mercurial.node import short
637 > from mercurial import revlog
637 > from mercurial import revlog
638 > def hash(rawtext):
638 > def hash(rawtext):
639 > h = hashlib.sha512()
639 > h = hashlib.sha512()
640 > h.update(rawtext)
640 > h.update(rawtext)
641 > return h.hexdigest()[:4]
641 > return h.hexdigest()[:4]
642 > def reposetup(ui, repo):
642 > def reposetup(ui, repo):
643 > # these 2 files are interesting
643 > # these 2 files are interesting
644 > for name in ['l', 's']:
644 > for name in ['l', 's']:
645 > fl = repo.file(name)
645 > fl = repo.file(name)
646 > if len(fl) == 0:
646 > if len(fl) == 0:
647 > continue
647 > continue
648 > sizes = [fl.rawsize(i) for i in fl]
648 > sizes = [fl.rawsize(i) for i in fl]
649 > texts = [fl.revision(i, raw=True) for i in fl]
649 > texts = [fl.revision(i, raw=True) for i in fl]
650 > flags = [int(fl.flags(i)) for i in fl]
650 > flags = [int(fl.flags(i)) for i in fl]
651 > hashes = [hash(t) for t in texts]
651 > hashes = [hash(t) for t in texts]
652 > print(' %s: rawsizes=%r flags=%r hashes=%r'
652 > print(' %s: rawsizes=%r flags=%r hashes=%r'
653 > % (name, sizes, flags, hashes))
653 > % (name, sizes, flags, hashes))
654 > EOF
654 > EOF
655
655
656 $ for i in client client2 server repo3 repo4 repo5 repo6 repo7 repo8 repo9 \
656 $ for i in client client2 server repo3 repo4 repo5 repo6 repo7 repo8 repo9 \
657 > repo10; do
657 > repo10; do
658 > echo 'repo:' $i
658 > echo 'repo:' $i
659 > hg --cwd $i verify --config extensions.dumpflog=$TESTTMP/dumpflog.py -q
659 > hg --cwd $i verify --config extensions.dumpflog=$TESTTMP/dumpflog.py -q
660 > done
660 > done
661 repo: client
661 repo: client
662 repo: client2
662 repo: client2
663 repo: server
663 repo: server
664 repo: repo3
664 repo: repo3
665 l: rawsizes=[211, 6, 8, 141] flags=[8192, 0, 0, 8192] hashes=['d2b8', '948c', 'cc88', '724d']
665 l: rawsizes=[211, 6, 8, 141] flags=[8192, 0, 0, 8192] hashes=['d2b8', '948c', 'cc88', '724d']
666 s: rawsizes=[74, 141, 141, 8] flags=[0, 8192, 8192, 0] hashes=['3c80', 'fce0', '874a', '826b']
666 s: rawsizes=[74, 141, 141, 8] flags=[0, 8192, 8192, 0] hashes=['3c80', 'fce0', '874a', '826b']
667 repo: repo4
667 repo: repo4
668 l: rawsizes=[211, 6, 8, 141] flags=[8192, 0, 0, 8192] hashes=['d2b8', '948c', 'cc88', '724d']
668 l: rawsizes=[211, 6, 8, 141] flags=[8192, 0, 0, 8192] hashes=['d2b8', '948c', 'cc88', '724d']
669 s: rawsizes=[74, 141, 141, 8] flags=[0, 8192, 8192, 0] hashes=['3c80', 'fce0', '874a', '826b']
669 s: rawsizes=[74, 141, 141, 8] flags=[0, 8192, 8192, 0] hashes=['3c80', 'fce0', '874a', '826b']
670 repo: repo5
670 repo: repo5
671 l: rawsizes=[211, 6, 8, 141] flags=[8192, 0, 0, 8192] hashes=['d2b8', '948c', 'cc88', '724d']
671 l: rawsizes=[211, 6, 8, 141] flags=[8192, 0, 0, 8192] hashes=['d2b8', '948c', 'cc88', '724d']
672 s: rawsizes=[74, 141, 141, 8] flags=[0, 8192, 8192, 0] hashes=['3c80', 'fce0', '874a', '826b']
672 s: rawsizes=[74, 141, 141, 8] flags=[0, 8192, 8192, 0] hashes=['3c80', 'fce0', '874a', '826b']
673 repo: repo6
673 repo: repo6
674 repo: repo7
674 repo: repo7
675 repo: repo8
675 repo: repo8
676 repo: repo9
676 repo: repo9
677 repo: repo10
677 repo: repo10
678
678
679 repo13 doesn't have any cached lfs files and its source never pushed its
679 repo13 doesn't have any cached lfs files and its source never pushed its
680 files. Therefore, the files don't exist in the remote store. The files in
680 files. Therefore, the files don't exist in the remote store. The files in
681 the user cache are used instead.
681 the user cache are used instead.
682
682
683 $ test -d $TESTTMP/repo13/.hg/store/lfs/objects
683 $ test -d $TESTTMP/repo13/.hg/store/lfs/objects
684 [1]
684 [1]
685
685
686 $ hg --config extensions.share= share repo13 repo14
686 $ hg --config extensions.share= share repo13 repo14
687 updating working directory
687 updating working directory
688 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
688 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
689 $ hg -R repo14 -q verify
689 $ hg -R repo14 -q verify
690
690
691 $ hg clone repo13 repo15
691 $ hg clone repo13 repo15
692 updating to branch default
692 updating to branch default
693 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
693 1 files updated, 0 files merged, 0 files removed, 0 files unresolved
694 $ hg -R repo15 -q verify
694 $ hg -R repo15 -q verify
695
695
696 If the source repo doesn't have the blob (maybe it was pulled or cloned with
696 If the source repo doesn't have the blob (maybe it was pulled or cloned with
697 --noupdate), the blob is still accessible via the global cache to send to the
697 --noupdate), the blob is still accessible via the global cache to send to the
698 remote store.
698 remote store.
699
699
700 $ rm -rf $TESTTMP/repo15/.hg/store/lfs
700 $ rm -rf $TESTTMP/repo15/.hg/store/lfs
701 $ hg init repo16
701 $ hg init repo16
702 $ hg -R repo15 push repo16
702 $ hg -R repo15 push repo16
703 pushing to repo16
703 pushing to repo16
704 searching for changes
704 searching for changes
705 adding changesets
705 adding changesets
706 adding manifests
706 adding manifests
707 adding file changes
707 adding file changes
708 added 3 changesets with 2 changes to 1 files
708 added 3 changesets with 2 changes to 1 files
709 $ hg -R repo15 -q verify
709 $ hg -R repo15 -q verify
710
710
711 Test damaged file scenarios. (This also damages the usercache because of the
711 Test damaged file scenarios. (This also damages the usercache because of the
712 hardlinks.)
712 hardlinks.)
713
713
714 $ echo 'damage' >> repo5/.hg/store/lfs/objects/66/100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
714 $ echo 'damage' >> repo5/.hg/store/lfs/objects/66/100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
715
715
716 A repo with damaged lfs objects in any revision will fail verification.
716 A repo with damaged lfs objects in any revision will fail verification.
717
717
718 $ hg -R repo5 verify
718 $ hg -R repo5 verify
719 checking changesets
719 checking changesets
720 checking manifests
720 checking manifests
721 crosschecking files in changesets and manifests
721 crosschecking files in changesets and manifests
722 checking files
722 checking files
723 l@1: unpacking 46a2f24864bc: integrity check failed on data/l.i:0
723 l@1: unpacking 46a2f24864bc: integrity check failed on data/l.i:0
724 large@0: unpacking 2c531e0992ff: integrity check failed on data/large.i:0
724 large@0: unpacking 2c531e0992ff: integrity check failed on data/large.i:0
725 4 files, 5 changesets, 10 total revisions
725 4 files, 5 changesets, 10 total revisions
726 2 integrity errors encountered!
726 2 integrity errors encountered!
727 (first damaged changeset appears to be 0)
727 (first damaged changeset appears to be 0)
728 [1]
728 [1]
729
729
730 Updates work after cloning a damaged repo, if the damaged lfs objects aren't in
730 Updates work after cloning a damaged repo, if the damaged lfs objects aren't in
731 the update destination. Those objects won't be added to the new repo's store
731 the update destination. Those objects won't be added to the new repo's store
732 because they aren't accessed.
732 because they aren't accessed.
733
733
734 $ hg clone -v repo5 fromcorrupt
734 $ hg clone -v repo5 fromcorrupt
735 updating to branch default
735 updating to branch default
736 resolving manifests
736 resolving manifests
737 getting l
737 getting l
738 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the usercache
738 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the usercache
739 getting s
739 getting s
740 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
740 2 files updated, 0 files merged, 0 files removed, 0 files unresolved
741 $ test -f fromcorrupt/.hg/store/lfs/objects/66/100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
741 $ test -f fromcorrupt/.hg/store/lfs/objects/66/100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
742 [1]
742 [1]
743
743
744 Verify will copy/link into the local store all lfs objects that aren't already
744 Verify will copy/link into the local store all lfs objects that aren't already
745 present. Bypass the corrupted usercache to show that verify works when fed by
745 present. Bypass the corrupted usercache to show that verify works when fed by
746 the (uncorrupted) remote store.
746 the (uncorrupted) remote store.
747
747
748 $ hg -R fromcorrupt --config lfs.usercache=emptycache verify -v
748 $ hg -R fromcorrupt --config lfs.usercache=emptycache verify -v
749 repository uses revlog format 1
749 repository uses revlog format 1
750 checking changesets
750 checking changesets
751 checking manifests
751 checking manifests
752 crosschecking files in changesets and manifests
752 crosschecking files in changesets and manifests
753 checking files
753 checking files
754 lfs: adding 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e to the usercache
754 lfs: adding 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e to the usercache
755 lfs: found 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e in the local lfs store
755 lfs: found 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e in the local lfs store
756 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
756 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
757 lfs: found 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e in the local lfs store
757 lfs: found 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e in the local lfs store
758 lfs: adding 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 to the usercache
758 lfs: adding 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 to the usercache
759 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
759 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
760 lfs: adding b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c to the usercache
760 lfs: adding b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c to the usercache
761 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
761 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
762 4 files, 5 changesets, 10 total revisions
762 4 files, 5 changesets, 10 total revisions
763
763
764 Verify will not copy/link a corrupted file from the usercache into the local
764 Verify will not copy/link a corrupted file from the usercache into the local
765 store, thereby poisoning it. (The verify with a good remote now works.)
765 store, thereby poisoning it. (The verify with a good remote now works.)
766
766
767 $ rm -r fromcorrupt/.hg/store/lfs/objects/66/100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
767 $ rm -r fromcorrupt/.hg/store/lfs/objects/66/100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
768 $ hg -R fromcorrupt verify -v
768 $ hg -R fromcorrupt verify -v
769 repository uses revlog format 1
769 repository uses revlog format 1
770 checking changesets
770 checking changesets
771 checking manifests
771 checking manifests
772 crosschecking files in changesets and manifests
772 crosschecking files in changesets and manifests
773 checking files
773 checking files
774 l@1: unpacking 46a2f24864bc: integrity check failed on data/l.i:0
774 l@1: unpacking 46a2f24864bc: integrity check failed on data/l.i:0
775 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
775 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
776 large@0: unpacking 2c531e0992ff: integrity check failed on data/large.i:0
776 large@0: unpacking 2c531e0992ff: integrity check failed on data/large.i:0
777 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
777 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
778 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
778 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
779 4 files, 5 changesets, 10 total revisions
779 4 files, 5 changesets, 10 total revisions
780 2 integrity errors encountered!
780 2 integrity errors encountered!
781 (first damaged changeset appears to be 0)
781 (first damaged changeset appears to be 0)
782 [1]
782 [1]
783 $ hg -R fromcorrupt --config lfs.usercache=emptycache verify -v
783 $ hg -R fromcorrupt --config lfs.usercache=emptycache verify -v
784 repository uses revlog format 1
784 repository uses revlog format 1
785 checking changesets
785 checking changesets
786 checking manifests
786 checking manifests
787 crosschecking files in changesets and manifests
787 crosschecking files in changesets and manifests
788 checking files
788 checking files
789 lfs: found 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e in the usercache
789 lfs: found 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e in the usercache
790 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
790 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
791 lfs: found 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e in the local lfs store
791 lfs: found 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e in the local lfs store
792 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
792 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
793 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
793 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
794 4 files, 5 changesets, 10 total revisions
794 4 files, 5 changesets, 10 total revisions
795
795
796 Damaging a file required by the update destination fails the update.
796 Damaging a file required by the update destination fails the update.
797
797
798 $ echo 'damage' >> $TESTTMP/dummy-remote/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
798 $ echo 'damage' >> $TESTTMP/dummy-remote/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
799 $ hg --config lfs.usercache=emptycache clone -v repo5 fromcorrupt2
799 $ hg --config lfs.usercache=emptycache clone -v repo5 fromcorrupt2
800 updating to branch default
800 updating to branch default
801 resolving manifests
801 resolving manifests
802 abort: corrupt remote lfs object: 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
802 abort: corrupt remote lfs object: 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
803 [255]
803 [255]
804
804
805 A corrupted lfs blob is not transferred from a file://remotestore to the
805 A corrupted lfs blob is not transferred from a file://remotestore to the
806 usercache or local store.
806 usercache or local store.
807
807
808 $ test -f emptycache/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
808 $ test -f emptycache/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
809 [1]
809 [1]
810 $ test -f fromcorrupt2/.hg/store/lfs/objects/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
810 $ test -f fromcorrupt2/.hg/store/lfs/objects/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
811 [1]
811 [1]
812
812
813 $ hg -R fromcorrupt2 verify
813 $ hg -R fromcorrupt2 verify
814 checking changesets
814 checking changesets
815 checking manifests
815 checking manifests
816 crosschecking files in changesets and manifests
816 crosschecking files in changesets and manifests
817 checking files
817 checking files
818 l@1: unpacking 46a2f24864bc: integrity check failed on data/l.i:0
818 l@1: unpacking 46a2f24864bc: integrity check failed on data/l.i:0
819 large@0: unpacking 2c531e0992ff: integrity check failed on data/large.i:0
819 large@0: unpacking 2c531e0992ff: integrity check failed on data/large.i:0
820 4 files, 5 changesets, 10 total revisions
820 4 files, 5 changesets, 10 total revisions
821 2 integrity errors encountered!
821 2 integrity errors encountered!
822 (first damaged changeset appears to be 0)
822 (first damaged changeset appears to be 0)
823 [1]
823 [1]
824
824
825 Corrupt local files are not sent upstream. (The alternate dummy remote
825 Corrupt local files are not sent upstream. (The alternate dummy remote
826 avoids the corrupt lfs object in the original remote.)
826 avoids the corrupt lfs object in the original remote.)
827
827
828 $ mkdir $TESTTMP/dummy-remote2
828 $ mkdir $TESTTMP/dummy-remote2
829 $ hg init dest
829 $ hg init dest
830 $ hg -R fromcorrupt2 --config lfs.url=file:///$TESTTMP/dummy-remote2 push -v dest
830 $ hg -R fromcorrupt2 --config lfs.url=file:///$TESTTMP/dummy-remote2 push -v dest
831 pushing to dest
831 pushing to dest
832 searching for changes
832 searching for changes
833 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
833 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
834 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
834 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
835 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
835 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
836 abort: detected corrupt lfs object: 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
836 abort: detected corrupt lfs object: 66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
837 (run hg verify)
837 (run hg verify)
838 [255]
838 [255]
839
839
840 $ hg -R fromcorrupt2 --config lfs.url=file:///$TESTTMP/dummy-remote2 verify -v
840 $ hg -R fromcorrupt2 --config lfs.url=file:///$TESTTMP/dummy-remote2 verify -v
841 repository uses revlog format 1
841 repository uses revlog format 1
842 checking changesets
842 checking changesets
843 checking manifests
843 checking manifests
844 crosschecking files in changesets and manifests
844 crosschecking files in changesets and manifests
845 checking files
845 checking files
846 l@1: unpacking 46a2f24864bc: integrity check failed on data/l.i:0
846 l@1: unpacking 46a2f24864bc: integrity check failed on data/l.i:0
847 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
847 lfs: found 22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b in the local lfs store
848 large@0: unpacking 2c531e0992ff: integrity check failed on data/large.i:0
848 large@0: unpacking 2c531e0992ff: integrity check failed on data/large.i:0
849 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
849 lfs: found 89b6070915a3d573ff3599d1cda305bc5e38549b15c4847ab034169da66e1ca8 in the local lfs store
850 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
850 lfs: found b1a6ea88da0017a0e77db139a54618986e9a2489bee24af9fe596de9daac498c in the local lfs store
851 4 files, 5 changesets, 10 total revisions
851 4 files, 5 changesets, 10 total revisions
852 2 integrity errors encountered!
852 2 integrity errors encountered!
853 (first damaged changeset appears to be 0)
853 (first damaged changeset appears to be 0)
854 [1]
854 [1]
855
855
856 $ cat $TESTTMP/dummy-remote2/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b | $TESTDIR/f --sha256
856 $ cat $TESTTMP/dummy-remote2/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b | $TESTDIR/f --sha256
857 sha256=22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
857 sha256=22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
858 $ cat fromcorrupt2/.hg/store/lfs/objects/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b | $TESTDIR/f --sha256
858 $ cat fromcorrupt2/.hg/store/lfs/objects/22/f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b | $TESTDIR/f --sha256
859 sha256=22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
859 sha256=22f66a3fc0b9bf3f012c814303995ec07099b3a9ce02a7af84b5970811074a3b
860 $ test -f $TESTTMP/dummy-remote2/66/100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
860 $ test -f $TESTTMP/dummy-remote2/66/100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
861 [1]
861 [1]
862
862
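As the two digests above show, an intact blob's file name is simply its sha256
digest, which is what makes the 'detected corrupt lfs object' abort earlier in
this test possible. A sketch of that kind of check, under the assumption that
recomputing the digest is all the store needs to do (function name invented):

    import hashlib
    import os

    def blob_is_intact(objects_dir, oid):
        # Recompute the digest of the stored blob and compare it to the
        # oid encoded in its path (e.g. objects/66/100b38...).
        path = os.path.join(objects_dir, oid[:2], oid[2:])
        with open(path, 'rb') as f:
            return hashlib.sha256(f.read()).hexdigest() == oid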
863 Accessing a corrupt file will complain
863 Accessing a corrupt file will complain
864
864
865 $ hg --cwd fromcorrupt2 cat -r 0 large
865 $ hg --cwd fromcorrupt2 cat -r 0 large
866 abort: integrity check failed on data/large.i:0!
866 abort: integrity check failed on data/large.i:0!
867 [255]
867 [255]
868
868
869 lfs -> normal -> lfs round trip conversions are possible. The 'none()'
869 lfs -> normal -> lfs round trip conversions are possible. The 'none()'
870 predicate on the command line will override whatever is configured globally and
870 predicate on the command line will override whatever is configured globally and
871 locally, and ensures everything converts to a regular file. For lfs -> normal,
871 locally, and ensures everything converts to a regular file. For lfs -> normal,
872 there's no 'lfs' destination repo requirement. For normal -> lfs, there is.
872 there's no 'lfs' destination repo requirement. For normal -> lfs, there is.
873
873
874 $ hg --config extensions.convert= --config 'lfs.track=none()' \
874 $ hg --config extensions.convert= --config 'lfs.track=none()' \
875 > convert repo8 convert_normal
875 > convert repo8 convert_normal
876 initializing destination convert_normal repository
876 initializing destination convert_normal repository
877 scanning source...
877 scanning source...
878 sorting...
878 sorting...
879 converting...
879 converting...
880 2 a
880 2 a
881 1 b
881 1 b
882 0 meta
882 0 meta
883 $ grep 'lfs' convert_normal/.hg/requires
883 $ grep 'lfs' convert_normal/.hg/requires
884 [1]
884 [1]
885 $ hg --cwd convert_normal cat a1 -r 0 -T '{rawdata}'
885 $ hg --cwd convert_normal cat a1 -r 0 -T '{rawdata}'
886 THIS-IS-LFS-BECAUSE-10-BYTES
886 THIS-IS-LFS-BECAUSE-10-BYTES
887
887
888 $ hg --config extensions.convert= --config lfs.threshold=10B \
888 $ hg --config extensions.convert= --config lfs.threshold=10B \
889 > convert convert_normal convert_lfs
889 > convert convert_normal convert_lfs
890 initializing destination convert_lfs repository
890 initializing destination convert_lfs repository
891 scanning source...
891 scanning source...
892 sorting...
892 sorting...
893 converting...
893 converting...
894 2 a
894 2 a
895 1 b
895 1 b
896 0 meta
896 0 meta
897
897
898 $ hg --cwd convert_lfs cat -r 0 a1 -T '{rawdata}'
898 $ hg --cwd convert_lfs cat -r 0 a1 -T '{rawdata}'
899 version https://git-lfs.github.com/spec/v1
899 version https://git-lfs.github.com/spec/v1
900 oid sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
900 oid sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
901 size 29
901 size 29
902 x-is-binary 0
902 x-is-binary 0
903 $ hg --cwd convert_lfs debugdata a1 0
903 $ hg --cwd convert_lfs debugdata a1 0
904 version https://git-lfs.github.com/spec/v1
904 version https://git-lfs.github.com/spec/v1
905 oid sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
905 oid sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
906 size 29
906 size 29
907 x-is-binary 0
907 x-is-binary 0
908 $ hg --cwd convert_lfs log -r 0 -T "{lfs_files % '{lfspointer % '{key}={value}\n'}'}"
908 $ hg --cwd convert_lfs log -r 0 -T "{lfs_files % '{lfspointer % '{key}={value}\n'}'}"
909 version=https://git-lfs.github.com/spec/v1
909 version=https://git-lfs.github.com/spec/v1
910 oid=sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
910 oid=sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
911 size=29
911 size=29
912 x-is-binary=0
912 x-is-binary=0
913 $ hg --cwd convert_lfs log -r 0 \
913 $ hg --cwd convert_lfs log -r 0 \
914 > -T '{lfs_files % "{get(lfspointer, "oid")}\n"}{lfs_files % "{lfspointer.oid}\n"}'
914 > -T '{lfs_files % "{get(lfspointer, "oid")}\n"}{lfs_files % "{lfspointer.oid}\n"}'
915 sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
915 sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
916 sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
916 sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
917 $ hg --cwd convert_lfs log -r 0 -T '{lfs_files % "{lfspointer}\n"}'
917 $ hg --cwd convert_lfs log -r 0 -T '{lfs_files % "{lfspointer}\n"}'
918 version=https://git-lfs.github.com/spec/v1 oid=sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024 size=29 x-is-binary=0
918 version=https://git-lfs.github.com/spec/v1 oid=sha256:5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024 size=29 x-is-binary=0
919 $ hg --cwd convert_lfs \
919 $ hg --cwd convert_lfs \
920 > log -r 'all()' -T '{rev}: {lfs_files % "{file}: {lfsoid}\n"}'
920 > log -r 'all()' -T '{rev}: {lfs_files % "{file}: {lfsoid}\n"}'
921 0: a1: 5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
921 0: a1: 5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
922 1: a2: 5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
922 1: a2: 5bb8341bee63b3649f222b2215bde37322bea075a30575aa685d8f8d21c77024
923 2: a2: 876dadc86a8542f9798048f2c47f51dbf8e4359aed883e8ec80c5db825f0d943
923 2: a2: 876dadc86a8542f9798048f2c47f51dbf8e4359aed883e8ec80c5db825f0d943
924
924
925 $ grep 'lfs' convert_lfs/.hg/requires
925 $ grep 'lfs' convert_lfs/.hg/requires
926 lfs
926 lfs
927
927
928 The hashes in all stages of the conversion are unchanged.
928 The hashes in all stages of the conversion are unchanged.
929
929
930 $ hg -R repo8 log -T '{node|short}\n'
930 $ hg -R repo8 log -T '{node|short}\n'
931 0fae949de7fa
931 0fae949de7fa
932 9cd6bdffdac0
932 9cd6bdffdac0
933 7f96794915f7
933 7f96794915f7
934 $ hg -R convert_normal log -T '{node|short}\n'
934 $ hg -R convert_normal log -T '{node|short}\n'
935 0fae949de7fa
935 0fae949de7fa
936 9cd6bdffdac0
936 9cd6bdffdac0
937 7f96794915f7
937 7f96794915f7
938 $ hg -R convert_lfs log -T '{node|short}\n'
938 $ hg -R convert_lfs log -T '{node|short}\n'
939 0fae949de7fa
939 0fae949de7fa
940 9cd6bdffdac0
940 9cd6bdffdac0
941 7f96794915f7
941 7f96794915f7
942
942
943 This conversion is trickier because it contains deleted files (via `hg mv`).
943 This conversion is trickier because it contains deleted files (via `hg mv`).
944
944
945 $ hg --config extensions.convert= --config lfs.threshold=1000M \
945 $ hg --config extensions.convert= --config lfs.threshold=1000M \
946 > convert repo3 convert_normal2
946 > convert repo3 convert_normal2
947 initializing destination convert_normal2 repository
947 initializing destination convert_normal2 repository
948 scanning source...
948 scanning source...
949 sorting...
949 sorting...
950 converting...
950 converting...
951 4 commit with lfs content
951 4 commit with lfs content
952 3 renames
952 3 renames
953 2 large to small, small to large
953 2 large to small, small to large
954 1 random modifications
954 1 random modifications
955 0 switch large and small again
955 0 switch large and small again
956 $ grep 'lfs' convert_normal2/.hg/requires
956 $ grep 'lfs' convert_normal2/.hg/requires
957 [1]
957 [1]
958 $ hg --cwd convert_normal2 debugdata large 0
958 $ hg --cwd convert_normal2 debugdata large 0
959 LONGER-THAN-TEN-BYTES-WILL-TRIGGER-LFS
959 LONGER-THAN-TEN-BYTES-WILL-TRIGGER-LFS
960
960
  $ hg --config extensions.convert= --config lfs.threshold=10B \
  > convert convert_normal2 convert_lfs2
  initializing destination convert_lfs2 repository
  scanning source...
  sorting...
  converting...
  4 commit with lfs content
  3 renames
  2 large to small, small to large
  1 random modifications
  0 switch large and small again
  $ grep 'lfs' convert_lfs2/.hg/requires
  lfs
  $ hg --cwd convert_lfs2 debugdata large 0
  version https://git-lfs.github.com/spec/v1
  oid sha256:66100b384bf761271b407d79fc30cdd0554f3b2c5d944836e936d584b88ce88e
  size 39
  x-is-binary 0

  $ hg -R convert_lfs2 config --debug extensions | grep lfs
  $TESTTMP/convert_lfs2/.hg/hgrc:*: extensions.lfs= (glob)

Committing deleted files works:

  $ hg init $TESTTMP/repo-del
  $ cd $TESTTMP/repo-del
  $ echo 1 > A
  $ hg commit -m 'add A' -A A
  $ hg rm A
  $ hg commit -m 'rm A'

Bad .hglfs files will block the commit with a useful message

  $ cat > .hglfs << EOF
  > [track]
  > **.test = size(">5B")
  > bad file ... no commit
  > EOF

  $ echo x > file.txt
  $ hg ci -Aqm 'should fail'
  hg: parse error at .hglfs:3: bad file ... no commit
  [255]

  $ cat > .hglfs << EOF
  > [track]
  > **.test = size(">5B")
  > ** = nonexistent()
  > EOF

  $ hg ci -Aqm 'should fail'
  abort: parse error in .hglfs: unknown identifier: nonexistent
  [255]

'**' works out to mean all files.

  $ cat > .hglfs << EOF
  > [track]
  > path:.hglfs = none()
  > **.test = size(">5B")
  > **.exclude = none()
  > ** = size(">10B")
  > EOF

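The ordering of these rules matters: a file name is checked against the patterns from
top to bottom and the first match decides which predicate applies, otherwise
'**.exclude' could never override the '**' catch-all. A toy Python model of that
selection follows; it is purely illustrative and not the extension's matcher (real
patterns are Mercurial filesets and globs, and size() understands unit suffixes), with
assumed first-match-wins semantics and made-up file names in the asserts:

    # Toy model (not the lfs extension's implementation) of first-match rule
    # selection from the [track] section above.
    rules = [
        ('.hglfs',     lambda size: False),      # path:.hglfs = none()
        ('**.test',    lambda size: size > 5),   # **.test = size(">5B")
        ('**.exclude', lambda size: False),      # **.exclude = none()
        ('**',         lambda size: size > 10),  # ** = size(">10B")
    ]

    def tracked_by_lfs(path, size):
        for pattern, predicate in rules:
            if (pattern == '**' or path == pattern or
                    (pattern.startswith('**.') and path.endswith(pattern[2:]))):
                return predicate(size)   # first matching pattern wins
        return False

    # Hypothetical file names, only to exercise the toy matcher:
    assert tracked_by_lfs('big.test', 6)        # first match: **.test, 6 > 5
    assert not tracked_by_lfs('x.exclude', 99)  # **.exclude wins over '**'
    assert not tracked_by_lfs('.hglfs', 999)    # path:.hglfs = none()
    assert tracked_by_lfs('anything.txt', 11)   # catch-all: 11 > 10
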
The LFS policy takes effect without tracking the .hglfs file

  $ echo 'largefile' > lfs.test
  $ echo '012345678901234567890' > nolfs.exclude
  $ echo '01234567890123456' > lfs.catchall
  $ hg add *
  $ hg ci -qm 'before add .hglfs'
  $ hg log -r . -T '{rev}: {lfs_files % "{file}: {lfsoid}\n"}\n'
  2: lfs.catchall: d4ec46c2869ba22eceb42a729377432052d9dd75d82fc40390ebaadecee87ee9
  lfs.test: 5489e6ced8c36a7b267292bde9fd5242a5f80a7482e8f23fa0477393dfaa4d6c

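The two captured files are exactly the ones whose sizes exceed their matching rule's
threshold: echo appends a trailing newline, so lfs.test is 10 bytes (> 5B via **.test)
and lfs.catchall is 18 bytes (> 10B via the catch-all), while nolfs.exclude is skipped
by its rule regardless of its 22 bytes. The arithmetic, spelled out for illustration
(not part of the test):

    # Byte counts of the files written above; echo adds a trailing newline.
    assert len('largefile\n') == 10               # lfs.test      -> > 5B,  LFS
    assert len('01234567890123456\n') == 18       # lfs.catchall  -> > 10B, LFS
    assert len('012345678901234567890\n') == 22   # nolfs.exclude -> excluded
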
The .hglfs file works when tracked

  $ echo 'largefile2' > lfs.test
  $ echo '012345678901234567890a' > nolfs.exclude
  $ echo '01234567890123456a' > lfs.catchall
  $ hg ci -Aqm 'after adding .hglfs'
  $ hg log -r . -T '{rev}: {lfs_files % "{file}: {lfsoid}\n"}\n'
  3: lfs.catchall: 31f43b9c62b540126b0ad5884dc013d21a61c9329b77de1fceeae2fc58511573
  lfs.test: 8acd23467967bc7b8cc5a280056589b0ba0b17ff21dbd88a7b6474d6290378a6

The LFS policy stops when the .hglfs is gone

  $ mv .hglfs .hglfs_
  $ echo 'largefile3' > lfs.test
  $ echo '012345678901234567890abc' > nolfs.exclude
  $ echo '01234567890123456abc' > lfs.catchall
  $ hg ci -qm 'file test' -X .hglfs
  $ hg log -r . -T '{rev}: {lfs_files % "{file}: {lfsoid}\n"}\n'
  4:

  $ mv .hglfs_ .hglfs
  $ echo '012345678901234567890abc' > lfs.test
  $ hg ci -m 'back to lfs'
  $ hg rm lfs.test
  $ hg ci -qm 'remove lfs'

{lfs_files} will list deleted files too

  $ hg log -T "{lfs_files % '{rev} {file}: {lfspointer.oid}\n'}"
  6 lfs.test:
  5 lfs.test: sha256:43f8f41171b6f62a6b61ba4ce98a8a6c1649240a47ebafd43120aa215ac9e7f6
  3 lfs.catchall: sha256:31f43b9c62b540126b0ad5884dc013d21a61c9329b77de1fceeae2fc58511573
  3 lfs.test: sha256:8acd23467967bc7b8cc5a280056589b0ba0b17ff21dbd88a7b6474d6290378a6
  2 lfs.catchall: sha256:d4ec46c2869ba22eceb42a729377432052d9dd75d82fc40390ebaadecee87ee9
  2 lfs.test: sha256:5489e6ced8c36a7b267292bde9fd5242a5f80a7482e8f23fa0477393dfaa4d6c

  $ hg log -r 'file("set:lfs()")' -T '{rev} {join(lfs_files, ", ")}\n'
  2 lfs.catchall, lfs.test
  3 lfs.catchall, lfs.test
  5 lfs.test
  6 lfs.test

  $ cd ..

Unbundling adds a requirement to a non-lfs repo, if necessary.

  $ hg bundle -R $TESTTMP/repo-del -qr 0 --base null nolfs.hg
  $ hg bundle -R convert_lfs2 -qr tip --base null lfs.hg
  $ hg init unbundle
  $ hg pull -R unbundle -q nolfs.hg
  $ grep lfs unbundle/.hg/requires
  [1]
  $ hg pull -R unbundle -q lfs.hg
  $ grep lfs unbundle/.hg/requires
  lfs

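What the grep calls above really check is the repository's requirements file:
.hg/requires is a plain list with one requirement per line, and pulling the lfs bundle
added an 'lfs' entry to it. A minimal Python sketch of the same check (illustrative
only; the function name is made up):

    import os

    def has_lfs_requirement(repo_root):
        """Report whether the repo at repo_root lists 'lfs' in .hg/requires."""
        requires = os.path.join(repo_root, '.hg', 'requires')
        with open(requires) as f:
            return 'lfs' in {line.strip() for line in f}

    # After the pulls above, this would be False for a repo that only received
    # nolfs.hg and True once lfs.hg has been pulled into it.
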
  $ hg init no_lfs
  $ cat >> no_lfs/.hg/hgrc <<EOF
  > [experimental]
  > changegroup3 = True
  > [extensions]
  > lfs=!
  > EOF
  $ cp -R no_lfs no_lfs2

Pushing from a local lfs repo to a local repo without an lfs requirement and
with lfs disabled fails.

  $ hg push -R convert_lfs2 no_lfs
  pushing to no_lfs
  abort: required features are not supported in the destination: lfs
  [255]
  $ grep lfs no_lfs/.hg/requires
  [1]

Pulling from a local lfs repo to a local repo without an lfs requirement and
with lfs disabled fails.

  $ hg pull -R no_lfs2 convert_lfs2
  pulling from convert_lfs2
  abort: required features are not supported in the destination: lfs
  [255]
  $ grep lfs no_lfs2/.hg/requires
  [1]