configitems: register the 'largefiles.usercache' config
Boris Feld
r34758:8cf0a6cd default
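This changeset is part of the configitems series: it registers 'largefiles.usercache' in the extension's configtable, giving the option a declared default (None) instead of an ad-hoc default at each read site. Judging by their docstrings, the two hunks below are the largefiles extension module ('track large binary files') and its utility module ('largefiles utility code: must not import other modules in this package', i.e. lfutil.py). The first hunk shows the registration pattern; as a minimal standalone sketch (the extension name 'myext' and option 'someoption' are made up for illustration):

    # Hypothetical minimal extension using the same registration
    # pattern as the diff below; only registrar.configitem is real.
    from mercurial import registrar

    configtable = {}
    configitem = registrar.configitem(configtable)

    configitem('myext', 'someoption',
        default=None,
    )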
@@ -1,151 +1,155
# Copyright 2009-2010 Gregory P. Ward
# Copyright 2009-2010 Intelerad Medical Systems Incorporated
# Copyright 2010-2011 Fog Creek Software
# Copyright 2010-2011 Unity Technologies
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''track large binary files

Large binary files tend to be not very compressible, not very
diffable, and not at all mergeable. Such files are not handled
efficiently by Mercurial's storage format (revlog), which is based on
compressed binary deltas; storing large binary files as regular
Mercurial files wastes bandwidth and disk space and increases
Mercurial's memory usage. The largefiles extension addresses these
problems by adding a centralized client-server layer on top of
Mercurial: largefiles live in a *central store* out on the network
somewhere, and you only fetch the revisions that you need when you
need them.

largefiles works by maintaining a "standin file" in .hglf/ for each
largefile. The standins are small (41 bytes: an SHA-1 hash plus
newline) and are tracked by Mercurial. Largefile revisions are
identified by the SHA-1 hash of their contents, which is written to
the standin. largefiles uses that revision ID to get/put largefile
revisions from/to the central store. This saves both disk space and
bandwidth, since you don't need to retrieve all historical revisions
of large files when you clone or pull.

To start a new repository or add new large binary files, just add
--large to your :hg:`add` command. For example::

  $ dd if=/dev/urandom of=randomdata count=2000
  $ hg add --large randomdata
  $ hg commit -m "add randomdata as a largefile"

When you push a changeset that adds/modifies largefiles to a remote
repository, its largefile revisions will be uploaded along with it.
Note that the remote Mercurial must also have the largefiles extension
enabled for this to work.

When you pull a changeset that affects largefiles from a remote
repository, the largefiles for the changeset will by default not be
pulled down. However, when you update to such a revision, any
largefiles needed by that revision are downloaded and cached (if
they have never been downloaded before). One way to pull largefiles
when pulling is thus to use --update, which will update your working
copy to the latest pulled revision (and thereby downloading any new
largefiles).

If you want to pull largefiles you don't need for update yet, then
you can use pull with the `--lfrev` option or the :hg:`lfpull` command.

If you know you are pulling from a non-default location and want to
download all the largefiles that correspond to the new changesets at
the same time, then you can pull with `--lfrev "pulled()"`.

If you just want to ensure that you will have the largefiles needed to
merge or rebase with new heads that you are pulling, then you can pull
with `--lfrev "head(pulled())"` flag to pre-emptively download any largefiles
that are new in the heads you are pulling.

Keep in mind that network access may now be required to update to
changesets that you have not previously updated to. The nature of the
largefiles extension means that updating is no longer guaranteed to
be a local-only operation.

If you already have large files tracked by Mercurial without the
largefiles extension, you will need to convert your repository in
order to benefit from largefiles. This is done with the
:hg:`lfconvert` command::

  $ hg lfconvert --size 10 oldrepo newrepo

In repositories that already have largefiles in them, any new file
over 10MB will automatically be added as a largefile. To change this
threshold, set ``largefiles.minsize`` in your Mercurial config file
to the minimum size in megabytes to track as a largefile, or use the
--lfsize option to the add command (also in megabytes)::

  [largefiles]
  minsize = 2

  $ hg add --lfsize 2

The ``largefiles.patterns`` config option allows you to specify a list
of filename patterns (see :hg:`help patterns`) that should always be
tracked as largefiles::

  [largefiles]
  patterns =
    *.jpg
    re:.*\\.(png|bmp)$
    library.zip
    content/audio/*

Files that match one of these patterns will be added as largefiles
regardless of their size.

The ``largefiles.minsize`` and ``largefiles.patterns`` config options
will be ignored for any repositories not already containing a
largefile. To add the first largefile to a repository, you must
explicitly do so with the --large flag passed to the :hg:`add`
command.
'''
from __future__ import absolute_import

from mercurial import (
    configitems,
    hg,
    localrepo,
    registrar,
)

from . import (
    lfcommands,
    overrides,
    proto,
    reposetup,
    uisetup as uisetupmod,
)

# Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'ships-with-hg-core'

configtable = {}
configitem = registrar.configitem(configtable)

configitem('largefiles', 'minsize',
    default=configitems.dynamicdefault,
)
configitem('largefiles', 'patterns',
    default=list,
)
+configitem('largefiles', 'usercache',
+    default=None,
+)
+
reposetup = reposetup.reposetup

def featuresetup(ui, supported):
    # don't die on seeing a repo with the largefiles requirement
    supported |= {'largefiles'}

def uisetup(ui):
    localrepo.localrepository.featuresetupfuncs.add(featuresetup)
    hg.wirepeersetupfuncs.append(proto.wirereposetup)
    uisetupmod.uisetup(ui)

cmdtable = lfcommands.cmdtable
revsetpredicate = overrides.revsetpredicate
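The second hunk, in the utility module, contains the matching cleanup: with the default now registered centrally, the explicit fallback passed to ui.configpath() at the read site becomes redundant and is dropped. Side by side:

    # before: the default had to be repeated at the call site
    path = ui.configpath(longname, 'usercache', None)
    # after: the registered default (None) comes from the config registry
    path = ui.configpath(longname, 'usercache')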
@@ -1,673 +1,673
# Copyright 2009-2010 Gregory P. Ward
# Copyright 2009-2010 Intelerad Medical Systems Incorporated
# Copyright 2010-2011 Fog Creek Software
# Copyright 2010-2011 Unity Technologies
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''largefiles utility code: must not import other modules in this package.'''
from __future__ import absolute_import

import copy
import hashlib
import os
import stat

from mercurial.i18n import _

from mercurial import (
    dirstate,
    encoding,
    error,
    httpconnection,
    match as matchmod,
    node,
    pycompat,
    scmutil,
    sparse,
    util,
    vfs as vfsmod,
)

shortname = '.hglf'
shortnameslash = shortname + '/'
longname = 'largefiles'

# -- Private worker functions ------------------------------------------

def getminsize(ui, assumelfiles, opt, default=10):
    lfsize = opt
    if not lfsize and assumelfiles:
        lfsize = ui.config(longname, 'minsize', default=default)
    if lfsize:
        try:
            lfsize = float(lfsize)
        except ValueError:
            raise error.Abort(_('largefiles: size must be number (not %s)\n')
                              % lfsize)
    if lfsize is None:
        raise error.Abort(_('minimum size for largefiles must be specified'))
    return lfsize

def link(src, dest):
    """Try to create hardlink - if that fails, efficiently make a copy."""
    util.makedirs(os.path.dirname(dest))
    try:
        util.oslink(src, dest)
    except OSError:
        # if hardlinks fail, fallback on atomic copy
        with open(src, 'rb') as srcf, util.atomictempfile(dest) as dstf:
            for chunk in util.filechunkiter(srcf):
                dstf.write(chunk)
        os.chmod(dest, os.stat(src).st_mode)

def usercachepath(ui, hash):
    '''Return the correct location in the "global" largefiles cache for a file
    with the given hash.
    This cache is used for sharing of largefiles across repositories - both
    to preserve download bandwidth and storage space.'''
    return os.path.join(_usercachedir(ui), hash)

def _usercachedir(ui):
    '''Return the location of the "global" largefiles cache.'''
-    path = ui.configpath(longname, 'usercache', None)
+    path = ui.configpath(longname, 'usercache')
    if path:
        return path
    if pycompat.iswindows:
        appdata = encoding.environ.get('LOCALAPPDATA',\
                encoding.environ.get('APPDATA'))
        if appdata:
            return os.path.join(appdata, longname)
    elif pycompat.isdarwin:
        home = encoding.environ.get('HOME')
        if home:
            return os.path.join(home, 'Library', 'Caches', longname)
    elif pycompat.isposix:
        path = encoding.environ.get('XDG_CACHE_HOME')
        if path:
            return os.path.join(path, longname)
        home = encoding.environ.get('HOME')
        if home:
            return os.path.join(home, '.cache', longname)
    else:
        raise error.Abort(_('unknown operating system: %s\n')
                          % pycompat.osname)
    raise error.Abort(_('unknown %s usercache location') % longname)

def inusercache(ui, hash):
    path = usercachepath(ui, hash)
    return os.path.exists(path)

def findfile(repo, hash):
    '''Return store path of the largefile with the specified hash.
    As a side effect, the file might be linked from user cache.
    Return None if the file can't be found locally.'''
    path, exists = findstorepath(repo, hash)
    if exists:
        repo.ui.note(_('found %s in store\n') % hash)
        return path
    elif inusercache(repo.ui, hash):
        repo.ui.note(_('found %s in system cache\n') % hash)
        path = storepath(repo, hash)
        link(usercachepath(repo.ui, hash), path)
        return path
    return None

class largefilesdirstate(dirstate.dirstate):
    def __getitem__(self, key):
        return super(largefilesdirstate, self).__getitem__(unixpath(key))
    def normal(self, f):
        return super(largefilesdirstate, self).normal(unixpath(f))
    def remove(self, f):
        return super(largefilesdirstate, self).remove(unixpath(f))
    def add(self, f):
        return super(largefilesdirstate, self).add(unixpath(f))
    def drop(self, f):
        return super(largefilesdirstate, self).drop(unixpath(f))
    def forget(self, f):
        return super(largefilesdirstate, self).forget(unixpath(f))
    def normallookup(self, f):
        return super(largefilesdirstate, self).normallookup(unixpath(f))
    def _ignore(self, f):
        return False
    def write(self, tr=False):
        # (1) disable PENDING mode always
        #     (lfdirstate isn't yet managed as a part of the transaction)
        # (2) avoid develwarn 'use dirstate.write with ....'
        super(largefilesdirstate, self).write(None)

def openlfdirstate(ui, repo, create=True):
    '''
    Return a dirstate object that tracks largefiles: i.e. its root is
    the repo root, but it is saved in .hg/largefiles/dirstate.
    '''
    vfs = repo.vfs
    lfstoredir = longname
    opener = vfsmod.vfs(vfs.join(lfstoredir))
    lfdirstate = largefilesdirstate(opener, ui, repo.root,
                                    repo.dirstate._validate,
                                    lambda: sparse.matcher(repo))

    # If the largefiles dirstate does not exist, populate and create
    # it. This ensures that we create it on the first meaningful
    # largefiles operation in a new clone.
    if create and not vfs.exists(vfs.join(lfstoredir, 'dirstate')):
        matcher = getstandinmatcher(repo)
        standins = repo.dirstate.walk(matcher, subrepos=[], unknown=False,
                                      ignored=False)

        if len(standins) > 0:
            vfs.makedirs(lfstoredir)

        for standin in standins:
            lfile = splitstandin(standin)
            lfdirstate.normallookup(lfile)
    return lfdirstate

def lfdirstatestatus(lfdirstate, repo):
    pctx = repo['.']
    match = matchmod.always(repo.root, repo.getcwd())
    unsure, s = lfdirstate.status(match, subrepos=[], ignored=False,
                                  clean=False, unknown=False)
    modified, clean = s.modified, s.clean
    for lfile in unsure:
        try:
            fctx = pctx[standin(lfile)]
        except LookupError:
            fctx = None
        if not fctx or readasstandin(fctx) != hashfile(repo.wjoin(lfile)):
            modified.append(lfile)
        else:
            clean.append(lfile)
            lfdirstate.normal(lfile)
    return s

def listlfiles(repo, rev=None, matcher=None):
    '''return a list of largefiles in the working copy or the
    specified changeset'''

    if matcher is None:
        matcher = getstandinmatcher(repo)

    # ignore unknown files in working directory
    return [splitstandin(f)
            for f in repo[rev].walk(matcher)
            if rev is not None or repo.dirstate[f] != '?']

def instore(repo, hash, forcelocal=False):
    '''Return true if a largefile with the given hash exists in the store'''
    return os.path.exists(storepath(repo, hash, forcelocal))

def storepath(repo, hash, forcelocal=False):
    '''Return the correct location in the repository largefiles store for a
    file with the given hash.'''
    if not forcelocal and repo.shared():
        return repo.vfs.reljoin(repo.sharedpath, longname, hash)
    return repo.vfs.join(longname, hash)

def findstorepath(repo, hash):
    '''Search through the local store path(s) to find the file for the given
    hash.  If the file is not found, its path in the primary store is returned.
    The return value is a tuple of (path, exists(path)).
    '''
    # For shared repos, the primary store is in the share source.  But for
    # backward compatibility, force a lookup in the local store if it wasn't
    # found in the share source.
    path = storepath(repo, hash, False)

    if instore(repo, hash):
        return (path, True)
    elif repo.shared() and instore(repo, hash, True):
        return storepath(repo, hash, True), True

    return (path, False)

def copyfromcache(repo, hash, filename):
    '''Copy the specified largefile from the repo or system cache to
    filename in the repository. Return true on success or false if the
    file was not found in either cache (which should not happen:
    this is meant to be called only after ensuring that the needed
    largefile exists in the cache).'''
    wvfs = repo.wvfs
    path = findfile(repo, hash)
    if path is None:
        return False
    wvfs.makedirs(wvfs.dirname(wvfs.join(filename)))
    # The write may fail before the file is fully written, but we
    # don't use atomic writes in the working copy.
    with open(path, 'rb') as srcfd, wvfs(filename, 'wb') as destfd:
        gothash = copyandhash(
            util.filechunkiter(srcfd), destfd)
    if gothash != hash:
        repo.ui.warn(_('%s: data corruption in %s with hash %s\n')
                     % (filename, path, gothash))
        wvfs.unlink(filename)
        return False
    return True

def copytostore(repo, ctx, file, fstandin):
    wvfs = repo.wvfs
    hash = readasstandin(ctx[fstandin])
    if instore(repo, hash):
        return
    if wvfs.exists(file):
        copytostoreabsolute(repo, wvfs.join(file), hash)
    else:
        repo.ui.warn(_("%s: largefile %s not available from local store\n") %
                     (file, hash))

def copyalltostore(repo, node):
    '''Copy all largefiles in a given revision to the store'''

    ctx = repo[node]
    for filename in ctx.files():
        realfile = splitstandin(filename)
        if realfile is not None and filename in ctx.manifest():
            copytostore(repo, ctx, realfile, filename)

def copytostoreabsolute(repo, file, hash):
    if inusercache(repo.ui, hash):
        link(usercachepath(repo.ui, hash), storepath(repo, hash))
    else:
        util.makedirs(os.path.dirname(storepath(repo, hash)))
        with open(file, 'rb') as srcf:
            with util.atomictempfile(storepath(repo, hash),
                                     createmode=repo.store.createmode) as dstf:
                for chunk in util.filechunkiter(srcf):
                    dstf.write(chunk)
        linktousercache(repo, hash)

def linktousercache(repo, hash):
    '''Link / copy the largefile with the specified hash from the store
    to the cache.'''
    path = usercachepath(repo.ui, hash)
    link(storepath(repo, hash), path)

def getstandinmatcher(repo, rmatcher=None):
    '''Return a match object that applies rmatcher to the standin directory'''
    wvfs = repo.wvfs
    standindir = shortname

    # no warnings about missing files or directories
    badfn = lambda f, msg: None

    if rmatcher and not rmatcher.always():
        pats = [wvfs.join(standindir, pat) for pat in rmatcher.files()]
        if not pats:
            pats = [wvfs.join(standindir)]
        match = scmutil.match(repo[None], pats, badfn=badfn)
    else:
        # no patterns: relative to repo root
        match = scmutil.match(repo[None], [wvfs.join(standindir)], badfn=badfn)
    return match

def composestandinmatcher(repo, rmatcher):
    '''Return a matcher that accepts standins corresponding to the
    files accepted by rmatcher. Pass the list of files in the matcher
    as the paths specified by the user.'''
    smatcher = getstandinmatcher(repo, rmatcher)
    isstandin = smatcher.matchfn
    def composedmatchfn(f):
        return isstandin(f) and rmatcher.matchfn(splitstandin(f))
    smatcher.matchfn = composedmatchfn

    return smatcher

def standin(filename):
    '''Return the repo-relative path to the standin for the specified big
    file.'''
    # Notes:
    # 1) Some callers want an absolute path, but for instance addlargefiles
    #    needs it repo-relative so it can be passed to repo[None].add().  So
    #    leave it up to the caller to use repo.wjoin() to get an absolute path.
    # 2) Join with '/' because that's what dirstate always uses, even on
    #    Windows. Change existing separator to '/' first in case we are
    #    passed filenames from an external source (like the command line).
    return shortnameslash + util.pconvert(filename)

def isstandin(filename):
    '''Return true if filename is a big file standin. filename must be
    in Mercurial's internal form (slash-separated).'''
    return filename.startswith(shortnameslash)

def splitstandin(filename):
    # Split on / because that's what dirstate always uses, even on Windows.
    # Change local separator to / first just in case we are passed filenames
    # from an external source (like the command line).
    bits = util.pconvert(filename).split('/', 1)
    if len(bits) == 2 and bits[0] == shortname:
        return bits[1]
    else:
        return None

def updatestandin(repo, lfile, standin):
    """Re-calculate hash value of lfile and write it into standin

    This assumes that "lfutil.standin(lfile) == standin", for efficiency.
    """
    file = repo.wjoin(lfile)
    if repo.wvfs.exists(lfile):
        hash = hashfile(file)
        executable = getexecutable(file)
        writestandin(repo, standin, hash, executable)
    else:
        raise error.Abort(_('%s: file not found!') % lfile)

def readasstandin(fctx):
    '''read hex hash from given filectx of standin file

    This encapsulates how "standin" data is stored into storage layer.'''
    return fctx.data().strip()

def writestandin(repo, standin, hash, executable):
    '''write hash to <repo.root>/<standin>'''
    repo.wwrite(standin, hash + '\n', executable and 'x' or '')

def copyandhash(instream, outfile):
    '''Read bytes from instream (iterable) and write them to outfile,
    computing the SHA-1 hash of the data along the way. Return the hash.'''
    hasher = hashlib.sha1('')
    for data in instream:
        hasher.update(data)
        outfile.write(data)
    return hasher.hexdigest()

def hashfile(file):
    if not os.path.exists(file):
        return ''
    with open(file, 'rb') as fd:
        return hexsha1(fd)

def getexecutable(filename):
    mode = os.stat(filename).st_mode
    return ((mode & stat.S_IXUSR) and
            (mode & stat.S_IXGRP) and
            (mode & stat.S_IXOTH))

def urljoin(first, second, *arg):
    def join(left, right):
        if not left.endswith('/'):
            left += '/'
        if right.startswith('/'):
            right = right[1:]
        return left + right

    url = join(first, second)
    for a in arg:
        url = join(url, a)
    return url

def hexsha1(fileobj):
    """hexsha1 returns the hex-encoded sha1 sum of the data in the file-like
    object data"""
    h = hashlib.sha1()
    for chunk in util.filechunkiter(fileobj):
        h.update(chunk)
    return h.hexdigest()

def httpsendfile(ui, filename):
    return httpconnection.httpsendfile(ui, filename, 'rb')

def unixpath(path):
    '''Return a version of path normalized for use with the lfdirstate.'''
    return util.pconvert(os.path.normpath(path))

def islfilesrepo(repo):
    '''Return true if the repo is a largefile repo.'''
    if ('largefiles' in repo.requirements and
            any(shortnameslash in f[0] for f in repo.store.datafiles())):
        return True

    return any(openlfdirstate(repo.ui, repo, False))

class storeprotonotcapable(Exception):
    def __init__(self, storetypes):
        self.storetypes = storetypes

def getstandinsstate(repo):
    standins = []
    matcher = getstandinmatcher(repo)
    wctx = repo[None]
    for standin in repo.dirstate.walk(matcher, subrepos=[], unknown=False,
                                      ignored=False):
        lfile = splitstandin(standin)
        try:
            hash = readasstandin(wctx[standin])
        except IOError:
            hash = None
        standins.append((lfile, hash))
    return standins

def synclfdirstate(repo, lfdirstate, lfile, normallookup):
    lfstandin = standin(lfile)
    if lfstandin in repo.dirstate:
        stat = repo.dirstate._map[lfstandin]
        state, mtime = stat[0], stat[3]
    else:
        state, mtime = '?', -1
    if state == 'n':
        if (normallookup or mtime < 0 or
                not repo.wvfs.exists(lfile)):
            # state 'n' doesn't ensure 'clean' in this case
            lfdirstate.normallookup(lfile)
        else:
            lfdirstate.normal(lfile)
    elif state == 'm':
        lfdirstate.normallookup(lfile)
    elif state == 'r':
        lfdirstate.remove(lfile)
    elif state == 'a':
        lfdirstate.add(lfile)
    elif state == '?':
        lfdirstate.drop(lfile)

def markcommitted(orig, ctx, node):
    repo = ctx.repo()

    orig(node)

    # ATTENTION: "ctx.files()" may differ from "repo[node].files()"
    # because files coming from the 2nd parent are omitted in the latter.
    #
    # The former should be used to get targets of "synclfdirstate",
    # because such files:
    # - are marked as "a" by "patch.patch()" (e.g. via transplant), and
    # - have to be marked as "n" after commit, but
    # - aren't listed in "repo[node].files()"

    lfdirstate = openlfdirstate(repo.ui, repo)
    for f in ctx.files():
        lfile = splitstandin(f)
        if lfile is not None:
            synclfdirstate(repo, lfdirstate, lfile, False)
    lfdirstate.write()

    # As part of committing, copy all of the largefiles into the cache.
    #
    # Using "node" instead of "ctx" implies additional "repo[node]"
    # lookup while copyalltostore(), but can omit redundant check for
    # files coming from the 2nd parent, which should exist in store
    # at merging.
    copyalltostore(repo, node)

def getlfilestoupdate(oldstandins, newstandins):
    changedstandins = set(oldstandins).symmetric_difference(set(newstandins))
    filelist = []
    for f in changedstandins:
        if f[0] not in filelist:
            filelist.append(f[0])
    return filelist

def getlfilestoupload(repo, missing, addfunc):
    for i, n in enumerate(missing):
        repo.ui.progress(_('finding outgoing largefiles'), i,
                         unit=_('revisions'), total=len(missing))
        parents = [p for p in repo[n].parents() if p != node.nullid]

        oldlfstatus = repo.lfstatus
        repo.lfstatus = False
        try:
            ctx = repo[n]
        finally:
            repo.lfstatus = oldlfstatus

        files = set(ctx.files())
        if len(parents) == 2:
            mc = ctx.manifest()
            mp1 = ctx.parents()[0].manifest()
            mp2 = ctx.parents()[1].manifest()
            for f in mp1:
                if f not in mc:
                    files.add(f)
            for f in mp2:
                if f not in mc:
                    files.add(f)
            for f in mc:
                if mc[f] != mp1.get(f, None) or mc[f] != mp2.get(f, None):
                    files.add(f)
        for fn in files:
            if isstandin(fn) and fn in ctx:
                addfunc(fn, readasstandin(ctx[fn]))
    repo.ui.progress(_('finding outgoing largefiles'), None)

def updatestandinsbymatch(repo, match):
    '''Update standins in the working directory according to specified match

    This returns (possibly modified) ``match`` object to be used for
    subsequent commit process.
    '''

    ui = repo.ui

    # Case 1: user calls commit with no specific files or
    # include/exclude patterns: refresh and commit all files that
    # are "dirty".
    if match is None or match.always():
        # Spend a bit of time here to get a list of files we know
        # are modified so we can compare only against those.
        # It can cost a lot of time (several seconds)
        # otherwise to update all standins if the largefiles are
        # large.
        lfdirstate = openlfdirstate(ui, repo)
        dirtymatch = matchmod.always(repo.root, repo.getcwd())
        unsure, s = lfdirstate.status(dirtymatch, subrepos=[], ignored=False,
                                      clean=False, unknown=False)
        modifiedfiles = unsure + s.modified + s.added + s.removed
        lfiles = listlfiles(repo)
        # this only loops through largefiles that exist (not
        # removed/renamed)
        for lfile in lfiles:
            if lfile in modifiedfiles:
                fstandin = standin(lfile)
                if repo.wvfs.exists(fstandin):
                    # this handles the case where a rebase is being
                    # performed and the working copy is not updated
                    # yet.
                    if repo.wvfs.exists(lfile):
                        updatestandin(repo, lfile, fstandin)

        return match

    lfiles = listlfiles(repo)
    match._files = repo._subdirlfs(match.files(), lfiles)

    # Case 2: user calls commit with specified patterns: refresh
    # any matching big files.
    smatcher = composestandinmatcher(repo, match)
    standins = repo.dirstate.walk(smatcher, subrepos=[], unknown=False,
                                  ignored=False)

    # No matching big files: get out of the way and pass control to
    # the usual commit() method.
    if not standins:
        return match

    # Refresh all matching big files. It's possible that the
    # commit will end up failing, in which case the big files will
    # stay refreshed. No harm done: the user modified them and
    # asked to commit them, so sooner or later we're going to
    # refresh the standins. Might as well leave them refreshed.
    lfdirstate = openlfdirstate(ui, repo)
    for fstandin in standins:
        lfile = splitstandin(fstandin)
        if lfdirstate[lfile] != 'r':
            updatestandin(repo, lfile, fstandin)

    # Cook up a new matcher that only matches regular files or
    # standins corresponding to the big files requested by the
    # user. Have to modify _files to prevent commit() from
    # complaining "not tracked" for big files.
    match = copy.copy(match)
    origmatchfn = match.matchfn

    # Check both the list of largefiles and the list of
    # standins because if a largefile was removed, it
    # won't be in the list of largefiles at this point
    match._files += sorted(standins)

    actualfiles = []
    for f in match._files:
        fstandin = standin(f)

        # For largefiles, only one of the normal and standin should be
        # committed (except if one of them is a remove).  In the case of a
        # standin removal, drop the normal file if it is unknown to dirstate.
        # Thus, skip plain largefile names but keep the standin.
        if f in lfiles or fstandin in standins:
            if repo.dirstate[fstandin] != 'r':
                if repo.dirstate[f] != 'r':
                    continue
            elif repo.dirstate[f] == '?':
                continue

        actualfiles.append(f)
    match._files = actualfiles

    def matchfn(f):
        if origmatchfn(f):
            return f not in lfiles
        else:
            return f in standins

    match.matchfn = matchfn

    return match

class automatedcommithook(object):
    '''Stateful hook to update standins at the 1st commit of resuming

    For efficiency, updating standins in the working directory should
    be avoided while automated committing (like rebase, transplant and
    so on), because they should be updated before committing.

    But the 1st commit of resuming automated committing (e.g. ``rebase
    --continue``) should update them, because largefiles may be
    modified manually.
    '''
    def __init__(self, resuming):
        self.resuming = resuming

    def __call__(self, repo, match):
        if self.resuming:
            self.resuming = False # avoids updating at subsequent commits
            return updatestandinsbymatch(repo, match)
        else:
            return match

def getstatuswriter(ui, repo, forcibly=None):
    '''Return the function to write largefiles specific status out

    If ``forcibly`` is ``None``, this returns the last element of
    ``repo._lfstatuswriters`` as "default" writer function.

    Otherwise, this returns the function to always write out (or
    ignore if ``not forcibly``) status.
    '''
    if forcibly is None and util.safehasattr(repo, '_largefilesenabled'):
        return repo._lfstatuswriters[-1]
    else:
        if forcibly:
            return ui.status # forcibly WRITE OUT
        else:
            return lambda *msg, **opts: None # forcibly IGNORE
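As _usercachedir() above shows, when largefiles.usercache is unset the cache falls back to a per-platform location: %LOCALAPPDATA% (or %APPDATA%) on Windows, ~/Library/Caches/largefiles on macOS, and $XDG_CACHE_HOME/largefiles or ~/.cache/largefiles on other POSIX systems. To point several repositories at one shared cache, the newly registered option can be set explicitly; in the style of the hgrc examples in the module docstring (the path below is illustrative only):

    [largefiles]
    usercache = /srv/hg/largefiles-cache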