changeset:   r41092:0a7f582f (default branch)
author:      Matt Harbison
summary:     largefiles: port wrapped functions to exthelper...
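This changeset moves the extension's monkey-patching from imperative extensions.wrapfunction()/wrapcommand() calls made at setup time to declarative @eh.wrapfunction/@eh.wrapcommand decorators collected on an exthelper instance and applied by the aggregated setup functions. A minimal sketch of the two styles (the wrapper below is a hypothetical example, not code from this diff):

    # before: wiring done by hand during setup
    from mercurial import extensions, merge

    def _calculateupdates(orig, repo, *args, **kwargs):
        # tweak arguments or post-process the result, then delegate
        return orig(repo, *args, **kwargs)

    def uisetup(ui):
        extensions.wrapfunction(merge, 'calculateupdates', _calculateupdates)

    # after: the wrapper declares its own target; the package __init__ exposes
    # uisetup = eh.finaluisetup and extsetup = eh.finalextsetup, which apply
    # every registered wrapper when Mercurial loads the extension
    from mercurial import exthelper, merge

    eh = exthelper.exthelper()

    @eh.wrapfunction(merge, 'calculateupdates')
    def _calculateupdates(orig, repo, *args, **kwargs):
        return orig(repo, *args, **kwargs)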
@@ -1,160 +1,161 @@
 # Copyright 2009-2010 Gregory P. Ward
 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
 # Copyright 2010-2011 Fog Creek Software
 # Copyright 2010-2011 Unity Technologies
 #
 # This software may be used and distributed according to the terms of the
 # GNU General Public License version 2 or any later version.

 '''track large binary files

 Large binary files tend to be not very compressible, not very
 diffable, and not at all mergeable. Such files are not handled
 efficiently by Mercurial's storage format (revlog), which is based on
 compressed binary deltas; storing large binary files as regular
 Mercurial files wastes bandwidth and disk space and increases
 Mercurial's memory usage. The largefiles extension addresses these
 problems by adding a centralized client-server layer on top of
 Mercurial: largefiles live in a *central store* out on the network
 somewhere, and you only fetch the revisions that you need when you
 need them.

 largefiles works by maintaining a "standin file" in .hglf/ for each
 largefile. The standins are small (41 bytes: an SHA-1 hash plus
 newline) and are tracked by Mercurial. Largefile revisions are
 identified by the SHA-1 hash of their contents, which is written to
 the standin. largefiles uses that revision ID to get/put largefile
 revisions from/to the central store. This saves both disk space and
 bandwidth, since you don't need to retrieve all historical revisions
 of large files when you clone or pull.

 To start a new repository or add new large binary files, just add
 --large to your :hg:`add` command. For example::

   $ dd if=/dev/urandom of=randomdata count=2000
   $ hg add --large randomdata
   $ hg commit -m "add randomdata as a largefile"

 When you push a changeset that adds/modifies largefiles to a remote
 repository, its largefile revisions will be uploaded along with it.
 Note that the remote Mercurial must also have the largefiles extension
 enabled for this to work.

 When you pull a changeset that affects largefiles from a remote
 repository, the largefiles for the changeset will by default not be
 pulled down. However, when you update to such a revision, any
 largefiles needed by that revision are downloaded and cached (if
 they have never been downloaded before). One way to pull largefiles
 when pulling is thus to use --update, which will update your working
 copy to the latest pulled revision (and thereby downloading any new
 largefiles).

 If you want to pull largefiles you don't need for update yet, then
 you can use pull with the `--lfrev` option or the :hg:`lfpull` command.

 If you know you are pulling from a non-default location and want to
 download all the largefiles that correspond to the new changesets at
 the same time, then you can pull with `--lfrev "pulled()"`.

 If you just want to ensure that you will have the largefiles needed to
 merge or rebase with new heads that you are pulling, then you can pull
 with `--lfrev "head(pulled())"` flag to pre-emptively download any largefiles
 that are new in the heads you are pulling.

 Keep in mind that network access may now be required to update to
 changesets that you have not previously updated to. The nature of the
 largefiles extension means that updating is no longer guaranteed to
 be a local-only operation.

 If you already have large files tracked by Mercurial without the
 largefiles extension, you will need to convert your repository in
 order to benefit from largefiles. This is done with the
 :hg:`lfconvert` command::

   $ hg lfconvert --size 10 oldrepo newrepo

 In repositories that already have largefiles in them, any new file
 over 10MB will automatically be added as a largefile. To change this
 threshold, set ``largefiles.minsize`` in your Mercurial config file
 to the minimum size in megabytes to track as a largefile, or use the
 --lfsize option to the add command (also in megabytes)::

   [largefiles]
   minsize = 2

   $ hg add --lfsize 2

 The ``largefiles.patterns`` config option allows you to specify a list
 of filename patterns (see :hg:`help patterns`) that should always be
 tracked as largefiles::

   [largefiles]
   patterns =
     *.jpg
     re:.*\\.(png|bmp)$
     library.zip
     content/audio/*

 Files that match one of these patterns will be added as largefiles
 regardless of their size.

 The ``largefiles.minsize`` and ``largefiles.patterns`` config options
 will be ignored for any repositories not already containing a
 largefile. To add the first largefile to a repository, you must
 explicitly do so with the --large flag passed to the :hg:`add`
 command.
 '''
 from __future__ import absolute_import

 from mercurial import (
     configitems,
     exthelper,
     hg,
     localrepo,
 )

 from . import (
     lfcommands,
     overrides,
     proto,
     reposetup,
     uisetup as uisetupmod,
 )

 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
 testedwith = 'ships-with-hg-core'

 eh = exthelper.exthelper()
 eh.merge(lfcommands.eh)
 eh.merge(overrides.eh)
+eh.merge(proto.eh)

 eh.configitem('largefiles', 'minsize',
     default=configitems.dynamicdefault,
 )
 eh.configitem('largefiles', 'patterns',
     default=list,
 )
 eh.configitem('largefiles', 'usercache',
     default=None,
 )

 cmdtable = eh.cmdtable
 configtable = eh.configtable
 extsetup = eh.finalextsetup
 reposetup = reposetup.reposetup
 uisetup = eh.finaluisetup

 def featuresetup(ui, supported):
     # don't die on seeing a repo with the largefiles requirement
     supported |= {'largefiles'}

 @eh.uisetup
 def _uisetup(ui):
     localrepo.featuresetupfuncs.add(featuresetup)
     hg.wirepeersetupfuncs.append(proto.wirereposetup)
     uisetupmod.uisetup(ui)

 revsetpredicate = overrides.revsetpredicate
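The eh.configitem() calls above register the largefiles.* settings (with defaults) so that config lookups elsewhere in the extension are recognized. A small illustrative sketch of reading the 'patterns' list the same way the wrapped add logic in the next hunk does; the helper name here is made up:

    from mercurial import match as matchmod

    def largefilesmatcher(ui, repo):
        # mirrors addlargefiles(): files matching any configured
        # largefiles.patterns entry are always treated as largefiles
        lfpats = ui.configlist('largefiles', 'patterns')
        if lfpats:
            return matchmod.match(repo.root, '', list(lfpats))
        return None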
@@ -1,1529 +1,1567 b''
1 # Copyright 2009-2010 Gregory P. Ward
1 # Copyright 2009-2010 Gregory P. Ward
2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
3 # Copyright 2010-2011 Fog Creek Software
3 # Copyright 2010-2011 Fog Creek Software
4 # Copyright 2010-2011 Unity Technologies
4 # Copyright 2010-2011 Unity Technologies
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8
8
9 '''Overridden Mercurial commands and functions for the largefiles extension'''
9 '''Overridden Mercurial commands and functions for the largefiles extension'''
10 from __future__ import absolute_import
10 from __future__ import absolute_import
11
11
12 import copy
12 import copy
13 import os
13 import os
14
14
15 from mercurial.i18n import _
15 from mercurial.i18n import _
16
16
17 from mercurial.hgweb import (
18 webcommands,
19 )
20
17 from mercurial import (
21 from mercurial import (
18 archival,
22 archival,
19 cmdutil,
23 cmdutil,
24 copies as copiesmod,
20 error,
25 error,
26 exchange,
21 exthelper,
27 exthelper,
28 filemerge,
22 hg,
29 hg,
23 logcmdutil,
30 logcmdutil,
24 match as matchmod,
31 match as matchmod,
32 merge,
25 pathutil,
33 pathutil,
26 pycompat,
34 pycompat,
27 registrar,
35 registrar,
28 scmutil,
36 scmutil,
29 smartset,
37 smartset,
38 subrepo,
39 upgrade,
40 url as urlmod,
30 util,
41 util,
31 )
42 )
32
43
33 from . import (
44 from . import (
34 lfcommands,
45 lfcommands,
35 lfutil,
46 lfutil,
36 storefactory,
47 storefactory,
37 )
48 )
38
49
39 eh = exthelper.exthelper()
50 eh = exthelper.exthelper()
40
51
41 # -- Utility functions: commonly/repeatedly needed functionality ---------------
52 # -- Utility functions: commonly/repeatedly needed functionality ---------------
42
53
43 def composelargefilematcher(match, manifest):
54 def composelargefilematcher(match, manifest):
44 '''create a matcher that matches only the largefiles in the original
55 '''create a matcher that matches only the largefiles in the original
45 matcher'''
56 matcher'''
46 m = copy.copy(match)
57 m = copy.copy(match)
47 lfile = lambda f: lfutil.standin(f) in manifest
58 lfile = lambda f: lfutil.standin(f) in manifest
48 m._files = [lf for lf in m._files if lfile(lf)]
59 m._files = [lf for lf in m._files if lfile(lf)]
49 m._fileset = set(m._files)
60 m._fileset = set(m._files)
50 m.always = lambda: False
61 m.always = lambda: False
51 origmatchfn = m.matchfn
62 origmatchfn = m.matchfn
52 m.matchfn = lambda f: lfile(f) and origmatchfn(f)
63 m.matchfn = lambda f: lfile(f) and origmatchfn(f)
53 return m
64 return m
54
65
55 def composenormalfilematcher(match, manifest, exclude=None):
66 def composenormalfilematcher(match, manifest, exclude=None):
56 excluded = set()
67 excluded = set()
57 if exclude is not None:
68 if exclude is not None:
58 excluded.update(exclude)
69 excluded.update(exclude)
59
70
60 m = copy.copy(match)
71 m = copy.copy(match)
61 notlfile = lambda f: not (lfutil.isstandin(f) or lfutil.standin(f) in
72 notlfile = lambda f: not (lfutil.isstandin(f) or lfutil.standin(f) in
62 manifest or f in excluded)
73 manifest or f in excluded)
63 m._files = [lf for lf in m._files if notlfile(lf)]
74 m._files = [lf for lf in m._files if notlfile(lf)]
64 m._fileset = set(m._files)
75 m._fileset = set(m._files)
65 m.always = lambda: False
76 m.always = lambda: False
66 origmatchfn = m.matchfn
77 origmatchfn = m.matchfn
67 m.matchfn = lambda f: notlfile(f) and origmatchfn(f)
78 m.matchfn = lambda f: notlfile(f) and origmatchfn(f)
68 return m
79 return m
69
80
70 def installnormalfilesmatchfn(manifest):
81 def installnormalfilesmatchfn(manifest):
71 '''installmatchfn with a matchfn that ignores all largefiles'''
82 '''installmatchfn with a matchfn that ignores all largefiles'''
72 def overridematch(ctx, pats=(), opts=None, globbed=False,
83 def overridematch(ctx, pats=(), opts=None, globbed=False,
73 default='relpath', badfn=None):
84 default='relpath', badfn=None):
74 if opts is None:
85 if opts is None:
75 opts = {}
86 opts = {}
76 match = oldmatch(ctx, pats, opts, globbed, default, badfn=badfn)
87 match = oldmatch(ctx, pats, opts, globbed, default, badfn=badfn)
77 return composenormalfilematcher(match, manifest)
88 return composenormalfilematcher(match, manifest)
78 oldmatch = installmatchfn(overridematch)
89 oldmatch = installmatchfn(overridematch)
79
90
80 def installmatchfn(f):
91 def installmatchfn(f):
81 '''monkey patch the scmutil module with a custom match function.
92 '''monkey patch the scmutil module with a custom match function.
82 Warning: it is monkey patching the _module_ on runtime! Not thread safe!'''
93 Warning: it is monkey patching the _module_ on runtime! Not thread safe!'''
83 oldmatch = scmutil.match
94 oldmatch = scmutil.match
84 setattr(f, 'oldmatch', oldmatch)
95 setattr(f, 'oldmatch', oldmatch)
85 scmutil.match = f
96 scmutil.match = f
86 return oldmatch
97 return oldmatch
87
98
88 def restorematchfn():
99 def restorematchfn():
89 '''restores scmutil.match to what it was before installmatchfn
100 '''restores scmutil.match to what it was before installmatchfn
90 was called. no-op if scmutil.match is its original function.
101 was called. no-op if scmutil.match is its original function.
91
102
92 Note that n calls to installmatchfn will require n calls to
103 Note that n calls to installmatchfn will require n calls to
93 restore the original matchfn.'''
104 restore the original matchfn.'''
94 scmutil.match = getattr(scmutil.match, 'oldmatch')
105 scmutil.match = getattr(scmutil.match, 'oldmatch')
95
106
96 def installmatchandpatsfn(f):
107 def installmatchandpatsfn(f):
97 oldmatchandpats = scmutil.matchandpats
108 oldmatchandpats = scmutil.matchandpats
98 setattr(f, 'oldmatchandpats', oldmatchandpats)
109 setattr(f, 'oldmatchandpats', oldmatchandpats)
99 scmutil.matchandpats = f
110 scmutil.matchandpats = f
100 return oldmatchandpats
111 return oldmatchandpats
101
112
102 def restorematchandpatsfn():
113 def restorematchandpatsfn():
103 '''restores scmutil.matchandpats to what it was before
114 '''restores scmutil.matchandpats to what it was before
104 installmatchandpatsfn was called. No-op if scmutil.matchandpats
115 installmatchandpatsfn was called. No-op if scmutil.matchandpats
105 is its original function.
116 is its original function.
106
117
107 Note that n calls to installmatchandpatsfn will require n calls
118 Note that n calls to installmatchandpatsfn will require n calls
108 to restore the original matchfn.'''
119 to restore the original matchfn.'''
109 scmutil.matchandpats = getattr(scmutil.matchandpats, 'oldmatchandpats',
120 scmutil.matchandpats = getattr(scmutil.matchandpats, 'oldmatchandpats',
110 scmutil.matchandpats)
121 scmutil.matchandpats)
111
122
112 def addlargefiles(ui, repo, isaddremove, matcher, **opts):
123 def addlargefiles(ui, repo, isaddremove, matcher, **opts):
113 large = opts.get(r'large')
124 large = opts.get(r'large')
114 lfsize = lfutil.getminsize(
125 lfsize = lfutil.getminsize(
115 ui, lfutil.islfilesrepo(repo), opts.get(r'lfsize'))
126 ui, lfutil.islfilesrepo(repo), opts.get(r'lfsize'))
116
127
117 lfmatcher = None
128 lfmatcher = None
118 if lfutil.islfilesrepo(repo):
129 if lfutil.islfilesrepo(repo):
119 lfpats = ui.configlist(lfutil.longname, 'patterns')
130 lfpats = ui.configlist(lfutil.longname, 'patterns')
120 if lfpats:
131 if lfpats:
121 lfmatcher = matchmod.match(repo.root, '', list(lfpats))
132 lfmatcher = matchmod.match(repo.root, '', list(lfpats))
122
133
123 lfnames = []
134 lfnames = []
124 m = matcher
135 m = matcher
125
136
126 wctx = repo[None]
137 wctx = repo[None]
127 for f in wctx.walk(matchmod.badmatch(m, lambda x, y: None)):
138 for f in wctx.walk(matchmod.badmatch(m, lambda x, y: None)):
128 exact = m.exact(f)
139 exact = m.exact(f)
129 lfile = lfutil.standin(f) in wctx
140 lfile = lfutil.standin(f) in wctx
130 nfile = f in wctx
141 nfile = f in wctx
131 exists = lfile or nfile
142 exists = lfile or nfile
132
143
133 # addremove in core gets fancy with the name, add doesn't
144 # addremove in core gets fancy with the name, add doesn't
134 if isaddremove:
145 if isaddremove:
135 name = m.uipath(f)
146 name = m.uipath(f)
136 else:
147 else:
137 name = m.rel(f)
148 name = m.rel(f)
138
149
139 # Don't warn the user when they attempt to add a normal tracked file.
150 # Don't warn the user when they attempt to add a normal tracked file.
140 # The normal add code will do that for us.
151 # The normal add code will do that for us.
141 if exact and exists:
152 if exact and exists:
142 if lfile:
153 if lfile:
143 ui.warn(_('%s already a largefile\n') % name)
154 ui.warn(_('%s already a largefile\n') % name)
144 continue
155 continue
145
156
146 if (exact or not exists) and not lfutil.isstandin(f):
157 if (exact or not exists) and not lfutil.isstandin(f):
147 # In case the file was removed previously, but not committed
158 # In case the file was removed previously, but not committed
148 # (issue3507)
159 # (issue3507)
149 if not repo.wvfs.exists(f):
160 if not repo.wvfs.exists(f):
150 continue
161 continue
151
162
152 abovemin = (lfsize and
163 abovemin = (lfsize and
153 repo.wvfs.lstat(f).st_size >= lfsize * 1024 * 1024)
164 repo.wvfs.lstat(f).st_size >= lfsize * 1024 * 1024)
154 if large or abovemin or (lfmatcher and lfmatcher(f)):
165 if large or abovemin or (lfmatcher and lfmatcher(f)):
155 lfnames.append(f)
166 lfnames.append(f)
156 if ui.verbose or not exact:
167 if ui.verbose or not exact:
157 ui.status(_('adding %s as a largefile\n') % name)
168 ui.status(_('adding %s as a largefile\n') % name)
158
169
159 bad = []
170 bad = []
160
171
161 # Need to lock, otherwise there could be a race condition between
172 # Need to lock, otherwise there could be a race condition between
162 # when standins are created and added to the repo.
173 # when standins are created and added to the repo.
163 with repo.wlock():
174 with repo.wlock():
164 if not opts.get(r'dry_run'):
175 if not opts.get(r'dry_run'):
165 standins = []
176 standins = []
166 lfdirstate = lfutil.openlfdirstate(ui, repo)
177 lfdirstate = lfutil.openlfdirstate(ui, repo)
167 for f in lfnames:
178 for f in lfnames:
168 standinname = lfutil.standin(f)
179 standinname = lfutil.standin(f)
169 lfutil.writestandin(repo, standinname, hash='',
180 lfutil.writestandin(repo, standinname, hash='',
170 executable=lfutil.getexecutable(repo.wjoin(f)))
181 executable=lfutil.getexecutable(repo.wjoin(f)))
171 standins.append(standinname)
182 standins.append(standinname)
172 if lfdirstate[f] == 'r':
183 if lfdirstate[f] == 'r':
173 lfdirstate.normallookup(f)
184 lfdirstate.normallookup(f)
174 else:
185 else:
175 lfdirstate.add(f)
186 lfdirstate.add(f)
176 lfdirstate.write()
187 lfdirstate.write()
177 bad += [lfutil.splitstandin(f)
188 bad += [lfutil.splitstandin(f)
178 for f in repo[None].add(standins)
189 for f in repo[None].add(standins)
179 if f in m.files()]
190 if f in m.files()]
180
191
181 added = [f for f in lfnames if f not in bad]
192 added = [f for f in lfnames if f not in bad]
182 return added, bad
193 return added, bad
183
194
184 def removelargefiles(ui, repo, isaddremove, matcher, dryrun, **opts):
195 def removelargefiles(ui, repo, isaddremove, matcher, dryrun, **opts):
185 after = opts.get(r'after')
196 after = opts.get(r'after')
186 m = composelargefilematcher(matcher, repo[None].manifest())
197 m = composelargefilematcher(matcher, repo[None].manifest())
187 try:
198 try:
188 repo.lfstatus = True
199 repo.lfstatus = True
189 s = repo.status(match=m, clean=not isaddremove)
200 s = repo.status(match=m, clean=not isaddremove)
190 finally:
201 finally:
191 repo.lfstatus = False
202 repo.lfstatus = False
192 manifest = repo[None].manifest()
203 manifest = repo[None].manifest()
193 modified, added, deleted, clean = [[f for f in list
204 modified, added, deleted, clean = [[f for f in list
194 if lfutil.standin(f) in manifest]
205 if lfutil.standin(f) in manifest]
195 for list in (s.modified, s.added,
206 for list in (s.modified, s.added,
196 s.deleted, s.clean)]
207 s.deleted, s.clean)]
197
208
198 def warn(files, msg):
209 def warn(files, msg):
199 for f in files:
210 for f in files:
200 ui.warn(msg % m.rel(f))
211 ui.warn(msg % m.rel(f))
201 return int(len(files) > 0)
212 return int(len(files) > 0)
202
213
203 result = 0
214 result = 0
204
215
205 if after:
216 if after:
206 remove = deleted
217 remove = deleted
207 result = warn(modified + added + clean,
218 result = warn(modified + added + clean,
208 _('not removing %s: file still exists\n'))
219 _('not removing %s: file still exists\n'))
209 else:
220 else:
210 remove = deleted + clean
221 remove = deleted + clean
211 result = warn(modified, _('not removing %s: file is modified (use -f'
222 result = warn(modified, _('not removing %s: file is modified (use -f'
212 ' to force removal)\n'))
223 ' to force removal)\n'))
213 result = warn(added, _('not removing %s: file has been marked for add'
224 result = warn(added, _('not removing %s: file has been marked for add'
214 ' (use forget to undo)\n')) or result
225 ' (use forget to undo)\n')) or result
215
226
216 # Need to lock because standin files are deleted then removed from the
227 # Need to lock because standin files are deleted then removed from the
217 # repository and we could race in-between.
228 # repository and we could race in-between.
218 with repo.wlock():
229 with repo.wlock():
219 lfdirstate = lfutil.openlfdirstate(ui, repo)
230 lfdirstate = lfutil.openlfdirstate(ui, repo)
220 for f in sorted(remove):
231 for f in sorted(remove):
221 if ui.verbose or not m.exact(f):
232 if ui.verbose or not m.exact(f):
222 # addremove in core gets fancy with the name, remove doesn't
233 # addremove in core gets fancy with the name, remove doesn't
223 if isaddremove:
234 if isaddremove:
224 name = m.uipath(f)
235 name = m.uipath(f)
225 else:
236 else:
226 name = m.rel(f)
237 name = m.rel(f)
227 ui.status(_('removing %s\n') % name)
238 ui.status(_('removing %s\n') % name)
228
239
229 if not dryrun:
240 if not dryrun:
230 if not after:
241 if not after:
231 repo.wvfs.unlinkpath(f, ignoremissing=True)
242 repo.wvfs.unlinkpath(f, ignoremissing=True)
232
243
233 if dryrun:
244 if dryrun:
234 return result
245 return result
235
246
236 remove = [lfutil.standin(f) for f in remove]
247 remove = [lfutil.standin(f) for f in remove]
237 # If this is being called by addremove, let the original addremove
248 # If this is being called by addremove, let the original addremove
238 # function handle this.
249 # function handle this.
239 if not isaddremove:
250 if not isaddremove:
240 for f in remove:
251 for f in remove:
241 repo.wvfs.unlinkpath(f, ignoremissing=True)
252 repo.wvfs.unlinkpath(f, ignoremissing=True)
242 repo[None].forget(remove)
253 repo[None].forget(remove)
243
254
244 for f in remove:
255 for f in remove:
245 lfutil.synclfdirstate(repo, lfdirstate, lfutil.splitstandin(f),
256 lfutil.synclfdirstate(repo, lfdirstate, lfutil.splitstandin(f),
246 False)
257 False)
247
258
248 lfdirstate.write()
259 lfdirstate.write()
249
260
250 return result
261 return result
251
262
252 # For overriding mercurial.hgweb.webcommands so that largefiles will
263 # For overriding mercurial.hgweb.webcommands so that largefiles will
253 # appear at their right place in the manifests.
264 # appear at their right place in the manifests.
265 @eh.wrapfunction(webcommands, 'decodepath')
254 def decodepath(orig, path):
266 def decodepath(orig, path):
255 return lfutil.splitstandin(path) or path
267 return lfutil.splitstandin(path) or path
256
268
257 # -- Wrappers: modify existing commands --------------------------------
269 # -- Wrappers: modify existing commands --------------------------------
258
270
259 @eh.wrapcommand('add',
271 @eh.wrapcommand('add',
260 opts=[('', 'large', None, _('add as largefile')),
272 opts=[('', 'large', None, _('add as largefile')),
261 ('', 'normal', None, _('add as normal file')),
273 ('', 'normal', None, _('add as normal file')),
262 ('', 'lfsize', '', _('add all files above this size (in megabytes) '
274 ('', 'lfsize', '', _('add all files above this size (in megabytes) '
263 'as largefiles (default: 10)'))])
275 'as largefiles (default: 10)'))])
264 def overrideadd(orig, ui, repo, *pats, **opts):
276 def overrideadd(orig, ui, repo, *pats, **opts):
265 if opts.get(r'normal') and opts.get(r'large'):
277 if opts.get(r'normal') and opts.get(r'large'):
266 raise error.Abort(_('--normal cannot be used with --large'))
278 raise error.Abort(_('--normal cannot be used with --large'))
267 return orig(ui, repo, *pats, **opts)
279 return orig(ui, repo, *pats, **opts)
268
280
281 @eh.wrapfunction(cmdutil, 'add')
269 def cmdutiladd(orig, ui, repo, matcher, prefix, explicitonly, **opts):
282 def cmdutiladd(orig, ui, repo, matcher, prefix, explicitonly, **opts):
270 # The --normal flag short circuits this override
283 # The --normal flag short circuits this override
271 if opts.get(r'normal'):
284 if opts.get(r'normal'):
272 return orig(ui, repo, matcher, prefix, explicitonly, **opts)
285 return orig(ui, repo, matcher, prefix, explicitonly, **opts)
273
286
274 ladded, lbad = addlargefiles(ui, repo, False, matcher, **opts)
287 ladded, lbad = addlargefiles(ui, repo, False, matcher, **opts)
275 normalmatcher = composenormalfilematcher(matcher, repo[None].manifest(),
288 normalmatcher = composenormalfilematcher(matcher, repo[None].manifest(),
276 ladded)
289 ladded)
277 bad = orig(ui, repo, normalmatcher, prefix, explicitonly, **opts)
290 bad = orig(ui, repo, normalmatcher, prefix, explicitonly, **opts)
278
291
279 bad.extend(f for f in lbad)
292 bad.extend(f for f in lbad)
280 return bad
293 return bad
281
294
295 @eh.wrapfunction(cmdutil, 'remove')
282 def cmdutilremove(orig, ui, repo, matcher, prefix, after, force, subrepos,
296 def cmdutilremove(orig, ui, repo, matcher, prefix, after, force, subrepos,
283 dryrun):
297 dryrun):
284 normalmatcher = composenormalfilematcher(matcher, repo[None].manifest())
298 normalmatcher = composenormalfilematcher(matcher, repo[None].manifest())
285 result = orig(ui, repo, normalmatcher, prefix, after, force, subrepos,
299 result = orig(ui, repo, normalmatcher, prefix, after, force, subrepos,
286 dryrun)
300 dryrun)
287 return removelargefiles(ui, repo, False, matcher, dryrun, after=after,
301 return removelargefiles(ui, repo, False, matcher, dryrun, after=after,
288 force=force) or result
302 force=force) or result
289
303
304 @eh.wrapfunction(subrepo.hgsubrepo, 'status')
290 def overridestatusfn(orig, repo, rev2, **opts):
305 def overridestatusfn(orig, repo, rev2, **opts):
291 try:
306 try:
292 repo._repo.lfstatus = True
307 repo._repo.lfstatus = True
293 return orig(repo, rev2, **opts)
308 return orig(repo, rev2, **opts)
294 finally:
309 finally:
295 repo._repo.lfstatus = False
310 repo._repo.lfstatus = False
296
311
297 @eh.wrapcommand('status')
312 @eh.wrapcommand('status')
298 def overridestatus(orig, ui, repo, *pats, **opts):
313 def overridestatus(orig, ui, repo, *pats, **opts):
299 try:
314 try:
300 repo.lfstatus = True
315 repo.lfstatus = True
301 return orig(ui, repo, *pats, **opts)
316 return orig(ui, repo, *pats, **opts)
302 finally:
317 finally:
303 repo.lfstatus = False
318 repo.lfstatus = False
304
319
320 @eh.wrapfunction(subrepo.hgsubrepo, 'dirty')
305 def overridedirty(orig, repo, ignoreupdate=False, missing=False):
321 def overridedirty(orig, repo, ignoreupdate=False, missing=False):
306 try:
322 try:
307 repo._repo.lfstatus = True
323 repo._repo.lfstatus = True
308 return orig(repo, ignoreupdate=ignoreupdate, missing=missing)
324 return orig(repo, ignoreupdate=ignoreupdate, missing=missing)
309 finally:
325 finally:
310 repo._repo.lfstatus = False
326 repo._repo.lfstatus = False
311
327
312 @eh.wrapcommand('log')
328 @eh.wrapcommand('log')
313 def overridelog(orig, ui, repo, *pats, **opts):
329 def overridelog(orig, ui, repo, *pats, **opts):
314 def overridematchandpats(ctx, pats=(), opts=None, globbed=False,
330 def overridematchandpats(ctx, pats=(), opts=None, globbed=False,
315 default='relpath', badfn=None):
331 default='relpath', badfn=None):
316 """Matcher that merges root directory with .hglf, suitable for log.
332 """Matcher that merges root directory with .hglf, suitable for log.
317 It is still possible to match .hglf directly.
333 It is still possible to match .hglf directly.
318 For any listed files run log on the standin too.
334 For any listed files run log on the standin too.
319 matchfn tries both the given filename and with .hglf stripped.
335 matchfn tries both the given filename and with .hglf stripped.
320 """
336 """
321 if opts is None:
337 if opts is None:
322 opts = {}
338 opts = {}
323 matchandpats = oldmatchandpats(ctx, pats, opts, globbed, default,
339 matchandpats = oldmatchandpats(ctx, pats, opts, globbed, default,
324 badfn=badfn)
340 badfn=badfn)
325 m, p = copy.copy(matchandpats)
341 m, p = copy.copy(matchandpats)
326
342
327 if m.always():
343 if m.always():
328 # We want to match everything anyway, so there's no benefit trying
344 # We want to match everything anyway, so there's no benefit trying
329 # to add standins.
345 # to add standins.
330 return matchandpats
346 return matchandpats
331
347
332 pats = set(p)
348 pats = set(p)
333
349
334 def fixpats(pat, tostandin=lfutil.standin):
350 def fixpats(pat, tostandin=lfutil.standin):
335 if pat.startswith('set:'):
351 if pat.startswith('set:'):
336 return pat
352 return pat
337
353
338 kindpat = matchmod._patsplit(pat, None)
354 kindpat = matchmod._patsplit(pat, None)
339
355
340 if kindpat[0] is not None:
356 if kindpat[0] is not None:
341 return kindpat[0] + ':' + tostandin(kindpat[1])
357 return kindpat[0] + ':' + tostandin(kindpat[1])
342 return tostandin(kindpat[1])
358 return tostandin(kindpat[1])
343
359
344 if m._cwd:
360 if m._cwd:
345 hglf = lfutil.shortname
361 hglf = lfutil.shortname
346 back = util.pconvert(m.rel(hglf)[:-len(hglf)])
362 back = util.pconvert(m.rel(hglf)[:-len(hglf)])
347
363
348 def tostandin(f):
364 def tostandin(f):
349 # The file may already be a standin, so truncate the back
365 # The file may already be a standin, so truncate the back
350 # prefix and test before mangling it. This avoids turning
366 # prefix and test before mangling it. This avoids turning
351 # 'glob:../.hglf/foo*' into 'glob:../.hglf/../.hglf/foo*'.
367 # 'glob:../.hglf/foo*' into 'glob:../.hglf/../.hglf/foo*'.
352 if f.startswith(back) and lfutil.splitstandin(f[len(back):]):
368 if f.startswith(back) and lfutil.splitstandin(f[len(back):]):
353 return f
369 return f
354
370
355 # An absolute path is from outside the repo, so truncate the
371 # An absolute path is from outside the repo, so truncate the
356 # path to the root before building the standin. Otherwise cwd
372 # path to the root before building the standin. Otherwise cwd
357 # is somewhere in the repo, relative to root, and needs to be
373 # is somewhere in the repo, relative to root, and needs to be
358 # prepended before building the standin.
374 # prepended before building the standin.
359 if os.path.isabs(m._cwd):
375 if os.path.isabs(m._cwd):
360 f = f[len(back):]
376 f = f[len(back):]
361 else:
377 else:
362 f = m._cwd + '/' + f
378 f = m._cwd + '/' + f
363 return back + lfutil.standin(f)
379 return back + lfutil.standin(f)
364 else:
380 else:
365 def tostandin(f):
381 def tostandin(f):
366 if lfutil.isstandin(f):
382 if lfutil.isstandin(f):
367 return f
383 return f
368 return lfutil.standin(f)
384 return lfutil.standin(f)
369 pats.update(fixpats(f, tostandin) for f in p)
385 pats.update(fixpats(f, tostandin) for f in p)
370
386
371 for i in range(0, len(m._files)):
387 for i in range(0, len(m._files)):
372 # Don't add '.hglf' to m.files, since that is already covered by '.'
388 # Don't add '.hglf' to m.files, since that is already covered by '.'
373 if m._files[i] == '.':
389 if m._files[i] == '.':
374 continue
390 continue
375 standin = lfutil.standin(m._files[i])
391 standin = lfutil.standin(m._files[i])
376 # If the "standin" is a directory, append instead of replace to
392 # If the "standin" is a directory, append instead of replace to
377 # support naming a directory on the command line with only
393 # support naming a directory on the command line with only
378 # largefiles. The original directory is kept to support normal
394 # largefiles. The original directory is kept to support normal
379 # files.
395 # files.
380 if standin in ctx:
396 if standin in ctx:
381 m._files[i] = standin
397 m._files[i] = standin
382 elif m._files[i] not in ctx and repo.wvfs.isdir(standin):
398 elif m._files[i] not in ctx and repo.wvfs.isdir(standin):
383 m._files.append(standin)
399 m._files.append(standin)
384
400
385 m._fileset = set(m._files)
401 m._fileset = set(m._files)
386 m.always = lambda: False
402 m.always = lambda: False
387 origmatchfn = m.matchfn
403 origmatchfn = m.matchfn
388 def lfmatchfn(f):
404 def lfmatchfn(f):
389 lf = lfutil.splitstandin(f)
405 lf = lfutil.splitstandin(f)
390 if lf is not None and origmatchfn(lf):
406 if lf is not None and origmatchfn(lf):
391 return True
407 return True
392 r = origmatchfn(f)
408 r = origmatchfn(f)
393 return r
409 return r
394 m.matchfn = lfmatchfn
410 m.matchfn = lfmatchfn
395
411
396 ui.debug('updated patterns: %s\n' % ', '.join(sorted(pats)))
412 ui.debug('updated patterns: %s\n' % ', '.join(sorted(pats)))
397 return m, pats
413 return m, pats
398
414
399 # For hg log --patch, the match object is used in two different senses:
415 # For hg log --patch, the match object is used in two different senses:
400 # (1) to determine what revisions should be printed out, and
416 # (1) to determine what revisions should be printed out, and
401 # (2) to determine what files to print out diffs for.
417 # (2) to determine what files to print out diffs for.
402 # The magic matchandpats override should be used for case (1) but not for
418 # The magic matchandpats override should be used for case (1) but not for
403 # case (2).
419 # case (2).
404 def overridemakefilematcher(repo, pats, opts, badfn=None):
420 def overridemakefilematcher(repo, pats, opts, badfn=None):
405 wctx = repo[None]
421 wctx = repo[None]
406 match, pats = oldmatchandpats(wctx, pats, opts, badfn=badfn)
422 match, pats = oldmatchandpats(wctx, pats, opts, badfn=badfn)
407 return lambda ctx: match
423 return lambda ctx: match
408
424
409 oldmatchandpats = installmatchandpatsfn(overridematchandpats)
425 oldmatchandpats = installmatchandpatsfn(overridematchandpats)
410 oldmakefilematcher = logcmdutil._makenofollowfilematcher
426 oldmakefilematcher = logcmdutil._makenofollowfilematcher
411 setattr(logcmdutil, '_makenofollowfilematcher', overridemakefilematcher)
427 setattr(logcmdutil, '_makenofollowfilematcher', overridemakefilematcher)
412
428
413 try:
429 try:
414 return orig(ui, repo, *pats, **opts)
430 return orig(ui, repo, *pats, **opts)
415 finally:
431 finally:
416 restorematchandpatsfn()
432 restorematchandpatsfn()
417 setattr(logcmdutil, '_makenofollowfilematcher', oldmakefilematcher)
433 setattr(logcmdutil, '_makenofollowfilematcher', oldmakefilematcher)
418
434
419 @eh.wrapcommand('verify',
435 @eh.wrapcommand('verify',
420 opts=[('', 'large', None,
436 opts=[('', 'large', None,
421 _('verify that all largefiles in current revision exists')),
437 _('verify that all largefiles in current revision exists')),
422 ('', 'lfa', None,
438 ('', 'lfa', None,
423 _('verify largefiles in all revisions, not just current')),
439 _('verify largefiles in all revisions, not just current')),
424 ('', 'lfc', None,
440 ('', 'lfc', None,
425 _('verify local largefile contents, not just existence'))])
441 _('verify local largefile contents, not just existence'))])
426 def overrideverify(orig, ui, repo, *pats, **opts):
442 def overrideverify(orig, ui, repo, *pats, **opts):
427 large = opts.pop(r'large', False)
443 large = opts.pop(r'large', False)
428 all = opts.pop(r'lfa', False)
444 all = opts.pop(r'lfa', False)
429 contents = opts.pop(r'lfc', False)
445 contents = opts.pop(r'lfc', False)
430
446
431 result = orig(ui, repo, *pats, **opts)
447 result = orig(ui, repo, *pats, **opts)
432 if large or all or contents:
448 if large or all or contents:
433 result = result or lfcommands.verifylfiles(ui, repo, all, contents)
449 result = result or lfcommands.verifylfiles(ui, repo, all, contents)
434 return result
450 return result
435
451
436 @eh.wrapcommand('debugstate',
452 @eh.wrapcommand('debugstate',
437 opts=[('', 'large', None, _('display largefiles dirstate'))])
453 opts=[('', 'large', None, _('display largefiles dirstate'))])
438 def overridedebugstate(orig, ui, repo, *pats, **opts):
454 def overridedebugstate(orig, ui, repo, *pats, **opts):
439 large = opts.pop(r'large', False)
455 large = opts.pop(r'large', False)
440 if large:
456 if large:
441 class fakerepo(object):
457 class fakerepo(object):
442 dirstate = lfutil.openlfdirstate(ui, repo)
458 dirstate = lfutil.openlfdirstate(ui, repo)
443 orig(ui, fakerepo, *pats, **opts)
459 orig(ui, fakerepo, *pats, **opts)
444 else:
460 else:
445 orig(ui, repo, *pats, **opts)
461 orig(ui, repo, *pats, **opts)
446
462
447 # Before starting the manifest merge, merge.updates will call
463 # Before starting the manifest merge, merge.updates will call
448 # _checkunknownfile to check if there are any files in the merged-in
464 # _checkunknownfile to check if there are any files in the merged-in
449 # changeset that collide with unknown files in the working copy.
465 # changeset that collide with unknown files in the working copy.
450 #
466 #
451 # The largefiles are seen as unknown, so this prevents us from merging
467 # The largefiles are seen as unknown, so this prevents us from merging
452 # in a file 'foo' if we already have a largefile with the same name.
468 # in a file 'foo' if we already have a largefile with the same name.
453 #
469 #
454 # The overridden function filters the unknown files by removing any
470 # The overridden function filters the unknown files by removing any
455 # largefiles. This makes the merge proceed and we can then handle this
471 # largefiles. This makes the merge proceed and we can then handle this
456 # case further in the overridden calculateupdates function below.
472 # case further in the overridden calculateupdates function below.
473 @eh.wrapfunction(merge, '_checkunknownfile')
457 def overridecheckunknownfile(origfn, repo, wctx, mctx, f, f2=None):
474 def overridecheckunknownfile(origfn, repo, wctx, mctx, f, f2=None):
458 if lfutil.standin(repo.dirstate.normalize(f)) in wctx:
475 if lfutil.standin(repo.dirstate.normalize(f)) in wctx:
459 return False
476 return False
460 return origfn(repo, wctx, mctx, f, f2)
477 return origfn(repo, wctx, mctx, f, f2)
461
478
462 # The manifest merge handles conflicts on the manifest level. We want
479 # The manifest merge handles conflicts on the manifest level. We want
463 # to handle changes in largefile-ness of files at this level too.
480 # to handle changes in largefile-ness of files at this level too.
464 #
481 #
465 # The strategy is to run the original calculateupdates and then process
482 # The strategy is to run the original calculateupdates and then process
466 # the action list it outputs. There are two cases we need to deal with:
483 # the action list it outputs. There are two cases we need to deal with:
467 #
484 #
468 # 1. Normal file in p1, largefile in p2. Here the largefile is
485 # 1. Normal file in p1, largefile in p2. Here the largefile is
469 # detected via its standin file, which will enter the working copy
486 # detected via its standin file, which will enter the working copy
470 # with a "get" action. It is not "merge" since the standin is all
487 # with a "get" action. It is not "merge" since the standin is all
471 # Mercurial is concerned with at this level -- the link to the
488 # Mercurial is concerned with at this level -- the link to the
472 # existing normal file is not relevant here.
489 # existing normal file is not relevant here.
473 #
490 #
474 # 2. Largefile in p1, normal file in p2. Here we get a "merge" action
491 # 2. Largefile in p1, normal file in p2. Here we get a "merge" action
475 # since the largefile will be present in the working copy and
492 # since the largefile will be present in the working copy and
476 # different from the normal file in p2. Mercurial therefore
493 # different from the normal file in p2. Mercurial therefore
477 # triggers a merge action.
494 # triggers a merge action.
478 #
495 #
479 # In both cases, we prompt the user and emit new actions to either
496 # In both cases, we prompt the user and emit new actions to either
480 # remove the standin (if the normal file was kept) or to remove the
497 # remove the standin (if the normal file was kept) or to remove the
481 # normal file and get the standin (if the largefile was kept). The
498 # normal file and get the standin (if the largefile was kept). The
482 # default prompt answer is to use the largefile version since it was
499 # default prompt answer is to use the largefile version since it was
483 # presumably changed on purpose.
500 # presumably changed on purpose.
484 #
501 #
485 # Finally, the merge.applyupdates function will then take care of
502 # Finally, the merge.applyupdates function will then take care of
486 # writing the files into the working copy and lfcommands.updatelfiles
503 # writing the files into the working copy and lfcommands.updatelfiles
487 # will update the largefiles.
504 # will update the largefiles.
505 @eh.wrapfunction(merge, 'calculateupdates')
488 def overridecalculateupdates(origfn, repo, p1, p2, pas, branchmerge, force,
506 def overridecalculateupdates(origfn, repo, p1, p2, pas, branchmerge, force,
489 acceptremote, *args, **kwargs):
507 acceptremote, *args, **kwargs):
490 overwrite = force and not branchmerge
508 overwrite = force and not branchmerge
491 actions, diverge, renamedelete = origfn(
509 actions, diverge, renamedelete = origfn(
492 repo, p1, p2, pas, branchmerge, force, acceptremote, *args, **kwargs)
510 repo, p1, p2, pas, branchmerge, force, acceptremote, *args, **kwargs)
493
511
494 if overwrite:
512 if overwrite:
495 return actions, diverge, renamedelete
513 return actions, diverge, renamedelete
496
514
497 # Convert to dictionary with filename as key and action as value.
515 # Convert to dictionary with filename as key and action as value.
498 lfiles = set()
516 lfiles = set()
499 for f in actions:
517 for f in actions:
500 splitstandin = lfutil.splitstandin(f)
518 splitstandin = lfutil.splitstandin(f)
501 if splitstandin in p1:
519 if splitstandin in p1:
502 lfiles.add(splitstandin)
520 lfiles.add(splitstandin)
503 elif lfutil.standin(f) in p1:
521 elif lfutil.standin(f) in p1:
504 lfiles.add(f)
522 lfiles.add(f)
505
523
506 for lfile in sorted(lfiles):
524 for lfile in sorted(lfiles):
507 standin = lfutil.standin(lfile)
525 standin = lfutil.standin(lfile)
508 (lm, largs, lmsg) = actions.get(lfile, (None, None, None))
526 (lm, largs, lmsg) = actions.get(lfile, (None, None, None))
509 (sm, sargs, smsg) = actions.get(standin, (None, None, None))
527 (sm, sargs, smsg) = actions.get(standin, (None, None, None))
510 if sm in ('g', 'dc') and lm != 'r':
528 if sm in ('g', 'dc') and lm != 'r':
511 if sm == 'dc':
529 if sm == 'dc':
512 f1, f2, fa, move, anc = sargs
530 f1, f2, fa, move, anc = sargs
513 sargs = (p2[f2].flags(), False)
531 sargs = (p2[f2].flags(), False)
514 # Case 1: normal file in the working copy, largefile in
532 # Case 1: normal file in the working copy, largefile in
515 # the second parent
533 # the second parent
516 usermsg = _('remote turned local normal file %s into a largefile\n'
534 usermsg = _('remote turned local normal file %s into a largefile\n'
517 'use (l)argefile or keep (n)ormal file?'
535 'use (l)argefile or keep (n)ormal file?'
518 '$$ &Largefile $$ &Normal file') % lfile
536 '$$ &Largefile $$ &Normal file') % lfile
519 if repo.ui.promptchoice(usermsg, 0) == 0: # pick remote largefile
537 if repo.ui.promptchoice(usermsg, 0) == 0: # pick remote largefile
520 actions[lfile] = ('r', None, 'replaced by standin')
538 actions[lfile] = ('r', None, 'replaced by standin')
521 actions[standin] = ('g', sargs, 'replaces standin')
539 actions[standin] = ('g', sargs, 'replaces standin')
522 else: # keep local normal file
540 else: # keep local normal file
523 actions[lfile] = ('k', None, 'replaces standin')
541 actions[lfile] = ('k', None, 'replaces standin')
524 if branchmerge:
542 if branchmerge:
525 actions[standin] = ('k', None, 'replaced by non-standin')
543 actions[standin] = ('k', None, 'replaced by non-standin')
526 else:
544 else:
527 actions[standin] = ('r', None, 'replaced by non-standin')
545 actions[standin] = ('r', None, 'replaced by non-standin')
528 elif lm in ('g', 'dc') and sm != 'r':
546 elif lm in ('g', 'dc') and sm != 'r':
529 if lm == 'dc':
547 if lm == 'dc':
530 f1, f2, fa, move, anc = largs
548 f1, f2, fa, move, anc = largs
531 largs = (p2[f2].flags(), False)
549 largs = (p2[f2].flags(), False)
532 # Case 2: largefile in the working copy, normal file in
550 # Case 2: largefile in the working copy, normal file in
533 # the second parent
551 # the second parent
534 usermsg = _('remote turned local largefile %s into a normal file\n'
552 usermsg = _('remote turned local largefile %s into a normal file\n'
535 'keep (l)argefile or use (n)ormal file?'
553 'keep (l)argefile or use (n)ormal file?'
536 '$$ &Largefile $$ &Normal file') % lfile
554 '$$ &Largefile $$ &Normal file') % lfile
537 if repo.ui.promptchoice(usermsg, 0) == 0: # keep local largefile
555 if repo.ui.promptchoice(usermsg, 0) == 0: # keep local largefile
538 if branchmerge:
556 if branchmerge:
539 # largefile can be restored from standin safely
557 # largefile can be restored from standin safely
540 actions[lfile] = ('k', None, 'replaced by standin')
558 actions[lfile] = ('k', None, 'replaced by standin')
541 actions[standin] = ('k', None, 'replaces standin')
559 actions[standin] = ('k', None, 'replaces standin')
542 else:
560 else:
543 # "lfile" should be marked as "removed" without
561 # "lfile" should be marked as "removed" without
544 # removal of itself
562 # removal of itself
545 actions[lfile] = ('lfmr', None,
563 actions[lfile] = ('lfmr', None,
546 'forget non-standin largefile')
564 'forget non-standin largefile')
547
565
548 # linear-merge should treat this largefile as 're-added'
566 # linear-merge should treat this largefile as 're-added'
549 actions[standin] = ('a', None, 'keep standin')
567 actions[standin] = ('a', None, 'keep standin')
550 else: # pick remote normal file
568 else: # pick remote normal file
551 actions[lfile] = ('g', largs, 'replaces standin')
569 actions[lfile] = ('g', largs, 'replaces standin')
552 actions[standin] = ('r', None, 'replaced by non-standin')
570 actions[standin] = ('r', None, 'replaced by non-standin')
553
571
554 return actions, diverge, renamedelete
572 return actions, diverge, renamedelete
555
573
574 @eh.wrapfunction(merge, 'recordupdates')
556 def mergerecordupdates(orig, repo, actions, branchmerge):
575 def mergerecordupdates(orig, repo, actions, branchmerge):
557 if 'lfmr' in actions:
576 if 'lfmr' in actions:
558 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
577 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
559 for lfile, args, msg in actions['lfmr']:
578 for lfile, args, msg in actions['lfmr']:
560 # this should be executed before 'orig', to execute 'remove'
579 # this should be executed before 'orig', to execute 'remove'
561 # before all other actions
580 # before all other actions
562 repo.dirstate.remove(lfile)
581 repo.dirstate.remove(lfile)
563 # make sure lfile doesn't get synclfdirstate'd as normal
582 # make sure lfile doesn't get synclfdirstate'd as normal
564 lfdirstate.add(lfile)
583 lfdirstate.add(lfile)
565 lfdirstate.write()
584 lfdirstate.write()
566
585
567 return orig(repo, actions, branchmerge)
586 return orig(repo, actions, branchmerge)
568
587
569 # Override filemerge to prompt the user about how they wish to merge
588 # Override filemerge to prompt the user about how they wish to merge
570 # largefiles. This will handle identical edits without prompting the user.
589 # largefiles. This will handle identical edits without prompting the user.
590 @eh.wrapfunction(filemerge, '_filemerge')
571 def overridefilemerge(origfn, premerge, repo, wctx, mynode, orig, fcd, fco, fca,
591 def overridefilemerge(origfn, premerge, repo, wctx, mynode, orig, fcd, fco, fca,
572 labels=None):
592 labels=None):
573 if not lfutil.isstandin(orig) or fcd.isabsent() or fco.isabsent():
593 if not lfutil.isstandin(orig) or fcd.isabsent() or fco.isabsent():
574 return origfn(premerge, repo, wctx, mynode, orig, fcd, fco, fca,
594 return origfn(premerge, repo, wctx, mynode, orig, fcd, fco, fca,
575 labels=labels)
595 labels=labels)
576
596
577 ahash = lfutil.readasstandin(fca).lower()
597 ahash = lfutil.readasstandin(fca).lower()
578 dhash = lfutil.readasstandin(fcd).lower()
598 dhash = lfutil.readasstandin(fcd).lower()
579 ohash = lfutil.readasstandin(fco).lower()
599 ohash = lfutil.readasstandin(fco).lower()
580 if (ohash != ahash and
600 if (ohash != ahash and
581 ohash != dhash and
601 ohash != dhash and
582 (dhash == ahash or
602 (dhash == ahash or
583 repo.ui.promptchoice(
603 repo.ui.promptchoice(
584 _('largefile %s has a merge conflict\nancestor was %s\n'
604 _('largefile %s has a merge conflict\nancestor was %s\n'
585 'keep (l)ocal %s or\ntake (o)ther %s?'
605 'keep (l)ocal %s or\ntake (o)ther %s?'
586 '$$ &Local $$ &Other') %
606 '$$ &Local $$ &Other') %
587 (lfutil.splitstandin(orig), ahash, dhash, ohash),
607 (lfutil.splitstandin(orig), ahash, dhash, ohash),
588 0) == 1)):
608 0) == 1)):
589 repo.wwrite(fcd.path(), fco.data(), fco.flags())
609 repo.wwrite(fcd.path(), fco.data(), fco.flags())
590 return True, 0, False
610 return True, 0, False
591
611
612 @eh.wrapfunction(copiesmod, 'pathcopies')
592 def copiespathcopies(orig, ctx1, ctx2, match=None):
613 def copiespathcopies(orig, ctx1, ctx2, match=None):
593 copies = orig(ctx1, ctx2, match=match)
614 copies = orig(ctx1, ctx2, match=match)
594 updated = {}
615 updated = {}
595
616
596 for k, v in copies.iteritems():
617 for k, v in copies.iteritems():
597 updated[lfutil.splitstandin(k) or k] = lfutil.splitstandin(v) or v
618 updated[lfutil.splitstandin(k) or k] = lfutil.splitstandin(v) or v
598
619
599 return updated
620 return updated
600
621
601 # Copy first changes the matchers to match standins instead of
622 # Copy first changes the matchers to match standins instead of
602 # largefiles. Then it overrides util.copyfile in that function it
623 # largefiles. Then it overrides util.copyfile in that function it
603 # checks if the destination largefile already exists. It also keeps a
624 # checks if the destination largefile already exists. It also keeps a
604 # list of copied files so that the largefiles can be copied and the
625 # list of copied files so that the largefiles can be copied and the
605 # dirstate updated.
626 # dirstate updated.
627 @eh.wrapfunction(cmdutil, 'copy')
606 def overridecopy(orig, ui, repo, pats, opts, rename=False):
628 def overridecopy(orig, ui, repo, pats, opts, rename=False):
607 # doesn't remove largefile on rename
629 # doesn't remove largefile on rename
608 if len(pats) < 2:
630 if len(pats) < 2:
609 # this isn't legal, let the original function deal with it
631 # this isn't legal, let the original function deal with it
610 return orig(ui, repo, pats, opts, rename)
632 return orig(ui, repo, pats, opts, rename)
611
633
612 # This could copy both lfiles and normal files in one command,
634 # This could copy both lfiles and normal files in one command,
613 # but we don't want to do that. First replace their matcher to
635 # but we don't want to do that. First replace their matcher to
614 # only match normal files and run it, then replace it to just
636 # only match normal files and run it, then replace it to just
615 # match largefiles and run it again.
637 # match largefiles and run it again.
616 nonormalfiles = False
638 nonormalfiles = False
617 nolfiles = False
639 nolfiles = False
618 installnormalfilesmatchfn(repo[None].manifest())
640 installnormalfilesmatchfn(repo[None].manifest())
619 try:
641 try:
620 result = orig(ui, repo, pats, opts, rename)
642 result = orig(ui, repo, pats, opts, rename)
621 except error.Abort as e:
643 except error.Abort as e:
622 if pycompat.bytestr(e) != _('no files to copy'):
644 if pycompat.bytestr(e) != _('no files to copy'):
623 raise e
645 raise e
624 else:
646 else:
625 nonormalfiles = True
647 nonormalfiles = True
626 result = 0
648 result = 0
627 finally:
649 finally:
628 restorematchfn()
650 restorematchfn()
629
651
630 # The first rename can cause our current working directory to be removed.
652 # The first rename can cause our current working directory to be removed.
631 # In that case there is nothing left to copy/rename so just quit.
653 # In that case there is nothing left to copy/rename so just quit.
632 try:
654 try:
633 repo.getcwd()
655 repo.getcwd()
634 except OSError:
656 except OSError:
635 return result
657 return result
636
658
637 def makestandin(relpath):
659 def makestandin(relpath):
638 path = pathutil.canonpath(repo.root, repo.getcwd(), relpath)
660 path = pathutil.canonpath(repo.root, repo.getcwd(), relpath)
639 return repo.wvfs.join(lfutil.standin(path))
661 return repo.wvfs.join(lfutil.standin(path))
640
662
641 fullpats = scmutil.expandpats(pats)
663 fullpats = scmutil.expandpats(pats)
642 dest = fullpats[-1]
664 dest = fullpats[-1]
643
665
644 if os.path.isdir(dest):
666 if os.path.isdir(dest):
645 if not os.path.isdir(makestandin(dest)):
667 if not os.path.isdir(makestandin(dest)):
646 os.makedirs(makestandin(dest))
668 os.makedirs(makestandin(dest))
647
669
648 try:
670 try:
649 # When we call orig below, it creates the standins, but we don't add
671 # When we call orig below, it creates the standins, but we don't add
650 # them to the dirstate until later, so lock during that time.
672 # them to the dirstate until later, so lock during that time.
651 wlock = repo.wlock()
673 wlock = repo.wlock()
652
674
653 manifest = repo[None].manifest()
675 manifest = repo[None].manifest()
654 def overridematch(ctx, pats=(), opts=None, globbed=False,
676 def overridematch(ctx, pats=(), opts=None, globbed=False,
655 default='relpath', badfn=None):
677 default='relpath', badfn=None):
656 if opts is None:
678 if opts is None:
657 opts = {}
679 opts = {}
658 newpats = []
680 newpats = []
659 # The patterns were previously mangled to add the standin
681 # The patterns were previously mangled to add the standin
660 # directory; we need to remove that now
682 # directory; we need to remove that now
661 for pat in pats:
683 for pat in pats:
662 if matchmod.patkind(pat) is None and lfutil.shortname in pat:
684 if matchmod.patkind(pat) is None and lfutil.shortname in pat:
663 newpats.append(pat.replace(lfutil.shortname, ''))
685 newpats.append(pat.replace(lfutil.shortname, ''))
664 else:
686 else:
665 newpats.append(pat)
687 newpats.append(pat)
666 match = oldmatch(ctx, newpats, opts, globbed, default, badfn=badfn)
688 match = oldmatch(ctx, newpats, opts, globbed, default, badfn=badfn)
667 m = copy.copy(match)
689 m = copy.copy(match)
668 lfile = lambda f: lfutil.standin(f) in manifest
690 lfile = lambda f: lfutil.standin(f) in manifest
669 m._files = [lfutil.standin(f) for f in m._files if lfile(f)]
691 m._files = [lfutil.standin(f) for f in m._files if lfile(f)]
670 m._fileset = set(m._files)
692 m._fileset = set(m._files)
671 origmatchfn = m.matchfn
693 origmatchfn = m.matchfn
672 def matchfn(f):
694 def matchfn(f):
673 lfile = lfutil.splitstandin(f)
695 lfile = lfutil.splitstandin(f)
674 return (lfile is not None and
696 return (lfile is not None and
675 (f in manifest) and
697 (f in manifest) and
676 origmatchfn(lfile) or
698 origmatchfn(lfile) or
677 None)
699 None)
678 m.matchfn = matchfn
700 m.matchfn = matchfn
679 return m
701 return m
680 oldmatch = installmatchfn(overridematch)
702 oldmatch = installmatchfn(overridematch)
681 listpats = []
703 listpats = []
682 for pat in pats:
704 for pat in pats:
683 if matchmod.patkind(pat) is not None:
705 if matchmod.patkind(pat) is not None:
684 listpats.append(pat)
706 listpats.append(pat)
685 else:
707 else:
686 listpats.append(makestandin(pat))
708 listpats.append(makestandin(pat))
687
709
688 try:
710 try:
689 origcopyfile = util.copyfile
711 origcopyfile = util.copyfile
690 copiedfiles = []
712 copiedfiles = []
691 def overridecopyfile(src, dest, *args, **kwargs):
713 def overridecopyfile(src, dest, *args, **kwargs):
692 if (lfutil.shortname in src and
714 if (lfutil.shortname in src and
693 dest.startswith(repo.wjoin(lfutil.shortname))):
715 dest.startswith(repo.wjoin(lfutil.shortname))):
694 destlfile = dest.replace(lfutil.shortname, '')
716 destlfile = dest.replace(lfutil.shortname, '')
695 if not opts['force'] and os.path.exists(destlfile):
717 if not opts['force'] and os.path.exists(destlfile):
696 raise IOError('',
718 raise IOError('',
697 _('destination largefile already exists'))
719 _('destination largefile already exists'))
698 copiedfiles.append((src, dest))
720 copiedfiles.append((src, dest))
699 origcopyfile(src, dest, *args, **kwargs)
721 origcopyfile(src, dest, *args, **kwargs)
700
722
701 util.copyfile = overridecopyfile
723 util.copyfile = overridecopyfile
702 result += orig(ui, repo, listpats, opts, rename)
724 result += orig(ui, repo, listpats, opts, rename)
703 finally:
725 finally:
704 util.copyfile = origcopyfile
726 util.copyfile = origcopyfile
705
727
706 lfdirstate = lfutil.openlfdirstate(ui, repo)
728 lfdirstate = lfutil.openlfdirstate(ui, repo)
707 for (src, dest) in copiedfiles:
729 for (src, dest) in copiedfiles:
708 if (lfutil.shortname in src and
730 if (lfutil.shortname in src and
709 dest.startswith(repo.wjoin(lfutil.shortname))):
731 dest.startswith(repo.wjoin(lfutil.shortname))):
710 srclfile = src.replace(repo.wjoin(lfutil.standin('')), '')
732 srclfile = src.replace(repo.wjoin(lfutil.standin('')), '')
711 destlfile = dest.replace(repo.wjoin(lfutil.standin('')), '')
733 destlfile = dest.replace(repo.wjoin(lfutil.standin('')), '')
712 destlfiledir = repo.wvfs.dirname(repo.wjoin(destlfile)) or '.'
734 destlfiledir = repo.wvfs.dirname(repo.wjoin(destlfile)) or '.'
713 if not os.path.isdir(destlfiledir):
735 if not os.path.isdir(destlfiledir):
714 os.makedirs(destlfiledir)
736 os.makedirs(destlfiledir)
715 if rename:
737 if rename:
716 os.rename(repo.wjoin(srclfile), repo.wjoin(destlfile))
738 os.rename(repo.wjoin(srclfile), repo.wjoin(destlfile))
717
739
718 # The file is gone, but this deletes any empty parent
740 # The file is gone, but this deletes any empty parent
719 # directories as a side-effect.
741 # directories as a side-effect.
720 repo.wvfs.unlinkpath(srclfile, ignoremissing=True)
742 repo.wvfs.unlinkpath(srclfile, ignoremissing=True)
721 lfdirstate.remove(srclfile)
743 lfdirstate.remove(srclfile)
722 else:
744 else:
723 util.copyfile(repo.wjoin(srclfile),
745 util.copyfile(repo.wjoin(srclfile),
724 repo.wjoin(destlfile))
746 repo.wjoin(destlfile))
725
747
726 lfdirstate.add(destlfile)
748 lfdirstate.add(destlfile)
727 lfdirstate.write()
749 lfdirstate.write()
728 except error.Abort as e:
750 except error.Abort as e:
729 if pycompat.bytestr(e) != _('no files to copy'):
751 if pycompat.bytestr(e) != _('no files to copy'):
730 raise e
752 raise e
731 else:
753 else:
732 nolfiles = True
754 nolfiles = True
733 finally:
755 finally:
734 restorematchfn()
756 restorematchfn()
735 wlock.release()
757 wlock.release()
736
758
737 if nolfiles and nonormalfiles:
759 if nolfiles and nonormalfiles:
738 raise error.Abort(_('no files to copy'))
760 raise error.Abort(_('no files to copy'))
739
761
740 return result
762 return result
741
763
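# Illustrative sketch (not part of the extension): the comment above
# overridecopy describes temporarily replacing util.copyfile with a
# recording wrapper and restoring it afterwards.  Reduced to its core, with
# 'mod' standing in for any module exposing a copyfile function, the
# pattern looks like this:
def _demo_recordingcopies(mod, runcopies):
    origcopyfile = mod.copyfile
    copied = []
    def recordingcopyfile(src, dest, *args, **kwargs):
        copied.append((src, dest))          # remember the pair for later
        return origcopyfile(src, dest, *args, **kwargs)
    mod.copyfile = recordingcopyfile
    try:
        runcopies()                         # whatever triggers the copies
    finally:
        mod.copyfile = origcopyfile         # always restore the original
    return copied
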
742 # When the user calls revert, we have to be careful not to revert any
764 # When the user calls revert, we have to be careful not to revert any
743 # changes to other largefiles accidentally. This means we have to keep
765 # changes to other largefiles accidentally. This means we have to keep
744 # track of the largefiles that are being reverted so we only pull down
766 # track of the largefiles that are being reverted so we only pull down
745 # the necessary largefiles.
767 # the necessary largefiles.
746 #
768 #
747 # Standins are only updated (to match the hash of largefiles) before
769 # Standins are only updated (to match the hash of largefiles) before
748 # commits. Update the standins, then run the original revert, changing
770 # commits. Update the standins, then run the original revert, changing
749 # the matcher to hit standins instead of largefiles. Based on the
771 # the matcher to hit standins instead of largefiles. Based on the
750 # resulting standins, update the largefiles.
772 # resulting standins, update the largefiles.
773 @eh.wrapfunction(cmdutil, 'revert')
751 def overriderevert(orig, ui, repo, ctx, parents, *pats, **opts):
774 def overriderevert(orig, ui, repo, ctx, parents, *pats, **opts):
752 # Because we put the standins in a bad state (by updating them)
775 # Because we put the standins in a bad state (by updating them)
753 # and then return them to a correct state we need to lock to
776 # and then return them to a correct state we need to lock to
754 # prevent others from changing them in their incorrect state.
777 # prevent others from changing them in their incorrect state.
755 with repo.wlock():
778 with repo.wlock():
756 lfdirstate = lfutil.openlfdirstate(ui, repo)
779 lfdirstate = lfutil.openlfdirstate(ui, repo)
757 s = lfutil.lfdirstatestatus(lfdirstate, repo)
780 s = lfutil.lfdirstatestatus(lfdirstate, repo)
758 lfdirstate.write()
781 lfdirstate.write()
759 for lfile in s.modified:
782 for lfile in s.modified:
760 lfutil.updatestandin(repo, lfile, lfutil.standin(lfile))
783 lfutil.updatestandin(repo, lfile, lfutil.standin(lfile))
761 for lfile in s.deleted:
784 for lfile in s.deleted:
762 fstandin = lfutil.standin(lfile)
785 fstandin = lfutil.standin(lfile)
763 if (repo.wvfs.exists(fstandin)):
786 if (repo.wvfs.exists(fstandin)):
764 repo.wvfs.unlink(fstandin)
787 repo.wvfs.unlink(fstandin)
765
788
766 oldstandins = lfutil.getstandinsstate(repo)
789 oldstandins = lfutil.getstandinsstate(repo)
767
790
768 def overridematch(mctx, pats=(), opts=None, globbed=False,
791 def overridematch(mctx, pats=(), opts=None, globbed=False,
769 default='relpath', badfn=None):
792 default='relpath', badfn=None):
770 if opts is None:
793 if opts is None:
771 opts = {}
794 opts = {}
772 match = oldmatch(mctx, pats, opts, globbed, default, badfn=badfn)
795 match = oldmatch(mctx, pats, opts, globbed, default, badfn=badfn)
773 m = copy.copy(match)
796 m = copy.copy(match)
774
797
775 # revert supports recursing into subrepos, and though largefiles
798 # revert supports recursing into subrepos, and though largefiles
776 # currently doesn't work correctly in that case, this match is
799 # currently doesn't work correctly in that case, this match is
777 # called, so the lfdirstate above may not be the correct one for
800 # called, so the lfdirstate above may not be the correct one for
778 # this invocation of match.
801 # this invocation of match.
779 lfdirstate = lfutil.openlfdirstate(mctx.repo().ui, mctx.repo(),
802 lfdirstate = lfutil.openlfdirstate(mctx.repo().ui, mctx.repo(),
780 False)
803 False)
781
804
782 wctx = repo[None]
805 wctx = repo[None]
783 matchfiles = []
806 matchfiles = []
784 for f in m._files:
807 for f in m._files:
785 standin = lfutil.standin(f)
808 standin = lfutil.standin(f)
786 if standin in ctx or standin in mctx:
809 if standin in ctx or standin in mctx:
787 matchfiles.append(standin)
810 matchfiles.append(standin)
788 elif standin in wctx or lfdirstate[f] == 'r':
811 elif standin in wctx or lfdirstate[f] == 'r':
789 continue
812 continue
790 else:
813 else:
791 matchfiles.append(f)
814 matchfiles.append(f)
792 m._files = matchfiles
815 m._files = matchfiles
793 m._fileset = set(m._files)
816 m._fileset = set(m._files)
794 origmatchfn = m.matchfn
817 origmatchfn = m.matchfn
795 def matchfn(f):
818 def matchfn(f):
796 lfile = lfutil.splitstandin(f)
819 lfile = lfutil.splitstandin(f)
797 if lfile is not None:
820 if lfile is not None:
798 return (origmatchfn(lfile) and
821 return (origmatchfn(lfile) and
799 (f in ctx or f in mctx))
822 (f in ctx or f in mctx))
800 return origmatchfn(f)
823 return origmatchfn(f)
801 m.matchfn = matchfn
824 m.matchfn = matchfn
802 return m
825 return m
803 oldmatch = installmatchfn(overridematch)
826 oldmatch = installmatchfn(overridematch)
804 try:
827 try:
805 orig(ui, repo, ctx, parents, *pats, **opts)
828 orig(ui, repo, ctx, parents, *pats, **opts)
806 finally:
829 finally:
807 restorematchfn()
830 restorematchfn()
808
831
809 newstandins = lfutil.getstandinsstate(repo)
832 newstandins = lfutil.getstandinsstate(repo)
810 filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
833 filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
811 # lfdirstate should be 'normallookup'-ed for updated files,
834 # lfdirstate should be 'normallookup'-ed for updated files,
812 # because reverting doesn't touch the dirstate for 'normal' files
835 # because reverting doesn't touch the dirstate for 'normal' files
813 # when the target revision is explicitly specified: in such a case,
836 # when the target revision is explicitly specified: in such a case,
814 # an 'n' state and a valid timestamp in the dirstate don't guarantee
837 # an 'n' state and a valid timestamp in the dirstate don't guarantee
815 # that the target (standin) file is 'clean'.
838 # that the target (standin) file is 'clean'.
816 lfcommands.updatelfiles(ui, repo, filelist, printmessage=False,
839 lfcommands.updatelfiles(ui, repo, filelist, printmessage=False,
817 normallookup=True)
840 normallookup=True)
818
841
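# Illustrative sketch (not part of the extension): after the wrapped revert
# above runs, the largefiles that need refreshing are those whose standin
# changed between the two snapshots.  Assuming each snapshot is a list of
# (standin path, hash) pairs, the comparison is a simple dict diff:
def _demo_changedstandins(oldstate, newstate):
    old = dict(oldstate)
    return [path for path, newhash in newstate if old.get(path) != newhash]
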
819 # after pulling changesets, we need to take some extra care to fetch
842 # after pulling changesets, we need to take some extra care to fetch
820 # the largefiles for the new revisions from the remote store
843 # the largefiles for the new revisions from the remote store
821 @eh.wrapcommand('pull',
844 @eh.wrapcommand('pull',
822 opts=[('', 'all-largefiles', None,
845 opts=[('', 'all-largefiles', None,
823 _('download all pulled versions of largefiles (DEPRECATED)')),
846 _('download all pulled versions of largefiles (DEPRECATED)')),
824 ('', 'lfrev', [],
847 ('', 'lfrev', [],
825 _('download largefiles for these revisions'), _('REV'))])
848 _('download largefiles for these revisions'), _('REV'))])
826 def overridepull(orig, ui, repo, source=None, **opts):
849 def overridepull(orig, ui, repo, source=None, **opts):
827 revsprepull = len(repo)
850 revsprepull = len(repo)
828 if not source:
851 if not source:
829 source = 'default'
852 source = 'default'
830 repo.lfpullsource = source
853 repo.lfpullsource = source
831 result = orig(ui, repo, source, **opts)
854 result = orig(ui, repo, source, **opts)
832 revspostpull = len(repo)
855 revspostpull = len(repo)
833 lfrevs = opts.get(r'lfrev', [])
856 lfrevs = opts.get(r'lfrev', [])
834 if opts.get(r'all_largefiles'):
857 if opts.get(r'all_largefiles'):
835 lfrevs.append('pulled()')
858 lfrevs.append('pulled()')
836 if lfrevs and revspostpull > revsprepull:
859 if lfrevs and revspostpull > revsprepull:
837 numcached = 0
860 numcached = 0
838 repo.firstpulled = revsprepull # for pulled() revset expression
861 repo.firstpulled = revsprepull # for pulled() revset expression
839 try:
862 try:
840 for rev in scmutil.revrange(repo, lfrevs):
863 for rev in scmutil.revrange(repo, lfrevs):
841 ui.note(_('pulling largefiles for revision %d\n') % rev)
864 ui.note(_('pulling largefiles for revision %d\n') % rev)
842 (cached, missing) = lfcommands.cachelfiles(ui, repo, rev)
865 (cached, missing) = lfcommands.cachelfiles(ui, repo, rev)
843 numcached += len(cached)
866 numcached += len(cached)
844 finally:
867 finally:
845 del repo.firstpulled
868 del repo.firstpulled
846 ui.status(_("%d largefiles cached\n") % numcached)
869 ui.status(_("%d largefiles cached\n") % numcached)
847 return result
870 return result
848
871
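# Illustrative sketch (not part of the extension): the pull wrapper above
# records the repository length before the pull, so "just pulled" simply
# means every revision number at or above that mark, which is what the
# pulled() revset below reports:
def _demo_pulledrevs(revsprepull, revspostpull):
    return list(range(revsprepull, revspostpull))

# _demo_pulledrevs(10, 13) -> [10, 11, 12]
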
849 @eh.wrapcommand('push',
872 @eh.wrapcommand('push',
850 opts=[('', 'lfrev', [],
873 opts=[('', 'lfrev', [],
851 _('upload largefiles for these revisions'), _('REV'))])
874 _('upload largefiles for these revisions'), _('REV'))])
852 def overridepush(orig, ui, repo, *args, **kwargs):
875 def overridepush(orig, ui, repo, *args, **kwargs):
853 """Override push command and store --lfrev parameters in opargs"""
876 """Override push command and store --lfrev parameters in opargs"""
854 lfrevs = kwargs.pop(r'lfrev', None)
877 lfrevs = kwargs.pop(r'lfrev', None)
855 if lfrevs:
878 if lfrevs:
856 opargs = kwargs.setdefault(r'opargs', {})
879 opargs = kwargs.setdefault(r'opargs', {})
857 opargs['lfrevs'] = scmutil.revrange(repo, lfrevs)
880 opargs['lfrevs'] = scmutil.revrange(repo, lfrevs)
858 return orig(ui, repo, *args, **kwargs)
881 return orig(ui, repo, *args, **kwargs)
859
882
883 @eh.wrapfunction(exchange, 'pushoperation')
860 def exchangepushoperation(orig, *args, **kwargs):
884 def exchangepushoperation(orig, *args, **kwargs):
861 """Override pushoperation constructor and store lfrevs parameter"""
885 """Override pushoperation constructor and store lfrevs parameter"""
862 lfrevs = kwargs.pop(r'lfrevs', None)
886 lfrevs = kwargs.pop(r'lfrevs', None)
863 pushop = orig(*args, **kwargs)
887 pushop = orig(*args, **kwargs)
864 pushop.lfrevs = lfrevs
888 pushop.lfrevs = lfrevs
865 return pushop
889 return pushop
866
890
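# Illustrative sketch (not part of the extension): exchangepushoperation above
# follows a common wrapping pattern: pop an extension-specific keyword
# argument before calling the original constructor, then attach it to the
# returned object.  The argument name 'extra' here is purely hypothetical:
def _demo_attachextra(orig, *args, **kwargs):
    extra = kwargs.pop('extra', None)
    obj = orig(*args, **kwargs)
    obj.extra = extra                       # stash it for later consumers
    return obj
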
867 revsetpredicate = registrar.revsetpredicate()
891 revsetpredicate = registrar.revsetpredicate()
868
892
869 @revsetpredicate('pulled()')
893 @revsetpredicate('pulled()')
870 def pulledrevsetsymbol(repo, subset, x):
894 def pulledrevsetsymbol(repo, subset, x):
871 """Changesets that have just been pulled.
895 """Changesets that have just been pulled.
872
896
873 Only available with largefiles, in 'pull --lfrev' expressions.
897 Only available with largefiles, in 'pull --lfrev' expressions.
874
898
875 .. container:: verbose
899 .. container:: verbose
876
900
877 Some examples:
901 Some examples:
878
902
879 - pull largefiles for all new changesets::
903 - pull largefiles for all new changesets::
880
904
881 hg pull --lfrev "pulled()"
905 hg pull --lfrev "pulled()"
882
906
883 - pull largefiles for all new branch heads::
907 - pull largefiles for all new branch heads::
884
908
885 hg pull --lfrev "head(pulled()) and not closed()"
909 hg pull --lfrev "head(pulled()) and not closed()"
886
910
887 """
911 """
888
912
889 try:
913 try:
890 firstpulled = repo.firstpulled
914 firstpulled = repo.firstpulled
891 except AttributeError:
915 except AttributeError:
892 raise error.Abort(_("pulled() only available in --lfrev"))
916 raise error.Abort(_("pulled() only available in --lfrev"))
893 return smartset.baseset([r for r in subset if r >= firstpulled])
917 return smartset.baseset([r for r in subset if r >= firstpulled])
894
918
895 @eh.wrapcommand('clone',
919 @eh.wrapcommand('clone',
896 opts=[('', 'all-largefiles', None,
920 opts=[('', 'all-largefiles', None,
897 _('download all versions of all largefiles'))])
921 _('download all versions of all largefiles'))])
898 def overrideclone(orig, ui, source, dest=None, **opts):
922 def overrideclone(orig, ui, source, dest=None, **opts):
899 d = dest
923 d = dest
900 if d is None:
924 if d is None:
901 d = hg.defaultdest(source)
925 d = hg.defaultdest(source)
902 if opts.get(r'all_largefiles') and not hg.islocal(d):
926 if opts.get(r'all_largefiles') and not hg.islocal(d):
903 raise error.Abort(_(
927 raise error.Abort(_(
904 '--all-largefiles is incompatible with non-local destination %s') %
928 '--all-largefiles is incompatible with non-local destination %s') %
905 d)
929 d)
906
930
907 return orig(ui, source, dest, **opts)
931 return orig(ui, source, dest, **opts)
908
932
933 @eh.wrapfunction(hg, 'clone')
909 def hgclone(orig, ui, opts, *args, **kwargs):
934 def hgclone(orig, ui, opts, *args, **kwargs):
910 result = orig(ui, opts, *args, **kwargs)
935 result = orig(ui, opts, *args, **kwargs)
911
936
912 if result is not None:
937 if result is not None:
913 sourcerepo, destrepo = result
938 sourcerepo, destrepo = result
914 repo = destrepo.local()
939 repo = destrepo.local()
915
940
916 # When cloning to a remote repo (like through SSH), no repo is available
941 # When cloning to a remote repo (like through SSH), no repo is available
917 # from the peer. Therefore the largefiles can't be downloaded and the
942 # from the peer. Therefore the largefiles can't be downloaded and the
918 # hgrc can't be updated.
943 # hgrc can't be updated.
919 if not repo:
944 if not repo:
920 return result
945 return result
921
946
922 # Caching is implicitly limited to the 'rev' option, since the dest repo
947 # Caching is implicitly limited to the 'rev' option, since the dest repo
923 # was truncated at that point. The user may expect a download count with
948 # was truncated at that point. The user may expect a download count with
924 # this option, so attempt the download whether or not this is a largefile repo.
949 # this option, so attempt the download whether or not this is a largefile repo.
925 if opts.get('all_largefiles'):
950 if opts.get('all_largefiles'):
926 success, missing = lfcommands.downloadlfiles(ui, repo, None)
951 success, missing = lfcommands.downloadlfiles(ui, repo, None)
927
952
928 if missing != 0:
953 if missing != 0:
929 return None
954 return None
930
955
931 return result
956 return result
932
957
933 @eh.wrapcommand('rebase', extension='rebase')
958 @eh.wrapcommand('rebase', extension='rebase')
934 def overriderebase(orig, ui, repo, **opts):
959 def overriderebase(orig, ui, repo, **opts):
935 if not util.safehasattr(repo, '_largefilesenabled'):
960 if not util.safehasattr(repo, '_largefilesenabled'):
936 return orig(ui, repo, **opts)
961 return orig(ui, repo, **opts)
937
962
938 resuming = opts.get(r'continue')
963 resuming = opts.get(r'continue')
939 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
964 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
940 repo._lfstatuswriters.append(lambda *msg, **opts: None)
965 repo._lfstatuswriters.append(lambda *msg, **opts: None)
941 try:
966 try:
942 return orig(ui, repo, **opts)
967 return orig(ui, repo, **opts)
943 finally:
968 finally:
944 repo._lfstatuswriters.pop()
969 repo._lfstatuswriters.pop()
945 repo._lfcommithooks.pop()
970 repo._lfcommithooks.pop()
946
971
947 @eh.wrapcommand('archive')
972 @eh.wrapcommand('archive')
948 def overridearchivecmd(orig, ui, repo, dest, **opts):
973 def overridearchivecmd(orig, ui, repo, dest, **opts):
949 repo.unfiltered().lfstatus = True
974 repo.unfiltered().lfstatus = True
950
975
951 try:
976 try:
952 return orig(ui, repo.unfiltered(), dest, **opts)
977 return orig(ui, repo.unfiltered(), dest, **opts)
953 finally:
978 finally:
954 repo.unfiltered().lfstatus = False
979 repo.unfiltered().lfstatus = False
955
980
981 @eh.wrapfunction(webcommands, 'archive')
956 def hgwebarchive(orig, web):
982 def hgwebarchive(orig, web):
957 web.repo.lfstatus = True
983 web.repo.lfstatus = True
958
984
959 try:
985 try:
960 return orig(web)
986 return orig(web)
961 finally:
987 finally:
962 web.repo.lfstatus = False
988 web.repo.lfstatus = False
963
989
990 @eh.wrapfunction(archival, 'archive')
964 def overridearchive(orig, repo, dest, node, kind, decode=True, match=None,
991 def overridearchive(orig, repo, dest, node, kind, decode=True, match=None,
965 prefix='', mtime=None, subrepos=None):
992 prefix='', mtime=None, subrepos=None):
966 # For some reason setting repo.lfstatus in hgwebarchive only changes the
993 # For some reason setting repo.lfstatus in hgwebarchive only changes the
967 # unfiltered repo's attr, so check that as well.
994 # unfiltered repo's attr, so check that as well.
968 if not repo.lfstatus and not repo.unfiltered().lfstatus:
995 if not repo.lfstatus and not repo.unfiltered().lfstatus:
969 return orig(repo, dest, node, kind, decode, match, prefix, mtime,
996 return orig(repo, dest, node, kind, decode, match, prefix, mtime,
970 subrepos)
997 subrepos)
971
998
972 # No need to lock because we are only reading history and
999 # No need to lock because we are only reading history and
973 # largefile caches, neither of which are modified.
1000 # largefile caches, neither of which are modified.
974 if node is not None:
1001 if node is not None:
975 lfcommands.cachelfiles(repo.ui, repo, node)
1002 lfcommands.cachelfiles(repo.ui, repo, node)
976
1003
977 if kind not in archival.archivers:
1004 if kind not in archival.archivers:
978 raise error.Abort(_("unknown archive type '%s'") % kind)
1005 raise error.Abort(_("unknown archive type '%s'") % kind)
979
1006
980 ctx = repo[node]
1007 ctx = repo[node]
981
1008
982 if kind == 'files':
1009 if kind == 'files':
983 if prefix:
1010 if prefix:
984 raise error.Abort(
1011 raise error.Abort(
985 _('cannot give prefix when archiving to files'))
1012 _('cannot give prefix when archiving to files'))
986 else:
1013 else:
987 prefix = archival.tidyprefix(dest, kind, prefix)
1014 prefix = archival.tidyprefix(dest, kind, prefix)
988
1015
989 def write(name, mode, islink, getdata):
1016 def write(name, mode, islink, getdata):
990 if match and not match(name):
1017 if match and not match(name):
991 return
1018 return
992 data = getdata()
1019 data = getdata()
993 if decode:
1020 if decode:
994 data = repo.wwritedata(name, data)
1021 data = repo.wwritedata(name, data)
995 archiver.addfile(prefix + name, mode, islink, data)
1022 archiver.addfile(prefix + name, mode, islink, data)
996
1023
997 archiver = archival.archivers[kind](dest, mtime or ctx.date()[0])
1024 archiver = archival.archivers[kind](dest, mtime or ctx.date()[0])
998
1025
999 if repo.ui.configbool("ui", "archivemeta"):
1026 if repo.ui.configbool("ui", "archivemeta"):
1000 write('.hg_archival.txt', 0o644, False,
1027 write('.hg_archival.txt', 0o644, False,
1001 lambda: archival.buildmetadata(ctx))
1028 lambda: archival.buildmetadata(ctx))
1002
1029
1003 for f in ctx:
1030 for f in ctx:
1004 ff = ctx.flags(f)
1031 ff = ctx.flags(f)
1005 getdata = ctx[f].data
1032 getdata = ctx[f].data
1006 lfile = lfutil.splitstandin(f)
1033 lfile = lfutil.splitstandin(f)
1007 if lfile is not None:
1034 if lfile is not None:
1008 if node is not None:
1035 if node is not None:
1009 path = lfutil.findfile(repo, getdata().strip())
1036 path = lfutil.findfile(repo, getdata().strip())
1010
1037
1011 if path is None:
1038 if path is None:
1012 raise error.Abort(
1039 raise error.Abort(
1013 _('largefile %s not found in repo store or system cache')
1040 _('largefile %s not found in repo store or system cache')
1014 % lfile)
1041 % lfile)
1015 else:
1042 else:
1016 path = lfile
1043 path = lfile
1017
1044
1018 f = lfile
1045 f = lfile
1019
1046
1020 getdata = lambda: util.readfile(path)
1047 getdata = lambda: util.readfile(path)
1021 write(f, 'x' in ff and 0o755 or 0o644, 'l' in ff, getdata)
1048 write(f, 'x' in ff and 0o755 or 0o644, 'l' in ff, getdata)
1022
1049
1023 if subrepos:
1050 if subrepos:
1024 for subpath in sorted(ctx.substate):
1051 for subpath in sorted(ctx.substate):
1025 sub = ctx.workingsub(subpath)
1052 sub = ctx.workingsub(subpath)
1026 submatch = matchmod.subdirmatcher(subpath, match)
1053 submatch = matchmod.subdirmatcher(subpath, match)
1027 sub._repo.lfstatus = True
1054 sub._repo.lfstatus = True
1028 sub.archive(archiver, prefix, submatch)
1055 sub.archive(archiver, prefix, submatch)
1029
1056
1030 archiver.done()
1057 archiver.done()
1031
1058
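# Illustrative sketch (not part of the extension): when archiving, a standin
# holds only the hash of its largefile; the payload itself is looked up in a
# cache keyed by that hash.  Assuming a flat one-file-per-hash cache layout
# (the real lookup is lfutil.findfile), resolving it looks like this:
def _demo_resolvelargefile(cachedir, standindata):
    import os                    # mirrors the module-level import
    lfhash = standindata.strip() # a standin stores "<sha1 hash>\n"
    path = os.path.join(cachedir, lfhash)
    if not os.path.exists(path):
        return None              # caller decides whether to abort
    with open(path, 'rb') as fp:
        return fp.read()
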
1059 @eh.wrapfunction(subrepo.hgsubrepo, 'archive')
1032 def hgsubrepoarchive(orig, repo, archiver, prefix, match=None, decode=True):
1060 def hgsubrepoarchive(orig, repo, archiver, prefix, match=None, decode=True):
1033 lfenabled = util.safehasattr(repo._repo, '_largefilesenabled')
1061 lfenabled = util.safehasattr(repo._repo, '_largefilesenabled')
1034 if not lfenabled or not repo._repo.lfstatus:
1062 if not lfenabled or not repo._repo.lfstatus:
1035 return orig(repo, archiver, prefix, match, decode)
1063 return orig(repo, archiver, prefix, match, decode)
1036
1064
1037 repo._get(repo._state + ('hg',))
1065 repo._get(repo._state + ('hg',))
1038 rev = repo._state[1]
1066 rev = repo._state[1]
1039 ctx = repo._repo[rev]
1067 ctx = repo._repo[rev]
1040
1068
1041 if ctx.node() is not None:
1069 if ctx.node() is not None:
1042 lfcommands.cachelfiles(repo.ui, repo._repo, ctx.node())
1070 lfcommands.cachelfiles(repo.ui, repo._repo, ctx.node())
1043
1071
1044 def write(name, mode, islink, getdata):
1072 def write(name, mode, islink, getdata):
1045 # At this point, the standin has been replaced with the largefile name,
1073 # At this point, the standin has been replaced with the largefile name,
1046 # so the normal matcher works here without the lfutil variants.
1074 # so the normal matcher works here without the lfutil variants.
1047 if match and not match(f):
1075 if match and not match(f):
1048 return
1076 return
1049 data = getdata()
1077 data = getdata()
1050 if decode:
1078 if decode:
1051 data = repo._repo.wwritedata(name, data)
1079 data = repo._repo.wwritedata(name, data)
1052
1080
1053 archiver.addfile(prefix + repo._path + '/' + name, mode, islink, data)
1081 archiver.addfile(prefix + repo._path + '/' + name, mode, islink, data)
1054
1082
1055 for f in ctx:
1083 for f in ctx:
1056 ff = ctx.flags(f)
1084 ff = ctx.flags(f)
1057 getdata = ctx[f].data
1085 getdata = ctx[f].data
1058 lfile = lfutil.splitstandin(f)
1086 lfile = lfutil.splitstandin(f)
1059 if lfile is not None:
1087 if lfile is not None:
1060 if ctx.node() is not None:
1088 if ctx.node() is not None:
1061 path = lfutil.findfile(repo._repo, getdata().strip())
1089 path = lfutil.findfile(repo._repo, getdata().strip())
1062
1090
1063 if path is None:
1091 if path is None:
1064 raise error.Abort(
1092 raise error.Abort(
1065 _('largefile %s not found in repo store or system cache')
1093 _('largefile %s not found in repo store or system cache')
1066 % lfile)
1094 % lfile)
1067 else:
1095 else:
1068 path = lfile
1096 path = lfile
1069
1097
1070 f = lfile
1098 f = lfile
1071
1099
1072 getdata = lambda: util.readfile(os.path.join(prefix, path))
1100 getdata = lambda: util.readfile(os.path.join(prefix, path))
1073
1101
1074 write(f, 'x' in ff and 0o755 or 0o644, 'l' in ff, getdata)
1102 write(f, 'x' in ff and 0o755 or 0o644, 'l' in ff, getdata)
1075
1103
1076 for subpath in sorted(ctx.substate):
1104 for subpath in sorted(ctx.substate):
1077 sub = ctx.workingsub(subpath)
1105 sub = ctx.workingsub(subpath)
1078 submatch = matchmod.subdirmatcher(subpath, match)
1106 submatch = matchmod.subdirmatcher(subpath, match)
1079 sub._repo.lfstatus = True
1107 sub._repo.lfstatus = True
1080 sub.archive(archiver, prefix + repo._path + '/', submatch, decode)
1108 sub.archive(archiver, prefix + repo._path + '/', submatch, decode)
1081
1109
1082 # If a largefile is modified, the change is not reflected in its
1110 # If a largefile is modified, the change is not reflected in its
1083 # standin until a commit. cmdutil.bailifchanged() raises an exception
1111 # standin until a commit. cmdutil.bailifchanged() raises an exception
1084 # if the repo has uncommitted changes. Wrap it to also check if
1112 # if the repo has uncommitted changes. Wrap it to also check if
1085 # largefiles were changed. This is used by bisect, backout and fetch.
1113 # largefiles were changed. This is used by bisect, backout and fetch.
1114 @eh.wrapfunction(cmdutil, 'bailifchanged')
1086 def overridebailifchanged(orig, repo, *args, **kwargs):
1115 def overridebailifchanged(orig, repo, *args, **kwargs):
1087 orig(repo, *args, **kwargs)
1116 orig(repo, *args, **kwargs)
1088 repo.lfstatus = True
1117 repo.lfstatus = True
1089 s = repo.status()
1118 s = repo.status()
1090 repo.lfstatus = False
1119 repo.lfstatus = False
1091 if s.modified or s.added or s.removed or s.deleted:
1120 if s.modified or s.added or s.removed or s.deleted:
1092 raise error.Abort(_('uncommitted changes'))
1121 raise error.Abort(_('uncommitted changes'))
1093
1122
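# Illustrative sketch (not part of the extension): overridebailifchanged above
# is an instance of "run the original check, then add one more condition on
# top".  'extracheck' is a hypothetical callable returning True when
# largefile-style changes are present:
def _demo_extendedbail(orig, repo, extracheck, *args, **kwargs):
    orig(repo, *args, **kwargs)   # original validation runs first
    if extracheck(repo):          # then the additional condition
        raise Exception('uncommitted changes')
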
1123 @eh.wrapfunction(cmdutil, 'postcommitstatus')
1094 def postcommitstatus(orig, repo, *args, **kwargs):
1124 def postcommitstatus(orig, repo, *args, **kwargs):
1095 repo.lfstatus = True
1125 repo.lfstatus = True
1096 try:
1126 try:
1097 return orig(repo, *args, **kwargs)
1127 return orig(repo, *args, **kwargs)
1098 finally:
1128 finally:
1099 repo.lfstatus = False
1129 repo.lfstatus = False
1100
1130
1131 @eh.wrapfunction(cmdutil, 'forget')
1101 def cmdutilforget(orig, ui, repo, match, prefix, explicitonly, dryrun,
1132 def cmdutilforget(orig, ui, repo, match, prefix, explicitonly, dryrun,
1102 interactive):
1133 interactive):
1103 normalmatcher = composenormalfilematcher(match, repo[None].manifest())
1134 normalmatcher = composenormalfilematcher(match, repo[None].manifest())
1104 bad, forgot = orig(ui, repo, normalmatcher, prefix, explicitonly, dryrun,
1135 bad, forgot = orig(ui, repo, normalmatcher, prefix, explicitonly, dryrun,
1105 interactive)
1136 interactive)
1106 m = composelargefilematcher(match, repo[None].manifest())
1137 m = composelargefilematcher(match, repo[None].manifest())
1107
1138
1108 try:
1139 try:
1109 repo.lfstatus = True
1140 repo.lfstatus = True
1110 s = repo.status(match=m, clean=True)
1141 s = repo.status(match=m, clean=True)
1111 finally:
1142 finally:
1112 repo.lfstatus = False
1143 repo.lfstatus = False
1113 manifest = repo[None].manifest()
1144 manifest = repo[None].manifest()
1114 forget = sorted(s.modified + s.added + s.deleted + s.clean)
1145 forget = sorted(s.modified + s.added + s.deleted + s.clean)
1115 forget = [f for f in forget if lfutil.standin(f) in manifest]
1146 forget = [f for f in forget if lfutil.standin(f) in manifest]
1116
1147
1117 for f in forget:
1148 for f in forget:
1118 fstandin = lfutil.standin(f)
1149 fstandin = lfutil.standin(f)
1119 if fstandin not in repo.dirstate and not repo.wvfs.isdir(fstandin):
1150 if fstandin not in repo.dirstate and not repo.wvfs.isdir(fstandin):
1120 ui.warn(_('not removing %s: file is already untracked\n')
1151 ui.warn(_('not removing %s: file is already untracked\n')
1121 % m.rel(f))
1152 % m.rel(f))
1122 bad.append(f)
1153 bad.append(f)
1123
1154
1124 for f in forget:
1155 for f in forget:
1125 if ui.verbose or not m.exact(f):
1156 if ui.verbose or not m.exact(f):
1126 ui.status(_('removing %s\n') % m.rel(f))
1157 ui.status(_('removing %s\n') % m.rel(f))
1127
1158
1128 # Need to lock because standin files are deleted then removed from the
1159 # Need to lock because standin files are deleted then removed from the
1129 # repository and we could race in-between.
1160 # repository and we could race in-between.
1130 with repo.wlock():
1161 with repo.wlock():
1131 lfdirstate = lfutil.openlfdirstate(ui, repo)
1162 lfdirstate = lfutil.openlfdirstate(ui, repo)
1132 for f in forget:
1163 for f in forget:
1133 if lfdirstate[f] == 'a':
1164 if lfdirstate[f] == 'a':
1134 lfdirstate.drop(f)
1165 lfdirstate.drop(f)
1135 else:
1166 else:
1136 lfdirstate.remove(f)
1167 lfdirstate.remove(f)
1137 lfdirstate.write()
1168 lfdirstate.write()
1138 standins = [lfutil.standin(f) for f in forget]
1169 standins = [lfutil.standin(f) for f in forget]
1139 for f in standins:
1170 for f in standins:
1140 repo.wvfs.unlinkpath(f, ignoremissing=True)
1171 repo.wvfs.unlinkpath(f, ignoremissing=True)
1141 rejected = repo[None].forget(standins)
1172 rejected = repo[None].forget(standins)
1142
1173
1143 bad.extend(f for f in rejected if f in m.files())
1174 bad.extend(f for f in rejected if f in m.files())
1144 forgot.extend(f for f in forget if f not in rejected)
1175 forgot.extend(f for f in forget if f not in rejected)
1145 return bad, forgot
1176 return bad, forgot
1146
1177
1147 def _getoutgoings(repo, other, missing, addfunc):
1178 def _getoutgoings(repo, other, missing, addfunc):
1148 """get pairs of filename and largefile hash in outgoing revisions
1179 """get pairs of filename and largefile hash in outgoing revisions
1149 in 'missing'.
1180 in 'missing'.
1150
1181
1151 Largefiles already existing in the 'other' repository are ignored.
1182 Largefiles already existing in the 'other' repository are ignored.
1152
1183
1153 'addfunc' is invoked with each unique pair of filename and
1184 'addfunc' is invoked with each unique pair of filename and
1154 largefile hash value.
1185 largefile hash value.
1155 """
1186 """
1156 knowns = set()
1187 knowns = set()
1157 lfhashes = set()
1188 lfhashes = set()
1158 def dedup(fn, lfhash):
1189 def dedup(fn, lfhash):
1159 k = (fn, lfhash)
1190 k = (fn, lfhash)
1160 if k not in knowns:
1191 if k not in knowns:
1161 knowns.add(k)
1192 knowns.add(k)
1162 lfhashes.add(lfhash)
1193 lfhashes.add(lfhash)
1163 lfutil.getlfilestoupload(repo, missing, dedup)
1194 lfutil.getlfilestoupload(repo, missing, dedup)
1164 if lfhashes:
1195 if lfhashes:
1165 lfexists = storefactory.openstore(repo, other).exists(lfhashes)
1196 lfexists = storefactory.openstore(repo, other).exists(lfhashes)
1166 for fn, lfhash in knowns:
1197 for fn, lfhash in knowns:
1167 if not lfexists[lfhash]: # lfhash doesn't exist on "other"
1198 if not lfexists[lfhash]: # lfhash doesn't exist on "other"
1168 addfunc(fn, lfhash)
1199 addfunc(fn, lfhash)
1169
1200
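# Illustrative sketch (not part of the extension): _getoutgoings above
# deduplicates (filename, hash) pairs and then asks the remote store about
# all hashes in one batch, reporting only the pairs whose hash is absent.
# 'existsonremote' stands in for storefactory.openstore(repo, other).exists():
def _demo_missingpairs(pairs, existsonremote):
    knowns = set(pairs)                      # drop duplicate (fn, hash) pairs
    present = existsonremote(set(h for _fn, h in knowns))   # {hash: bool}
    return [(fn, h) for fn, h in knowns if not present[h]]
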
1170 def outgoinghook(ui, repo, other, opts, missing):
1201 def outgoinghook(ui, repo, other, opts, missing):
1171 if opts.pop('large', None):
1202 if opts.pop('large', None):
1172 lfhashes = set()
1203 lfhashes = set()
1173 if ui.debugflag:
1204 if ui.debugflag:
1174 toupload = {}
1205 toupload = {}
1175 def addfunc(fn, lfhash):
1206 def addfunc(fn, lfhash):
1176 if fn not in toupload:
1207 if fn not in toupload:
1177 toupload[fn] = []
1208 toupload[fn] = []
1178 toupload[fn].append(lfhash)
1209 toupload[fn].append(lfhash)
1179 lfhashes.add(lfhash)
1210 lfhashes.add(lfhash)
1180 def showhashes(fn):
1211 def showhashes(fn):
1181 for lfhash in sorted(toupload[fn]):
1212 for lfhash in sorted(toupload[fn]):
1182 ui.debug(' %s\n' % (lfhash))
1213 ui.debug(' %s\n' % (lfhash))
1183 else:
1214 else:
1184 toupload = set()
1215 toupload = set()
1185 def addfunc(fn, lfhash):
1216 def addfunc(fn, lfhash):
1186 toupload.add(fn)
1217 toupload.add(fn)
1187 lfhashes.add(lfhash)
1218 lfhashes.add(lfhash)
1188 def showhashes(fn):
1219 def showhashes(fn):
1189 pass
1220 pass
1190 _getoutgoings(repo, other, missing, addfunc)
1221 _getoutgoings(repo, other, missing, addfunc)
1191
1222
1192 if not toupload:
1223 if not toupload:
1193 ui.status(_('largefiles: no files to upload\n'))
1224 ui.status(_('largefiles: no files to upload\n'))
1194 else:
1225 else:
1195 ui.status(_('largefiles to upload (%d entities):\n')
1226 ui.status(_('largefiles to upload (%d entities):\n')
1196 % (len(lfhashes)))
1227 % (len(lfhashes)))
1197 for file in sorted(toupload):
1228 for file in sorted(toupload):
1198 ui.status(lfutil.splitstandin(file) + '\n')
1229 ui.status(lfutil.splitstandin(file) + '\n')
1199 showhashes(file)
1230 showhashes(file)
1200 ui.status('\n')
1231 ui.status('\n')
1201
1232
1202 @eh.wrapcommand('outgoing',
1233 @eh.wrapcommand('outgoing',
1203 opts=[('', 'large', None, _('display outgoing largefiles'))])
1234 opts=[('', 'large', None, _('display outgoing largefiles'))])
1204 def _outgoingcmd(orig, *args, **kwargs):
1235 def _outgoingcmd(orig, *args, **kwargs):
1205 # Nothing to do here other than add the extra help option; the hook above
1236 # Nothing to do here other than add the extra help option; the hook above
1206 # processes it.
1237 # processes it.
1207 return orig(*args, **kwargs)
1238 return orig(*args, **kwargs)
1208
1239
1209 def summaryremotehook(ui, repo, opts, changes):
1240 def summaryremotehook(ui, repo, opts, changes):
1210 largeopt = opts.get('large', False)
1241 largeopt = opts.get('large', False)
1211 if changes is None:
1242 if changes is None:
1212 if largeopt:
1243 if largeopt:
1213 return (False, True) # only outgoing check is needed
1244 return (False, True) # only outgoing check is needed
1214 else:
1245 else:
1215 return (False, False)
1246 return (False, False)
1216 elif largeopt:
1247 elif largeopt:
1217 url, branch, peer, outgoing = changes[1]
1248 url, branch, peer, outgoing = changes[1]
1218 if peer is None:
1249 if peer is None:
1219 # i18n: column positioning for "hg summary"
1250 # i18n: column positioning for "hg summary"
1220 ui.status(_('largefiles: (no remote repo)\n'))
1251 ui.status(_('largefiles: (no remote repo)\n'))
1221 return
1252 return
1222
1253
1223 toupload = set()
1254 toupload = set()
1224 lfhashes = set()
1255 lfhashes = set()
1225 def addfunc(fn, lfhash):
1256 def addfunc(fn, lfhash):
1226 toupload.add(fn)
1257 toupload.add(fn)
1227 lfhashes.add(lfhash)
1258 lfhashes.add(lfhash)
1228 _getoutgoings(repo, peer, outgoing.missing, addfunc)
1259 _getoutgoings(repo, peer, outgoing.missing, addfunc)
1229
1260
1230 if not toupload:
1261 if not toupload:
1231 # i18n: column positioning for "hg summary"
1262 # i18n: column positioning for "hg summary"
1232 ui.status(_('largefiles: (no files to upload)\n'))
1263 ui.status(_('largefiles: (no files to upload)\n'))
1233 else:
1264 else:
1234 # i18n: column positioning for "hg summary"
1265 # i18n: column positioning for "hg summary"
1235 ui.status(_('largefiles: %d entities for %d files to upload\n')
1266 ui.status(_('largefiles: %d entities for %d files to upload\n')
1236 % (len(lfhashes), len(toupload)))
1267 % (len(lfhashes), len(toupload)))
1237
1268
1238 @eh.wrapcommand('summary',
1269 @eh.wrapcommand('summary',
1239 opts=[('', 'large', None, _('display outgoing largefiles'))])
1270 opts=[('', 'large', None, _('display outgoing largefiles'))])
1240 def overridesummary(orig, ui, repo, *pats, **opts):
1271 def overridesummary(orig, ui, repo, *pats, **opts):
1241 try:
1272 try:
1242 repo.lfstatus = True
1273 repo.lfstatus = True
1243 orig(ui, repo, *pats, **opts)
1274 orig(ui, repo, *pats, **opts)
1244 finally:
1275 finally:
1245 repo.lfstatus = False
1276 repo.lfstatus = False
1246
1277
1278 @eh.wrapfunction(scmutil, 'addremove')
1247 def scmutiladdremove(orig, repo, matcher, prefix, opts=None):
1279 def scmutiladdremove(orig, repo, matcher, prefix, opts=None):
1248 if opts is None:
1280 if opts is None:
1249 opts = {}
1281 opts = {}
1250 if not lfutil.islfilesrepo(repo):
1282 if not lfutil.islfilesrepo(repo):
1251 return orig(repo, matcher, prefix, opts)
1283 return orig(repo, matcher, prefix, opts)
1252 # Get the list of missing largefiles so we can remove them
1284 # Get the list of missing largefiles so we can remove them
1253 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1285 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1254 unsure, s = lfdirstate.status(matchmod.always(repo.root, repo.getcwd()),
1286 unsure, s = lfdirstate.status(matchmod.always(repo.root, repo.getcwd()),
1255 subrepos=[], ignored=False, clean=False,
1287 subrepos=[], ignored=False, clean=False,
1256 unknown=False)
1288 unknown=False)
1257
1289
1258 # Call into the normal remove code, but we want the removal of the standin
1290 # Call into the normal remove code, but we want the removal of the standin
1259 # to be handled by the original addremove. Monkey patching here makes sure
1291 # to be handled by the original addremove. Monkey patching here makes sure
1260 # we don't remove the standin in the largefiles code, preventing a very
1292 # we don't remove the standin in the largefiles code, preventing a very
1261 # confused state later.
1293 # confused state later.
1262 if s.deleted:
1294 if s.deleted:
1263 m = copy.copy(matcher)
1295 m = copy.copy(matcher)
1264
1296
1265 # The m._files and m._map attributes are not changed to the deleted list
1297 # The m._files and m._map attributes are not changed to the deleted list
1266 # because that affects the m.exact() test, which in turn governs whether
1298 # because that affects the m.exact() test, which in turn governs whether
1267 # or not the file name is printed, and how. Simply limit the original
1299 # or not the file name is printed, and how. Simply limit the original
1268 # matches to those in the deleted status list.
1300 # matches to those in the deleted status list.
1269 matchfn = m.matchfn
1301 matchfn = m.matchfn
1270 m.matchfn = lambda f: f in s.deleted and matchfn(f)
1302 m.matchfn = lambda f: f in s.deleted and matchfn(f)
1271
1303
1272 removelargefiles(repo.ui, repo, True, m, opts.get('dry_run'),
1304 removelargefiles(repo.ui, repo, True, m, opts.get('dry_run'),
1273 **pycompat.strkwargs(opts))
1305 **pycompat.strkwargs(opts))
1274 # Call into the normal add code, and any files that *should* be added as
1306 # Call into the normal add code, and any files that *should* be added as
1275 # largefiles will be
1307 # largefiles will be
1276 added, bad = addlargefiles(repo.ui, repo, True, matcher,
1308 added, bad = addlargefiles(repo.ui, repo, True, matcher,
1277 **pycompat.strkwargs(opts))
1309 **pycompat.strkwargs(opts))
1278 # Now that we've handled largefiles, hand off to the original addremove
1310 # Now that we've handled largefiles, hand off to the original addremove
1279 # function to take care of the rest. Make sure it doesn't do anything with
1311 # function to take care of the rest. Make sure it doesn't do anything with
1280 # largefiles by passing a matcher that will ignore them.
1312 # largefiles by passing a matcher that will ignore them.
1281 matcher = composenormalfilematcher(matcher, repo[None].manifest(), added)
1313 matcher = composenormalfilematcher(matcher, repo[None].manifest(), added)
1282 return orig(repo, matcher, prefix, opts)
1314 return orig(repo, matcher, prefix, opts)
1283
1315
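# Illustrative sketch (not part of the extension): scmutiladdremove below
# narrows a matcher by copying it and wrapping matchfn, instead of editing
# m._files, so that m.exact() and message formatting keep working.  With a
# generic matcher-like object the narrowing looks like this:
def _demo_restrictmatcher(matcher, allowed):
    import copy                              # mirrors the module-level import
    m = copy.copy(matcher)                   # shallow copy, as in the code below
    origmatchfn = m.matchfn
    m.matchfn = lambda f: f in allowed and origmatchfn(f)
    return m
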
1284 # Calling purge with --all will cause the largefiles to be deleted.
1316 # Calling purge with --all will cause the largefiles to be deleted.
1285 # Override repo.status to prevent this from happening.
1317 # Override repo.status to prevent this from happening.
1286 @eh.wrapcommand('purge', extension='purge')
1318 @eh.wrapcommand('purge', extension='purge')
1287 def overridepurge(orig, ui, repo, *dirs, **opts):
1319 def overridepurge(orig, ui, repo, *dirs, **opts):
1288 # XXX Monkey patching a repoview will not work. The assigned attribute will
1320 # XXX Monkey patching a repoview will not work. The assigned attribute will
1289 # be set on the unfiltered repo, but we will only look up attributes in the
1321 # be set on the unfiltered repo, but we will only look up attributes in the
1290 # unfiltered repo if the lookup in the repoview object itself fails. As the
1322 # unfiltered repo if the lookup in the repoview object itself fails. As the
1291 # monkey patched method exists on the repoview class, the lookup will not
1323 # monkey patched method exists on the repoview class, the lookup will not
1292 # fail. As a result, the original version will shadow the monkey patched
1324 # fail. As a result, the original version will shadow the monkey patched
1293 # one, defeating the monkey patch.
1325 # one, defeating the monkey patch.
1294 #
1326 #
1295 # As a workaround, we use an unfiltered repo here. We should do something
1327 # As a workaround, we use an unfiltered repo here. We should do something
1296 # cleaner instead.
1328 # cleaner instead.
1297 repo = repo.unfiltered()
1329 repo = repo.unfiltered()
1298 oldstatus = repo.status
1330 oldstatus = repo.status
1299 def overridestatus(node1='.', node2=None, match=None, ignored=False,
1331 def overridestatus(node1='.', node2=None, match=None, ignored=False,
1300 clean=False, unknown=False, listsubrepos=False):
1332 clean=False, unknown=False, listsubrepos=False):
1301 r = oldstatus(node1, node2, match, ignored, clean, unknown,
1333 r = oldstatus(node1, node2, match, ignored, clean, unknown,
1302 listsubrepos)
1334 listsubrepos)
1303 lfdirstate = lfutil.openlfdirstate(ui, repo)
1335 lfdirstate = lfutil.openlfdirstate(ui, repo)
1304 unknown = [f for f in r.unknown if lfdirstate[f] == '?']
1336 unknown = [f for f in r.unknown if lfdirstate[f] == '?']
1305 ignored = [f for f in r.ignored if lfdirstate[f] == '?']
1337 ignored = [f for f in r.ignored if lfdirstate[f] == '?']
1306 return scmutil.status(r.modified, r.added, r.removed, r.deleted,
1338 return scmutil.status(r.modified, r.added, r.removed, r.deleted,
1307 unknown, ignored, r.clean)
1339 unknown, ignored, r.clean)
1308 repo.status = overridestatus
1340 repo.status = overridestatus
1309 orig(ui, repo, *dirs, **opts)
1341 orig(ui, repo, *dirs, **opts)
1310 repo.status = oldstatus
1342 repo.status = oldstatus
1311
1343
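# Illustrative sketch (not part of the extension): the XXX note above comes
# down to attribute lookup order on a delegating proxy.  A method defined on
# the proxy class wins over anything assigned through the proxy, because
# __getattr__ (and the delegated write) only matter when normal lookup fails.
# The real repoview/localrepository classes are more involved; this is just
# the shape of the pitfall:
class _DemoUnfiltered(object):
    def status(self):
        return 'unfiltered status'

class _DemoView(object):
    def __init__(self, unfiltered):
        self.__dict__['_unfiltered'] = unfiltered
    def status(self):                        # defined on the view class itself
        return 'view status'
    def __getattr__(self, name):             # reached only when lookup fails
        return getattr(self._unfiltered, name)
    def __setattr__(self, name, value):      # writes land on the unfiltered object
        setattr(self._unfiltered, name, value)

# view = _DemoView(_DemoUnfiltered())
# view.status = lambda: 'patched'            # stored on the unfiltered object
# view.status()                              # -> 'view status': the patch is shadowed
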
1312 @eh.wrapcommand('rollback')
1344 @eh.wrapcommand('rollback')
1313 def overriderollback(orig, ui, repo, **opts):
1345 def overriderollback(orig, ui, repo, **opts):
1314 with repo.wlock():
1346 with repo.wlock():
1315 before = repo.dirstate.parents()
1347 before = repo.dirstate.parents()
1316 orphans = set(f for f in repo.dirstate
1348 orphans = set(f for f in repo.dirstate
1317 if lfutil.isstandin(f) and repo.dirstate[f] != 'r')
1349 if lfutil.isstandin(f) and repo.dirstate[f] != 'r')
1318 result = orig(ui, repo, **opts)
1350 result = orig(ui, repo, **opts)
1319 after = repo.dirstate.parents()
1351 after = repo.dirstate.parents()
1320 if before == after:
1352 if before == after:
1321 return result # no need to restore standins
1353 return result # no need to restore standins
1322
1354
1323 pctx = repo['.']
1355 pctx = repo['.']
1324 for f in repo.dirstate:
1356 for f in repo.dirstate:
1325 if lfutil.isstandin(f):
1357 if lfutil.isstandin(f):
1326 orphans.discard(f)
1358 orphans.discard(f)
1327 if repo.dirstate[f] == 'r':
1359 if repo.dirstate[f] == 'r':
1328 repo.wvfs.unlinkpath(f, ignoremissing=True)
1360 repo.wvfs.unlinkpath(f, ignoremissing=True)
1329 elif f in pctx:
1361 elif f in pctx:
1330 fctx = pctx[f]
1362 fctx = pctx[f]
1331 repo.wwrite(f, fctx.data(), fctx.flags())
1363 repo.wwrite(f, fctx.data(), fctx.flags())
1332 else:
1364 else:
1333 # content of standin is not so important in 'a',
1365 # content of standin is not so important in 'a',
1334 # 'm' or 'n' (coming from the 2nd parent) cases
1366 # 'm' or 'n' (coming from the 2nd parent) cases
1335 lfutil.writestandin(repo, f, '', False)
1367 lfutil.writestandin(repo, f, '', False)
1336 for standin in orphans:
1368 for standin in orphans:
1337 repo.wvfs.unlinkpath(standin, ignoremissing=True)
1369 repo.wvfs.unlinkpath(standin, ignoremissing=True)
1338
1370
1339 lfdirstate = lfutil.openlfdirstate(ui, repo)
1371 lfdirstate = lfutil.openlfdirstate(ui, repo)
1340 orphans = set(lfdirstate)
1372 orphans = set(lfdirstate)
1341 lfiles = lfutil.listlfiles(repo)
1373 lfiles = lfutil.listlfiles(repo)
1342 for file in lfiles:
1374 for file in lfiles:
1343 lfutil.synclfdirstate(repo, lfdirstate, file, True)
1375 lfutil.synclfdirstate(repo, lfdirstate, file, True)
1344 orphans.discard(file)
1376 orphans.discard(file)
1345 for lfile in orphans:
1377 for lfile in orphans:
1346 lfdirstate.drop(lfile)
1378 lfdirstate.drop(lfile)
1347 lfdirstate.write()
1379 lfdirstate.write()
1348 return result
1380 return result
1349
1381
1350 @eh.wrapcommand('transplant', extension='transplant')
1382 @eh.wrapcommand('transplant', extension='transplant')
1351 def overridetransplant(orig, ui, repo, *revs, **opts):
1383 def overridetransplant(orig, ui, repo, *revs, **opts):
1352 resuming = opts.get(r'continue')
1384 resuming = opts.get(r'continue')
1353 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
1385 repo._lfcommithooks.append(lfutil.automatedcommithook(resuming))
1354 repo._lfstatuswriters.append(lambda *msg, **opts: None)
1386 repo._lfstatuswriters.append(lambda *msg, **opts: None)
1355 try:
1387 try:
1356 result = orig(ui, repo, *revs, **opts)
1388 result = orig(ui, repo, *revs, **opts)
1357 finally:
1389 finally:
1358 repo._lfstatuswriters.pop()
1390 repo._lfstatuswriters.pop()
1359 repo._lfcommithooks.pop()
1391 repo._lfcommithooks.pop()
1360 return result
1392 return result
1361
1393
1362 @eh.wrapcommand('cat')
1394 @eh.wrapcommand('cat')
1363 def overridecat(orig, ui, repo, file1, *pats, **opts):
1395 def overridecat(orig, ui, repo, file1, *pats, **opts):
1364 opts = pycompat.byteskwargs(opts)
1396 opts = pycompat.byteskwargs(opts)
1365 ctx = scmutil.revsingle(repo, opts.get('rev'))
1397 ctx = scmutil.revsingle(repo, opts.get('rev'))
1366 err = 1
1398 err = 1
1367 notbad = set()
1399 notbad = set()
1368 m = scmutil.match(ctx, (file1,) + pats, opts)
1400 m = scmutil.match(ctx, (file1,) + pats, opts)
1369 origmatchfn = m.matchfn
1401 origmatchfn = m.matchfn
1370 def lfmatchfn(f):
1402 def lfmatchfn(f):
1371 if origmatchfn(f):
1403 if origmatchfn(f):
1372 return True
1404 return True
1373 lf = lfutil.splitstandin(f)
1405 lf = lfutil.splitstandin(f)
1374 if lf is None:
1406 if lf is None:
1375 return False
1407 return False
1376 notbad.add(lf)
1408 notbad.add(lf)
1377 return origmatchfn(lf)
1409 return origmatchfn(lf)
1378 m.matchfn = lfmatchfn
1410 m.matchfn = lfmatchfn
1379 origbadfn = m.bad
1411 origbadfn = m.bad
1380 def lfbadfn(f, msg):
1412 def lfbadfn(f, msg):
1381 if f not in notbad:
1413 if f not in notbad:
1382 origbadfn(f, msg)
1414 origbadfn(f, msg)
1383 m.bad = lfbadfn
1415 m.bad = lfbadfn
1384
1416
1385 origvisitdirfn = m.visitdir
1417 origvisitdirfn = m.visitdir
1386 def lfvisitdirfn(dir):
1418 def lfvisitdirfn(dir):
1387 if dir == lfutil.shortname:
1419 if dir == lfutil.shortname:
1388 return True
1420 return True
1389 ret = origvisitdirfn(dir)
1421 ret = origvisitdirfn(dir)
1390 if ret:
1422 if ret:
1391 return ret
1423 return ret
1392 lf = lfutil.splitstandin(dir)
1424 lf = lfutil.splitstandin(dir)
1393 if lf is None:
1425 if lf is None:
1394 return False
1426 return False
1395 return origvisitdirfn(lf)
1427 return origvisitdirfn(lf)
1396 m.visitdir = lfvisitdirfn
1428 m.visitdir = lfvisitdirfn
1397
1429
1398 for f in ctx.walk(m):
1430 for f in ctx.walk(m):
1399 with cmdutil.makefileobj(ctx, opts.get('output'), pathname=f) as fp:
1431 with cmdutil.makefileobj(ctx, opts.get('output'), pathname=f) as fp:
1400 lf = lfutil.splitstandin(f)
1432 lf = lfutil.splitstandin(f)
1401 if lf is None or origmatchfn(f):
1433 if lf is None or origmatchfn(f):
1402 # duplicating unreachable code from commands.cat
1434 # duplicating unreachable code from commands.cat
1403 data = ctx[f].data()
1435 data = ctx[f].data()
1404 if opts.get('decode'):
1436 if opts.get('decode'):
1405 data = repo.wwritedata(f, data)
1437 data = repo.wwritedata(f, data)
1406 fp.write(data)
1438 fp.write(data)
1407 else:
1439 else:
1408 hash = lfutil.readasstandin(ctx[f])
1440 hash = lfutil.readasstandin(ctx[f])
1409 if not lfutil.inusercache(repo.ui, hash):
1441 if not lfutil.inusercache(repo.ui, hash):
1410 store = storefactory.openstore(repo)
1442 store = storefactory.openstore(repo)
1411 success, missing = store.get([(lf, hash)])
1443 success, missing = store.get([(lf, hash)])
1412 if len(success) != 1:
1444 if len(success) != 1:
1413 raise error.Abort(
1445 raise error.Abort(
1414 _('largefile %s is not in cache and could not be '
1446 _('largefile %s is not in cache and could not be '
1415 'downloaded') % lf)
1447 'downloaded') % lf)
1416 path = lfutil.usercachepath(repo.ui, hash)
1448 path = lfutil.usercachepath(repo.ui, hash)
1417 with open(path, "rb") as fpin:
1449 with open(path, "rb") as fpin:
1418 for chunk in util.filechunkiter(fpin):
1450 for chunk in util.filechunkiter(fpin):
1419 fp.write(chunk)
1451 fp.write(chunk)
1420 err = 0
1452 err = 0
1421 return err
1453 return err
1422
1454
1455 @eh.wrapfunction(merge, 'update')
1423 def mergeupdate(orig, repo, node, branchmerge, force,
1456 def mergeupdate(orig, repo, node, branchmerge, force,
1424 *args, **kwargs):
1457 *args, **kwargs):
1425 matcher = kwargs.get(r'matcher', None)
1458 matcher = kwargs.get(r'matcher', None)
1426 # note if this is a partial update
1459 # note if this is a partial update
1427 partial = matcher and not matcher.always()
1460 partial = matcher and not matcher.always()
1428 with repo.wlock():
1461 with repo.wlock():
1429 # branch | | |
1462 # branch | | |
1430 # merge | force | partial | action
1463 # merge | force | partial | action
1431 # -------+-------+---------+--------------
1464 # -------+-------+---------+--------------
1432 # x | x | x | linear-merge
1465 # x | x | x | linear-merge
1433 # o | x | x | branch-merge
1466 # o | x | x | branch-merge
1434 # x | o | x | overwrite (as clean update)
1467 # x | o | x | overwrite (as clean update)
1435 # o | o | x | force-branch-merge (*1)
1468 # o | o | x | force-branch-merge (*1)
1436 # x | x | o | (*)
1469 # x | x | o | (*)
1437 # o | x | o | (*)
1470 # o | x | o | (*)
1438 # x | o | o | overwrite (as revert)
1471 # x | o | o | overwrite (as revert)
1439 # o | o | o | (*)
1472 # o | o | o | (*)
1440 #
1473 #
1441 # (*) don't care
1474 # (*) don't care
1442 # (*1) deprecated, but used internally (e.g: "rebase --collapse")
1475 # (*1) deprecated, but used internally (e.g: "rebase --collapse")
1443
1476
1444 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1477 lfdirstate = lfutil.openlfdirstate(repo.ui, repo)
1445 unsure, s = lfdirstate.status(matchmod.always(repo.root,
1478 unsure, s = lfdirstate.status(matchmod.always(repo.root,
1446 repo.getcwd()),
1479 repo.getcwd()),
1447 subrepos=[], ignored=False,
1480 subrepos=[], ignored=False,
1448 clean=True, unknown=False)
1481 clean=True, unknown=False)
1449 oldclean = set(s.clean)
1482 oldclean = set(s.clean)
1450 pctx = repo['.']
1483 pctx = repo['.']
1451 dctx = repo[node]
1484 dctx = repo[node]
1452 for lfile in unsure + s.modified:
1485 for lfile in unsure + s.modified:
1453 lfileabs = repo.wvfs.join(lfile)
1486 lfileabs = repo.wvfs.join(lfile)
1454 if not repo.wvfs.exists(lfileabs):
1487 if not repo.wvfs.exists(lfileabs):
1455 continue
1488 continue
1456 lfhash = lfutil.hashfile(lfileabs)
1489 lfhash = lfutil.hashfile(lfileabs)
1457 standin = lfutil.standin(lfile)
1490 standin = lfutil.standin(lfile)
1458 lfutil.writestandin(repo, standin, lfhash,
1491 lfutil.writestandin(repo, standin, lfhash,
1459 lfutil.getexecutable(lfileabs))
1492 lfutil.getexecutable(lfileabs))
1460 if (standin in pctx and
1493 if (standin in pctx and
1461 lfhash == lfutil.readasstandin(pctx[standin])):
1494 lfhash == lfutil.readasstandin(pctx[standin])):
1462 oldclean.add(lfile)
1495 oldclean.add(lfile)
1463 for lfile in s.added:
1496 for lfile in s.added:
1464 fstandin = lfutil.standin(lfile)
1497 fstandin = lfutil.standin(lfile)
1465 if fstandin not in dctx:
1498 if fstandin not in dctx:
1466 # in this case, the content of the standin file is meaningless
1499 # in this case, the content of the standin file is meaningless
1467 # (in dctx, lfile is unknown or a normal file)
1500 # (in dctx, lfile is unknown or a normal file)
1468 continue
1501 continue
1469 lfutil.updatestandin(repo, lfile, fstandin)
1502 lfutil.updatestandin(repo, lfile, fstandin)
1470 # mark all clean largefiles as dirty, just in case the update gets
1503 # mark all clean largefiles as dirty, just in case the update gets
1471 # interrupted before largefiles and lfdirstate are synchronized
1504 # interrupted before largefiles and lfdirstate are synchronized
1472 for lfile in oldclean:
1505 for lfile in oldclean:
1473 lfdirstate.normallookup(lfile)
1506 lfdirstate.normallookup(lfile)
1474 lfdirstate.write()
1507 lfdirstate.write()
1475
1508
1476 oldstandins = lfutil.getstandinsstate(repo)
1509 oldstandins = lfutil.getstandinsstate(repo)
1477 # Make sure the merge runs on disk, not in-memory. largefiles is not a
1510 # Make sure the merge runs on disk, not in-memory. largefiles is not a
1478 # good candidate for in-memory merge (large files, custom dirstate,
1511 # good candidate for in-memory merge (large files, custom dirstate,
1479 # matcher usage).
1512 # matcher usage).
1480 kwargs[r'wc'] = repo[None]
1513 kwargs[r'wc'] = repo[None]
1481 result = orig(repo, node, branchmerge, force, *args, **kwargs)
1514 result = orig(repo, node, branchmerge, force, *args, **kwargs)
1482
1515
1483 newstandins = lfutil.getstandinsstate(repo)
1516 newstandins = lfutil.getstandinsstate(repo)
1484 filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
1517 filelist = lfutil.getlfilestoupdate(oldstandins, newstandins)
1485
1518
1486 # to avoid leaving all largefiles dirty (and thus rehashing them), mark
1519 # to avoid leaving all largefiles dirty (and thus rehashing them), mark
1487 # all the ones that didn't change as clean
1520 # all the ones that didn't change as clean
1488 for lfile in oldclean.difference(filelist):
1521 for lfile in oldclean.difference(filelist):
1489 lfdirstate.normal(lfile)
1522 lfdirstate.normal(lfile)
1490 lfdirstate.write()
1523 lfdirstate.write()
1491
1524
1492 if branchmerge or force or partial:
1525 if branchmerge or force or partial:
1493 filelist.extend(s.deleted + s.removed)
1526 filelist.extend(s.deleted + s.removed)
1494
1527
1495 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1528 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1496 normallookup=partial)
1529 normallookup=partial)
1497
1530
1498 return result
1531 return result
1499
1532
1533 @eh.wrapfunction(scmutil, 'marktouched')
1500 def scmutilmarktouched(orig, repo, files, *args, **kwargs):
1534 def scmutilmarktouched(orig, repo, files, *args, **kwargs):
1501 result = orig(repo, files, *args, **kwargs)
1535 result = orig(repo, files, *args, **kwargs)
1502
1536
1503 filelist = []
1537 filelist = []
1504 for f in files:
1538 for f in files:
1505 lf = lfutil.splitstandin(f)
1539 lf = lfutil.splitstandin(f)
1506 if lf is not None:
1540 if lf is not None:
1507 filelist.append(lf)
1541 filelist.append(lf)
1508 if filelist:
1542 if filelist:
1509 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1543 lfcommands.updatelfiles(repo.ui, repo, filelist=filelist,
1510 printmessage=False, normallookup=True)
1544 printmessage=False, normallookup=True)
1511
1545
1512 return result
1546 return result
1513
1547
1548 @eh.wrapfunction(upgrade, 'preservedrequirements')
1549 @eh.wrapfunction(upgrade, 'supporteddestrequirements')
1514 def upgraderequirements(orig, repo):
1550 def upgraderequirements(orig, repo):
1515 reqs = orig(repo)
1551 reqs = orig(repo)
1516 if 'largefiles' in repo.requirements:
1552 if 'largefiles' in repo.requirements:
1517 reqs.add('largefiles')
1553 reqs.add('largefiles')
1518 return reqs
1554 return reqs
1519
1555
1520 _lfscheme = 'largefile://'
1556 _lfscheme = 'largefile://'
1557
1558 @eh.wrapfunction(urlmod, 'open')
1521 def openlargefile(orig, ui, url_, data=None):
1559 def openlargefile(orig, ui, url_, data=None):
1522 if url_.startswith(_lfscheme):
1560 if url_.startswith(_lfscheme):
1523 if data:
1561 if data:
1524 msg = "cannot use data on a 'largefile://' url"
1562 msg = "cannot use data on a 'largefile://' url"
1525 raise error.ProgrammingError(msg)
1563 raise error.ProgrammingError(msg)
1526 lfid = url_[len(_lfscheme):]
1564 lfid = url_[len(_lfscheme):]
1527 return storefactory.getlfile(ui, lfid)
1565 return storefactory.getlfile(ui, lfid)
1528 else:
1566 else:
1529 return orig(ui, url_, data=data)
1567 return orig(ui, url_, data=data)
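The openlargefile() wrapper above short-circuits url.open for URLs using the 'largefile://' scheme and hands the embedded hash to the store layer; anything else falls through to the original opener. A minimal caller sketch, assuming the object the wrapped opener returns behaves like the usual file-like result of url.open (the hash argument below is hypothetical, standing in for the SHA-1 hex digest recorded in a standin file):

    from mercurial import url as urlmod

    def readlargefile(ui, lfid):
        # lfid: hex SHA-1 of the largefile, as stored in its .hglf standin.
        # With openlargefile() wrapping url.open, this serves the blob from
        # the largefiles store instead of doing a normal URL fetch.
        fp = urlmod.open(ui, 'largefile://' + lfid)
        try:
            return fp.read()
        finally:
            fp.close()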
@@ -1,193 +1,198 b''
1 # Copyright 2011 Fog Creek Software
1 # Copyright 2011 Fog Creek Software
2 #
2 #
3 # This software may be used and distributed according to the terms of the
3 # This software may be used and distributed according to the terms of the
4 # GNU General Public License version 2 or any later version.
4 # GNU General Public License version 2 or any later version.
5 from __future__ import absolute_import
5 from __future__ import absolute_import
6
6
7 import os
7 import os
8 import re
8 import re
9
9
10 from mercurial.i18n import _
10 from mercurial.i18n import _
11
11
12 from mercurial import (
12 from mercurial import (
13 error,
13 error,
14 exthelper,
14 httppeer,
15 httppeer,
15 util,
16 util,
16 wireprototypes,
17 wireprototypes,
17 wireprotov1peer,
18 wireprotov1peer,
19 wireprotov1server,
18 )
20 )
19
21
20 from . import (
22 from . import (
21 lfutil,
23 lfutil,
22 )
24 )
23
25
24 urlerr = util.urlerr
26 urlerr = util.urlerr
25 urlreq = util.urlreq
27 urlreq = util.urlreq
26
28
27 LARGEFILES_REQUIRED_MSG = ('\nThis repository uses the largefiles extension.'
29 LARGEFILES_REQUIRED_MSG = ('\nThis repository uses the largefiles extension.'
28 '\n\nPlease enable it in your Mercurial config '
30 '\n\nPlease enable it in your Mercurial config '
29 'file.\n')
31 'file.\n')
30
32
33 eh = exthelper.exthelper()
34
31 # these will all be replaced by largefiles.uisetup
35 # these will all be replaced by largefiles.uisetup
32 ssholdcallstream = None
36 ssholdcallstream = None
33 httpoldcallstream = None
37 httpoldcallstream = None
34
38
35 def putlfile(repo, proto, sha):
39 def putlfile(repo, proto, sha):
36 '''Server command for putting a largefile into a repository's local store
40 '''Server command for putting a largefile into a repository's local store
37 and into the user cache.'''
41 and into the user cache.'''
38 with proto.mayberedirectstdio() as output:
42 with proto.mayberedirectstdio() as output:
39 path = lfutil.storepath(repo, sha)
43 path = lfutil.storepath(repo, sha)
40 util.makedirs(os.path.dirname(path))
44 util.makedirs(os.path.dirname(path))
41 tmpfp = util.atomictempfile(path, createmode=repo.store.createmode)
45 tmpfp = util.atomictempfile(path, createmode=repo.store.createmode)
42
46
43 try:
47 try:
44 for p in proto.getpayload():
48 for p in proto.getpayload():
45 tmpfp.write(p)
49 tmpfp.write(p)
46 tmpfp._fp.seek(0)
50 tmpfp._fp.seek(0)
47 if sha != lfutil.hexsha1(tmpfp._fp):
51 if sha != lfutil.hexsha1(tmpfp._fp):
48 raise IOError(0, _('largefile contents do not match hash'))
52 raise IOError(0, _('largefile contents do not match hash'))
49 tmpfp.close()
53 tmpfp.close()
50 lfutil.linktousercache(repo, sha)
54 lfutil.linktousercache(repo, sha)
51 except IOError as e:
55 except IOError as e:
52 repo.ui.warn(_('largefiles: failed to put %s into store: %s\n') %
56 repo.ui.warn(_('largefiles: failed to put %s into store: %s\n') %
53 (sha, e.strerror))
57 (sha, e.strerror))
54 return wireprototypes.pushres(
58 return wireprototypes.pushres(
55 1, output.getvalue() if output else '')
59 1, output.getvalue() if output else '')
56 finally:
60 finally:
57 tmpfp.discard()
61 tmpfp.discard()
58
62
59 return wireprototypes.pushres(0, output.getvalue() if output else '')
63 return wireprototypes.pushres(0, output.getvalue() if output else '')
60
64
61 def getlfile(repo, proto, sha):
65 def getlfile(repo, proto, sha):
62 '''Server command for retrieving a largefile from the repository-local
66 '''Server command for retrieving a largefile from the repository-local
63 cache or user cache.'''
67 cache or user cache.'''
64 filename = lfutil.findfile(repo, sha)
68 filename = lfutil.findfile(repo, sha)
65 if not filename:
69 if not filename:
66 raise error.Abort(_('requested largefile %s not present in cache')
70 raise error.Abort(_('requested largefile %s not present in cache')
67 % sha)
71 % sha)
68 f = open(filename, 'rb')
72 f = open(filename, 'rb')
69 length = os.fstat(f.fileno())[6]
73 length = os.fstat(f.fileno())[6]
70
74
71 # Since we can't set an HTTP content-length header here, and
75 # Since we can't set an HTTP content-length header here, and
72 # Mercurial core provides no way to give the length of a streamres
76 # Mercurial core provides no way to give the length of a streamres
73 # (and reading the entire file into RAM would be ill-advised), we
77 # (and reading the entire file into RAM would be ill-advised), we
74 # just send the length on the first line of the response, like the
78 # just send the length on the first line of the response, like the
75 # ssh proto does for string responses.
79 # ssh proto does for string responses.
76 def generator():
80 def generator():
77 yield '%d\n' % length
81 yield '%d\n' % length
78 for chunk in util.filechunkiter(f):
82 for chunk in util.filechunkiter(f):
79 yield chunk
83 yield chunk
80 return wireprototypes.streamreslegacy(gen=generator())
84 return wireprototypes.streamreslegacy(gen=generator())
81
85
82 def statlfile(repo, proto, sha):
86 def statlfile(repo, proto, sha):
83 '''Server command for checking if a largefile is present - returns '2\n' if
87 '''Server command for checking if a largefile is present - returns '2\n' if
84 the largefile is missing, '0\n' if it seems to be in good condition.
88 the largefile is missing, '0\n' if it seems to be in good condition.
85
89
86 The value 1 is reserved for mismatched checksum, but that is too expensive
90 The value 1 is reserved for mismatched checksum, but that is too expensive
87 to be verified on every stat and must be caught by running 'hg verify'
91 to be verified on every stat and must be caught by running 'hg verify'
88 server side.'''
92 server side.'''
89 filename = lfutil.findfile(repo, sha)
93 filename = lfutil.findfile(repo, sha)
90 if not filename:
94 if not filename:
91 return wireprototypes.bytesresponse('2\n')
95 return wireprototypes.bytesresponse('2\n')
92 return wireprototypes.bytesresponse('0\n')
96 return wireprototypes.bytesresponse('0\n')
93
97
94 def wirereposetup(ui, repo):
98 def wirereposetup(ui, repo):
95 class lfileswirerepository(repo.__class__):
99 class lfileswirerepository(repo.__class__):
96 def putlfile(self, sha, fd):
100 def putlfile(self, sha, fd):
97 # unfortunately, httprepository._callpush tries to convert its
101 # unfortunately, httprepository._callpush tries to convert its
98 # input file-like into a bundle before sending it, so we can't use
102 # input file-like into a bundle before sending it, so we can't use
99 # it ...
103 # it ...
100 if issubclass(self.__class__, httppeer.httppeer):
104 if issubclass(self.__class__, httppeer.httppeer):
101 res = self._call('putlfile', data=fd, sha=sha,
105 res = self._call('putlfile', data=fd, sha=sha,
102 headers={r'content-type': r'application/mercurial-0.1'})
106 headers={r'content-type': r'application/mercurial-0.1'})
103 try:
107 try:
104 d, output = res.split('\n', 1)
108 d, output = res.split('\n', 1)
105 for l in output.splitlines(True):
109 for l in output.splitlines(True):
106 self.ui.warn(_('remote: '), l) # assume l ends with \n
110 self.ui.warn(_('remote: '), l) # assume l ends with \n
107 return int(d)
111 return int(d)
108 except ValueError:
112 except ValueError:
109 self.ui.warn(_('unexpected putlfile response: %r\n') % res)
113 self.ui.warn(_('unexpected putlfile response: %r\n') % res)
110 return 1
114 return 1
111 # ... but we can't use sshrepository._call because the data=
115 # ... but we can't use sshrepository._call because the data=
112 # argument won't get sent, and _callpush does exactly what we want
116 # argument won't get sent, and _callpush does exactly what we want
113 # in this case: send the data straight through
117 # in this case: send the data straight through
114 else:
118 else:
115 try:
119 try:
116 ret, output = self._callpush("putlfile", fd, sha=sha)
120 ret, output = self._callpush("putlfile", fd, sha=sha)
117 if ret == "":
121 if ret == "":
118 raise error.ResponseError(_('putlfile failed:'),
122 raise error.ResponseError(_('putlfile failed:'),
119 output)
123 output)
120 return int(ret)
124 return int(ret)
121 except IOError:
125 except IOError:
122 return 1
126 return 1
123 except ValueError:
127 except ValueError:
124 raise error.ResponseError(
128 raise error.ResponseError(
125 _('putlfile failed (unexpected response):'), ret)
129 _('putlfile failed (unexpected response):'), ret)
126
130
127 def getlfile(self, sha):
131 def getlfile(self, sha):
128 """returns an iterable with the chunks of the file with sha sha"""
132 """returns an iterable with the chunks of the file with sha sha"""
129 stream = self._callstream("getlfile", sha=sha)
133 stream = self._callstream("getlfile", sha=sha)
130 length = stream.readline()
134 length = stream.readline()
131 try:
135 try:
132 length = int(length)
136 length = int(length)
133 except ValueError:
137 except ValueError:
134 self._abort(error.ResponseError(_("unexpected response:"),
138 self._abort(error.ResponseError(_("unexpected response:"),
135 length))
139 length))
136
140
137 # SSH streams will block if reading more than length
141 # SSH streams will block if reading more than length
138 for chunk in util.filechunkiter(stream, limit=length):
142 for chunk in util.filechunkiter(stream, limit=length):
139 yield chunk
143 yield chunk
140 # HTTP streams must hit the end to process the last empty
144 # HTTP streams must hit the end to process the last empty
141 # chunk of Chunked-Encoding so the connection can be reused.
145 # chunk of Chunked-Encoding so the connection can be reused.
142 if issubclass(self.__class__, httppeer.httppeer):
146 if issubclass(self.__class__, httppeer.httppeer):
143 chunk = stream.read(1)
147 chunk = stream.read(1)
144 if chunk:
148 if chunk:
145 self._abort(error.ResponseError(_("unexpected response:"),
149 self._abort(error.ResponseError(_("unexpected response:"),
146 chunk))
150 chunk))
147
151
148 @wireprotov1peer.batchable
152 @wireprotov1peer.batchable
149 def statlfile(self, sha):
153 def statlfile(self, sha):
150 f = wireprotov1peer.future()
154 f = wireprotov1peer.future()
151 result = {'sha': sha}
155 result = {'sha': sha}
152 yield result, f
156 yield result, f
153 try:
157 try:
154 yield int(f.value)
158 yield int(f.value)
155 except (ValueError, urlerr.httperror):
159 except (ValueError, urlerr.httperror):
156 # If the server returns anything but an integer followed by a
160 # If the server returns anything but an integer followed by a
157 # newline, it's not speaking our language; if we get
161 # newline, it's not speaking our language; if we get
158 # an HTTP error, we can't be sure the largefile is present;
162 # an HTTP error, we can't be sure the largefile is present;
159 # either way, consider it missing.
163 # either way, consider it missing.
160 yield 2
164 yield 2
161
165
162 repo.__class__ = lfileswirerepository
166 repo.__class__ = lfileswirerepository
163
167
164 # advertise the largefiles=serve capability
168 # advertise the largefiles=serve capability
169 @eh.wrapfunction(wireprotov1server, '_capabilities')
165 def _capabilities(orig, repo, proto):
170 def _capabilities(orig, repo, proto):
166 '''announce largefile server capability'''
171 '''announce largefile server capability'''
167 caps = orig(repo, proto)
172 caps = orig(repo, proto)
168 caps.append('largefiles=serve')
173 caps.append('largefiles=serve')
169 return caps
174 return caps
170
175
171 def heads(orig, repo, proto):
176 def heads(orig, repo, proto):
172 '''Wrap server command - largefile capable clients will know to call
177 '''Wrap server command - largefile capable clients will know to call
173 lheads instead'''
178 lheads instead'''
174 if lfutil.islfilesrepo(repo):
179 if lfutil.islfilesrepo(repo):
175 return wireprototypes.ooberror(LARGEFILES_REQUIRED_MSG)
180 return wireprototypes.ooberror(LARGEFILES_REQUIRED_MSG)
176
181
177 return orig(repo, proto)
182 return orig(repo, proto)
178
183
179 def sshrepocallstream(self, cmd, **args):
184 def sshrepocallstream(self, cmd, **args):
180 if cmd == 'heads' and self.capable('largefiles'):
185 if cmd == 'heads' and self.capable('largefiles'):
181 cmd = 'lheads'
186 cmd = 'lheads'
182 if cmd == 'batch' and self.capable('largefiles'):
187 if cmd == 'batch' and self.capable('largefiles'):
183 args[r'cmds'] = args[r'cmds'].replace('heads ', 'lheads ')
188 args[r'cmds'] = args[r'cmds'].replace('heads ', 'lheads ')
184 return ssholdcallstream(self, cmd, **args)
189 return ssholdcallstream(self, cmd, **args)
185
190
186 headsre = re.compile(br'(^|;)heads\b')
191 headsre = re.compile(br'(^|;)heads\b')
187
192
188 def httprepocallstream(self, cmd, **args):
193 def httprepocallstream(self, cmd, **args):
189 if cmd == 'heads' and self.capable('largefiles'):
194 if cmd == 'heads' and self.capable('largefiles'):
190 cmd = 'lheads'
195 cmd = 'lheads'
191 if cmd == 'batch' and self.capable('largefiles'):
196 if cmd == 'batch' and self.capable('largefiles'):
192 args[r'cmds'] = headsre.sub('lheads', args[r'cmds'])
197 args[r'cmds'] = headsre.sub('lheads', args[r'cmds'])
193 return httpoldcallstream(self, cmd, **args)
198 return httpoldcallstream(self, cmd, **args)
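The comment in getlfile() above explains the framing this protocol uses: because a streamres carries no length and buffering the whole file would be ill-advised, the server writes the payload size in decimal on the first line and the raw bytes after it, and the client-side getlfile() reads back exactly that many bytes. A standalone sketch of that framing, with io.BytesIO standing in for the real peer stream and a made-up payload:

    import io

    def readlengthprefixed(stream, chunksize=4096):
        # First line: decimal payload length; then the raw file bytes.
        remaining = int(stream.readline())
        while remaining > 0:
            chunk = stream.read(min(chunksize, remaining))
            if not chunk:
                raise ValueError('stream ended %d bytes early' % remaining)
            remaining -= len(chunk)
            yield chunk

    payload = b'hello, largefile'
    stream = io.BytesIO(b'%d\n%s' % (len(payload), payload))
    assert b''.join(readlengthprefixed(stream)) == payload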
@@ -1,132 +1,56 b''
1 # Copyright 2009-2010 Gregory P. Ward
1 # Copyright 2009-2010 Gregory P. Ward
2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
2 # Copyright 2009-2010 Intelerad Medical Systems Incorporated
3 # Copyright 2010-2011 Fog Creek Software
3 # Copyright 2010-2011 Fog Creek Software
4 # Copyright 2010-2011 Unity Technologies
4 # Copyright 2010-2011 Unity Technologies
5 #
5 #
6 # This software may be used and distributed according to the terms of the
6 # This software may be used and distributed according to the terms of the
7 # GNU General Public License version 2 or any later version.
7 # GNU General Public License version 2 or any later version.
8
8
9 '''setup for largefiles extension: uisetup'''
9 '''setup for largefiles extension: uisetup'''
10 from __future__ import absolute_import
10 from __future__ import absolute_import
11
11
12 from mercurial.hgweb import (
13 webcommands,
14 )
15
16 from mercurial import (
12 from mercurial import (
17 archival,
18 cmdutil,
13 cmdutil,
19 copies,
20 exchange,
21 extensions,
14 extensions,
22 filemerge,
23 hg,
24 httppeer,
15 httppeer,
25 merge,
26 scmutil,
27 sshpeer,
16 sshpeer,
28 subrepo,
29 upgrade,
30 url,
31 wireprotov1server,
17 wireprotov1server,
32 )
18 )
33
19
34 from . import (
20 from . import (
35 overrides,
21 overrides,
36 proto,
22 proto,
37 )
23 )
38
24
39 def uisetup(ui):
25 def uisetup(ui):
40 # Disable auto-status for some commands which assume that all
41 # files in the result are under Mercurial's control
42
43 # The scmutil function is called both by the (trivial) addremove command,
44 # and in the process of handling commit -A (issue3542)
45 extensions.wrapfunction(scmutil, 'addremove', overrides.scmutiladdremove)
46 extensions.wrapfunction(cmdutil, 'add', overrides.cmdutiladd)
47 extensions.wrapfunction(cmdutil, 'remove', overrides.cmdutilremove)
48 extensions.wrapfunction(cmdutil, 'forget', overrides.cmdutilforget)
49
50 extensions.wrapfunction(copies, 'pathcopies', overrides.copiespathcopies)
51
52 extensions.wrapfunction(upgrade, 'preservedrequirements',
53 overrides.upgraderequirements)
54
55 extensions.wrapfunction(upgrade, 'supporteddestrequirements',
56 overrides.upgraderequirements)
57
58 # Subrepos call status function
59 extensions.wrapfunction(subrepo.hgsubrepo, 'status',
60 overrides.overridestatusfn)
61
26
62 cmdutil.outgoinghooks.add('largefiles', overrides.outgoinghook)
27 cmdutil.outgoinghooks.add('largefiles', overrides.outgoinghook)
63 cmdutil.summaryremotehooks.add('largefiles', overrides.summaryremotehook)
28 cmdutil.summaryremotehooks.add('largefiles', overrides.summaryremotehook)
64
29
65 extensions.wrapfunction(exchange, 'pushoperation',
66 overrides.exchangepushoperation)
67
68 extensions.wrapfunction(hg, 'clone', overrides.hgclone)
69
70 extensions.wrapfunction(merge, '_checkunknownfile',
71 overrides.overridecheckunknownfile)
72 extensions.wrapfunction(merge, 'calculateupdates',
73 overrides.overridecalculateupdates)
74 extensions.wrapfunction(merge, 'recordupdates',
75 overrides.mergerecordupdates)
76 extensions.wrapfunction(merge, 'update', overrides.mergeupdate)
77 extensions.wrapfunction(filemerge, '_filemerge',
78 overrides.overridefilemerge)
79 extensions.wrapfunction(cmdutil, 'copy', overrides.overridecopy)
80
81 # Summary calls dirty on the subrepos
82 extensions.wrapfunction(subrepo.hgsubrepo, 'dirty', overrides.overridedirty)
83
84 extensions.wrapfunction(cmdutil, 'revert', overrides.overriderevert)
85
86 extensions.wrapfunction(archival, 'archive', overrides.overridearchive)
87 extensions.wrapfunction(subrepo.hgsubrepo, 'archive',
88 overrides.hgsubrepoarchive)
89 extensions.wrapfunction(webcommands, 'archive', overrides.hgwebarchive)
90 extensions.wrapfunction(cmdutil, 'bailifchanged',
91 overrides.overridebailifchanged)
92
93 extensions.wrapfunction(cmdutil, 'postcommitstatus',
94 overrides.postcommitstatus)
95 extensions.wrapfunction(scmutil, 'marktouched',
96 overrides.scmutilmarktouched)
97
98 extensions.wrapfunction(url, 'open',
99 overrides.openlargefile)
100
101 # create the new wireproto commands ...
30 # create the new wireproto commands ...
102 wireprotov1server.wireprotocommand('putlfile', 'sha', permission='push')(
31 wireprotov1server.wireprotocommand('putlfile', 'sha', permission='push')(
103 proto.putlfile)
32 proto.putlfile)
104 wireprotov1server.wireprotocommand('getlfile', 'sha', permission='pull')(
33 wireprotov1server.wireprotocommand('getlfile', 'sha', permission='pull')(
105 proto.getlfile)
34 proto.getlfile)
106 wireprotov1server.wireprotocommand('statlfile', 'sha', permission='pull')(
35 wireprotov1server.wireprotocommand('statlfile', 'sha', permission='pull')(
107 proto.statlfile)
36 proto.statlfile)
108 wireprotov1server.wireprotocommand('lheads', '', permission='pull')(
37 wireprotov1server.wireprotocommand('lheads', '', permission='pull')(
109 wireprotov1server.heads)
38 wireprotov1server.heads)
110
39
111 # ... and wrap some existing ones
112 extensions.wrapfunction(wireprotov1server.commands['heads'], 'func',
40 extensions.wrapfunction(wireprotov1server.commands['heads'], 'func',
113 proto.heads)
41 proto.heads)
114 # TODO also wrap wireproto.commandsv2 once heads is implemented there.
42 # TODO also wrap wireproto.commandsv2 once heads is implemented there.
115
43
116 extensions.wrapfunction(webcommands, 'decodepath', overrides.decodepath)
117
118 extensions.wrapfunction(wireprotov1server, '_capabilities',
119 proto._capabilities)
120
121 # can't do this in reposetup because it needs to have happened before
44 # can't do this in reposetup because it needs to have happened before
122 # wirerepo.__init__ is called
45 # wirerepo.__init__ is called
123 proto.ssholdcallstream = sshpeer.sshv1peer._callstream
46 proto.ssholdcallstream = sshpeer.sshv1peer._callstream
124 proto.httpoldcallstream = httppeer.httppeer._callstream
47 proto.httpoldcallstream = httppeer.httppeer._callstream
125 sshpeer.sshv1peer._callstream = proto.sshrepocallstream
48 sshpeer.sshv1peer._callstream = proto.sshrepocallstream
126 httppeer.httppeer._callstream = proto.httprepocallstream
49 httppeer.httppeer._callstream = proto.httprepocallstream
127
50
128 # override some extensions' stuff as well
51 # override some extensions' stuff as well
129 for name, module in extensions.extensions():
52 for name, module in extensions.extensions():
130 if name == 'rebase':
53 if name == 'rebase':
54 # TODO: teach exthelper to handle this
131 extensions.wrapfunction(module, 'rebase',
55 extensions.wrapfunction(module, 'rebase',
132 overrides.overriderebase)
56 overrides.overriderebase)
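The hunks above strip most of the manual extensions.wrapfunction() calls out of uisetup() and replace them with @eh.wrapfunction registrations collected on an exthelper instance (visible in the earlier files as eh = exthelper.exthelper() and decorators such as @eh.wrapfunction(merge, 'update')). A minimal sketch of that pattern, assuming the extension can delegate its entry points directly to the helper's final*setup methods, as simpler exthelper-based extensions do; the wrapped target mirrors the merge.update wrapper shown earlier and is illustrative, not the full largefiles wiring:

    from __future__ import absolute_import

    from mercurial import (
        exthelper,
        merge,
    )

    eh = exthelper.exthelper()

    # Standard extension entry points, delegated to the helper instead of a
    # hand-written uisetup() full of extensions.wrapfunction() calls.
    uisetup = eh.finaluisetup
    extsetup = eh.finalextsetup
    reposetup = eh.finalreposetup

    @eh.wrapfunction(merge, 'update')
    def wrappedupdate(orig, repo, node, branchmerge, force, *args, **kwargs):
        # extension-specific bookkeeping goes around the real merge.update()
        return orig(repo, node, branchmerge, force, *args, **kwargs)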