largefiles: define norepo in command decorator
Gregory Szorc
r21770:15d434be default
--- a/hgext/largefiles/__init__.py
+++ b/hgext/largefiles/__init__.py
@@ -1,130 +1,128 @@
# Copyright 2009-2010 Gregory P. Ward
# Copyright 2009-2010 Intelerad Medical Systems Incorporated
# Copyright 2010-2011 Fog Creek Software
# Copyright 2010-2011 Unity Technologies
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''track large binary files

Large binary files tend to be not very compressible, not very
diffable, and not at all mergeable. Such files are not handled
efficiently by Mercurial's storage format (revlog), which is based on
compressed binary deltas; storing large binary files as regular
Mercurial files wastes bandwidth and disk space and increases
Mercurial's memory usage. The largefiles extension addresses these
problems by adding a centralized client-server layer on top of
Mercurial: largefiles live in a *central store* out on the network
somewhere, and you only fetch the revisions that you need when you
need them.

largefiles works by maintaining a "standin file" in .hglf/ for each
largefile. The standins are small (41 bytes: an SHA-1 hash plus
newline) and are tracked by Mercurial. Largefile revisions are
identified by the SHA-1 hash of their contents, which is written to
the standin. largefiles uses that revision ID to get/put largefile
revisions from/to the central store. This saves both disk space and
bandwidth, since you don't need to retrieve all historical revisions
of large files when you clone or pull.

To start a new repository or add new large binary files, just add
--large to your :hg:`add` command. For example::

  $ dd if=/dev/urandom of=randomdata count=2000
  $ hg add --large randomdata
  $ hg commit -m 'add randomdata as a largefile'

When you push a changeset that adds/modifies largefiles to a remote
repository, its largefile revisions will be uploaded along with it.
Note that the remote Mercurial must also have the largefiles extension
enabled for this to work.

When you pull a changeset that affects largefiles from a remote
repository, the largefiles for the changeset will by default not be
pulled down. However, when you update to such a revision, any
largefiles needed by that revision are downloaded and cached (if
they have never been downloaded before). One way to pull largefiles
when pulling is thus to use --update, which will update your working
copy to the latest pulled revision (and thereby download any new
largefiles).

If you want to pull largefiles you don't need for update yet, then
you can use pull with the `--lfrev` option or the :hg:`lfpull` command.

If you know you are pulling from a non-default location and want to
download all the largefiles that correspond to the new changesets at
the same time, then you can pull with `--lfrev "pulled()"`.

If you just want to ensure that you will have the largefiles needed to
merge or rebase with new heads that you are pulling, then you can pull
with the `--lfrev "head(pulled())"` flag to pre-emptively download any
largefiles that are new in the heads you are pulling.

Keep in mind that network access may now be required to update to
changesets that you have not previously updated to. The nature of the
largefiles extension means that updating is no longer guaranteed to
be a local-only operation.

If you already have large files tracked by Mercurial without the
largefiles extension, you will need to convert your repository in
order to benefit from largefiles. This is done with the
:hg:`lfconvert` command::

  $ hg lfconvert --size 10 oldrepo newrepo

In repositories that already have largefiles in them, any new file
over 10MB will automatically be added as a largefile. To change this
threshold, set ``largefiles.minsize`` in your Mercurial config file
to the minimum size in megabytes to track as a largefile, or use the
--lfsize option to the add command (also in megabytes)::

  [largefiles]
  minsize = 2

  $ hg add --lfsize 2

The ``largefiles.patterns`` config option allows you to specify a list
of filename patterns (see :hg:`help patterns`) that should always be
tracked as largefiles::

  [largefiles]
  patterns =
    *.jpg
    re:.*\.(png|bmp)$
    library.zip
    content/audio/*

Files that match one of these patterns will be added as largefiles
regardless of their size.

The ``largefiles.minsize`` and ``largefiles.patterns`` config options
will be ignored for any repositories not already containing a
largefile. To add the first largefile to a repository, you must
explicitly do so with the --large flag passed to the :hg:`add`
command.
'''

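As a minimal sketch of the standin format described in the docstring above (an SHA-1 hex digest of the largefile's contents plus a newline, 41 bytes total), the content written to a `.hglf/` standin could be computed like this; `standin_content` is a hypothetical helper name for illustration, not part of the extension:

```python
import hashlib

def standin_content(data):
    # SHA-1 hex digest of the largefile's contents, plus a newline:
    # 40 hex characters + '\n' == 41 bytes, as the docstring notes.
    return hashlib.sha1(data).hexdigest() + '\n'

content = standin_content(b'some large binary payload')
assert len(content) == 41
```

The standin, not the payload, is what Mercurial itself tracks; the hash doubles as the revision ID used to fetch the payload from the central store.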
-from mercurial import commands, hg, localrepo
+from mercurial import hg, localrepo

import lfcommands
import proto
import reposetup
import uisetup as uisetupmod

testedwith = 'internal'

reposetup = reposetup.reposetup

def featuresetup(ui, supported):
    # don't die on seeing a repo with the largefiles requirement
    supported |= set(['largefiles'])

def uisetup(ui):
    localrepo.localrepository.featuresetupfuncs.add(featuresetup)
    hg.wirepeersetupfuncs.append(proto.wirereposetup)
    uisetupmod.uisetup(ui)

-commands.norepo += " lfconvert"
-
cmdtable = lfcommands.cmdtable
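The change above replaces the module-level mutation `commands.norepo += " lfconvert"` with a declarative `norepo=True` argument on the command decorator. A toy registry sketching that pattern (a simplified stand-in, not Mercurial's actual `cmdutil.command` implementation):

```python
cmdtable = {}
norepo_cmds = set()

def command(name, options=(), synopsis=None, norepo=False):
    """Toy command decorator: register func in cmdtable and record
    repo-less commands declaratively instead of mutating a global
    string after the fact."""
    def decorator(func):
        cmdtable[name] = (func, list(options), synopsis)
        if norepo:
            norepo_cmds.add(name)
        return func
    return decorator

@command('lfconvert', synopsis='hg lfconvert SOURCE DEST [FILE ...]',
         norepo=True)
def lfconvert(ui, src, dest):
    return 'converted %s -> %s' % (src, dest)
```

Keeping the flag on the decorator means the command's full registration lives in one place, so the extension's `__init__.py` no longer needs to import `commands` at all.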
--- a/hgext/largefiles/lfcommands.py
+++ b/hgext/largefiles/lfcommands.py
@@ -1,572 +1,573 @@
# Copyright 2009-2010 Gregory P. Ward
# Copyright 2009-2010 Intelerad Medical Systems Incorporated
# Copyright 2010-2011 Fog Creek Software
# Copyright 2010-2011 Unity Technologies
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

'''High-level command function for lfconvert, plus the cmdtable.'''

import os, errno
import shutil

from mercurial import util, match as match_, hg, node, context, error, \
    cmdutil, scmutil, commands
from mercurial.i18n import _
from mercurial.lock import release

import lfutil
import basestore

# -- Commands ----------------------------------------------------------

cmdtable = {}
command = cmdutil.command(cmdtable)

commands.inferrepo += " lfconvert"

@command('lfconvert',
    [('s', 'size', '',
      _('minimum size (MB) for files to be converted as largefiles'), 'SIZE'),
    ('', 'to-normal', False,
     _('convert from a largefiles repo to a normal repo')),
    ],
-    _('hg lfconvert SOURCE DEST [FILE ...]'))
+    _('hg lfconvert SOURCE DEST [FILE ...]'),
+    norepo=True)
def lfconvert(ui, src, dest, *pats, **opts):
    '''convert a normal repository to a largefiles repository

    Convert repository SOURCE to a new repository DEST, identical to
    SOURCE except that certain files will be converted as largefiles:
    specifically, any file that matches any PATTERN *or* whose size is
    above the minimum size threshold is converted as a largefile. The
    size used to determine whether or not to track a file as a
    largefile is the size of the first version of the file. The
    minimum size can be specified either with --size or in
    configuration as ``largefiles.size``.

    After running this command you will need to make sure that
    largefiles is enabled anywhere you intend to push the new
    repository.

    Use --to-normal to convert largefiles back to normal files; after
    this, the DEST repository can be used without largefiles at all.'''

    if opts['to_normal']:
        tolfile = False
    else:
        tolfile = True
        size = lfutil.getminsize(ui, True, opts.get('size'), default=None)

    if not hg.islocal(src):
        raise util.Abort(_('%s is not a local Mercurial repo') % src)
    if not hg.islocal(dest):
        raise util.Abort(_('%s is not a local Mercurial repo') % dest)

    rsrc = hg.repository(ui, src)
    ui.status(_('initializing destination %s\n') % dest)
    rdst = hg.repository(ui, dest, create=True)

    success = False
    dstwlock = dstlock = None
    try:
        # Lock destination to prevent modification while it is converted to.
        # Don't need to lock src because we are just reading from its history
        # which can't change.
        dstwlock = rdst.wlock()
        dstlock = rdst.lock()

        # Get a list of all changesets in the source. The easy way to do this
        # is to simply walk the changelog, using changelog.nodesbetween().
        # Take a look at mercurial/revlog.py:639 for more details.
        # Use a generator instead of a list to decrease memory usage
        ctxs = (rsrc[ctx] for ctx in rsrc.changelog.nodesbetween(None,
            rsrc.heads())[0])
        revmap = {node.nullid: node.nullid}
        if tolfile:
            lfiles = set()
            normalfiles = set()
            if not pats:
                pats = ui.configlist(lfutil.longname, 'patterns', default=[])
            if pats:
                matcher = match_.match(rsrc.root, '', list(pats))
            else:
                matcher = None

            lfiletohash = {}
            for ctx in ctxs:
                ui.progress(_('converting revisions'), ctx.rev(),
                    unit=_('revision'), total=rsrc['tip'].rev())
                _lfconvert_addchangeset(rsrc, rdst, ctx, revmap,
                    lfiles, normalfiles, matcher, size, lfiletohash)
            ui.progress(_('converting revisions'), None)

            if os.path.exists(rdst.wjoin(lfutil.shortname)):
                shutil.rmtree(rdst.wjoin(lfutil.shortname))

            for f in lfiletohash.keys():
                if os.path.isfile(rdst.wjoin(f)):
                    os.unlink(rdst.wjoin(f))
                try:
                    os.removedirs(os.path.dirname(rdst.wjoin(f)))
                except OSError:
                    pass

            # If there were any files converted to largefiles, add largefiles
            # to the destination repository's requirements.
            if lfiles:
                rdst.requirements.add('largefiles')
                rdst._writerequirements()
        else:
            for ctx in ctxs:
                ui.progress(_('converting revisions'), ctx.rev(),
                    unit=_('revision'), total=rsrc['tip'].rev())
                _addchangeset(ui, rsrc, rdst, ctx, revmap)

        ui.progress(_('converting revisions'), None)
        success = True
    finally:
        rdst.dirstate.clear()
        release(dstlock, dstwlock)
        if not success:
            # we failed, remove the new directory
            shutil.rmtree(rdst.root)

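The `success` flag and `finally` block in `lfconvert` above implement a create-then-clean-up pattern: the destination repository is created eagerly and removed wholesale if conversion fails partway through. A minimal sketch of that control flow, with a plain directory standing in for the repository and hypothetical helper names (`convert_or_cleanup`, `boom`):

```python
import os
import shutil
import tempfile

def convert_or_cleanup(dest, steps):
    """Create dest and run each conversion step; if any step raises,
    delete dest entirely so no half-converted directory is left
    behind (mirrors lfconvert's finally block)."""
    os.makedirs(dest)
    success = False
    try:
        for step in steps:
            step(dest)
        success = True
    finally:
        if not success:
            # we failed, remove the new directory
            shutil.rmtree(dest)
    return dest

def boom(dest):
    # a conversion step that fails partway through
    raise RuntimeError('conversion step failed')
```

The original exception still propagates; the `finally` block only decides whether the partially built destination survives.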
def _addchangeset(ui, rsrc, rdst, ctx, revmap):
    # Convert src parents to dst parents
    parents = _convertparents(ctx, revmap)

    # Generate list of changed files
    files = _getchangedfiles(ctx, parents)

    def getfilectx(repo, memctx, f):
        if lfutil.standin(f) in files:
            # if the file isn't in the manifest then it was removed
            # or renamed, raise IOError to indicate this
            try:
                fctx = ctx.filectx(lfutil.standin(f))
            except error.LookupError:
                raise IOError
            renamed = fctx.renamed()
            if renamed:
                renamed = lfutil.splitstandin(renamed[0])

            hash = fctx.data().strip()
            path = lfutil.findfile(rsrc, hash)

            # If one file is missing, likely all files from this rev are
            if path is None:
                cachelfiles(ui, rsrc, ctx.node())
                path = lfutil.findfile(rsrc, hash)

                if path is None:
                    raise util.Abort(
                        _("missing largefile \'%s\' from revision %s")
                        % (f, node.hex(ctx.node())))

            data = ''
            fd = None
            try:
                fd = open(path, 'rb')
                data = fd.read()
            finally:
                if fd:
                    fd.close()
            return context.memfilectx(repo, f, data, 'l' in fctx.flags(),
                                      'x' in fctx.flags(), renamed)
        else:
            return _getnormalcontext(repo, ctx, f, revmap)

    dstfiles = []
    for file in files:
        if lfutil.isstandin(file):
            dstfiles.append(lfutil.splitstandin(file))
        else:
            dstfiles.append(file)
    # Commit
    _commitcontext(rdst, parents, ctx, dstfiles, getfilectx, revmap)

def _lfconvert_addchangeset(rsrc, rdst, ctx, revmap, lfiles, normalfiles,
        matcher, size, lfiletohash):
    # Convert src parents to dst parents
    parents = _convertparents(ctx, revmap)

    # Generate list of changed files
    files = _getchangedfiles(ctx, parents)

    dstfiles = []
    for f in files:
        if f not in lfiles and f not in normalfiles:
            islfile = _islfile(f, ctx, matcher, size)
            # If this file was renamed or copied then copy
            # the largefile-ness of its predecessor
            if f in ctx.manifest():
                fctx = ctx.filectx(f)
                renamed = fctx.renamed()
                renamedlfile = renamed and renamed[0] in lfiles
                islfile |= renamedlfile
                if 'l' in fctx.flags():
                    if renamedlfile:
                        raise util.Abort(
                            _('renamed/copied largefile %s becomes symlink')
                            % f)
                    islfile = False
            if islfile:
                lfiles.add(f)
            else:
                normalfiles.add(f)

        if f in lfiles:
            dstfiles.append(lfutil.standin(f))
            # largefile in manifest if it has not been removed/renamed
            if f in ctx.manifest():
                fctx = ctx.filectx(f)
                if 'l' in fctx.flags():
                    renamed = fctx.renamed()
                    if renamed and renamed[0] in lfiles:
                        raise util.Abort(_('largefile %s becomes symlink') % f)

                # largefile was modified, update standins
                m = util.sha1('')
                m.update(ctx[f].data())
                hash = m.hexdigest()
                if f not in lfiletohash or lfiletohash[f] != hash:
                    rdst.wwrite(f, ctx[f].data(), ctx[f].flags())
                    executable = 'x' in ctx[f].flags()
                    lfutil.writestandin(rdst, lfutil.standin(f), hash,
                        executable)
                    lfiletohash[f] = hash
        else:
            # normal file
            dstfiles.append(f)

    def getfilectx(repo, memctx, f):
        if lfutil.isstandin(f):
            # if the file isn't in the manifest then it was removed
            # or renamed, raise IOError to indicate this
            srcfname = lfutil.splitstandin(f)
            try:
                fctx = ctx.filectx(srcfname)
            except error.LookupError:
                raise IOError
            renamed = fctx.renamed()
            if renamed:
                # standin is always a largefile because largefile-ness
                # doesn't change after rename or copy
                renamed = lfutil.standin(renamed[0])

            return context.memfilectx(repo, f, lfiletohash[srcfname] + '\n',
                                      'l' in fctx.flags(), 'x' in fctx.flags(),
                                      renamed)
        else:
            return _getnormalcontext(repo, ctx, f, revmap)

    # Commit
    _commitcontext(rdst, parents, ctx, dstfiles, getfilectx, revmap)

def _commitcontext(rdst, parents, ctx, dstfiles, getfilectx, revmap):
    mctx = context.memctx(rdst, parents, ctx.description(), dstfiles,
                          getfilectx, ctx.user(), ctx.date(), ctx.extra())
    ret = rdst.commitctx(mctx)
    rdst.setparents(ret)
    revmap[ctx.node()] = rdst.changelog.tip()

# Generate list of changed files
def _getchangedfiles(ctx, parents):
    files = set(ctx.files())
    if node.nullid not in parents:
        mc = ctx.manifest()
        mp1 = ctx.parents()[0].manifest()
        mp2 = ctx.parents()[1].manifest()
        files |= (set(mp1) | set(mp2)) - set(mc)
        for f in mc:
            if mc[f] != mp1.get(f, None) or mc[f] != mp2.get(f, None):
                files.add(f)
    return files

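`_getchangedfiles` above augments `ctx.files()` for merge changesets by comparing the manifest against both parents with set algebra. The same computation on plain dicts (hypothetical manifests mapping filename to a node string) looks like:

```python
def changed_files(mc, mp1, mp2, ctx_files=()):
    """Files changed relative to either parent of a merge:
    everything present in a parent but missing from the merge
    result, plus any file whose node differs from at least one
    parent."""
    files = set(ctx_files)
    files |= (set(mp1) | set(mp2)) - set(mc)    # removed by the merge
    for f in mc:
        if mc[f] != mp1.get(f, None) or mc[f] != mp2.get(f, None):
            files.add(f)                        # differs from a parent
    return files

# 'a' differs from mp2, 'b' differs from mp1, 'c' was removed
mc = {'a': 'n1', 'b': 'n2'}
mp1 = {'a': 'n1', 'c': 'n3'}
mp2 = {'b': 'n2', 'a': 'n0'}
```

Note that a file identical to one parent but not the other is still counted, which is why merges pick up files that `ctx.files()` alone would miss.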
# Convert src parents to dst parents
def _convertparents(ctx, revmap):
    parents = []
    for p in ctx.parents():
        parents.append(revmap[p.node()])
    while len(parents) < 2:
        parents.append(node.nullid)
    return parents

# Get memfilectx for a normal file
def _getnormalcontext(repo, ctx, f, revmap):
    try:
        fctx = ctx.filectx(f)
    except error.LookupError:
        raise IOError
    renamed = fctx.renamed()
    if renamed:
        renamed = renamed[0]

    data = fctx.data()
    if f == '.hgtags':
        data = _converttags(repo.ui, revmap, data)
    return context.memfilectx(repo, f, data, 'l' in fctx.flags(),
                              'x' in fctx.flags(), renamed)

# Remap tag data using a revision map
def _converttags(ui, revmap, data):
    newdata = []
    for line in data.splitlines():
        try:
            id, name = line.split(' ', 1)
        except ValueError:
            ui.warn(_('skipping incorrectly formatted tag %s\n')
                % line)
            continue
        try:
            newid = node.bin(id)
        except TypeError:
            ui.warn(_('skipping incorrectly formatted id %s\n')
                % id)
            continue
        try:
            newdata.append('%s %s\n' % (node.hex(revmap[newid]),
                name))
        except KeyError:
            ui.warn(_('no mapping for id %s\n') % id)
            continue
    return ''.join(newdata)

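`_converttags` rewrites each `<hex-node> <name>` line of `.hgtags` through the old-to-new revision map, skipping malformed lines with a warning. A self-contained sketch of the same remapping (`convert_tags` is a hypothetical name; it uses a plain dict for `revmap` and `binascii` in place of `mercurial.node`, and drops bad lines silently instead of calling `ui.warn`):

```python
import binascii

def convert_tags(revmap, data):
    """Remap the node in each '<hex> <name>' tag line; drop lines
    that are malformed or whose node has no entry in revmap."""
    newdata = []
    for line in data.splitlines():
        try:
            id, name = line.split(' ', 1)
            newid = binascii.unhexlify(id)
        except ValueError:
            continue          # incorrectly formatted tag line or id
        if newid not in revmap:
            continue          # no mapping for this node
        newhex = binascii.hexlify(revmap[newid]).decode()
        newdata.append('%s %s\n' % (newhex, name))
    return ''.join(newdata)
```

Tags pointing at unmapped nodes are dropped rather than carried over, since the old node simply does not exist in the converted repository.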
def _islfile(file, ctx, matcher, size):
    '''Return true if file should be considered a largefile, i.e.
    matcher matches it or it is larger than size.'''
    # never store special .hg* files as largefiles
    if file == '.hgtags' or file == '.hgignore' or file == '.hgsigs':
        return False
    if matcher and matcher(file):
        return True
    try:
        return ctx.filectx(file).size() >= size * 1024 * 1024
    except error.LookupError:
        return False

def uploadlfiles(ui, rsrc, rdst, files):
    '''upload largefiles to the central store'''

    if not files:
        return

    store = basestore._openstore(rsrc, rdst, put=True)

    at = 0
    ui.debug("sending statlfile command for %d largefiles\n" % len(files))
    retval = store.exists(files)
    files = filter(lambda h: not retval[h], files)
    ui.debug("%d largefiles need to be uploaded\n" % len(files))

    for hash in files:
        ui.progress(_('uploading largefiles'), at, unit='largefile',
                    total=len(files))
        source = lfutil.findfile(rsrc, hash)
        if not source:
            raise util.Abort(_('largefile %s missing from store'
                               ' (needs to be uploaded)') % hash)
        # XXX check for errors here
        store.put(source, hash)
        at += 1
    ui.progress(_('uploading largefiles'), None)

def verifylfiles(ui, repo, all=False, contents=False):
    '''Verify that every largefile revision in the current changeset
    exists in the central store. With --contents, also verify that
    the contents of each local largefile revision are correct (SHA-1 hash
    matches the revision ID). With --all, check every changeset in
    this repository.'''
    if all:
        # Pass a list to the function rather than an iterator because we know a
        # list will work.
        revs = range(len(repo))
    else:
        revs = ['.']

    store = basestore._openstore(repo)
    return store.verify(revs, contents=contents)

def cachelfiles(ui, repo, node, filelist=None):
    '''cachelfiles ensures that all largefiles needed by the specified revision
    are present in the repository's largefile cache.

    returns a tuple (cached, missing). cached is the list of files downloaded
    by this operation; missing is the list of files that were needed but could
    not be found.'''
    lfiles = lfutil.listlfiles(repo, node)
    if filelist:
        lfiles = set(lfiles) & set(filelist)
    toget = []

    for lfile in lfiles:
        try:
            expectedhash = repo[node][lfutil.standin(lfile)].data().strip()
        except IOError, err:
            if err.errno == errno.ENOENT:
                continue # node must be None and standin wasn't found in wctx
            raise
        if not lfutil.findfile(repo, expectedhash):
            toget.append((lfile, expectedhash))

    if toget:
        store = basestore._openstore(repo)
        ret = store.get(toget)
        return ret

    return ([], [])

def downloadlfiles(ui, repo, rev=None):
    '''download largefiles referenced by the given revisions into the
    local cache; returns a (cached, missing) count tuple'''
    matchfn = scmutil.match(repo[None],
                            [repo.wjoin(lfutil.shortname)], {})
    def prepare(ctx, fns):
        pass
    totalsuccess = 0
    totalmissing = 0
    if rev != []: # walkchangerevs on empty list would return all revs
        for ctx in cmdutil.walkchangerevs(repo, matchfn, {'rev' : rev},
                                          prepare):
            success, missing = cachelfiles(ui, repo, ctx.node())
            totalsuccess += len(success)
            totalmissing += len(missing)
    ui.status(_("%d additional largefiles cached\n") % totalsuccess)
    if totalmissing > 0:
        ui.status(_("%d largefiles failed to download\n") % totalmissing)
    return totalsuccess, totalmissing

def updatelfiles(ui, repo, filelist=None, printmessage=True):
    wlock = repo.wlock()
    try:
        lfdirstate = lfutil.openlfdirstate(ui, repo)
        lfiles = set(lfutil.listlfiles(repo)) | set(lfdirstate)

        if filelist is not None:
            lfiles = [f for f in lfiles if f in filelist]

        update = {}
        updated, removed = 0, 0
        for lfile in lfiles:
            abslfile = repo.wjoin(lfile)
            absstandin = repo.wjoin(lfutil.standin(lfile))
            if os.path.exists(absstandin):
                if (os.path.exists(absstandin + '.orig') and
                    os.path.exists(abslfile)):
                    shutil.copyfile(abslfile, abslfile + '.orig')
                    util.unlinkpath(absstandin + '.orig')
                expecthash = lfutil.readstandin(repo, lfile)
                if (expecthash != '' and
                    (not os.path.exists(abslfile) or
                     expecthash != lfutil.hashfile(abslfile))):
                    if lfile not in repo[None]: # not switched to normal file
                        util.unlinkpath(abslfile, ignoremissing=True)
                    # use normallookup() to allocate entry in largefiles
                    # dirstate, because lack of it misleads
                    # lfilesrepo.status() into recognition that such cache
                    # missing files are REMOVED.
                    lfdirstate.normallookup(lfile)
                    update[lfile] = expecthash
            else:
                # Remove lfiles for which the standin is deleted, unless the
                # lfile is added to the repository again. This happens when a
                # largefile is converted back to a normal file: the standin
                # disappears, but a new (normal) file appears as the lfile.
                if (os.path.exists(abslfile) and
                    repo.dirstate.normalize(lfile) not in repo[None]):
                    util.unlinkpath(abslfile)
                    removed += 1

        # largefile processing might be slow and be interrupted - be prepared
        lfdirstate.write()

        if lfiles:
            if printmessage:
                ui.status(_('getting changed largefiles\n'))
            cachelfiles(ui, repo, None, lfiles)

        for lfile in lfiles:
            update1 = 0

            expecthash = update.get(lfile)
            if expecthash:
                if not lfutil.copyfromcache(repo, expecthash, lfile):
                    # failed ... but already removed and set to normallookup
                    continue
                # Synchronize largefile dirstate to the last modified
                # time of the file
                lfdirstate.normal(lfile)
                update1 = 1

            # copy the state of largefile standin from the repository's
            # dirstate to its state in the lfdirstate.
            abslfile = repo.wjoin(lfile)
            absstandin = repo.wjoin(lfutil.standin(lfile))
            if os.path.exists(absstandin):
                mode = os.stat(absstandin).st_mode
                if mode != os.stat(abslfile).st_mode:
                    os.chmod(abslfile, mode)
                    update1 = 1

            updated += update1

            state = repo.dirstate[lfutil.standin(lfile)]
            if state == 'n':
                # When rebasing, we need to synchronize the standin and the
                # largefile, because otherwise the largefile will get
                # reverted. But for commit's sake, we have to mark the file
                # as unclean.
                if getattr(repo, "_isrebasing", False):
                    lfdirstate.normallookup(lfile)
                else:
                    lfdirstate.normal(lfile)
            elif state == 'r':
                lfdirstate.remove(lfile)
            elif state == 'a':
                lfdirstate.add(lfile)
            elif state == '?':
                lfdirstate.drop(lfile)

        lfdirstate.write()
        if printmessage and lfiles:
            ui.status(_('%d largefiles updated, %d removed\n') % (updated,
                                                                  removed))
    finally:
        wlock.release()

@command('lfpull',
    [('r', 'rev', [], _('pull largefiles for these revisions'))
    ] + commands.remoteopts,
    _('-r REV... [-e CMD] [--remotecmd CMD] [SOURCE]'))
def lfpull(ui, repo, source="default", **opts):
    """pull largefiles for the specified revisions from the specified source

    Pull largefiles that are referenced from local changesets but missing
    locally, pulling from a remote repository to the local cache.

    If SOURCE is omitted, the 'default' path will be used.
    See :hg:`help urls` for more information.

    .. container:: verbose

      Some examples:

      - pull largefiles for all branch heads::

          hg lfpull -r "head() and not closed()"

      - pull largefiles on the default branch::

          hg lfpull -r "branch(default)"
    """
    repo.lfpullsource = source

    revs = opts.get('rev', [])
    if not revs:
        raise util.Abort(_('no revisions specified'))
    revs = scmutil.revrange(repo, revs)

    numcached = 0
    for rev in revs:
        ui.note(_('pulling largefiles for revision %s\n') % rev)
        (cached, missing) = cachelfiles(ui, repo, rev)
        numcached += len(cached)
    ui.status(_("%d largefiles cached\n") % numcached)