lfs: move the 'supportedoutgoingversions' handling to changegroup.py...

Author: Matt Harbison
Changeset: r37150:a54113fc (branch: default)
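In short: the lfs extension previously monkey-patched changegroup.supportedoutgoingversions() (the wrapper deleted from wrapper.py below) to forbid changegroup versions 01/02 for repositories carrying the 'lfs' requirement. This commit moves that policy into mercurial/changegroup.py itself, gated on the new LFS_REQUIREMENT constant added there. The in-core check is not visible in the truncated changegroup.py hunk below, so the following is only a minimal, self-contained sketch of the policy, inferred from the deleted wrapper; the function here is a stand-in, not the core implementation.

# Sketch of the version-trimming policy this commit moves into core,
# inferred from the wrapper deleted from hgext/lfs/wrapper.py below.
LFS_REQUIREMENT = 'lfs'

def supportedoutgoingversions(requirements):
    versions = {'01', '02', '03'}  # all versions this client can send
    if LFS_REQUIREMENT in requirements:
        # cg 01/02 delta headers have no flags field, so they cannot carry
        # REVIDX_EXTSTORED, which marks lfs-stored revisions; only 03 works
        versions.discard('01')
        versions.discard('02')
    return versions

assert supportedoutgoingversions({'revlogv1', 'lfs'}) == {'03'}
assert supportedoutgoingversions({'revlogv1'}) == {'01', '02', '03'}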
--- a/hgext/lfs/__init__.py
+++ b/hgext/lfs/__init__.py
@@ -1,390 +1,387 @@
 # lfs - hash-preserving large file support using Git-LFS protocol
 #
 # Copyright 2017 Facebook, Inc.
 #
 # This software may be used and distributed according to the terms of the
 # GNU General Public License version 2 or any later version.

 """lfs - large file support (EXPERIMENTAL)

 This extension allows large files to be tracked outside of the normal
 repository storage and stored on a centralized server, similar to the
 ``largefiles`` extension. The ``git-lfs`` protocol is used when
 communicating with the server, so existing git infrastructure can be
 harnessed. Even though the files are stored outside of the repository,
 they are still integrity checked in the same manner as normal files.

 The files stored outside of the repository are downloaded on demand,
 which reduces the time to clone, and possibly the local disk usage.
 This changes fundamental workflows in a DVCS, so careful thought
 should be given before deploying it. :hg:`convert` can be used to
 convert LFS repositories to normal repositories that no longer
 require this extension, and do so without changing the commit hashes.
 This allows the extension to be disabled if the centralized workflow
 becomes burdensome. However, the pre and post convert clones will
 not be able to communicate with each other unless the extension is
 enabled on both.

 To start a new repository, or to add LFS files to an existing one, just
 create an ``.hglfs`` file as described below in the root directory of
 the repository. Typically, this file should be put under version
 control, so that the settings will propagate to other repositories with
 push and pull. During any commit, Mercurial will consult this file to
 determine if an added or modified file should be stored externally. The
 type of storage depends on the characteristics of the file at each
 commit. A file that is near a size threshold may switch back and forth
 between LFS and normal storage, as needed.

 Alternately, both normal repositories and largefile controlled
 repositories can be converted to LFS by using :hg:`convert` and the
 ``lfs.track`` config option described below. The ``.hglfs`` file
 should then be created and added, to control subsequent LFS selection.
 The hashes are also unchanged in this case. The LFS and non-LFS
 repositories can be distinguished because the LFS repository will
 abort any command if this extension is disabled.

 Committed LFS files are held locally, until the repository is pushed.
 Prior to pushing the normal repository data, the LFS files that are
 tracked by the outgoing commits are automatically uploaded to the
 configured central server. No LFS files are transferred on
 :hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
 demand as they need to be read, if a cached copy cannot be found
 locally. Both committing and downloading an LFS file will link the
 file to a usercache, to speed up future access. See the `usercache`
 config setting described below.

 .hglfs::

     The extension reads its configuration from a versioned ``.hglfs``
     configuration file found in the root of the working directory. The
     ``.hglfs`` file uses the same syntax as all other Mercurial
     configuration files. It uses a single section, ``[track]``.

     The ``[track]`` section specifies which files are stored as LFS (or
     not). Each line is keyed by a file pattern, with a predicate value.
     The first file pattern match is used, so put more specific patterns
     first. The available predicates are ``all()``, ``none()``, and
     ``size()``. See "hg help filesets.size" for the latter.

     Example versioned ``.hglfs`` file::

       [track]
       # No Makefile or python file, anywhere, will be LFS
       **Makefile = none()
       **.py = none()

       **.zip = all()
       **.exe = size(">1MB")

       # Catchall for everything not matched above
       ** = size(">10MB")

 Configs::

     [lfs]
     # Remote endpoint. Multiple protocols are supported:
     # - http(s)://user:pass@example.com/path
     #   git-lfs endpoint
     # - file:///tmp/path
     #   local filesystem, usually for testing
     # if unset, lfs will prompt setting this when it must use this value.
     # (default: unset)
     url = https://example.com/repo.git/info/lfs

     # Which files to track in LFS. Path tests are "**.extname" for file
     # extensions, and "path:under/some/directory" for path prefix. Both
     # are relative to the repository root.
     # File size can be tested with the "size()" fileset, and tests can be
     # joined with fileset operators. (See "hg help filesets.operators".)
     #
     # Some examples:
     # - all()                       # everything
     # - none()                      # nothing
     # - size(">20MB")               # larger than 20MB
     # - !**.txt                     # anything not a *.txt file
     # - **.zip | **.tar.gz | **.7z  # some types of compressed files
     # - path:bin                    # files under "bin" in the project root
     # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
     #     | (path:bin & !path:/bin/README) | size(">1GB")
     # (default: none())
     #
     # This is ignored if there is a tracked '.hglfs' file, and this setting
     # will eventually be deprecated and removed.
     track = size(">10M")

     # how many times to retry before giving up on transferring an object
     retry = 5

     # the local directory to store lfs files for sharing across local clones.
     # If not set, the cache is located in an OS specific cache location.
     usercache = /path/to/global/cache
 """

 from __future__ import absolute_import

 from mercurial.i18n import _

 from mercurial import (
     bundle2,
     changegroup,
     cmdutil,
     config,
     context,
     error,
     exchange,
     extensions,
     filelog,
     fileset,
     hg,
     localrepo,
     minifileset,
     node,
     pycompat,
     registrar,
     revlog,
     scmutil,
     templateutil,
     upgrade,
     util,
     vfs as vfsmod,
     wireproto,
 )

 from . import (
     blobstore,
     wrapper,
 )

 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
 # be specifying the version(s) of Mercurial they are tested with, or
 # leave the attribute unspecified.
 testedwith = 'ships-with-hg-core'

 configtable = {}
 configitem = registrar.configitem(configtable)

 configitem('experimental', 'lfs.user-agent',
     default=None,
 )
 configitem('experimental', 'lfs.worker-enable',
     default=False,
 )

 configitem('lfs', 'url',
     default=None,
 )
 configitem('lfs', 'usercache',
     default=None,
 )
 # Deprecated
 configitem('lfs', 'threshold',
     default=None,
 )
 configitem('lfs', 'track',
     default='none()',
 )
 configitem('lfs', 'retry',
     default=5,
 )

 cmdtable = {}
 command = registrar.command(cmdtable)

 templatekeyword = registrar.templatekeyword()
 filesetpredicate = registrar.filesetpredicate()

 def featuresetup(ui, supported):
     # don't die on seeing a repo with the lfs requirement
     supported |= {'lfs'}

 def uisetup(ui):
     localrepo.localrepository.featuresetupfuncs.add(featuresetup)

 def reposetup(ui, repo):
     # Nothing to do with a remote repo
     if not repo.local():
         return

     repo.svfs.lfslocalblobstore = blobstore.local(repo)
     repo.svfs.lfsremoteblobstore = blobstore.remote(repo)

     class lfsrepo(repo.__class__):
         @localrepo.unfilteredmethod
         def commitctx(self, ctx, error=False):
             repo.svfs.options['lfstrack'] = _trackedmatcher(self)
             return super(lfsrepo, self).commitctx(ctx, error)

     repo.__class__ = lfsrepo

     if 'lfs' not in repo.requirements:
         def checkrequireslfs(ui, repo, **kwargs):
             if 'lfs' not in repo.requirements:
                 last = kwargs.get(r'node_last')
                 _bin = node.bin
                 if last:
                     s = repo.set('%n:%n', _bin(kwargs[r'node']), _bin(last))
                 else:
                     s = repo.set('%n', _bin(kwargs[r'node']))
                 for ctx in s:
                     # TODO: is there a way to just walk the files in the commit?
                     if any(ctx[f].islfs() for f in ctx.files() if f in ctx):
                         repo.requirements.add('lfs')
                         repo._writerequirements()
                         repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
                         break

         ui.setconfig('hooks', 'commit.lfs', checkrequireslfs, 'lfs')
         ui.setconfig('hooks', 'pretxnchangegroup.lfs', checkrequireslfs, 'lfs')
     else:
         repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)

 def _trackedmatcher(repo):
     """Return a function (path, size) -> bool indicating whether or not to
     track a given file with lfs."""
     if not repo.wvfs.exists('.hglfs'):
         # No '.hglfs' in wdir. Fallback to config for now.
         trackspec = repo.ui.config('lfs', 'track')

         # deprecated config: lfs.threshold
         threshold = repo.ui.configbytes('lfs', 'threshold')
         if threshold:
             fileset.parse(trackspec)  # make sure syntax errors are confined
             trackspec = "(%s) | size('>%d')" % (trackspec, threshold)

         return minifileset.compile(trackspec)

     data = repo.wvfs.tryread('.hglfs')
     if not data:
         return lambda p, s: False

     # Parse errors here will abort with a message that points to the .hglfs file
     # and line number.
     cfg = config.config()
     cfg.parse('.hglfs', data)

     try:
         rules = [(minifileset.compile(pattern), minifileset.compile(rule))
                  for pattern, rule in cfg.items('track')]
     except error.ParseError as e:
         # The original exception gives no indicator that the error is in the
         # .hglfs file, so add that.

         # TODO: See if the line number of the file can be made available.
         raise error.Abort(_('parse error in .hglfs: %s') % e)

     def _match(path, size):
         for pat, rule in rules:
             if pat(path, size):
                 return rule(path, size)

         return False

     return _match

 def wrapfilelog(filelog):
     wrapfunction = extensions.wrapfunction

     wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
     wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
     wrapfunction(filelog, 'size', wrapper.filelogsize)

 def extsetup(ui):
     wrapfilelog(filelog.filelog)

     wrapfunction = extensions.wrapfunction

     wrapfunction(cmdutil, '_updatecatformatter', wrapper._updatecatformatter)
     wrapfunction(scmutil, 'wrapconvertsink', wrapper.convertsink)

     wrapfunction(upgrade, '_finishdatamigration',
                  wrapper.upgradefinishdatamigration)

     wrapfunction(upgrade, 'preservedrequirements',
                  wrapper.upgraderequirements)

     wrapfunction(upgrade, 'supporteddestrequirements',
                  wrapper.upgraderequirements)

     wrapfunction(changegroup,
-                 'supportedoutgoingversions',
-                 wrapper.supportedoutgoingversions)
-    wrapfunction(changegroup,
                  'allsupportedversions',
                  wrapper.allsupportedversions)

     wrapfunction(exchange, 'push', wrapper.push)
     wrapfunction(wireproto, '_capabilities', wrapper._capabilities)

     wrapfunction(context.basefilectx, 'cmp', wrapper.filectxcmp)
     wrapfunction(context.basefilectx, 'isbinary', wrapper.filectxisbinary)
     context.basefilectx.islfs = wrapper.filectxislfs

     revlog.addflagprocessor(
         revlog.REVIDX_EXTSTORED,
         (
             wrapper.readfromstore,
             wrapper.writetostore,
             wrapper.bypasscheckhash,
         ),
     )

     wrapfunction(hg, 'clone', wrapper.hgclone)
     wrapfunction(hg, 'postshare', wrapper.hgpostshare)

     scmutil.fileprefetchhooks.add('lfs', wrapper._prefetchfiles)

     # Make bundle choose changegroup3 instead of changegroup2. This affects
     # "hg bundle" command. Note: it does not cover all bundle formats like
     # "packed1". Using "packed1" with lfs will likely cause trouble.
     names = [k for k, v in exchange._bundlespeccgversions.items() if v == '02']
     for k in names:
         exchange._bundlespeccgversions[k] = '03'

     # bundlerepo uses "vfsmod.readonlyvfs(othervfs)", we need to make sure lfs
     # options and blob stores are passed from othervfs to the new readonlyvfs.
     wrapfunction(vfsmod.readonlyvfs, '__init__', wrapper.vfsinit)

     # when writing a bundle via "hg bundle" command, upload related LFS blobs
     wrapfunction(bundle2, 'writenewbundle', wrapper.writenewbundle)

 @filesetpredicate('lfs()', callstatus=True)
 def lfsfileset(mctx, x):
     """File that uses LFS storage."""
     # i18n: "lfs" is a keyword
     fileset.getargs(x, 0, 0, _("lfs takes no arguments"))
     return [f for f in mctx.subset
             if wrapper.pointerfromctx(mctx.ctx, f, removed=True) is not None]

 @templatekeyword('lfs_files', requires={'ctx'})
 def lfsfiles(context, mapping):
     """List of strings. All files modified, added, or removed by this
     changeset."""
     ctx = context.resource(mapping, 'ctx')

     pointers = wrapper.pointersfromctx(ctx, removed=True)  # {path: pointer}
     files = sorted(pointers.keys())

     def pointer(v):
         # In the file spec, version is first and the other keys are sorted.
         sortkeyfunc = lambda x: (x[0] != 'version', x)
         items = sorted(pointers[v].iteritems(), key=sortkeyfunc)
         return util.sortdict(items)

     makemap = lambda v: {
         'file': v,
         'lfsoid': pointers[v].oid() if pointers[v] else None,
         'lfspointer': templateutil.hybriddict(pointer(v)),
     }

     # TODO: make the separator ', '?
     f = templateutil._showcompatlist(context, mapping, 'lfs_file', files)
     return templateutil.hybrid(f, files, makemap, pycompat.identity)

 @command('debuglfsupload',
          [('r', 'rev', [], _('upload large files introduced by REV'))])
 def debuglfsupload(ui, repo, **opts):
     """upload lfs blobs added by the working copy parent or given revisions"""
     revs = opts.get(r'rev', [])
     pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
     wrapper.uploadblobs(repo, pointers)
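Aside: the `[track]` logic that _trackedmatcher() compiles from .hglfs boils down to a first-match-wins chain of (pattern, predicate) pairs, as the inner _match() in the hunk above shows. A toy illustration of that contract follows; this is not the real mercurial.minifileset compiler, and the lambdas merely stand in for compiled filesets.

# Toy model of the (path, size) -> bool matcher _trackedmatcher() returns:
# the first pattern that matches decides, mirroring .hglfs semantics.
def compiletrackrules(rules):
    def match(path, size):
        for pat, rule in rules:
            if pat(path, size):
                return rule(path, size)
        return False
    return match

# Rough equivalents of '**.zip = all()' and '** = size(">10MB")' from the
# example .hglfs in the docstring above.
islfs = compiletrackrules([
    (lambda p, s: p.endswith('.zip'), lambda p, s: True),
    (lambda p, s: True, lambda p, s: s > 10 * 1024 * 1024),
])

assert islfs('dist/artifacts.zip', 10)        # matched by the zip rule
assert not islfs('README', 10)                # catchall: too small
assert islfs('data.bin', 20 * 1024 * 1024)    # catchall: over 10MB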
--- a/hgext/lfs/wrapper.py
+++ b/hgext/lfs/wrapper.py
@@ -1,395 +1,387 @@
 # wrapper.py - methods wrapping core mercurial logic
 #
 # Copyright 2017 Facebook, Inc.
 #
 # This software may be used and distributed according to the terms of the
 # GNU General Public License version 2 or any later version.

 from __future__ import absolute_import

 import hashlib

 from mercurial.i18n import _
 from mercurial.node import bin, hex, nullid, short

 from mercurial import (
     error,
     filelog,
     revlog,
     util,
 )

 from mercurial.utils import (
     stringutil,
 )

 from ..largefiles import lfutil

 from . import (
     blobstore,
     pointer,
 )

-def supportedoutgoingversions(orig, repo):
-    versions = orig(repo)
-    if 'lfs' in repo.requirements:
-        versions.discard('01')
-        versions.discard('02')
-        versions.add('03')
-    return versions
-
 def allsupportedversions(orig, ui):
     versions = orig(ui)
     versions.add('03')
     return versions

 def _capabilities(orig, repo, proto):
     '''Wrap server command to announce lfs server capability'''
     caps = orig(repo, proto)
     # XXX: change to 'lfs=serve' when separate git server isn't required?
     caps.append('lfs')
     return caps

 def bypasscheckhash(self, text):
     return False

 def readfromstore(self, text):
     """Read filelog content from local blobstore transform for flagprocessor.

     Default tranform for flagprocessor, returning contents from blobstore.
     Returns a 2-typle (text, validatehash) where validatehash is True as the
     contents of the blobstore should be checked using checkhash.
     """
     p = pointer.deserialize(text)
     oid = p.oid()
     store = self.opener.lfslocalblobstore
     if not store.has(oid):
         p.filename = self.filename
         self.opener.lfsremoteblobstore.readbatch([p], store)

     # The caller will validate the content
     text = store.read(oid, verify=False)

     # pack hg filelog metadata
     hgmeta = {}
     for k in p.keys():
         if k.startswith('x-hg-'):
             name = k[len('x-hg-'):]
             hgmeta[name] = p[k]
     if hgmeta or text.startswith('\1\n'):
         text = filelog.packmeta(hgmeta, text)

     return (text, True)

 def writetostore(self, text):
     # hg filelog metadata (includes rename, etc)
     hgmeta, offset = filelog.parsemeta(text)
     if offset and offset > 0:
         # lfs blob does not contain hg filelog metadata
         text = text[offset:]

     # git-lfs only supports sha256
     oid = hex(hashlib.sha256(text).digest())
     self.opener.lfslocalblobstore.write(oid, text)

     # replace contents with metadata
     longoid = 'sha256:%s' % oid
     metadata = pointer.gitlfspointer(oid=longoid, size='%d' % len(text))

     # by default, we expect the content to be binary. however, LFS could also
     # be used for non-binary content. add a special entry for non-binary data.
     # this will be used by filectx.isbinary().
     if not stringutil.binary(text):
         # not hg filelog metadata (affecting commit hash), no "x-hg-" prefix
         metadata['x-is-binary'] = '0'

     # translate hg filelog metadata to lfs metadata with "x-hg-" prefix
     if hgmeta is not None:
         for k, v in hgmeta.iteritems():
             metadata['x-hg-%s' % k] = v

     rawtext = metadata.serialize()
     return (rawtext, False)

 def _islfs(rlog, node=None, rev=None):
     if rev is None:
         if node is None:
             # both None - likely working copy content where node is not ready
             return False
         rev = rlog.rev(node)
     else:
         node = rlog.node(rev)
     if node == nullid:
         return False
     flags = rlog.flags(rev)
     return bool(flags & revlog.REVIDX_EXTSTORED)

 def filelogaddrevision(orig, self, text, transaction, link, p1, p2,
                        cachedelta=None, node=None,
                        flags=revlog.REVIDX_DEFAULT_FLAGS, **kwds):
     textlen = len(text)
     # exclude hg rename meta from file size
     meta, offset = filelog.parsemeta(text)
     if offset:
         textlen -= offset

     lfstrack = self.opener.options['lfstrack']

     if lfstrack(self.filename, textlen):
         flags |= revlog.REVIDX_EXTSTORED

     return orig(self, text, transaction, link, p1, p2, cachedelta=cachedelta,
                 node=node, flags=flags, **kwds)

 def filelogrenamed(orig, self, node):
     if _islfs(self, node):
         rawtext = self.revision(node, raw=True)
         if not rawtext:
             return False
         metadata = pointer.deserialize(rawtext)
         if 'x-hg-copy' in metadata and 'x-hg-copyrev' in metadata:
             return metadata['x-hg-copy'], bin(metadata['x-hg-copyrev'])
         else:
             return False
     return orig(self, node)

 def filelogsize(orig, self, rev):
     if _islfs(self, rev=rev):
         # fast path: use lfs metadata to answer size
         rawtext = self.revision(rev, raw=True)
         metadata = pointer.deserialize(rawtext)
         return int(metadata['size'])
     return orig(self, rev)

 def filectxcmp(orig, self, fctx):
     """returns True if text is different than fctx"""
     # some fctx (ex. hg-git) is not based on basefilectx and do not have islfs
     if self.islfs() and getattr(fctx, 'islfs', lambda: False)():
         # fast path: check LFS oid
         p1 = pointer.deserialize(self.rawdata())
         p2 = pointer.deserialize(fctx.rawdata())
         return p1.oid() != p2.oid()
     return orig(self, fctx)

 def filectxisbinary(orig, self):
     if self.islfs():
         # fast path: use lfs metadata to answer isbinary
         metadata = pointer.deserialize(self.rawdata())
         # if lfs metadata says nothing, assume it's binary by default
         return bool(int(metadata.get('x-is-binary', 1)))
     return orig(self)

 def filectxislfs(self):
     return _islfs(self.filelog(), self.filenode())

 def _updatecatformatter(orig, fm, ctx, matcher, path, decode):
     orig(fm, ctx, matcher, path, decode)
     fm.data(rawdata=ctx[path].rawdata())

 def convertsink(orig, sink):
     sink = orig(sink)
     if sink.repotype == 'hg':
         class lfssink(sink.__class__):
             def putcommit(self, files, copies, parents, commit, source, revmap,
                           full, cleanp2):
                 pc = super(lfssink, self).putcommit
                 node = pc(files, copies, parents, commit, source, revmap, full,
                           cleanp2)

                 if 'lfs' not in self.repo.requirements:
                     ctx = self.repo[node]

                     # The file list may contain removed files, so check for
                     # membership before assuming it is in the context.
                     if any(f in ctx and ctx[f].islfs() for f, n in files):
                         self.repo.requirements.add('lfs')
                         self.repo._writerequirements()

                         # Permanently enable lfs locally
                         self.repo.vfs.append(
                             'hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))

                 return node

         sink.__class__ = lfssink

     return sink

 def vfsinit(orig, self, othervfs):
     orig(self, othervfs)
     # copy lfs related options
     for k, v in othervfs.options.items():
         if k.startswith('lfs'):
             self.options[k] = v
     # also copy lfs blobstores. note: this can run before reposetup, so lfs
     # blobstore attributes are not always ready at this time.
     for name in ['lfslocalblobstore', 'lfsremoteblobstore']:
         if util.safehasattr(othervfs, name):
             setattr(self, name, getattr(othervfs, name))

 def hgclone(orig, ui, opts, *args, **kwargs):
     result = orig(ui, opts, *args, **kwargs)

     if result is not None:
         sourcerepo, destrepo = result
         repo = destrepo.local()

         # When cloning to a remote repo (like through SSH), no repo is available
         # from the peer. Therefore the hgrc can't be updated.
         if not repo:
             return result

         # If lfs is required for this repo, permanently enable it locally
         if 'lfs' in repo.requirements:
             repo.vfs.append('hgrc',
                             util.tonativeeol('\n[extensions]\nlfs=\n'))

     return result

 def hgpostshare(orig, sourcerepo, destrepo, bookmarks=True, defaultpath=None):
     orig(sourcerepo, destrepo, bookmarks, defaultpath)

     # If lfs is required for this repo, permanently enable it locally
     if 'lfs' in destrepo.requirements:
         destrepo.vfs.append('hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))

 def _prefetchfiles(repo, ctx, files):
     """Ensure that required LFS blobs are present, fetching them as a group if
     needed."""
     pointers = []
     localstore = repo.svfs.lfslocalblobstore

     for f in files:
         p = pointerfromctx(ctx, f)
         if p and not localstore.has(p.oid()):
             p.filename = f
             pointers.append(p)

     if pointers:
         repo.svfs.lfsremoteblobstore.readbatch(pointers, localstore)

 def _canskipupload(repo):
     # if remotestore is a null store, upload is a no-op and can be skipped
     return isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)

 def candownload(repo):
     # if remotestore is a null store, downloads will lead to nothing
     return not isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)

 def uploadblobsfromrevs(repo, revs):
     '''upload lfs blobs introduced by revs

     Note: also used by other extensions e. g. infinitepush. avoid renaming.
     '''
     if _canskipupload(repo):
         return
     pointers = extractpointers(repo, revs)
     uploadblobs(repo, pointers)

 def prepush(pushop):
     """Prepush hook.

     Read through the revisions to push, looking for filelog entries that can be
     deserialized into metadata so that we can block the push on their upload to
     the remote blobstore.
     """
     return uploadblobsfromrevs(pushop.repo, pushop.outgoing.missing)

 def push(orig, repo, remote, *args, **kwargs):
     """bail on push if the extension isn't enabled on remote when needed"""
     if 'lfs' in repo.requirements:
         # If the remote peer is for a local repo, the requirement tests in the
         # base class method enforce lfs support. Otherwise, some revisions in
         # this repo use lfs, and the remote repo needs the extension loaded.
         if not remote.local() and not remote.capable('lfs'):
             # This is a copy of the message in exchange.push() when requirements
             # are missing between local repos.
             m = _("required features are not supported in the destination: %s")
             raise error.Abort(m % 'lfs',
                               hint=_('enable the lfs extension on the server'))
     return orig(repo, remote, *args, **kwargs)

 def writenewbundle(orig, ui, repo, source, filename, bundletype, outgoing,
                    *args, **kwargs):
     """upload LFS blobs added by outgoing revisions on 'hg bundle'"""
     uploadblobsfromrevs(repo, outgoing.missing)
     return orig(ui, repo, source, filename, bundletype, outgoing, *args,
                 **kwargs)

 def extractpointers(repo, revs):
     """return a list of lfs pointers added by given revs"""
     repo.ui.debug('lfs: computing set of blobs to upload\n')
     pointers = {}
     for r in revs:
         ctx = repo[r]
         for p in pointersfromctx(ctx).values():
             pointers[p.oid()] = p
     return sorted(pointers.values())

 def pointerfromctx(ctx, f, removed=False):
     """return a pointer for the named file from the given changectx, or None if
     the file isn't LFS.

     Optionally, the pointer for a file deleted from the context can be returned.
     Since no such pointer is actually stored, and to distinguish from a non LFS
     file, this pointer is represented by an empty dict.
     """
     _ctx = ctx
     if f not in ctx:
         if not removed:
             return None
         if f in ctx.p1():
             _ctx = ctx.p1()
         elif f in ctx.p2():
             _ctx = ctx.p2()
         else:
             return None
     fctx = _ctx[f]
     if not _islfs(fctx.filelog(), fctx.filenode()):
         return None
     try:
         p = pointer.deserialize(fctx.rawdata())
         if ctx == _ctx:
             return p
         return {}
     except pointer.InvalidPointer as ex:
         raise error.Abort(_('lfs: corrupted pointer (%s@%s): %s\n')
                           % (f, short(_ctx.node()), ex))

 def pointersfromctx(ctx, removed=False):
     """return a dict {path: pointer} for given single changectx.

     If ``removed`` == True and the LFS file was removed from ``ctx``, the value
     stored for the path is an empty dict.
     """
     result = {}
     for f in ctx.files():
         p = pointerfromctx(ctx, f, removed=removed)
         if p is not None:
             result[f] = p
     return result

 def uploadblobs(repo, pointers):
     """upload given pointers from local blobstore"""
     if not pointers:
         return

     remoteblob = repo.svfs.lfsremoteblobstore
     remoteblob.writebatch(pointers, repo.svfs.lfslocalblobstore)

 def upgradefinishdatamigration(orig, ui, srcrepo, dstrepo, requirements):
     orig(ui, srcrepo, dstrepo, requirements)

     srclfsvfs = srcrepo.svfs.lfslocalblobstore.vfs
     dstlfsvfs = dstrepo.svfs.lfslocalblobstore.vfs

     for dirpath, dirs, files in srclfsvfs.walk():
         for oid in files:
             ui.write(_('copying lfs blob %s\n') % oid)
             lfutil.link(srclfsvfs.join(oid), dstlfsvfs.join(oid))

 def upgraderequirements(orig, repo):
     reqs = orig(repo)
     if 'lfs' in repo.requirements:
         reqs.add('lfs')
     return reqs
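Aside: writetostore() above replaces the filelog payload with a git-lfs style pointer text: a `version` key first and the remaining keys sorted (as the lfs_files template code in __init__.py notes), with hg filelog metadata riding along in `x-hg-*` keys plus an `x-is-binary` hint. The following is only a hedged approximation of that serialization; the real work is done by hg's pointer.gitlfspointer, whose exact output this sketch does not claim to reproduce.

# Approximate shape of the pointer text stored in place of file contents.
import hashlib

def makepointer(data, hgmeta=None):
    keys = {
        'version': 'https://git-lfs.github.com/spec/v1',
        'oid': 'sha256:%s' % hashlib.sha256(data).hexdigest(),  # sha256 only
        'size': '%d' % len(data),
    }
    for k, v in (hgmeta or {}).items():
        keys['x-hg-%s' % k] = v  # rename info etc. rides along
    # 'version' first, everything else sorted, per the git-lfs spec
    order = sorted(keys, key=lambda k: (k != 'version', k))
    return ''.join('%s %s\n' % (k, keys[k]) for k in order)

print(makepointer(b'large blob', {'copy': 'old/name', 'copyrev': '0' * 40}))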
--- a/mercurial/changegroup.py
+++ b/mercurial/changegroup.py
@@ -1,1014 +1,1022 @@
1 # changegroup.py - Mercurial changegroup manipulation functions
1 # changegroup.py - Mercurial changegroup manipulation functions
2 #
2 #
3 # Copyright 2006 Matt Mackall <mpm@selenic.com>
3 # Copyright 2006 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import os
10 import os
11 import struct
11 import struct
12 import tempfile
12 import tempfile
13 import weakref
13 import weakref
14
14
15 from .i18n import _
15 from .i18n import _
16 from .node import (
16 from .node import (
17 hex,
17 hex,
18 nullrev,
18 nullrev,
19 short,
19 short,
20 )
20 )
21
21
22 from . import (
22 from . import (
23 dagutil,
23 dagutil,
24 error,
24 error,
25 mdiff,
25 mdiff,
26 phases,
26 phases,
27 pycompat,
27 pycompat,
28 util,
28 util,
29 )
29 )
30
30
31 from .utils import (
31 from .utils import (
32 stringutil,
32 stringutil,
33 )
33 )
34
34
35 _CHANGEGROUPV1_DELTA_HEADER = "20s20s20s20s"
35 _CHANGEGROUPV1_DELTA_HEADER = "20s20s20s20s"
36 _CHANGEGROUPV2_DELTA_HEADER = "20s20s20s20s20s"
36 _CHANGEGROUPV2_DELTA_HEADER = "20s20s20s20s20s"
37 _CHANGEGROUPV3_DELTA_HEADER = ">20s20s20s20s20sH"
37 _CHANGEGROUPV3_DELTA_HEADER = ">20s20s20s20s20sH"
38
38
39 LFS_REQUIREMENT = 'lfs'
40
39 # When narrowing is finalized and no longer subject to format changes,
41 # When narrowing is finalized and no longer subject to format changes,
40 # we should move this to just "narrow" or similar.
42 # we should move this to just "narrow" or similar.
41 NARROW_REQUIREMENT = 'narrowhg-experimental'
43 NARROW_REQUIREMENT = 'narrowhg-experimental'
42
44
43 readexactly = util.readexactly
45 readexactly = util.readexactly
44
46
45 def getchunk(stream):
47 def getchunk(stream):
46 """return the next chunk from stream as a string"""
48 """return the next chunk from stream as a string"""
47 d = readexactly(stream, 4)
49 d = readexactly(stream, 4)
48 l = struct.unpack(">l", d)[0]
50 l = struct.unpack(">l", d)[0]
49 if l <= 4:
51 if l <= 4:
50 if l:
52 if l:
51 raise error.Abort(_("invalid chunk length %d") % l)
53 raise error.Abort(_("invalid chunk length %d") % l)
52 return ""
54 return ""
53 return readexactly(stream, l - 4)
55 return readexactly(stream, l - 4)
54
56
55 def chunkheader(length):
57 def chunkheader(length):
56 """return a changegroup chunk header (string)"""
58 """return a changegroup chunk header (string)"""
57 return struct.pack(">l", length + 4)
59 return struct.pack(">l", length + 4)
58
60
59 def closechunk():
61 def closechunk():
60 """return a changegroup chunk header (string) for a zero-length chunk"""
62 """return a changegroup chunk header (string) for a zero-length chunk"""
61 return struct.pack(">l", 0)
63 return struct.pack(">l", 0)
62
64
def writechunks(ui, chunks, filename, vfs=None):
    """Write chunks to a file and return its filename.

    The stream is assumed to be a bundle file.
    Existing files will not be overwritten.
    If no filename is specified, a temporary file is created.
    """
    fh = None
    cleanup = None
    try:
        if filename:
            if vfs:
                fh = vfs.open(filename, "wb")
            else:
                # Increase default buffer size because default is usually
                # small (4k is common on Linux).
                fh = open(filename, "wb", 131072)
        else:
            fd, filename = tempfile.mkstemp(prefix="hg-bundle-", suffix=".hg")
            fh = os.fdopen(fd, r"wb")
        cleanup = filename
        for c in chunks:
            fh.write(c)
        cleanup = None
        return filename
    finally:
        if fh is not None:
            fh.close()
        if cleanup is not None:
            if filename and vfs:
                vfs.unlink(cleanup)
            else:
                os.unlink(cleanup)

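A typical use of writechunks() is spooling an incoming stream to a temporary bundle file; a hedged sketch, assuming `ui` and an unpacker `cg` were obtained elsewhere:

    # filename=None makes writechunks() create a temp file and return its name.
    fname = writechunks(ui, cg.getchunks(), None)
    try:
        pass  # e.g. reopen fname via bundlerepo for inspection
    finally:
        os.unlink(fname)
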
class cg1unpacker(object):
    """Unpacker for cg1 changegroup streams.

    A changegroup unpacker handles the framing of the revision data in
    the wire format. Most consumers will want to use the apply()
    method to add the changes from the changegroup to a repository.

    If you're forwarding a changegroup unmodified to another consumer,
    use getchunks(), which returns an iterator of changegroup
    chunks. This is mostly useful for cases where you need to know the
    data stream has ended by observing the end of the changegroup.

    deltachunk() is useful only if you're applying delta data. Most
    consumers should prefer apply() instead.

    A few other public methods exist. Those are used only for
    bundlerepo and some debug commands - their use is discouraged.
    """
    deltaheader = _CHANGEGROUPV1_DELTA_HEADER
    deltaheadersize = struct.calcsize(deltaheader)
    version = '01'
    _grouplistcount = 1 # One list of files after the manifests

    def __init__(self, fh, alg, extras=None):
        if alg is None:
            alg = 'UN'
        if alg not in util.compengines.supportedbundletypes:
            raise error.Abort(_('unknown stream compression type: %s')
                              % alg)
        if alg == 'BZ':
            alg = '_truncatedBZ'

        compengine = util.compengines.forbundletype(alg)
        self._stream = compengine.decompressorreader(fh)
        self._type = alg
        self.extras = extras or {}
        self.callback = None

    # These methods (compressed, read, seek, tell) all appear to only
    # be used by bundlerepo, but it's a little hard to tell.
    def compressed(self):
        return self._type is not None and self._type != 'UN'
    def read(self, l):
        return self._stream.read(l)
    def seek(self, pos):
        return self._stream.seek(pos)
    def tell(self):
        return self._stream.tell()
    def close(self):
        return self._stream.close()

    def _chunklength(self):
        d = readexactly(self._stream, 4)
        l = struct.unpack(">l", d)[0]
        if l <= 4:
            if l:
                raise error.Abort(_("invalid chunk length %d") % l)
            return 0
        if self.callback:
            self.callback()
        return l - 4

    def changelogheader(self):
        """v10 does not have a changelog header chunk"""
        return {}

    def manifestheader(self):
        """v10 does not have a manifest header chunk"""
        return {}

    def filelogheader(self):
        """return the header of the filelogs chunk, v10 only has the filename"""
        l = self._chunklength()
        if not l:
            return {}
        fname = readexactly(self._stream, l)
        return {'filename': fname}

    def _deltaheader(self, headertuple, prevnode):
        node, p1, p2, cs = headertuple
        if prevnode is None:
            deltabase = p1
        else:
            deltabase = prevnode
        flags = 0
        return node, p1, p2, deltabase, cs, flags

    def deltachunk(self, prevnode):
        l = self._chunklength()
        if not l:
            return {}
        headerdata = readexactly(self._stream, self.deltaheadersize)
        header = struct.unpack(self.deltaheader, headerdata)
        delta = readexactly(self._stream, l - self.deltaheadersize)
        node, p1, p2, deltabase, cs, flags = self._deltaheader(header, prevnode)
        return (node, p1, p2, cs, deltabase, delta, flags)

    def getchunks(self):
        """returns all the chunks contained in the bundle

        Used when you need to forward the binary stream to a file or another
        network API. To do so, it parses the changegroup data; otherwise it
        would block on an sshrepo stream, because it doesn't know where the
        stream ends.
        """
        # For changegroup 1 and 2, we expect 3 parts: changelog, manifestlog,
        # and a list of filelogs. For changegroup 3, we expect 4 parts:
        # changelog, manifestlog, a list of tree manifestlogs, and a list of
        # filelogs.
        #
        # Changelog and manifestlog parts are terminated with empty chunks. The
        # tree and file parts are a list of entry sections. Each entry section
        # is a series of chunks terminating in an empty chunk. The list of these
        # entry sections is terminated in yet another empty chunk, so we know
        # we've reached the end of the tree/file list when we reach an empty
        # chunk that was preceded by no non-empty chunks.

        parts = 0
        while parts < 2 + self._grouplistcount:
            noentries = True
            while True:
                chunk = getchunk(self)
                if not chunk:
                    # The first two empty chunks represent the end of the
                    # changelog and the manifestlog portions. The remaining
                    # empty chunks represent either A) the end of individual
                    # tree or file entries in the file list, or B) the end of
                    # the entire list. It's the end of the entire list if there
                    # were no entries (i.e. noentries is True).
                    if parts < 2:
                        parts += 1
                    elif noentries:
                        parts += 1
                    break
                noentries = False
                yield chunkheader(len(chunk))
                pos = 0
                while pos < len(chunk):
                    next = pos + 2**20
                    yield chunk[pos:next]
                    pos = next
            yield closechunk()

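The empty-chunk bookkeeping above is the subtle part of the format. This standalone mirror of the loop (synthetic payloads, assuming a well-formed stream) shows how the counting rule finds the end of a cg3-style stream with two trailing entry lists:

    def countparts(payloads, grouplistcount=2):
        # Mirrors getchunks(): the first two empty chunks close the changelog
        # and manifest parts; afterwards, an empty chunk closes one entry
        # section, and an empty chunk preceded by no entries closes a list.
        parts = 0
        it = iter(payloads)
        while parts < 2 + grouplistcount:
            noentries = True
            for chunk in it:
                if not chunk:
                    if parts < 2 or noentries:
                        parts += 1
                    break
                noentries = False
        return parts

    # changelog, manifests, one tree entry, end of trees, one file entry,
    # end of files:
    stream = [b'cl', b'', b'mf', b'', b'dir/', b'n1', b'', b'',
              b'f', b'n2', b'', b'']
    assert countparts(stream) == 4
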
    def _unpackmanifests(self, repo, revmap, trp, prog, numchanges):
        # We know that we'll never have more manifests than we had
        # changesets.
        self.callback = prog(_('manifests'), numchanges)
        # no need to check for empty manifest group here:
        # if the result of the merge of 1 and 2 is the same in 3 and 4,
        # no new manifest will be created and the manifest group will
        # be empty during the pull
        self.manifestheader()
        deltas = self.deltaiter()
        repo.manifestlog._revlog.addgroup(deltas, revmap, trp)
        repo.ui.progress(_('manifests'), None)
        self.callback = None

    def apply(self, repo, tr, srctype, url, targetphase=phases.draft,
              expectedtotal=None):
        """Add the changegroup returned by source.read() to this repo.
        srctype is a string like 'push', 'pull', or 'unbundle'. url is
        the URL of the repo where this changegroup is coming from.

        Return an integer summarizing the change to this repo:
        - nothing changed or no source: 0
        - more heads than before: 1+added heads (2..n)
        - fewer heads than before: -1-removed heads (-2..-n)
        - number of heads stays the same: 1
        """
        repo = repo.unfiltered()
        def csmap(x):
            repo.ui.debug("add changeset %s\n" % short(x))
            return len(cl)

        def revmap(x):
            return cl.rev(x)

        changesets = files = revisions = 0

        try:
            # The transaction may already carry source information. In this
            # case we use the top level data. We overwrite the argument
            # because we need to use the top level value (if they exist)
            # in this function.
            srctype = tr.hookargs.setdefault('source', srctype)
            url = tr.hookargs.setdefault('url', url)
            repo.hook('prechangegroup',
                      throw=True, **pycompat.strkwargs(tr.hookargs))

            # write changelog data to temp files so concurrent readers
            # will not see an inconsistent view
            cl = repo.changelog
            cl.delayupdate(tr)
            oldheads = set(cl.heads())

            trp = weakref.proxy(tr)
            # pull off the changeset group
            repo.ui.status(_("adding changesets\n"))
            clstart = len(cl)
            class prog(object):
                def __init__(self, step, total):
                    self._step = step
                    self._total = total
                    self._count = 1
                def __call__(self):
                    repo.ui.progress(self._step, self._count, unit=_('chunks'),
                                     total=self._total)
                    self._count += 1
            self.callback = prog(_('changesets'), expectedtotal)

            efiles = set()
            def onchangelog(cl, node):
                efiles.update(cl.readfiles(node))

            self.changelogheader()
            deltas = self.deltaiter()
            cgnodes = cl.addgroup(deltas, csmap, trp, addrevisioncb=onchangelog)
            efiles = len(efiles)

            if not cgnodes:
                repo.ui.develwarn('applied empty changegroup',
                                  config='warn-empty-changegroup')
            clend = len(cl)
            changesets = clend - clstart
            repo.ui.progress(_('changesets'), None)
            self.callback = None

            # pull off the manifest group
            repo.ui.status(_("adding manifests\n"))
            self._unpackmanifests(repo, revmap, trp, prog, changesets)

            needfiles = {}
            if repo.ui.configbool('server', 'validate'):
                cl = repo.changelog
                ml = repo.manifestlog
                # validate incoming csets have their manifests
                for cset in xrange(clstart, clend):
                    mfnode = cl.changelogrevision(cset).manifest
                    mfest = ml[mfnode].readdelta()
                    # store file cgnodes we must see
                    for f, n in mfest.iteritems():
                        needfiles.setdefault(f, set()).add(n)

            # process the files
            repo.ui.status(_("adding file changes\n"))
            newrevs, newfiles = _addchangegroupfiles(
                repo, self, revmap, trp, efiles, needfiles)
            revisions += newrevs
            files += newfiles

            deltaheads = 0
            if oldheads:
                heads = cl.heads()
                deltaheads = len(heads) - len(oldheads)
                for h in heads:
                    if h not in oldheads and repo[h].closesbranch():
                        deltaheads -= 1
            htext = ""
            if deltaheads:
                htext = _(" (%+d heads)") % deltaheads

            repo.ui.status(_("added %d changesets"
                             " with %d changes to %d files%s\n")
                           % (changesets, revisions, files, htext))
            repo.invalidatevolatilesets()

            if changesets > 0:
                if 'node' not in tr.hookargs:
                    tr.hookargs['node'] = hex(cl.node(clstart))
                    tr.hookargs['node_last'] = hex(cl.node(clend - 1))
                    hookargs = dict(tr.hookargs)
                else:
                    hookargs = dict(tr.hookargs)
                    hookargs['node'] = hex(cl.node(clstart))
                    hookargs['node_last'] = hex(cl.node(clend - 1))
                repo.hook('pretxnchangegroup',
                          throw=True, **pycompat.strkwargs(hookargs))

            added = [cl.node(r) for r in xrange(clstart, clend)]
            phaseall = None
            if srctype in ('push', 'serve'):
                # Old servers cannot push the boundary themselves.
                # New servers won't push the boundary if changeset already
                # exists locally as secret
                #
                # We should not use added here but the list of all changes in
                # the bundle
                if repo.publishing():
                    targetphase = phaseall = phases.public
                else:
                    # closer target phase computation

                    # Those changesets have been pushed from the
                    # outside, their phases are going to be pushed
                    # alongside. Therefore `targetphase` is
                    # ignored.
                    targetphase = phaseall = phases.draft
            if added:
                phases.registernew(repo, tr, targetphase, added)
            if phaseall is not None:
                phases.advanceboundary(repo, tr, phaseall, cgnodes)

            if changesets > 0:

                def runhooks():
                    # These hooks run when the lock releases, not when the
                    # transaction closes. So it's possible for the changelog
                    # to have changed since we last saw it.
                    if clstart >= len(repo):
                        return

                    repo.hook("changegroup", **pycompat.strkwargs(hookargs))

                    for n in added:
                        args = hookargs.copy()
                        args['node'] = hex(n)
                        del args['node_last']
                        repo.hook("incoming", **pycompat.strkwargs(args))

                    newheads = [h for h in repo.heads()
                                if h not in oldheads]
                    repo.ui.log("incoming",
                                "%d incoming changes - new heads: %s\n",
                                len(added),
                                ', '.join([hex(c[:6]) for c in newheads]))

                tr.addpostclose('changegroup-runhooks-%020i' % clstart,
                                lambda tr: repo._afterlock(runhooks))
        finally:
            repo.ui.flush()
        # never return 0 here:
        if deltaheads < 0:
            ret = deltaheads - 1
        else:
            ret = deltaheads + 1
        return ret

    def deltaiter(self):
        """
        returns an iterator of the deltas in this changegroup

        Useful for passing to the underlying storage system to be stored.
        """
        chain = None
        for chunkdata in iter(lambda: self.deltachunk(chain), {}):
            # Chunkdata: (node, p1, p2, cs, deltabase, delta, flags)
            yield chunkdata
            chain = chunkdata[0]

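End to end, a consumer drives the unpacker roughly as follows (a sketch only: obtaining `repo` and the stream `fh` is elided, and the compression name normally comes from the bundle header rather than being hard-coded):

    cg = cg1unpacker(fh, 'UN')  # 'UN' means uncompressed
    with repo.transaction('unbundle') as tr:
        ret = cg.apply(repo, tr, 'unbundle', 'bundle:stdin')
    # Per the apply() docstring: ret == 1 leaves the head count unchanged,
    # ret > 1 means heads were added, ret < 0 means heads were removed.
    if ret > 1:
        repo.ui.status('added %d new heads\n' % (ret - 1))
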
class cg2unpacker(cg1unpacker):
    """Unpacker for cg2 streams.

    cg2 streams add support for generaldelta, so the delta header
    format is slightly different. All other features about the data
    remain the same.
    """
    deltaheader = _CHANGEGROUPV2_DELTA_HEADER
    deltaheadersize = struct.calcsize(deltaheader)
    version = '02'

    def _deltaheader(self, headertuple, prevnode):
        node, p1, p2, deltabase, cs = headertuple
        flags = 0
        return node, p1, p2, deltabase, cs, flags

class cg3unpacker(cg2unpacker):
    """Unpacker for cg3 streams.

    cg3 streams add support for exchanging treemanifests and revlog
    flags. They add the revlog flags to the delta header and an empty chunk
    separating manifests and files.
    """
    deltaheader = _CHANGEGROUPV3_DELTA_HEADER
    deltaheadersize = struct.calcsize(deltaheader)
    version = '03'
    _grouplistcount = 2 # One list of manifests and one list of files

    def _deltaheader(self, headertuple, prevnode):
        node, p1, p2, deltabase, cs, flags = headertuple
        return node, p1, p2, deltabase, cs, flags

    def _unpackmanifests(self, repo, revmap, trp, prog, numchanges):
        super(cg3unpacker, self)._unpackmanifests(repo, revmap, trp, prog,
                                                  numchanges)
        for chunkdata in iter(self.filelogheader, {}):
            # If we get here, there are directory manifests in the changegroup
            d = chunkdata["filename"]
            repo.ui.debug("adding %s revisions\n" % d)
            dirlog = repo.manifestlog._revlog.dirlog(d)
            deltas = self.deltaiter()
            if not dirlog.addgroup(deltas, revmap, trp):
                raise error.Abort(_("received dir revlog group is empty"))

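The three stream versions differ on the wire only in the per-delta header, which the struct strings at the top of the file pin down exactly; a standalone check of the sizes and fields:

    import struct

    assert struct.calcsize("20s20s20s20s") == 80        # cg1: node, p1, p2, cs
    assert struct.calcsize("20s20s20s20s20s") == 100    # cg2: + deltabase
    assert struct.calcsize(">20s20s20s20s20sH") == 102  # cg3: + 16-bit flags

    # A cg3 header unpacks into the six fields _deltaheader() hands back:
    hdr = struct.pack(">20s20s20s20s20sH", b'n' * 20, b'1' * 20, b'2' * 20,
                      b'b' * 20, b'c' * 20, 0)  # flags, e.g. REVIDX_EXTSTORED
    node, p1, p2, deltabase, cs, flags = struct.unpack(">20s20s20s20s20sH", hdr)
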
class headerlessfixup(object):
    def __init__(self, fh, h):
        self._h = h
        self._fh = fh
    def read(self, n):
        if self._h:
            d, self._h = self._h[:n], self._h[n:]
            if len(d) < n:
                d += readexactly(self._fh, n - len(d))
            return d
        return readexactly(self._fh, n)

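headerlessfixup simply replays bytes a caller already consumed while sniffing the bundle header, then reads through to the real stream; for instance (sketch):

    import io

    fh = headerlessfixup(io.BytesIO(b'<rest of stream>'), b'HG10UN')
    assert fh.read(6) == b'HG10UN'  # the replayed header
    assert fh.read(5) == b'<rest'   # then the underlying stream
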
class cg1packer(object):
    deltaheader = _CHANGEGROUPV1_DELTA_HEADER
    version = '01'
    def __init__(self, repo, bundlecaps=None):
        """Given a source repo, construct a bundler.

        bundlecaps is optional and can be used to specify the set of
        capabilities which can be used to build the bundle. While bundlecaps is
        unused in core Mercurial, extensions rely on this feature to communicate
        capabilities to customize the changegroup packer.
        """
        # Set of capabilities we can use to build the bundle.
        if bundlecaps is None:
            bundlecaps = set()
        self._bundlecaps = bundlecaps
        # experimental config: bundle.reorder
        reorder = repo.ui.config('bundle', 'reorder')
        if reorder == 'auto':
            reorder = None
        else:
            reorder = stringutil.parsebool(reorder)
        self._repo = repo
        self._reorder = reorder
        self._progress = repo.ui.progress
        if self._repo.ui.verbose and not self._repo.ui.debugflag:
            self._verbosenote = self._repo.ui.note
        else:
            self._verbosenote = lambda s: None

    def close(self):
        return closechunk()

    def fileheader(self, fname):
        return chunkheader(len(fname)) + fname

    # Extracted both for clarity and for overriding in extensions.
    def _sortgroup(self, revlog, nodelist, lookup):
        """Sort nodes for change group and turn them into revnums."""
        # for generaldelta revlogs, we linearize the revs; this will both be
        # much quicker and generate a much smaller bundle
        if (revlog._generaldelta and self._reorder is None) or self._reorder:
            dag = dagutil.revlogdag(revlog)
            return dag.linearize(set(revlog.rev(n) for n in nodelist))
        else:
            return sorted([revlog.rev(n) for n in nodelist])

    def group(self, nodelist, revlog, lookup, units=None):
        """Calculate a delta group, yielding a sequence of changegroup chunks
        (strings).

        Given a list of changeset revs, return a set of deltas and
        metadata corresponding to nodes. The first delta is
        first parent(nodelist[0]) -> nodelist[0], the receiver is
        guaranteed to have this parent as it has all history before
        these changesets. In the case firstparent is nullrev the
        changegroup starts with a full revision.

        If units is not None, progress detail will be generated, units specifies
        the type of revlog that is touched (changelog, manifest, etc.).
        """
        # if we don't have any revisions touched by these changesets, bail
        if len(nodelist) == 0:
            yield self.close()
            return

        revs = self._sortgroup(revlog, nodelist, lookup)

        # add the parent of the first rev
        p = revlog.parentrevs(revs[0])[0]
        revs.insert(0, p)

        # build deltas
        total = len(revs) - 1
        msgbundling = _('bundling')
        for r in xrange(len(revs) - 1):
            if units is not None:
                self._progress(msgbundling, r + 1, unit=units, total=total)
            prev, curr = revs[r], revs[r + 1]
            linknode = lookup(revlog.node(curr))
            for c in self.revchunk(revlog, curr, prev, linknode):
                yield c

        if units is not None:
            self._progress(msgbundling, None)
        yield self.close()

    # filter any nodes that claim to be part of the known set
    def prune(self, revlog, missing, commonrevs):
        rr, rl = revlog.rev, revlog.linkrev
        return [n for n in missing if rl(rr(n)) not in commonrevs]

    def _packmanifests(self, dir, mfnodes, lookuplinknode):
        """Pack flat manifests into a changegroup stream."""
        assert not dir
        for chunk in self.group(mfnodes, self._repo.manifestlog._revlog,
                                lookuplinknode, units=_('manifests')):
            yield chunk

    def _manifestsdone(self):
        return ''

    def generate(self, commonrevs, clnodes, fastpathlinkrev, source):
        '''yield a sequence of changegroup chunks (strings)'''
        repo = self._repo
        cl = repo.changelog

        clrevorder = {}
        mfs = {} # needed manifests
        fnodes = {} # needed file nodes
        changedfiles = set()

        # Callback for the changelog, used to collect changed files and manifest
        # nodes.
        # Returns the linkrev node (identity in the changelog case).
        def lookupcl(x):
            c = cl.read(x)
            clrevorder[x] = len(clrevorder)
            n = c[0]
            # record the first changeset introducing this manifest version
            mfs.setdefault(n, x)
            # Record a complete list of potentially-changed files in
            # this manifest.
            changedfiles.update(c[3])
            return x

        self._verbosenote(_('uncompressed size of bundle content:\n'))
        size = 0
        for chunk in self.group(clnodes, cl, lookupcl, units=_('changesets')):
            size += len(chunk)
            yield chunk
        self._verbosenote(_('%8.i (changelog)\n') % size)

        # We need to make sure that the linkrev in the changegroup refers to
        # the first changeset that introduced the manifest or file revision.
        # The fastpath is usually safer than the slowpath, because the filelogs
        # are walked in revlog order.
        #
        # When taking the slowpath with reorder=None and the manifest revlog
        # uses generaldelta, the manifest may be walked in the "wrong" order.
        # Without 'clrevorder', we would get an incorrect linkrev (see fix in
        # cc0ff93d0c0c).
        #
        # When taking the fastpath, we are only vulnerable to reordering
        # of the changelog itself. The changelog never uses generaldelta, so
        # it is only reordered when reorder=True. To handle this case, we
        # simply take the slowpath, which already has the 'clrevorder' logic.
        # This was also fixed in cc0ff93d0c0c.
        fastpathlinkrev = fastpathlinkrev and not self._reorder
        # Treemanifests don't work correctly with fastpathlinkrev
        # either, because we don't discover which directory nodes to
        # send along with files. This could probably be fixed.
        fastpathlinkrev = fastpathlinkrev and (
            'treemanifest' not in repo.requirements)

        for chunk in self.generatemanifests(commonrevs, clrevorder,
                fastpathlinkrev, mfs, fnodes, source):
            yield chunk
        mfs.clear()
        clrevs = set(cl.rev(x) for x in clnodes)

        if not fastpathlinkrev:
            def linknodes(unused, fname):
                return fnodes.get(fname, {})
        else:
            cln = cl.node
            def linknodes(filerevlog, fname):
                llr = filerevlog.linkrev
                fln = filerevlog.node
                revs = ((r, llr(r)) for r in filerevlog)
                return dict((fln(r), cln(lr)) for r, lr in revs if lr in clrevs)

        for chunk in self.generatefiles(changedfiles, linknodes, commonrevs,
                                        source):
            yield chunk

        yield self.close()

        if clnodes:
            repo.hook('outgoing', node=hex(clnodes[0]), source=source)

    def generatemanifests(self, commonrevs, clrevorder, fastpathlinkrev, mfs,
                          fnodes, source):
        """Returns an iterator of changegroup chunks containing manifests.

        `source` is unused here, but is used by extensions like remotefilelog to
        change what is sent based on pulls vs pushes, etc.
        """
        repo = self._repo
        mfl = repo.manifestlog
        dirlog = mfl._revlog.dirlog
        tmfnodes = {'': mfs}

        # Callback for the manifest, used to collect linkrevs for filelog
        # revisions.
        # Returns the linkrev node (collected in lookupcl).
        def makelookupmflinknode(dir, nodes):
            if fastpathlinkrev:
                assert not dir
                return mfs.__getitem__

            def lookupmflinknode(x):
                """Callback for looking up the linknode for manifests.

                Returns the linkrev node for the specified manifest.

                SIDE EFFECT:

                  1) fclnodes gets populated with the list of relevant
                     file nodes if we're not using fastpathlinkrev
                  2) When treemanifests are in use, collects treemanifest nodes
                     to send

                Note that this means manifests must be completely sent to
                the client before you can trust the list of files and
                treemanifests to send.
                """
                clnode = nodes[x]
                mdata = mfl.get(dir, x).readfast(shallow=True)
                for p, n, fl in mdata.iterentries():
                    if fl == 't': # subdirectory manifest
                        subdir = dir + p + '/'
                        tmfclnodes = tmfnodes.setdefault(subdir, {})
                        tmfclnode = tmfclnodes.setdefault(n, clnode)
                        if clrevorder[clnode] < clrevorder[tmfclnode]:
                            tmfclnodes[n] = clnode
                    else:
                        f = dir + p
                        fclnodes = fnodes.setdefault(f, {})
                        fclnode = fclnodes.setdefault(n, clnode)
                        if clrevorder[clnode] < clrevorder[fclnode]:
                            fclnodes[n] = clnode
                return clnode
            return lookupmflinknode

        size = 0
        while tmfnodes:
            dir, nodes = tmfnodes.popitem()
            prunednodes = self.prune(dirlog(dir), nodes, commonrevs)
            if not dir or prunednodes:
                for x in self._packmanifests(dir, prunednodes,
                                             makelookupmflinknode(dir, nodes)):
                    size += len(x)
                    yield x
        self._verbosenote(_('%8.i (manifests)\n') % size)
        yield self._manifestsdone()

    # The 'source' parameter is useful for extensions
    def generatefiles(self, changedfiles, linknodes, commonrevs, source):
        repo = self._repo
        progress = self._progress
        msgbundling = _('bundling')

        total = len(changedfiles)
        # for progress output
        msgfiles = _('files')
        for i, fname in enumerate(sorted(changedfiles)):
            filerevlog = repo.file(fname)
            if not filerevlog:
                raise error.Abort(_("empty or missing revlog for %s") % fname)

            linkrevnodes = linknodes(filerevlog, fname)
            # Lookup for filenodes, we collected the linkrev nodes above in the
            # fastpath case and with lookupmf in the slowpath case.
            def lookupfilelog(x):
                return linkrevnodes[x]

            filenodes = self.prune(filerevlog, linkrevnodes, commonrevs)
            if filenodes:
                progress(msgbundling, i + 1, item=fname, unit=msgfiles,
                         total=total)
                h = self.fileheader(fname)
                size = len(h)
                yield h
                for chunk in self.group(filenodes, filerevlog, lookupfilelog):
                    size += len(chunk)
                    yield chunk
                self._verbosenote(_('%8.i %s\n') % (size, fname))
        progress(msgbundling, None)

    def deltaparent(self, revlog, rev, p1, p2, prev):
        if not revlog.candelta(prev, rev):
            raise error.ProgrammingError('cg1 should not be used in this case')
        return prev

    def revchunk(self, revlog, rev, prev, linknode):
        node = revlog.node(rev)
        p1, p2 = revlog.parentrevs(rev)
        base = self.deltaparent(revlog, rev, p1, p2, prev)

        prefix = ''
        if revlog.iscensored(base) or revlog.iscensored(rev):
            try:
                delta = revlog.revision(node, raw=True)
            except error.CensoredNodeError as e:
                delta = e.tombstone
            if base == nullrev:
                prefix = mdiff.trivialdiffheader(len(delta))
            else:
                baselen = revlog.rawsize(base)
                prefix = mdiff.replacediffheader(baselen, len(delta))
        elif base == nullrev:
            delta = revlog.revision(node, raw=True)
            prefix = mdiff.trivialdiffheader(len(delta))
        else:
            delta = revlog.revdiff(base, rev)
        p1n, p2n = revlog.parents(node)
        basenode = revlog.node(base)
        flags = revlog.flags(rev)
        meta = self.builddeltaheader(node, p1n, p2n, basenode, linknode, flags)
        meta += prefix
        l = len(meta) + len(delta)
        yield chunkheader(l)
        yield meta
        yield delta
    def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
        # do nothing with basenode, it is implicitly the previous one in HG10
        # do nothing with flags, it is implicitly 0 for cg1 and cg2
        return struct.pack(self.deltaheader, node, p1n, p2n, linknode)

820 class cg2packer(cg1packer):
822 class cg2packer(cg1packer):
821 version = '02'
823 version = '02'
822 deltaheader = _CHANGEGROUPV2_DELTA_HEADER
824 deltaheader = _CHANGEGROUPV2_DELTA_HEADER
823
825
824 def __init__(self, repo, bundlecaps=None):
826 def __init__(self, repo, bundlecaps=None):
825 super(cg2packer, self).__init__(repo, bundlecaps)
827 super(cg2packer, self).__init__(repo, bundlecaps)
826 if self._reorder is None:
828 if self._reorder is None:
827 # Since generaldelta is directly supported by cg2, reordering
829 # Since generaldelta is directly supported by cg2, reordering
828 # generally doesn't help, so we disable it by default (treating
830 # generally doesn't help, so we disable it by default (treating
829 # bundle.reorder=auto just like bundle.reorder=False).
831 # bundle.reorder=auto just like bundle.reorder=False).
830 self._reorder = False
832 self._reorder = False
831
833
832 def deltaparent(self, revlog, rev, p1, p2, prev):
834 def deltaparent(self, revlog, rev, p1, p2, prev):
833 dp = revlog.deltaparent(rev)
835 dp = revlog.deltaparent(rev)
834 if dp == nullrev and revlog.storedeltachains:
836 if dp == nullrev and revlog.storedeltachains:
835 # Avoid sending full revisions when delta parent is null. Pick prev
837 # Avoid sending full revisions when delta parent is null. Pick prev
836 # in that case. It's tempting to pick p1 in this case, as p1 will
838 # in that case. It's tempting to pick p1 in this case, as p1 will
837 # be smaller in the common case. However, computing a delta against
839 # be smaller in the common case. However, computing a delta against
838 # p1 may require resolving the raw text of p1, which could be
840 # p1 may require resolving the raw text of p1, which could be
839 # expensive. The revlog caches should have prev cached, meaning
841 # expensive. The revlog caches should have prev cached, meaning
840 # less CPU for changegroup generation. There is likely room to add
842 # less CPU for changegroup generation. There is likely room to add
841 # a flag and/or config option to control this behavior.
843 # a flag and/or config option to control this behavior.
842 base = prev
844 base = prev
843 elif dp == nullrev:
845 elif dp == nullrev:
844 # revlog is configured to use full snapshot for a reason,
846 # revlog is configured to use full snapshot for a reason,
845 # stick to full snapshot.
847 # stick to full snapshot.
846 base = nullrev
848 base = nullrev
847 elif dp not in (p1, p2, prev):
849 elif dp not in (p1, p2, prev):
848 # Pick prev when we can't be sure remote has the base revision.
850 # Pick prev when we can't be sure remote has the base revision.
849 return prev
851 return prev
850 else:
852 else:
851 base = dp
853 base = dp
852 if base != nullrev and not revlog.candelta(base, rev):
854 if base != nullrev and not revlog.candelta(base, rev):
853 base = nullrev
855 base = nullrev
854 return base
856 return base
855
857
856 def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
858 def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
857 # Do nothing with flags, it is implicitly 0 in cg1 and cg2
859 # Do nothing with flags, it is implicitly 0 in cg1 and cg2
858 return struct.pack(self.deltaheader, node, p1n, p2n, basenode, linknode)
860 return struct.pack(self.deltaheader, node, p1n, p2n, basenode, linknode)
859
861
860 class cg3packer(cg2packer):
862 class cg3packer(cg2packer):
861 version = '03'
863 version = '03'
862 deltaheader = _CHANGEGROUPV3_DELTA_HEADER
864 deltaheader = _CHANGEGROUPV3_DELTA_HEADER
863
865
864 def _packmanifests(self, dir, mfnodes, lookuplinknode):
866 def _packmanifests(self, dir, mfnodes, lookuplinknode):
865 if dir:
867 if dir:
866 yield self.fileheader(dir)
868 yield self.fileheader(dir)
867
869
868 dirlog = self._repo.manifestlog._revlog.dirlog(dir)
870 dirlog = self._repo.manifestlog._revlog.dirlog(dir)
869 for chunk in self.group(mfnodes, dirlog, lookuplinknode,
871 for chunk in self.group(mfnodes, dirlog, lookuplinknode,
870 units=_('manifests')):
872 units=_('manifests')):
871 yield chunk
873 yield chunk
872
874
873 def _manifestsdone(self):
875 def _manifestsdone(self):
874 return self.close()
876 return self.close()
875
877
876 def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
878 def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
877 return struct.pack(
879 return struct.pack(
878 self.deltaheader, node, p1n, p2n, basenode, linknode, flags)
880 self.deltaheader, node, p1n, p2n, basenode, linknode, flags)
879
881
880 _packermap = {'01': (cg1packer, cg1unpacker),
882 _packermap = {'01': (cg1packer, cg1unpacker),
881 # cg2 adds support for exchanging generaldelta
883 # cg2 adds support for exchanging generaldelta
882 '02': (cg2packer, cg2unpacker),
884 '02': (cg2packer, cg2unpacker),
883 # cg3 adds support for exchanging revlog flags and treemanifests
885 # cg3 adds support for exchanging revlog flags and treemanifests
884 '03': (cg3packer, cg3unpacker),
886 '03': (cg3packer, cg3unpacker),
885 }
887 }
886
888
def allsupportedversions(repo):
    versions = set(_packermap.keys())
    if not (repo.ui.configbool('experimental', 'changegroup3') or
            repo.ui.configbool('experimental', 'treemanifest') or
            'treemanifest' in repo.requirements):
        versions.discard('03')
    return versions

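# Illustrative sketch (hypothetical stubs, not Mercurial APIs): how the
# experimental config knobs gate changegroup '03' in allsupportedversions().
class _sketchui(object):
    def __init__(self, enabled=()):
        self._enabled = set(enabled)

    def configbool(self, section, name):
        return ('%s.%s' % (section, name)) in self._enabled

class _sketchrepo(object):
    def __init__(self, enabled=(), requirements=()):
        self.ui = _sketchui(enabled)
        self.requirements = set(requirements)

# '03' stays hidden unless one of the experimental switches is flipped:
assert allsupportedversions(_sketchrepo()) == {'01', '02'}
assert allsupportedversions(
    _sketchrepo(enabled=['experimental.changegroup3'])) == {'01', '02', '03'}
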
# Changegroup versions that can be applied to the repo
def supportedincomingversions(repo):
    return allsupportedversions(repo)

# Changegroup versions that can be created from the repo
def supportedoutgoingversions(repo):
    versions = allsupportedversions(repo)
    if 'treemanifest' in repo.requirements:
        # Versions 01 and 02 support only flat manifests and it's just too
        # expensive to convert between the flat manifest and tree manifest on
        # the fly. Since tree manifests are hashed differently, all of history
        # would have to be converted. Instead, we simply don't even pretend to
        # support versions 01 and 02.
        versions.discard('01')
        versions.discard('02')
    if NARROW_REQUIREMENT in repo.requirements:
        # Versions 01 and 02 don't support revlog flags, and we need to
        # support that for stripping and unbundling to work.
        versions.discard('01')
        versions.discard('02')
    if LFS_REQUIREMENT in repo.requirements:
        # Versions 01 and 02 don't support revlog flags, and we need to
        # mark LFS entries with REVIDX_EXTSTORED.
        versions.discard('01')
        versions.discard('02')

    return versions

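# Illustrative sketch using the stub repo above. LFS_REQUIREMENT is assumed
# here to be the literal requirement string 'lfs'; with it present, only
# cg3 can be produced, while every version can still be applied.
_lfsrepo = _sketchrepo(enabled=['experimental.changegroup3'],
                       requirements=['lfs'])
assert supportedoutgoingversions(_lfsrepo) == {'03'}
assert supportedincomingversions(_lfsrepo) == {'01', '02', '03'}
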
def localversion(repo):
    # Finds the best version to use for bundles that are meant to be used
    # locally, such as those from strip and shelve, and temporary bundles.
    return max(supportedoutgoingversions(repo))

def safeversion(repo):
    # Finds the smallest version that it's safe to assume clients of the repo
    # will support. For example, all hg versions that support generaldelta also
    # support changegroup 02.
    versions = supportedoutgoingversions(repo)
    if 'generaldelta' in repo.requirements:
        versions.discard('01')
    assert versions
    return min(versions)

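# Illustrative sketch, continuing the stub above: the two-digit version
# strings sort lexicographically, so max()/min() pick newest and oldest.
_plain = _sketchrepo()
assert localversion(_plain) == '02'   # newest format for strip/shelve bundles
assert safeversion(_plain) == '01'    # oldest format any client can read
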
def getbundler(version, repo, bundlecaps=None):
    assert version in supportedoutgoingversions(repo)
    return _packermap[version][0](repo, bundlecaps)

def getunbundler(version, fh, alg, extras=None):
    return _packermap[version][1](fh, alg, extras=extras)

def _changegroupinfo(repo, nodes, source):
    if repo.ui.verbose or source == 'bundle':
        repo.ui.status(_("%d changesets found\n") % len(nodes))
    if repo.ui.debugflag:
        repo.ui.debug("list of changesets:\n")
        for node in nodes:
            repo.ui.debug("%s\n" % hex(node))

def makechangegroup(repo, outgoing, version, source, fastpath=False,
                    bundlecaps=None):
    cgstream = makestream(repo, outgoing, version, source,
                          fastpath=fastpath, bundlecaps=bundlecaps)
    return getunbundler(version, util.chunkbuffer(cgstream), None,
                        {'clcount': len(outgoing.missing)})

def makestream(repo, outgoing, version, source, fastpath=False,
               bundlecaps=None):
    bundler = getbundler(version, repo, bundlecaps=bundlecaps)

    repo = repo.unfiltered()
    commonrevs = outgoing.common
    csets = outgoing.missing
    heads = outgoing.missingheads
    # We go through the fast path if we get told to, or if all (unfiltered)
    # heads have been requested (since we then know that all linkrevs will
    # be pulled by the client).
    heads.sort()
    fastpathlinkrev = fastpath or (
        repo.filtername is None and heads == sorted(repo.heads()))

    repo.hook('preoutgoing', throw=True, source=source)
    _changegroupinfo(repo, csets, source)
    return bundler.generate(commonrevs, csets, fastpathlinkrev, source)

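# Illustrative sketch of the fastpath test above: linkrevs can be trusted
# as-is when the client asked for every head of the unfiltered repo.
_asked = ['\x02' * 20, '\x01' * 20]      # heads the client requested
_repoheads = ['\x01' * 20, '\x02' * 20]  # every head the repo has
_asked.sort()
assert _asked == sorted(_repoheads)      # -> fastpathlinkrev would be True
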
def _addchangegroupfiles(repo, source, revmap, trp, expectedfiles, needfiles):
    revisions = 0
    files = 0
    for chunkdata in iter(source.filelogheader, {}):
        files += 1
        f = chunkdata["filename"]
        repo.ui.debug("adding %s revisions\n" % f)
        repo.ui.progress(_('files'), files, unit=_('files'),
                         total=expectedfiles)
        fl = repo.file(f)
        o = len(fl)
        try:
            deltas = source.deltaiter()
            if not fl.addgroup(deltas, revmap, trp):
                raise error.Abort(_("received file revlog group is empty"))
        except error.CensoredBaseError as e:
            raise error.Abort(_("received delta base is censored: %s") % e)
        revisions += len(fl) - o
        if f in needfiles:
            needs = needfiles[f]
            for new in xrange(o, len(fl)):
                n = fl.node(new)
                if n in needs:
                    needs.remove(n)
                else:
                    raise error.Abort(
                        _("received spurious file revlog entry"))
            if not needs:
                del needfiles[f]
    repo.ui.progress(_('files'), None)

    for f, needs in needfiles.iteritems():
        fl = repo.file(f)
        for n in needs:
            try:
                fl.rev(n)
            except error.LookupError:
                raise error.Abort(
                    _('missing file data for %s:%s - run hg verify') %
                    (f, hex(n)))

    return revisions, files
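# Illustrative sketch of the iter(callable, sentinel) idiom used above:
# filelogheader() is called repeatedly until it returns the empty-dict
# sentinel marking the end of the per-file chunks.
_pending = [{'filename': 'a.txt'}, {'filename': 'b.txt'}, {}]

def _nextheader():
    return _pending.pop(0)

assert [c['filename'] for c in iter(_nextheader, {})] == ['a.txt', 'b.txt']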