lfs: move the 'supportedoutgoingversions' handling to changegroup.py...
Matt Harbison
r37150:a54113fc default
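This commit removes the lfs-specific wrap of changegroup.supportedoutgoingversions (see the hunk deleted from wrapper.py below) and folds the check into changegroup.py itself, which now defines LFS_REQUIREMENT. The changegroup.py hunk that performs the check is outside this excerpt; what follows is only a minimal self-contained sketch mirroring the deleted wrapper, not the upstream code:

LFS_REQUIREMENT = 'lfs'

def supportedoutgoingversions(requirements, baseversions=('01', '02', '03')):
    # Sketch only: an lfs repo must send changegroup v3, the only
    # version that can carry revlog flags such as REVIDX_EXTSTORED.
    versions = set(baseversions)
    if LFS_REQUIREMENT in requirements:
        versions.discard('01')
        versions.discard('02')
        versions.add('03')
    return versions

assert supportedoutgoingversions({'revlogv1'}) == {'01', '02', '03'}
assert supportedoutgoingversions({'revlogv1', 'lfs'}) == {'03'}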
@@ -1,390 +1,387 b''
1 1 # lfs - hash-preserving large file support using Git-LFS protocol
2 2 #
3 3 # Copyright 2017 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 """lfs - large file support (EXPERIMENTAL)
9 9
10 10 This extension allows large files to be tracked outside of the normal
11 11 repository storage and stored on a centralized server, similar to the
12 12 ``largefiles`` extension. The ``git-lfs`` protocol is used when
13 13 communicating with the server, so existing git infrastructure can be
14 14 harnessed. Even though the files are stored outside of the repository,
15 15 they are still integrity checked in the same manner as normal files.
16 16
17 17 The files stored outside of the repository are downloaded on demand,
18 18 which reduces the time to clone, and possibly the local disk usage.
19 19 This changes fundamental workflows in a DVCS, so careful thought
20 20 should be given before deploying it. :hg:`convert` can be used to
21 21 convert LFS repositories to normal repositories that no longer
22 22 require this extension, and do so without changing the commit hashes.
23 23 This allows the extension to be disabled if the centralized workflow
24 24 becomes burdensome. However, the pre- and post-convert clones will
25 25 not be able to communicate with each other unless the extension is
26 26 enabled on both.
27 27
28 28 To start a new repository, or to add LFS files to an existing one, just
29 29 create an ``.hglfs`` file as described below in the root directory of
30 30 the repository. Typically, this file should be put under version
31 31 control, so that the settings will propagate to other repositories with
32 32 push and pull. During any commit, Mercurial will consult this file to
33 33 determine if an added or modified file should be stored externally. The
34 34 type of storage depends on the characteristics of the file at each
35 35 commit. A file that is near a size threshold may switch back and forth
36 36 between LFS and normal storage, as needed.
37 37
38 38 Alternatively, both normal repositories and largefiles-controlled
39 39 repositories can be converted to LFS by using :hg:`convert` and the
40 40 ``lfs.track`` config option described below. The ``.hglfs`` file
41 41 should then be created and added, to control subsequent LFS selection.
42 42 The hashes are also unchanged in this case. The LFS and non-LFS
43 43 repositories can be distinguished because the LFS repository will
44 44 abort any command if this extension is disabled.
45 45
46 46 Committed LFS files are held locally, until the repository is pushed.
47 47 Prior to pushing the normal repository data, the LFS files that are
48 48 tracked by the outgoing commits are automatically uploaded to the
49 49 configured central server. No LFS files are transferred on
50 50 :hg:`pull` or :hg:`clone`. Instead, the files are downloaded on
51 51 demand as they need to be read, if a cached copy cannot be found
52 52 locally. Both committing and downloading an LFS file will link the
53 53 file to a usercache, to speed up future access. See the `usercache`
54 54 config setting described below.
55 55
56 56 .hglfs::
57 57
58 58 The extension reads its configuration from a versioned ``.hglfs``
59 59 configuration file found in the root of the working directory. The
60 60 ``.hglfs`` file uses the same syntax as all other Mercurial
61 61 configuration files. It uses a single section, ``[track]``.
62 62
63 63 The ``[track]`` section specifies which files are stored as LFS (or
64 64 not). Each line is keyed by a file pattern, with a predicate value.
65 65 The first file pattern match is used, so put more specific patterns
66 66 first. The available predicates are ``all()``, ``none()``, and
67 67 ``size()``. See "hg help filesets.size" for the latter.
68 68
69 69 Example versioned ``.hglfs`` file::
70 70
71 71 [track]
72 72 # No Makefile or python file, anywhere, will be LFS
73 73 **Makefile = none()
74 74 **.py = none()
75 75
76 76 **.zip = all()
77 77 **.exe = size(">1MB")
78 78
79 79 # Catchall for everything not matched above
80 80 ** = size(">10MB")
81 81
82 82 Configs::
83 83
84 84 [lfs]
85 85 # Remote endpoint. Multiple protocols are supported:
86 86 # - http(s)://user:pass@example.com/path
87 87 # git-lfs endpoint
88 88 # - file:///tmp/path
89 89 # local filesystem, usually for testing
90 90 # if unset, lfs will prompt for this setting when it must use it.
91 91 # (default: unset)
92 92 url = https://example.com/repo.git/info/lfs
93 93
94 94 # Which files to track in LFS. Path tests are "**.extname" for file
95 95 # extensions, and "path:under/some/directory" for path prefix. Both
96 96 # are relative to the repository root.
97 97 # File size can be tested with the "size()" fileset, and tests can be
98 98 # joined with fileset operators. (See "hg help filesets.operators".)
99 99 #
100 100 # Some examples:
101 101 # - all() # everything
102 102 # - none() # nothing
103 103 # - size(">20MB") # larger than 20MB
104 104 # - !**.txt # anything not a *.txt file
105 105 # - **.zip | **.tar.gz | **.7z # some types of compressed files
106 106 # - path:bin # files under "bin" in the project root
107 107 # - (**.php & size(">2MB")) | (**.js & size(">5MB")) | **.tar.gz
108 108 # | (path:bin & !path:/bin/README) | size(">1GB")
109 109 # (default: none())
110 110 #
111 111 # This is ignored if there is a tracked '.hglfs' file, and this setting
112 112 # will eventually be deprecated and removed.
113 113 track = size(">10M")
114 114
115 115 # how many times to retry before giving up on transferring an object
116 116 retry = 5
117 117
118 118 # the local directory to store lfs files for sharing across local clones.
119 119 # If not set, the cache is located in an OS specific cache location.
120 120 usercache = /path/to/global/cache
121 121 """
122 122
123 123 from __future__ import absolute_import
124 124
125 125 from mercurial.i18n import _
126 126
127 127 from mercurial import (
128 128 bundle2,
129 129 changegroup,
130 130 cmdutil,
131 131 config,
132 132 context,
133 133 error,
134 134 exchange,
135 135 extensions,
136 136 filelog,
137 137 fileset,
138 138 hg,
139 139 localrepo,
140 140 minifileset,
141 141 node,
142 142 pycompat,
143 143 registrar,
144 144 revlog,
145 145 scmutil,
146 146 templateutil,
147 147 upgrade,
148 148 util,
149 149 vfs as vfsmod,
150 150 wireproto,
151 151 )
152 152
153 153 from . import (
154 154 blobstore,
155 155 wrapper,
156 156 )
157 157
158 158 # Note for extension authors: ONLY specify testedwith = 'ships-with-hg-core' for
159 159 # extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
160 160 # be specifying the version(s) of Mercurial they are tested with, or
161 161 # leave the attribute unspecified.
162 162 testedwith = 'ships-with-hg-core'
163 163
164 164 configtable = {}
165 165 configitem = registrar.configitem(configtable)
166 166
167 167 configitem('experimental', 'lfs.user-agent',
168 168 default=None,
169 169 )
170 170 configitem('experimental', 'lfs.worker-enable',
171 171 default=False,
172 172 )
173 173
174 174 configitem('lfs', 'url',
175 175 default=None,
176 176 )
177 177 configitem('lfs', 'usercache',
178 178 default=None,
179 179 )
180 180 # Deprecated
181 181 configitem('lfs', 'threshold',
182 182 default=None,
183 183 )
184 184 configitem('lfs', 'track',
185 185 default='none()',
186 186 )
187 187 configitem('lfs', 'retry',
188 188 default=5,
189 189 )
190 190
191 191 cmdtable = {}
192 192 command = registrar.command(cmdtable)
193 193
194 194 templatekeyword = registrar.templatekeyword()
195 195 filesetpredicate = registrar.filesetpredicate()
196 196
197 197 def featuresetup(ui, supported):
198 198 # don't die on seeing a repo with the lfs requirement
199 199 supported |= {'lfs'}
200 200
201 201 def uisetup(ui):
202 202 localrepo.localrepository.featuresetupfuncs.add(featuresetup)
203 203
204 204 def reposetup(ui, repo):
205 205 # Nothing to do with a remote repo
206 206 if not repo.local():
207 207 return
208 208
209 209 repo.svfs.lfslocalblobstore = blobstore.local(repo)
210 210 repo.svfs.lfsremoteblobstore = blobstore.remote(repo)
211 211
212 212 class lfsrepo(repo.__class__):
213 213 @localrepo.unfilteredmethod
214 214 def commitctx(self, ctx, error=False):
215 215 repo.svfs.options['lfstrack'] = _trackedmatcher(self)
216 216 return super(lfsrepo, self).commitctx(ctx, error)
217 217
218 218 repo.__class__ = lfsrepo
219 219
220 220 if 'lfs' not in repo.requirements:
221 221 def checkrequireslfs(ui, repo, **kwargs):
222 222 if 'lfs' not in repo.requirements:
223 223 last = kwargs.get(r'node_last')
224 224 _bin = node.bin
225 225 if last:
226 226 s = repo.set('%n:%n', _bin(kwargs[r'node']), _bin(last))
227 227 else:
228 228 s = repo.set('%n', _bin(kwargs[r'node']))
229 229 for ctx in s:
230 230 # TODO: is there a way to just walk the files in the commit?
231 231 if any(ctx[f].islfs() for f in ctx.files() if f in ctx):
232 232 repo.requirements.add('lfs')
233 233 repo._writerequirements()
234 234 repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
235 235 break
236 236
237 237 ui.setconfig('hooks', 'commit.lfs', checkrequireslfs, 'lfs')
238 238 ui.setconfig('hooks', 'pretxnchangegroup.lfs', checkrequireslfs, 'lfs')
239 239 else:
240 240 repo.prepushoutgoinghooks.add('lfs', wrapper.prepush)
241 241
242 242 def _trackedmatcher(repo):
243 243 """Return a function (path, size) -> bool indicating whether or not to
244 244 track a given file with lfs."""
245 245 if not repo.wvfs.exists('.hglfs'):
246 246 # No '.hglfs' in wdir. Fall back to config for now.
247 247 trackspec = repo.ui.config('lfs', 'track')
248 248
249 249 # deprecated config: lfs.threshold
250 250 threshold = repo.ui.configbytes('lfs', 'threshold')
251 251 if threshold:
252 252 fileset.parse(trackspec) # make sure syntax errors are confined
253 253 trackspec = "(%s) | size('>%d')" % (trackspec, threshold)
254 254
255 255 return minifileset.compile(trackspec)
256 256
257 257 data = repo.wvfs.tryread('.hglfs')
258 258 if not data:
259 259 return lambda p, s: False
260 260
261 261 # Parse errors here will abort with a message that points to the .hglfs file
262 262 # and line number.
263 263 cfg = config.config()
264 264 cfg.parse('.hglfs', data)
265 265
266 266 try:
267 267 rules = [(minifileset.compile(pattern), minifileset.compile(rule))
268 268 for pattern, rule in cfg.items('track')]
269 269 except error.ParseError as e:
270 270 # The original exception gives no indication that the error is in the
271 271 # .hglfs file, so add that.
272 272
273 273 # TODO: See if the line number of the file can be made available.
274 274 raise error.Abort(_('parse error in .hglfs: %s') % e)
275 275
276 276 def _match(path, size):
277 277 for pat, rule in rules:
278 278 if pat(path, size):
279 279 return rule(path, size)
280 280
281 281 return False
282 282
283 283 return _match
284 284
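As a standalone illustration of the first-match semantics implemented by _match() above, here is a minimal sketch in which plain lambdas stand in for compiled minifileset predicates (they are not the real minifileset.compile() output):

rules = [
    (lambda p, s: p.endswith('.py'),  lambda p, s: False),         # **.py = none()
    (lambda p, s: p.endswith('.exe'), lambda p, s: s > 1 << 20),   # **.exe = size(">1MB")
    (lambda p, s: True,               lambda p, s: s > 10 << 20),  # ** = size(">10MB")
]

def match(path, size):
    # The first pattern that matches decides; its rule gives the verdict.
    for pat, rule in rules:
        if pat(path, size):
            return rule(path, size)
    return False

assert match('src/main.py', 99 << 20) is False  # none() wins despite the size
assert match('bin/app.exe', 2 << 20) is True    # over the 1MB threshold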
285 285 def wrapfilelog(filelog):
286 286 wrapfunction = extensions.wrapfunction
287 287
288 288 wrapfunction(filelog, 'addrevision', wrapper.filelogaddrevision)
289 289 wrapfunction(filelog, 'renamed', wrapper.filelogrenamed)
290 290 wrapfunction(filelog, 'size', wrapper.filelogsize)
291 291
292 292 def extsetup(ui):
293 293 wrapfilelog(filelog.filelog)
294 294
295 295 wrapfunction = extensions.wrapfunction
296 296
297 297 wrapfunction(cmdutil, '_updatecatformatter', wrapper._updatecatformatter)
298 298 wrapfunction(scmutil, 'wrapconvertsink', wrapper.convertsink)
299 299
300 300 wrapfunction(upgrade, '_finishdatamigration',
301 301 wrapper.upgradefinishdatamigration)
302 302
303 303 wrapfunction(upgrade, 'preservedrequirements',
304 304 wrapper.upgraderequirements)
305 305
306 306 wrapfunction(upgrade, 'supporteddestrequirements',
307 307 wrapper.upgraderequirements)
308 308
309 309 wrapfunction(changegroup,
310 'supportedoutgoingversions',
311 wrapper.supportedoutgoingversions)
312 wrapfunction(changegroup,
313 310 'allsupportedversions',
314 311 wrapper.allsupportedversions)
315 312
316 313 wrapfunction(exchange, 'push', wrapper.push)
317 314 wrapfunction(wireproto, '_capabilities', wrapper._capabilities)
318 315
319 316 wrapfunction(context.basefilectx, 'cmp', wrapper.filectxcmp)
320 317 wrapfunction(context.basefilectx, 'isbinary', wrapper.filectxisbinary)
321 318 context.basefilectx.islfs = wrapper.filectxislfs
322 319
323 320 revlog.addflagprocessor(
324 321 revlog.REVIDX_EXTSTORED,
325 322 (
326 323 wrapper.readfromstore,
327 324 wrapper.writetostore,
328 325 wrapper.bypasscheckhash,
329 326 ),
330 327 )
331 328
332 329 wrapfunction(hg, 'clone', wrapper.hgclone)
333 330 wrapfunction(hg, 'postshare', wrapper.hgpostshare)
334 331
335 332 scmutil.fileprefetchhooks.add('lfs', wrapper._prefetchfiles)
336 333
337 334 # Make bundle choose changegroup3 instead of changegroup2. This affects
338 335 # "hg bundle" command. Note: it does not cover all bundle formats like
339 336 # "packed1". Using "packed1" with lfs will likely cause trouble.
340 337 names = [k for k, v in exchange._bundlespeccgversions.items() if v == '02']
341 338 for k in names:
342 339 exchange._bundlespeccgversions[k] = '03'
343 340
344 341 # bundlerepo uses "vfsmod.readonlyvfs(othervfs)", we need to make sure lfs
345 342 # options and blob stores are passed from othervfs to the new readonlyvfs.
346 343 wrapfunction(vfsmod.readonlyvfs, '__init__', wrapper.vfsinit)
347 344
348 345 # when writing a bundle via "hg bundle" command, upload related LFS blobs
349 346 wrapfunction(bundle2, 'writenewbundle', wrapper.writenewbundle)
350 347
351 348 @filesetpredicate('lfs()', callstatus=True)
352 349 def lfsfileset(mctx, x):
353 350 """File that uses LFS storage."""
354 351 # i18n: "lfs" is a keyword
355 352 fileset.getargs(x, 0, 0, _("lfs takes no arguments"))
356 353 return [f for f in mctx.subset
357 354 if wrapper.pointerfromctx(mctx.ctx, f, removed=True) is not None]
358 355
359 356 @templatekeyword('lfs_files', requires={'ctx'})
360 357 def lfsfiles(context, mapping):
361 358 """List of strings. All files modified, added, or removed by this
362 359 changeset."""
363 360 ctx = context.resource(mapping, 'ctx')
364 361
365 362 pointers = wrapper.pointersfromctx(ctx, removed=True) # {path: pointer}
366 363 files = sorted(pointers.keys())
367 364
368 365 def pointer(v):
369 366 # In the file spec, version is first and the other keys are sorted.
370 367 sortkeyfunc = lambda x: (x[0] != 'version', x)
371 368 items = sorted(pointers[v].iteritems(), key=sortkeyfunc)
372 369 return util.sortdict(items)
373 370
374 371 makemap = lambda v: {
375 372 'file': v,
376 373 'lfsoid': pointers[v].oid() if pointers[v] else None,
377 374 'lfspointer': templateutil.hybriddict(pointer(v)),
378 375 }
379 376
380 377 # TODO: make the separator ', '?
381 378 f = templateutil._showcompatlist(context, mapping, 'lfs_file', files)
382 379 return templateutil.hybrid(f, files, makemap, pycompat.identity)
383 380
384 381 @command('debuglfsupload',
385 382 [('r', 'rev', [], _('upload large files introduced by REV'))])
386 383 def debuglfsupload(ui, repo, **opts):
387 384 """upload lfs blobs added by the working copy parent or given revisions"""
388 385 revs = opts.get(r'rev', [])
389 386 pointers = wrapper.extractpointers(repo, scmutil.revrange(repo, revs))
390 387 wrapper.uploadblobs(repo, pointers)
@@ -1,395 +1,387 b''
1 1 # wrapper.py - methods wrapping core mercurial logic
2 2 #
3 3 # Copyright 2017 Facebook, Inc.
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import hashlib
11 11
12 12 from mercurial.i18n import _
13 13 from mercurial.node import bin, hex, nullid, short
14 14
15 15 from mercurial import (
16 16 error,
17 17 filelog,
18 18 revlog,
19 19 util,
20 20 )
21 21
22 22 from mercurial.utils import (
23 23 stringutil,
24 24 )
25 25
26 26 from ..largefiles import lfutil
27 27
28 28 from . import (
29 29 blobstore,
30 30 pointer,
31 31 )
32 32
33 def supportedoutgoingversions(orig, repo):
34 versions = orig(repo)
35 if 'lfs' in repo.requirements:
36 versions.discard('01')
37 versions.discard('02')
38 versions.add('03')
39 return versions
40
41 33 def allsupportedversions(orig, ui):
42 34 versions = orig(ui)
43 35 versions.add('03')
44 36 return versions
45 37
46 38 def _capabilities(orig, repo, proto):
47 39 '''Wrap server command to announce lfs server capability'''
48 40 caps = orig(repo, proto)
49 41 # XXX: change to 'lfs=serve' when separate git server isn't required?
50 42 caps.append('lfs')
51 43 return caps
52 44
53 45 def bypasscheckhash(self, text):
54 46 return False
55 47
56 48 def readfromstore(self, text):
57 49 """Read filelog content from local blobstore transform for flagprocessor.
58 50
59 51 Default tranform for flagprocessor, returning contents from blobstore.
60 52 Returns a 2-typle (text, validatehash) where validatehash is True as the
61 53 contents of the blobstore should be checked using checkhash.
62 54 """
63 55 p = pointer.deserialize(text)
64 56 oid = p.oid()
65 57 store = self.opener.lfslocalblobstore
66 58 if not store.has(oid):
67 59 p.filename = self.filename
68 60 self.opener.lfsremoteblobstore.readbatch([p], store)
69 61
70 62 # The caller will validate the content
71 63 text = store.read(oid, verify=False)
72 64
73 65 # pack hg filelog metadata
74 66 hgmeta = {}
75 67 for k in p.keys():
76 68 if k.startswith('x-hg-'):
77 69 name = k[len('x-hg-'):]
78 70 hgmeta[name] = p[k]
79 71 if hgmeta or text.startswith('\1\n'):
80 72 text = filelog.packmeta(hgmeta, text)
81 73
82 74 return (text, True)
83 75
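The 'x-hg-' handling above is the read half of a round trip with writetostore() below; a minimal sketch of the key translation, with sample values:

hgmeta = {'copy': 'a.txt', 'copyrev': '1f' * 20}  # hypothetical rename metadata
pointermeta = dict(('x-hg-%s' % k, v) for k, v in hgmeta.items())
restored = dict((k[len('x-hg-'):], v)
                for k, v in pointermeta.items() if k.startswith('x-hg-'))
assert restored == hgmeta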
84 76 def writetostore(self, text):
85 77 # hg filelog metadata (includes rename, etc)
86 78 hgmeta, offset = filelog.parsemeta(text)
87 79 if offset and offset > 0:
88 80 # lfs blob does not contain hg filelog metadata
89 81 text = text[offset:]
90 82
91 83 # git-lfs only supports sha256
92 84 oid = hex(hashlib.sha256(text).digest())
93 85 self.opener.lfslocalblobstore.write(oid, text)
94 86
95 87 # replace contents with metadata
96 88 longoid = 'sha256:%s' % oid
97 89 metadata = pointer.gitlfspointer(oid=longoid, size='%d' % len(text))
98 90
99 91 # by default, we expect the content to be binary. however, LFS could also
100 92 # be used for non-binary content. add a special entry for non-binary data.
101 93 # this will be used by filectx.isbinary().
102 94 if not stringutil.binary(text):
103 95 # not hg filelog metadata (affecting commit hash), no "x-hg-" prefix
104 96 metadata['x-is-binary'] = '0'
105 97
106 98 # translate hg filelog metadata to lfs metadata with "x-hg-" prefix
107 99 if hgmeta is not None:
108 100 for k, v in hgmeta.iteritems():
109 101 metadata['x-hg-%s' % k] = v
110 102
111 103 rawtext = metadata.serialize()
112 104 return (rawtext, False)
113 105
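The metadata serialized above replaces the file contents in the filelog; the payload follows the git-lfs pointer format. A rough sketch of the shape (the real serializer lives in lfs/pointer.py and is not reproduced here):

import hashlib

def makepointer(blob):
    # version line first, then the oid and size keys
    oid = hashlib.sha256(blob).hexdigest()
    return ('version https://git-lfs.github.com/spec/v1\n'
            'oid sha256:%s\n'
            'size %d\n' % (oid, len(blob)))

print(makepointer(b'example blob contents'))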
114 106 def _islfs(rlog, node=None, rev=None):
115 107 if rev is None:
116 108 if node is None:
117 109 # both None - likely working copy content where node is not ready
118 110 return False
119 111 rev = rlog.rev(node)
120 112 else:
121 113 node = rlog.node(rev)
122 114 if node == nullid:
123 115 return False
124 116 flags = rlog.flags(rev)
125 117 return bool(flags & revlog.REVIDX_EXTSTORED)
126 118
127 119 def filelogaddrevision(orig, self, text, transaction, link, p1, p2,
128 120 cachedelta=None, node=None,
129 121 flags=revlog.REVIDX_DEFAULT_FLAGS, **kwds):
130 122 textlen = len(text)
131 123 # exclude hg rename meta from file size
132 124 meta, offset = filelog.parsemeta(text)
133 125 if offset:
134 126 textlen -= offset
135 127
136 128 lfstrack = self.opener.options['lfstrack']
137 129
138 130 if lfstrack(self.filename, textlen):
139 131 flags |= revlog.REVIDX_EXTSTORED
140 132
141 133 return orig(self, text, transaction, link, p1, p2, cachedelta=cachedelta,
142 134 node=node, flags=flags, **kwds)
143 135
144 136 def filelogrenamed(orig, self, node):
145 137 if _islfs(self, node):
146 138 rawtext = self.revision(node, raw=True)
147 139 if not rawtext:
148 140 return False
149 141 metadata = pointer.deserialize(rawtext)
150 142 if 'x-hg-copy' in metadata and 'x-hg-copyrev' in metadata:
151 143 return metadata['x-hg-copy'], bin(metadata['x-hg-copyrev'])
152 144 else:
153 145 return False
154 146 return orig(self, node)
155 147
156 148 def filelogsize(orig, self, rev):
157 149 if _islfs(self, rev=rev):
158 150 # fast path: use lfs metadata to answer size
159 151 rawtext = self.revision(rev, raw=True)
160 152 metadata = pointer.deserialize(rawtext)
161 153 return int(metadata['size'])
162 154 return orig(self, rev)
163 155
164 156 def filectxcmp(orig, self, fctx):
165 157 """returns True if text is different than fctx"""
166 158 # some fctx classes (e.g. hg-git) are not based on basefilectx and lack islfs
167 159 if self.islfs() and getattr(fctx, 'islfs', lambda: False)():
168 160 # fast path: check LFS oid
169 161 p1 = pointer.deserialize(self.rawdata())
170 162 p2 = pointer.deserialize(fctx.rawdata())
171 163 return p1.oid() != p2.oid()
172 164 return orig(self, fctx)
173 165
174 166 def filectxisbinary(orig, self):
175 167 if self.islfs():
176 168 # fast path: use lfs metadata to answer isbinary
177 169 metadata = pointer.deserialize(self.rawdata())
178 170 # if lfs metadata says nothing, assume it's binary by default
179 171 return bool(int(metadata.get('x-is-binary', 1)))
180 172 return orig(self)
181 173
182 174 def filectxislfs(self):
183 175 return _islfs(self.filelog(), self.filenode())
184 176
185 177 def _updatecatformatter(orig, fm, ctx, matcher, path, decode):
186 178 orig(fm, ctx, matcher, path, decode)
187 179 fm.data(rawdata=ctx[path].rawdata())
188 180
189 181 def convertsink(orig, sink):
190 182 sink = orig(sink)
191 183 if sink.repotype == 'hg':
192 184 class lfssink(sink.__class__):
193 185 def putcommit(self, files, copies, parents, commit, source, revmap,
194 186 full, cleanp2):
195 187 pc = super(lfssink, self).putcommit
196 188 node = pc(files, copies, parents, commit, source, revmap, full,
197 189 cleanp2)
198 190
199 191 if 'lfs' not in self.repo.requirements:
200 192 ctx = self.repo[node]
201 193
202 194 # The file list may contain removed files, so check for
203 195 # membership before assuming it is in the context.
204 196 if any(f in ctx and ctx[f].islfs() for f, n in files):
205 197 self.repo.requirements.add('lfs')
206 198 self.repo._writerequirements()
207 199
208 200 # Permanently enable lfs locally
209 201 self.repo.vfs.append(
210 202 'hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))
211 203
212 204 return node
213 205
214 206 sink.__class__ = lfssink
215 207
216 208 return sink
217 209
218 210 def vfsinit(orig, self, othervfs):
219 211 orig(self, othervfs)
220 212 # copy lfs related options
221 213 for k, v in othervfs.options.items():
222 214 if k.startswith('lfs'):
223 215 self.options[k] = v
224 216 # also copy lfs blobstores. note: this can run before reposetup, so lfs
225 217 # blobstore attributes are not always ready at this time.
226 218 for name in ['lfslocalblobstore', 'lfsremoteblobstore']:
227 219 if util.safehasattr(othervfs, name):
228 220 setattr(self, name, getattr(othervfs, name))
229 221
230 222 def hgclone(orig, ui, opts, *args, **kwargs):
231 223 result = orig(ui, opts, *args, **kwargs)
232 224
233 225 if result is not None:
234 226 sourcerepo, destrepo = result
235 227 repo = destrepo.local()
236 228
237 229 # When cloning to a remote repo (like through SSH), no repo is available
238 230 # from the peer. Therefore the hgrc can't be updated.
239 231 if not repo:
240 232 return result
241 233
242 234 # If lfs is required for this repo, permanently enable it locally
243 235 if 'lfs' in repo.requirements:
244 236 repo.vfs.append('hgrc',
245 237 util.tonativeeol('\n[extensions]\nlfs=\n'))
246 238
247 239 return result
248 240
249 241 def hgpostshare(orig, sourcerepo, destrepo, bookmarks=True, defaultpath=None):
250 242 orig(sourcerepo, destrepo, bookmarks, defaultpath)
251 243
252 244 # If lfs is required for this repo, permanently enable it locally
253 245 if 'lfs' in destrepo.requirements:
254 246 destrepo.vfs.append('hgrc', util.tonativeeol('\n[extensions]\nlfs=\n'))
255 247
256 248 def _prefetchfiles(repo, ctx, files):
257 249 """Ensure that required LFS blobs are present, fetching them as a group if
258 250 needed."""
259 251 pointers = []
260 252 localstore = repo.svfs.lfslocalblobstore
261 253
262 254 for f in files:
263 255 p = pointerfromctx(ctx, f)
264 256 if p and not localstore.has(p.oid()):
265 257 p.filename = f
266 258 pointers.append(p)
267 259
268 260 if pointers:
269 261 repo.svfs.lfsremoteblobstore.readbatch(pointers, localstore)
270 262
271 263 def _canskipupload(repo):
272 264 # if remotestore is a null store, upload is a no-op and can be skipped
273 265 return isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
274 266
275 267 def candownload(repo):
276 268 # if remotestore is a null store, downloads will lead to nothing
277 269 return not isinstance(repo.svfs.lfsremoteblobstore, blobstore._nullremote)
278 270
279 271 def uploadblobsfromrevs(repo, revs):
280 272 '''upload lfs blobs introduced by revs
281 273
282 274 Note: also used by other extensions, e.g. infinitepush. Avoid renaming.
283 275 '''
284 276 if _canskipupload(repo):
285 277 return
286 278 pointers = extractpointers(repo, revs)
287 279 uploadblobs(repo, pointers)
288 280
289 281 def prepush(pushop):
290 282 """Prepush hook.
291 283
292 284 Read through the revisions to push, looking for filelog entries that can be
293 285 deserialized into metadata so that we can block the push on their upload to
294 286 the remote blobstore.
295 287 """
296 288 return uploadblobsfromrevs(pushop.repo, pushop.outgoing.missing)
297 289
298 290 def push(orig, repo, remote, *args, **kwargs):
299 291 """bail on push if the extension isn't enabled on remote when needed"""
300 292 if 'lfs' in repo.requirements:
301 293 # If the remote peer is for a local repo, the requirement tests in the
302 294 # base class method enforce lfs support. Otherwise, some revisions in
303 295 # this repo use lfs, and the remote repo needs the extension loaded.
304 296 if not remote.local() and not remote.capable('lfs'):
305 297 # This is a copy of the message in exchange.push() when requirements
306 298 # are missing between local repos.
307 299 m = _("required features are not supported in the destination: %s")
308 300 raise error.Abort(m % 'lfs',
309 301 hint=_('enable the lfs extension on the server'))
310 302 return orig(repo, remote, *args, **kwargs)
311 303
312 304 def writenewbundle(orig, ui, repo, source, filename, bundletype, outgoing,
313 305 *args, **kwargs):
314 306 """upload LFS blobs added by outgoing revisions on 'hg bundle'"""
315 307 uploadblobsfromrevs(repo, outgoing.missing)
316 308 return orig(ui, repo, source, filename, bundletype, outgoing, *args,
317 309 **kwargs)
318 310
319 311 def extractpointers(repo, revs):
320 312 """return a list of lfs pointers added by given revs"""
321 313 repo.ui.debug('lfs: computing set of blobs to upload\n')
322 314 pointers = {}
323 315 for r in revs:
324 316 ctx = repo[r]
325 317 for p in pointersfromctx(ctx).values():
326 318 pointers[p.oid()] = p
327 319 return sorted(pointers.values())
328 320
329 321 def pointerfromctx(ctx, f, removed=False):
330 322 """return a pointer for the named file from the given changectx, or None if
331 323 the file isn't LFS.
332 324
333 325 Optionally, the pointer for a file deleted from the context can be returned.
334 326 Since no such pointer is actually stored, and to distinguish from a non-LFS
335 327 file, this pointer is represented by an empty dict.
336 328 """
337 329 _ctx = ctx
338 330 if f not in ctx:
339 331 if not removed:
340 332 return None
341 333 if f in ctx.p1():
342 334 _ctx = ctx.p1()
343 335 elif f in ctx.p2():
344 336 _ctx = ctx.p2()
345 337 else:
346 338 return None
347 339 fctx = _ctx[f]
348 340 if not _islfs(fctx.filelog(), fctx.filenode()):
349 341 return None
350 342 try:
351 343 p = pointer.deserialize(fctx.rawdata())
352 344 if ctx == _ctx:
353 345 return p
354 346 return {}
355 347 except pointer.InvalidPointer as ex:
356 348 raise error.Abort(_('lfs: corrupted pointer (%s@%s): %s\n')
357 349 % (f, short(_ctx.node()), ex))
358 350
359 351 def pointersfromctx(ctx, removed=False):
360 352 """return a dict {path: pointer} for given single changectx.
361 353
362 354 If ``removed`` == True and the LFS file was removed from ``ctx``, the value
363 355 stored for the path is an empty dict.
364 356 """
365 357 result = {}
366 358 for f in ctx.files():
367 359 p = pointerfromctx(ctx, f, removed=removed)
368 360 if p is not None:
369 361 result[f] = p
370 362 return result
371 363
372 364 def uploadblobs(repo, pointers):
373 365 """upload given pointers from local blobstore"""
374 366 if not pointers:
375 367 return
376 368
377 369 remoteblob = repo.svfs.lfsremoteblobstore
378 370 remoteblob.writebatch(pointers, repo.svfs.lfslocalblobstore)
379 371
380 372 def upgradefinishdatamigration(orig, ui, srcrepo, dstrepo, requirements):
381 373 orig(ui, srcrepo, dstrepo, requirements)
382 374
383 375 srclfsvfs = srcrepo.svfs.lfslocalblobstore.vfs
384 376 dstlfsvfs = dstrepo.svfs.lfslocalblobstore.vfs
385 377
386 378 for dirpath, dirs, files in srclfsvfs.walk():
387 379 for oid in files:
388 380 ui.write(_('copying lfs blob %s\n') % oid)
389 381 lfutil.link(srclfsvfs.join(oid), dstlfsvfs.join(oid))
390 382
391 383 def upgraderequirements(orig, repo):
392 384 reqs = orig(repo)
393 385 if 'lfs' in repo.requirements:
394 386 reqs.add('lfs')
395 387 return reqs
@@ -1,1014 +1,1022 b''
1 1 # changegroup.py - Mercurial changegroup manipulation functions
2 2 #
3 3 # Copyright 2006 Matt Mackall <mpm@selenic.com>
4 4 #
5 5 # This software may be used and distributed according to the terms of the
6 6 # GNU General Public License version 2 or any later version.
7 7
8 8 from __future__ import absolute_import
9 9
10 10 import os
11 11 import struct
12 12 import tempfile
13 13 import weakref
14 14
15 15 from .i18n import _
16 16 from .node import (
17 17 hex,
18 18 nullrev,
19 19 short,
20 20 )
21 21
22 22 from . import (
23 23 dagutil,
24 24 error,
25 25 mdiff,
26 26 phases,
27 27 pycompat,
28 28 util,
29 29 )
30 30
31 31 from .utils import (
32 32 stringutil,
33 33 )
34 34
35 35 _CHANGEGROUPV1_DELTA_HEADER = "20s20s20s20s"
36 36 _CHANGEGROUPV2_DELTA_HEADER = "20s20s20s20s20s"
37 37 _CHANGEGROUPV3_DELTA_HEADER = ">20s20s20s20s20sH"
38 38
39 LFS_REQUIREMENT = 'lfs'
40
39 41 # When narrowing is finalized and no longer subject to format changes,
40 42 # we should move this to just "narrow" or similar.
41 43 NARROW_REQUIREMENT = 'narrowhg-experimental'
42 44
43 45 readexactly = util.readexactly
44 46
45 47 def getchunk(stream):
46 48 """return the next chunk from stream as a string"""
47 49 d = readexactly(stream, 4)
48 50 l = struct.unpack(">l", d)[0]
49 51 if l <= 4:
50 52 if l:
51 53 raise error.Abort(_("invalid chunk length %d") % l)
52 54 return ""
53 55 return readexactly(stream, l - 4)
54 56
55 57 def chunkheader(length):
56 58 """return a changegroup chunk header (string)"""
57 59 return struct.pack(">l", length + 4)
58 60
59 61 def closechunk():
60 62 """return a changegroup chunk header (string) for a zero-length chunk"""
61 63 return struct.pack(">l", 0)
62 64
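getchunk(), chunkheader() and closechunk() together define the framing used throughout this file: a big-endian 4-byte length that includes itself, then the payload, with a zero length closing a group. A minimal round-trip sketch (the invalid-length check is omitted):

import io
import struct

def readchunk(stream):
    l = struct.unpack(">l", stream.read(4))[0]
    return stream.read(l - 4) if l > 4 else b""

buf = io.BytesIO(struct.pack(">l", 4 + 5) + b"hello" + struct.pack(">l", 0))
assert readchunk(buf) == b"hello"
assert readchunk(buf) == b""  # zero-length chunk: end of the group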
63 65 def writechunks(ui, chunks, filename, vfs=None):
64 66 """Write chunks to a file and return its filename.
65 67
66 68 The stream is assumed to be a bundle file.
67 69 Existing files will not be overwritten.
68 70 If no filename is specified, a temporary file is created.
69 71 """
70 72 fh = None
71 73 cleanup = None
72 74 try:
73 75 if filename:
74 76 if vfs:
75 77 fh = vfs.open(filename, "wb")
76 78 else:
77 79 # Increase default buffer size because default is usually
78 80 # small (4k is common on Linux).
79 81 fh = open(filename, "wb", 131072)
80 82 else:
81 83 fd, filename = tempfile.mkstemp(prefix="hg-bundle-", suffix=".hg")
82 84 fh = os.fdopen(fd, r"wb")
83 85 cleanup = filename
84 86 for c in chunks:
85 87 fh.write(c)
86 88 cleanup = None
87 89 return filename
88 90 finally:
89 91 if fh is not None:
90 92 fh.close()
91 93 if cleanup is not None:
92 94 if filename and vfs:
93 95 vfs.unlink(cleanup)
94 96 else:
95 97 os.unlink(cleanup)
96 98
97 99 class cg1unpacker(object):
98 100 """Unpacker for cg1 changegroup streams.
99 101
100 102 A changegroup unpacker handles the framing of the revision data in
101 103 the wire format. Most consumers will want to use the apply()
102 104 method to add the changes from the changegroup to a repository.
103 105
104 106 If you're forwarding a changegroup unmodified to another consumer,
105 107 use getchunks(), which returns an iterator of changegroup
106 108 chunks. This is mostly useful for cases where you need to know the
107 109 data stream has ended by observing the end of the changegroup.
108 110
109 111 deltachunk() is useful only if you're applying delta data. Most
110 112 consumers should prefer apply() instead.
111 113
112 114 A few other public methods exist. Those are used only for
113 115 bundlerepo and some debug commands - their use is discouraged.
114 116 """
115 117 deltaheader = _CHANGEGROUPV1_DELTA_HEADER
116 118 deltaheadersize = struct.calcsize(deltaheader)
117 119 version = '01'
118 120 _grouplistcount = 1 # One list of files after the manifests
119 121
120 122 def __init__(self, fh, alg, extras=None):
121 123 if alg is None:
122 124 alg = 'UN'
123 125 if alg not in util.compengines.supportedbundletypes:
124 126 raise error.Abort(_('unknown stream compression type: %s')
125 127 % alg)
126 128 if alg == 'BZ':
127 129 alg = '_truncatedBZ'
128 130
129 131 compengine = util.compengines.forbundletype(alg)
130 132 self._stream = compengine.decompressorreader(fh)
131 133 self._type = alg
132 134 self.extras = extras or {}
133 135 self.callback = None
134 136
135 137 # These methods (compressed, read, seek, tell) all appear to only
136 138 # be used by bundlerepo, but it's a little hard to tell.
137 139 def compressed(self):
138 140 return self._type is not None and self._type != 'UN'
139 141 def read(self, l):
140 142 return self._stream.read(l)
141 143 def seek(self, pos):
142 144 return self._stream.seek(pos)
143 145 def tell(self):
144 146 return self._stream.tell()
145 147 def close(self):
146 148 return self._stream.close()
147 149
148 150 def _chunklength(self):
149 151 d = readexactly(self._stream, 4)
150 152 l = struct.unpack(">l", d)[0]
151 153 if l <= 4:
152 154 if l:
153 155 raise error.Abort(_("invalid chunk length %d") % l)
154 156 return 0
155 157 if self.callback:
156 158 self.callback()
157 159 return l - 4
158 160
159 161 def changelogheader(self):
160 162 """v10 does not have a changelog header chunk"""
161 163 return {}
162 164
163 165 def manifestheader(self):
164 166 """v10 does not have a manifest header chunk"""
165 167 return {}
166 168
167 169 def filelogheader(self):
168 170 """return the header of the filelogs chunk, v10 only has the filename"""
169 171 l = self._chunklength()
170 172 if not l:
171 173 return {}
172 174 fname = readexactly(self._stream, l)
173 175 return {'filename': fname}
174 176
175 177 def _deltaheader(self, headertuple, prevnode):
176 178 node, p1, p2, cs = headertuple
177 179 if prevnode is None:
178 180 deltabase = p1
179 181 else:
180 182 deltabase = prevnode
181 183 flags = 0
182 184 return node, p1, p2, deltabase, cs, flags
183 185
184 186 def deltachunk(self, prevnode):
185 187 l = self._chunklength()
186 188 if not l:
187 189 return {}
188 190 headerdata = readexactly(self._stream, self.deltaheadersize)
189 191 header = struct.unpack(self.deltaheader, headerdata)
190 192 delta = readexactly(self._stream, l - self.deltaheadersize)
191 193 node, p1, p2, deltabase, cs, flags = self._deltaheader(header, prevnode)
192 194 return (node, p1, p2, cs, deltabase, delta, flags)
193 195
194 196 def getchunks(self):
195 197 """returns all the chunks contains in the bundle
196 198
197 199 Used when you need to forward the binary stream to a file or another
198 200 network API. To do so, it parses the changegroup data; otherwise it would
199 201 block in the sshrepo case, because it doesn't know where the stream ends.
200 202 """
201 203 # For changegroup 1 and 2, we expect 3 parts: changelog, manifestlog,
202 204 # and a list of filelogs. For changegroup 3, we expect 4 parts:
203 205 # changelog, manifestlog, a list of tree manifestlogs, and a list of
204 206 # filelogs.
205 207 #
206 208 # Changelog and manifestlog parts are terminated with empty chunks. The
207 209 # tree and file parts are a list of entry sections. Each entry section
208 210 # is a series of chunks terminating in an empty chunk. The list of these
209 211 # entry sections is terminated in yet another empty chunk, so we know
210 212 # we've reached the end of the tree/file list when we reach an empty
211 213 # chunk that was preceded by no non-empty chunks.
212 214
213 215 parts = 0
214 216 while parts < 2 + self._grouplistcount:
215 217 noentries = True
216 218 while True:
217 219 chunk = getchunk(self)
218 220 if not chunk:
219 221 # The first two empty chunks represent the end of the
220 222 # changelog and the manifestlog portions. The remaining
221 223 # empty chunks represent either A) the end of individual
222 224 # tree or file entries in the file list, or B) the end of
223 225 # the entire list. It's the end of the entire list if there
224 226 # were no entries (i.e. noentries is True).
225 227 if parts < 2:
226 228 parts += 1
227 229 elif noentries:
228 230 parts += 1
229 231 break
230 232 noentries = False
231 233 yield chunkheader(len(chunk))
232 234 pos = 0
233 235 while pos < len(chunk):
234 236 next = pos + 2**20
235 237 yield chunk[pos:next]
236 238 pos = next
237 239 yield closechunk()
238 240
239 241 def _unpackmanifests(self, repo, revmap, trp, prog, numchanges):
240 242 # We know that we'll never have more manifests than we had
241 243 # changesets.
242 244 self.callback = prog(_('manifests'), numchanges)
243 245 # no need to check for empty manifest group here:
244 246 # if the result of the merge of 1 and 2 is the same in 3 and 4,
245 247 # no new manifest will be created and the manifest group will
246 248 # be empty during the pull
247 249 self.manifestheader()
248 250 deltas = self.deltaiter()
249 251 repo.manifestlog._revlog.addgroup(deltas, revmap, trp)
250 252 repo.ui.progress(_('manifests'), None)
251 253 self.callback = None
252 254
253 255 def apply(self, repo, tr, srctype, url, targetphase=phases.draft,
254 256 expectedtotal=None):
255 257 """Add the changegroup returned by source.read() to this repo.
256 258 srctype is a string like 'push', 'pull', or 'unbundle'. url is
257 259 the URL of the repo where this changegroup is coming from.
258 260
259 261 Return an integer summarizing the change to this repo:
260 262 - nothing changed or no source: 0
261 263 - more heads than before: 1+added heads (2..n)
262 264 - fewer heads than before: -1-removed heads (-2..-n)
263 265 - number of heads stays the same: 1
264 266 """
265 267 repo = repo.unfiltered()
266 268 def csmap(x):
267 269 repo.ui.debug("add changeset %s\n" % short(x))
268 270 return len(cl)
269 271
270 272 def revmap(x):
271 273 return cl.rev(x)
272 274
273 275 changesets = files = revisions = 0
274 276
275 277 try:
276 278 # The transaction may already carry source information. In this
277 279 # case we use the top level data. We overwrite the argument
278 280 # because we need to use the top level value (if they exist)
279 281 # in this function.
280 282 srctype = tr.hookargs.setdefault('source', srctype)
281 283 url = tr.hookargs.setdefault('url', url)
282 284 repo.hook('prechangegroup',
283 285 throw=True, **pycompat.strkwargs(tr.hookargs))
284 286
285 287 # write changelog data to temp files so concurrent readers
286 288 # will not see an inconsistent view
287 289 cl = repo.changelog
288 290 cl.delayupdate(tr)
289 291 oldheads = set(cl.heads())
290 292
291 293 trp = weakref.proxy(tr)
292 294 # pull off the changeset group
293 295 repo.ui.status(_("adding changesets\n"))
294 296 clstart = len(cl)
295 297 class prog(object):
296 298 def __init__(self, step, total):
297 299 self._step = step
298 300 self._total = total
299 301 self._count = 1
300 302 def __call__(self):
301 303 repo.ui.progress(self._step, self._count, unit=_('chunks'),
302 304 total=self._total)
303 305 self._count += 1
304 306 self.callback = prog(_('changesets'), expectedtotal)
305 307
306 308 efiles = set()
307 309 def onchangelog(cl, node):
308 310 efiles.update(cl.readfiles(node))
309 311
310 312 self.changelogheader()
311 313 deltas = self.deltaiter()
312 314 cgnodes = cl.addgroup(deltas, csmap, trp, addrevisioncb=onchangelog)
313 315 efiles = len(efiles)
314 316
315 317 if not cgnodes:
316 318 repo.ui.develwarn('applied empty changegroup',
317 319 config='warn-empty-changegroup')
318 320 clend = len(cl)
319 321 changesets = clend - clstart
320 322 repo.ui.progress(_('changesets'), None)
321 323 self.callback = None
322 324
323 325 # pull off the manifest group
324 326 repo.ui.status(_("adding manifests\n"))
325 327 self._unpackmanifests(repo, revmap, trp, prog, changesets)
326 328
327 329 needfiles = {}
328 330 if repo.ui.configbool('server', 'validate'):
329 331 cl = repo.changelog
330 332 ml = repo.manifestlog
331 333 # validate incoming csets have their manifests
332 334 for cset in xrange(clstart, clend):
333 335 mfnode = cl.changelogrevision(cset).manifest
334 336 mfest = ml[mfnode].readdelta()
335 337 # store file cgnodes we must see
336 338 for f, n in mfest.iteritems():
337 339 needfiles.setdefault(f, set()).add(n)
338 340
339 341 # process the files
340 342 repo.ui.status(_("adding file changes\n"))
341 343 newrevs, newfiles = _addchangegroupfiles(
342 344 repo, self, revmap, trp, efiles, needfiles)
343 345 revisions += newrevs
344 346 files += newfiles
345 347
346 348 deltaheads = 0
347 349 if oldheads:
348 350 heads = cl.heads()
349 351 deltaheads = len(heads) - len(oldheads)
350 352 for h in heads:
351 353 if h not in oldheads and repo[h].closesbranch():
352 354 deltaheads -= 1
353 355 htext = ""
354 356 if deltaheads:
355 357 htext = _(" (%+d heads)") % deltaheads
356 358
357 359 repo.ui.status(_("added %d changesets"
358 360 " with %d changes to %d files%s\n")
359 361 % (changesets, revisions, files, htext))
360 362 repo.invalidatevolatilesets()
361 363
362 364 if changesets > 0:
363 365 if 'node' not in tr.hookargs:
364 366 tr.hookargs['node'] = hex(cl.node(clstart))
365 367 tr.hookargs['node_last'] = hex(cl.node(clend - 1))
366 368 hookargs = dict(tr.hookargs)
367 369 else:
368 370 hookargs = dict(tr.hookargs)
369 371 hookargs['node'] = hex(cl.node(clstart))
370 372 hookargs['node_last'] = hex(cl.node(clend - 1))
371 373 repo.hook('pretxnchangegroup',
372 374 throw=True, **pycompat.strkwargs(hookargs))
373 375
374 376 added = [cl.node(r) for r in xrange(clstart, clend)]
375 377 phaseall = None
376 378 if srctype in ('push', 'serve'):
377 379 # Old servers cannot push the boundary themselves.
378 380 # New servers won't push the boundary if changeset already
379 381 # exists locally as secret
380 382 #
381 383 # We should not use 'added' here but the list of all changes in
382 384 # the bundle
383 385 if repo.publishing():
384 386 targetphase = phaseall = phases.public
385 387 else:
386 388 # closer target phase computation
387 389
388 390 # Those changesets have been pushed from the
389 391 # outside; their phases are going to be pushed
390 392 # alongside. Therefore `targetphase` is
391 393 # ignored.
392 394 targetphase = phaseall = phases.draft
393 395 if added:
394 396 phases.registernew(repo, tr, targetphase, added)
395 397 if phaseall is not None:
396 398 phases.advanceboundary(repo, tr, phaseall, cgnodes)
397 399
398 400 if changesets > 0:
399 401
400 402 def runhooks():
401 403 # These hooks run when the lock releases, not when the
402 404 # transaction closes. So it's possible for the changelog
403 405 # to have changed since we last saw it.
404 406 if clstart >= len(repo):
405 407 return
406 408
407 409 repo.hook("changegroup", **pycompat.strkwargs(hookargs))
408 410
409 411 for n in added:
410 412 args = hookargs.copy()
411 413 args['node'] = hex(n)
412 414 del args['node_last']
413 415 repo.hook("incoming", **pycompat.strkwargs(args))
414 416
415 417 newheads = [h for h in repo.heads()
416 418 if h not in oldheads]
417 419 repo.ui.log("incoming",
418 420 "%d incoming changes - new heads: %s\n",
419 421 len(added),
420 422 ', '.join([hex(c[:6]) for c in newheads]))
421 423
422 424 tr.addpostclose('changegroup-runhooks-%020i' % clstart,
423 425 lambda tr: repo._afterlock(runhooks))
424 426 finally:
425 427 repo.ui.flush()
426 428 # never return 0 here:
427 429 if deltaheads < 0:
428 430 ret = deltaheads - 1
429 431 else:
430 432 ret = deltaheads + 1
431 433 return ret
432 434
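The final deltaheads adjustment implements the return convention from the apply() docstring: the result is never 0, so callers can treat it as a boolean "something happened" while still recovering the head delta. A small sketch of the encoding:

def encodeheads(deltaheads):
    # mirrors the "never return 0 here" branch above
    return deltaheads - 1 if deltaheads < 0 else deltaheads + 1

assert encodeheads(0) == 1    # head count unchanged, changesets added
assert encodeheads(2) == 3    # two heads added
assert encodeheads(-1) == -2  # one head removed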
433 435 def deltaiter(self):
434 436 """
435 437 returns an iterator of the deltas in this changegroup
436 438
437 439 Useful for passing to the underlying storage system to be stored.
438 440 """
439 441 chain = None
440 442 for chunkdata in iter(lambda: self.deltachunk(chain), {}):
441 443 # Chunkdata: (node, p1, p2, cs, deltabase, delta, flags)
442 444 yield chunkdata
443 445 chain = chunkdata[0]
444 446
445 447 class cg2unpacker(cg1unpacker):
446 448 """Unpacker for cg2 streams.
447 449
448 450 cg2 streams add support for generaldelta, so the delta header
449 451 format is slightly different. All other features about the data
450 452 remain the same.
451 453 """
452 454 deltaheader = _CHANGEGROUPV2_DELTA_HEADER
453 455 deltaheadersize = struct.calcsize(deltaheader)
454 456 version = '02'
455 457
456 458 def _deltaheader(self, headertuple, prevnode):
457 459 node, p1, p2, deltabase, cs = headertuple
458 460 flags = 0
459 461 return node, p1, p2, deltabase, cs, flags
460 462
461 463 class cg3unpacker(cg2unpacker):
462 464 """Unpacker for cg3 streams.
463 465
464 466 cg3 streams add support for exchanging treemanifests and revlog
465 467 flags. It adds the revlog flags to the delta header and an empty chunk
466 468 separating manifests and files.
467 469 """
468 470 deltaheader = _CHANGEGROUPV3_DELTA_HEADER
469 471 deltaheadersize = struct.calcsize(deltaheader)
470 472 version = '03'
471 473 _grouplistcount = 2 # One list of manifests and one list of files
472 474
473 475 def _deltaheader(self, headertuple, prevnode):
474 476 node, p1, p2, deltabase, cs, flags = headertuple
475 477 return node, p1, p2, deltabase, cs, flags
476 478
477 479 def _unpackmanifests(self, repo, revmap, trp, prog, numchanges):
478 480 super(cg3unpacker, self)._unpackmanifests(repo, revmap, trp, prog,
479 481 numchanges)
480 482 for chunkdata in iter(self.filelogheader, {}):
481 483 # If we get here, there are directory manifests in the changegroup
482 484 d = chunkdata["filename"]
483 485 repo.ui.debug("adding %s revisions\n" % d)
484 486 dirlog = repo.manifestlog._revlog.dirlog(d)
485 487 deltas = self.deltaiter()
486 488 if not dirlog.addgroup(deltas, revmap, trp):
487 489 raise error.Abort(_("received dir revlog group is empty"))
488 490
489 491 class headerlessfixup(object):
490 492 def __init__(self, fh, h):
491 493 self._h = h
492 494 self._fh = fh
493 495 def read(self, n):
494 496 if self._h:
495 497 d, self._h = self._h[:n], self._h[n:]
496 498 if len(d) < n:
497 499 d += readexactly(self._fh, n - len(d))
498 500 return d
499 501 return readexactly(self._fh, n)
500 502
501 503 class cg1packer(object):
502 504 deltaheader = _CHANGEGROUPV1_DELTA_HEADER
503 505 version = '01'
504 506 def __init__(self, repo, bundlecaps=None):
505 507 """Given a source repo, construct a bundler.
506 508
507 509 bundlecaps is optional and can be used to specify the set of
508 510 capabilities which can be used to build the bundle. While bundlecaps is
509 511 unused in core Mercurial, extensions rely on this feature to communicate
510 512 capabilities to customize the changegroup packer.
511 513 """
512 514 # Set of capabilities we can use to build the bundle.
513 515 if bundlecaps is None:
514 516 bundlecaps = set()
515 517 self._bundlecaps = bundlecaps
516 518 # experimental config: bundle.reorder
517 519 reorder = repo.ui.config('bundle', 'reorder')
518 520 if reorder == 'auto':
519 521 reorder = None
520 522 else:
521 523 reorder = stringutil.parsebool(reorder)
522 524 self._repo = repo
523 525 self._reorder = reorder
524 526 self._progress = repo.ui.progress
525 527 if self._repo.ui.verbose and not self._repo.ui.debugflag:
526 528 self._verbosenote = self._repo.ui.note
527 529 else:
528 530 self._verbosenote = lambda s: None
529 531
530 532 def close(self):
531 533 return closechunk()
532 534
533 535 def fileheader(self, fname):
534 536 return chunkheader(len(fname)) + fname
535 537
536 538 # Extracted both for clarity and for overriding in extensions.
537 539 def _sortgroup(self, revlog, nodelist, lookup):
538 540 """Sort nodes for change group and turn them into revnums."""
539 541 # for generaldelta revlogs, we linearize the revs; this will both be
540 542 # much quicker and generate a much smaller bundle
541 543 if (revlog._generaldelta and self._reorder is None) or self._reorder:
542 544 dag = dagutil.revlogdag(revlog)
543 545 return dag.linearize(set(revlog.rev(n) for n in nodelist))
544 546 else:
545 547 return sorted([revlog.rev(n) for n in nodelist])
546 548
547 549 def group(self, nodelist, revlog, lookup, units=None):
548 550 """Calculate a delta group, yielding a sequence of changegroup chunks
549 551 (strings).
550 552
551 553 Given a list of changeset revs, return a set of deltas and
552 554 metadata corresponding to nodes. The first delta is
553 555 first parent(nodelist[0]) -> nodelist[0], the receiver is
554 556 guaranteed to have this parent as it has all history before
555 557 these changesets. In the case firstparent is nullrev the
556 558 changegroup starts with a full revision.
557 559
558 560 If units is not None, progress detail will be generated, units specifies
559 561 the type of revlog that is touched (changelog, manifest, etc.).
560 562 """
561 563 # if we don't have any revisions touched by these changesets, bail
562 564 if len(nodelist) == 0:
563 565 yield self.close()
564 566 return
565 567
566 568 revs = self._sortgroup(revlog, nodelist, lookup)
567 569
568 570 # add the parent of the first rev
569 571 p = revlog.parentrevs(revs[0])[0]
570 572 revs.insert(0, p)
571 573
572 574 # build deltas
573 575 total = len(revs) - 1
574 576 msgbundling = _('bundling')
575 577 for r in xrange(len(revs) - 1):
576 578 if units is not None:
577 579 self._progress(msgbundling, r + 1, unit=units, total=total)
578 580 prev, curr = revs[r], revs[r + 1]
579 581 linknode = lookup(revlog.node(curr))
580 582 for c in self.revchunk(revlog, curr, prev, linknode):
581 583 yield c
582 584
583 585 if units is not None:
584 586 self._progress(msgbundling, None)
585 587 yield self.close()
586 588
587 589 # filter any nodes that claim to be part of the known set
588 590 def prune(self, revlog, missing, commonrevs):
589 591 rr, rl = revlog.rev, revlog.linkrev
590 592 return [n for n in missing if rl(rr(n)) not in commonrevs]
591 593
592 594 def _packmanifests(self, dir, mfnodes, lookuplinknode):
593 595 """Pack flat manifests into a changegroup stream."""
594 596 assert not dir
595 597 for chunk in self.group(mfnodes, self._repo.manifestlog._revlog,
596 598 lookuplinknode, units=_('manifests')):
597 599 yield chunk
598 600
599 601 def _manifestsdone(self):
600 602 return ''
601 603
602 604 def generate(self, commonrevs, clnodes, fastpathlinkrev, source):
603 605 '''yield a sequence of changegroup chunks (strings)'''
604 606 repo = self._repo
605 607 cl = repo.changelog
606 608
607 609 clrevorder = {}
608 610 mfs = {} # needed manifests
609 611 fnodes = {} # needed file nodes
610 612 changedfiles = set()
611 613
612 614 # Callback for the changelog, used to collect changed files and manifest
613 615 # nodes.
614 616 # Returns the linkrev node (identity in the changelog case).
615 617 def lookupcl(x):
616 618 c = cl.read(x)
617 619 clrevorder[x] = len(clrevorder)
618 620 n = c[0]
619 621 # record the first changeset introducing this manifest version
620 622 mfs.setdefault(n, x)
621 623 # Record a complete list of potentially-changed files in
622 624 # this manifest.
623 625 changedfiles.update(c[3])
624 626 return x
625 627
626 628 self._verbosenote(_('uncompressed size of bundle content:\n'))
627 629 size = 0
628 630 for chunk in self.group(clnodes, cl, lookupcl, units=_('changesets')):
629 631 size += len(chunk)
630 632 yield chunk
631 633 self._verbosenote(_('%8.i (changelog)\n') % size)
632 634
633 635 # We need to make sure that the linkrev in the changegroup refers to
634 636 # the first changeset that introduced the manifest or file revision.
635 637 # The fastpath is usually safer than the slowpath, because the filelogs
636 638 # are walked in revlog order.
637 639 #
638 640 # When taking the slowpath with reorder=None and the manifest revlog
639 641 # uses generaldelta, the manifest may be walked in the "wrong" order.
640 642 # Without 'clrevorder', we would get an incorrect linkrev (see fix in
641 643 # cc0ff93d0c0c).
642 644 #
643 645 # When taking the fastpath, we are only vulnerable to reordering
644 646 # of the changelog itself. The changelog never uses generaldelta, so
645 647 # it is only reordered when reorder=True. To handle this case, we
646 648 # simply take the slowpath, which already has the 'clrevorder' logic.
647 649 # This was also fixed in cc0ff93d0c0c.
648 650 fastpathlinkrev = fastpathlinkrev and not self._reorder
649 651 # Treemanifests don't work correctly with fastpathlinkrev
650 652 # either, because we don't discover which directory nodes to
651 653 # send along with files. This could probably be fixed.
652 654 fastpathlinkrev = fastpathlinkrev and (
653 655 'treemanifest' not in repo.requirements)
654 656
655 657 for chunk in self.generatemanifests(commonrevs, clrevorder,
656 658 fastpathlinkrev, mfs, fnodes, source):
657 659 yield chunk
658 660 mfs.clear()
659 661 clrevs = set(cl.rev(x) for x in clnodes)
660 662
661 663 if not fastpathlinkrev:
662 664 def linknodes(unused, fname):
663 665 return fnodes.get(fname, {})
664 666 else:
665 667 cln = cl.node
666 668 def linknodes(filerevlog, fname):
667 669 llr = filerevlog.linkrev
668 670 fln = filerevlog.node
669 671 revs = ((r, llr(r)) for r in filerevlog)
670 672 return dict((fln(r), cln(lr)) for r, lr in revs if lr in clrevs)
671 673
672 674 for chunk in self.generatefiles(changedfiles, linknodes, commonrevs,
673 675 source):
674 676 yield chunk
675 677
676 678 yield self.close()
677 679
678 680 if clnodes:
679 681 repo.hook('outgoing', node=hex(clnodes[0]), source=source)
680 682
681 683 def generatemanifests(self, commonrevs, clrevorder, fastpathlinkrev, mfs,
682 684 fnodes, source):
683 685 """Returns an iterator of changegroup chunks containing manifests.
684 686
685 687 `source` is unused here, but is used by extensions like remotefilelog to
686 688 change what is sent based on pulls vs pushes, etc.
687 689 """
688 690 repo = self._repo
689 691 mfl = repo.manifestlog
690 692 dirlog = mfl._revlog.dirlog
691 693 tmfnodes = {'': mfs}
692 694
693 695 # Callback for the manifest, used to collect linkrevs for filelog
694 696 # revisions.
695 697 # Returns the linkrev node (collected in lookupcl).
696 698 def makelookupmflinknode(dir, nodes):
697 699 if fastpathlinkrev:
698 700 assert not dir
699 701 return mfs.__getitem__
700 702
701 703 def lookupmflinknode(x):
702 704 """Callback for looking up the linknode for manifests.
703 705
704 706 Returns the linkrev node for the specified manifest.
705 707
706 708 SIDE EFFECT:
707 709
708 710 1) fclnodes gets populated with the list of relevant
709 711 file nodes if we're not using fastpathlinkrev
710 712 2) When treemanifests are in use, collects treemanifest nodes
711 713 to send
712 714
713 715 Note that this means manifests must be completely sent to
714 716 the client before you can trust the list of files and
715 717 treemanifests to send.
716 718 """
717 719 clnode = nodes[x]
718 720 mdata = mfl.get(dir, x).readfast(shallow=True)
719 721 for p, n, fl in mdata.iterentries():
720 722 if fl == 't': # subdirectory manifest
721 723 subdir = dir + p + '/'
722 724 tmfclnodes = tmfnodes.setdefault(subdir, {})
723 725 tmfclnode = tmfclnodes.setdefault(n, clnode)
724 726 if clrevorder[clnode] < clrevorder[tmfclnode]:
725 727 tmfclnodes[n] = clnode
726 728 else:
727 729 f = dir + p
728 730 fclnodes = fnodes.setdefault(f, {})
729 731 fclnode = fclnodes.setdefault(n, clnode)
730 732 if clrevorder[clnode] < clrevorder[fclnode]:
731 733 fclnodes[n] = clnode
732 734 return clnode
733 735 return lookupmflinknode
734 736
735 737 size = 0
736 738 while tmfnodes:
737 739 dir, nodes = tmfnodes.popitem()
738 740 prunednodes = self.prune(dirlog(dir), nodes, commonrevs)
739 741 if not dir or prunednodes:
740 742 for x in self._packmanifests(dir, prunednodes,
741 743 makelookupmflinknode(dir, nodes)):
742 744 size += len(x)
743 745 yield x
744 746 self._verbosenote(_('%8.i (manifests)\n') % size)
745 747 yield self._manifestsdone()
746 748
747 749 # The 'source' parameter is useful for extensions
748 750 def generatefiles(self, changedfiles, linknodes, commonrevs, source):
749 751 repo = self._repo
750 752 progress = self._progress
751 753 msgbundling = _('bundling')
752 754
753 755 total = len(changedfiles)
754 756 # for progress output
755 757 msgfiles = _('files')
756 758 for i, fname in enumerate(sorted(changedfiles)):
757 759 filerevlog = repo.file(fname)
758 760 if not filerevlog:
759 761 raise error.Abort(_("empty or missing revlog for %s") % fname)
760 762
761 763 linkrevnodes = linknodes(filerevlog, fname)
762 764 # Lookup table for filenodes; we collected the linkrev nodes above in the
763 765 # fastpath case and with lookupmf in the slowpath case.
764 766 def lookupfilelog(x):
765 767 return linkrevnodes[x]
766 768
767 769 filenodes = self.prune(filerevlog, linkrevnodes, commonrevs)
768 770 if filenodes:
769 771 progress(msgbundling, i + 1, item=fname, unit=msgfiles,
770 772 total=total)
771 773 h = self.fileheader(fname)
772 774 size = len(h)
773 775 yield h
774 776 for chunk in self.group(filenodes, filerevlog, lookupfilelog):
775 777 size += len(chunk)
776 778 yield chunk
777 779 self._verbosenote(_('%8.i %s\n') % (size, fname))
778 780 progress(msgbundling, None)
779 781
780 782 def deltaparent(self, revlog, rev, p1, p2, prev):
781 783 if not revlog.candelta(prev, rev):
782 784 raise error.ProgrammingError('cg1 should not be used in this case')
783 785 return prev
784 786
785 787 def revchunk(self, revlog, rev, prev, linknode):
786 788 node = revlog.node(rev)
787 789 p1, p2 = revlog.parentrevs(rev)
788 790 base = self.deltaparent(revlog, rev, p1, p2, prev)
789 791
790 792 prefix = ''
791 793 if revlog.iscensored(base) or revlog.iscensored(rev):
792 794 try:
793 795 delta = revlog.revision(node, raw=True)
794 796 except error.CensoredNodeError as e:
795 797 delta = e.tombstone
796 798 if base == nullrev:
797 799 prefix = mdiff.trivialdiffheader(len(delta))
798 800 else:
799 801 baselen = revlog.rawsize(base)
800 802 prefix = mdiff.replacediffheader(baselen, len(delta))
801 803 elif base == nullrev:
802 804 delta = revlog.revision(node, raw=True)
803 805 prefix = mdiff.trivialdiffheader(len(delta))
804 806 else:
805 807 delta = revlog.revdiff(base, rev)
806 808 p1n, p2n = revlog.parents(node)
807 809 basenode = revlog.node(base)
808 810 flags = revlog.flags(rev)
809 811 meta = self.builddeltaheader(node, p1n, p2n, basenode, linknode, flags)
810 812 meta += prefix
811 813 l = len(meta) + len(delta)
812 814 yield chunkheader(l)
813 815 yield meta
814 816 yield delta
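# On the wire, each revision is thus: a chunk length header, the
# version-specific delta header (plus an optional diff-header prefix in
# the censored and full-snapshot cases), followed by the delta data.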
815 817 def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
816 818 # do nothing with basenode, it is implicitly the previous one in HG10
817 819 # do nothing with flags, it is implicitly 0 for cg1 and cg2
818 820 return struct.pack(self.deltaheader, node, p1n, p2n, linknode)
819 821
820 822 class cg2packer(cg1packer):
821 823 version = '02'
822 824 deltaheader = _CHANGEGROUPV2_DELTA_HEADER
823 825
824 826 def __init__(self, repo, bundlecaps=None):
825 827 super(cg2packer, self).__init__(repo, bundlecaps)
826 828 if self._reorder is None:
827 829 # Since generaldelta is directly supported by cg2, reordering
828 830 # generally doesn't help, so we disable it by default (treating
829 831 # bundle.reorder=auto just like bundle.reorder=False).
830 832 self._reorder = False
831 833
832 834 def deltaparent(self, revlog, rev, p1, p2, prev):
833 835 dp = revlog.deltaparent(rev)
834 836 if dp == nullrev and revlog.storedeltachains:
835 837 # Avoid sending full revisions when delta parent is null. Pick prev
836 838 # in that case. It's tempting to pick p1 in this case, as p1 will
837 839 # be smaller in the common case. However, computing a delta against
838 840 # p1 may require resolving the raw text of p1, which could be
839 841 # expensive. The revlog caches should have prev cached, meaning
840 842 # less CPU for changegroup generation. There is likely room to add
841 843 # a flag and/or config option to control this behavior.
842 844 base = prev
843 845 elif dp == nullrev:
844 846 # revlog is configured to use full snapshot for a reason,
845 847 # stick to full snapshot.
846 848 base = nullrev
847 849 elif dp not in (p1, p2, prev):
848 850 # Pick prev when we can't be sure remote has the base revision.
849 851 return prev
850 852 else:
851 853 base = dp
852 854 if base != nullrev and not revlog.candelta(base, rev):
853 855 base = nullrev
854 856 return base
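# Summary of the base selection above:
#   dp == nullrev, delta chains stored  -> prev
#   dp == nullrev, no delta chains      -> nullrev (full snapshot)
#   dp not in (p1, p2, prev)            -> prev
#   otherwise                           -> dp, demoted to nullrev when
#                                          candelta(dp, rev) fails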
855 857
856 858 def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
857 859 # Do nothing with flags, it is implicitly 0 in cg1 and cg2
858 860 return struct.pack(self.deltaheader, node, p1n, p2n, basenode, linknode)
859 861
860 862 class cg3packer(cg2packer):
861 863 version = '03'
862 864 deltaheader = _CHANGEGROUPV3_DELTA_HEADER
863 865
864 866 def _packmanifests(self, dir, mfnodes, lookuplinknode):
865 867 if dir:
866 868 yield self.fileheader(dir)
867 869
868 870 dirlog = self._repo.manifestlog._revlog.dirlog(dir)
869 871 for chunk in self.group(mfnodes, dirlog, lookuplinknode,
870 872 units=_('manifests')):
871 873 yield chunk
872 874
873 875 def _manifestsdone(self):
874 876 return self.close()
875 877
876 878 def builddeltaheader(self, node, p1n, p2n, basenode, linknode, flags):
877 879 return struct.pack(
878 880 self.deltaheader, node, p1n, p2n, basenode, linknode, flags)
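# For reference, the three delta headers pack fixed-width fields. Assuming
# the _CHANGEGROUP*_DELTA_HEADER format strings defined earlier in this
# file, the layouts are roughly:
#   cg1: node, p1, p2, linknode
#   cg2: node, p1, p2, basenode, linknode
#   cg3: node, p1, p2, basenode, linknode, flags (16-bit, big-endian)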
879 881
880 882 _packermap = {'01': (cg1packer, cg1unpacker),
881 883 # cg2 adds support for exchanging generaldelta
882 884 '02': (cg2packer, cg2unpacker),
883 885 # cg3 adds support for exchanging revlog flags and treemanifests
884 886 '03': (cg3packer, cg3unpacker),
885 887 }
886 888
887 889 def allsupportedversions(repo):
888 890 versions = set(_packermap.keys())
889 891 if not (repo.ui.configbool('experimental', 'changegroup3') or
890 892 repo.ui.configbool('experimental', 'treemanifest') or
891 893 'treemanifest' in repo.requirements):
892 894 versions.discard('03')
893 895 return versions
894 896
895 897 # Changegroup versions that can be applied to the repo
896 898 def supportedincomingversions(repo):
897 899 return allsupportedversions(repo)
898 900
899 901 # Changegroup versions that can be created from the repo
900 902 def supportedoutgoingversions(repo):
901 903 versions = allsupportedversions(repo)
902 904 if 'treemanifest' in repo.requirements:
903 905 # Versions 01 and 02 support only flat manifests and it's just too
904 906 # expensive to convert between the flat manifest and tree manifest on
905 907 # the fly. Since tree manifests are hashed differently, all of history
906 908 # would have to be converted. Instead, we simply don't even pretend to
907 909 # support versions 01 and 02.
908 910 versions.discard('01')
909 911 versions.discard('02')
910 912 if NARROW_REQUIREMENT in repo.requirements:
911 913 # Versions 01 and 02 don't support revlog flags, and we need to
912 914 # support that for stripping and unbundling to work.
913 915 versions.discard('01')
914 916 versions.discard('02')
917 if LFS_REQUIREMENT in repo.requirements:
918 # Versions 01 and 02 don't support revlog flags, and we need to
919 # mark LFS entries with REVIDX_EXTSTORED.
920 versions.discard('01')
921 versions.discard('02')
922
915 923 return versions
916 924
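# Illustrative sketch (not from this file): a caller negotiating a common
# version with a peer might intersect these sets and prefer the newest:
#
#   common = supportedoutgoingversions(repo) & set(remoteversions)
#   version = max(common)  # 'remoteversions' is assumed peer-advertised data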
917 925 def localversion(repo):
918 926 # Finds the best version to use for bundles that are meant to be used
919 927 # locally, such as those from strip and shelve, and temporary bundles.
920 928 return max(supportedoutgoingversions(repo))
921 929
922 930 def safeversion(repo):
923 931 # Finds the smallest version that it's safe to assume clients of the repo
924 932 # will support. For example, all hg versions that support generaldelta also
925 933 # support changegroup 02.
926 934 versions = supportedoutgoingversions(repo)
927 935 if 'generaldelta' in repo.requirements:
928 936 versions.discard('01')
929 937 assert versions
930 938 return min(versions)
931 939
932 940 def getbundler(version, repo, bundlecaps=None):
933 941 assert version in supportedoutgoingversions(repo)
934 942 return _packermap[version][0](repo, bundlecaps)
935 943
936 944 def getunbundler(version, fh, alg, extras=None):
937 945 return _packermap[version][1](fh, alg, extras=extras)
938 946
939 947 def _changegroupinfo(repo, nodes, source):
940 948 if repo.ui.verbose or source == 'bundle':
941 949 repo.ui.status(_("%d changesets found\n") % len(nodes))
942 950 if repo.ui.debugflag:
943 951 repo.ui.debug("list of changesets:\n")
944 952 for node in nodes:
945 953 repo.ui.debug("%s\n" % hex(node))
946 954
947 955 def makechangegroup(repo, outgoing, version, source, fastpath=False,
948 956 bundlecaps=None):
949 957 cgstream = makestream(repo, outgoing, version, source,
950 958 fastpath=fastpath, bundlecaps=bundlecaps)
951 959 return getunbundler(version, util.chunkbuffer(cgstream), None,
952 960 {'clcount': len(outgoing.missing) })
953 961
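# Hypothetical usage sketch: 'outgoing' would typically come from the
# discovery machinery (e.g. discovery.findcommonoutgoing) in real callers:
#
#   cg = makechangegroup(repo, outgoing, '02', 'push')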
954 962 def makestream(repo, outgoing, version, source, fastpath=False,
955 963 bundlecaps=None):
956 964 bundler = getbundler(version, repo, bundlecaps=bundlecaps)
957 965
958 966 repo = repo.unfiltered()
959 967 commonrevs = outgoing.common
960 968 csets = outgoing.missing
961 969 heads = outgoing.missingheads
962 970 # We go through the fast path if we get told to, or if all (unfiltered)
963 971 # heads have been requested (since we then know all linkrevs will
964 972 # be pulled by the client).
965 973 heads.sort()
966 974 fastpathlinkrev = fastpath or (
967 975 repo.filtername is None and heads == sorted(repo.heads()))
968 976
969 977 repo.hook('preoutgoing', throw=True, source=source)
970 978 _changegroupinfo(repo, csets, source)
971 979 return bundler.generate(commonrevs, csets, fastpathlinkrev, source)
972 980
973 981 def _addchangegroupfiles(repo, source, revmap, trp, expectedfiles, needfiles):
974 982 revisions = 0
975 983 files = 0
976 984 for chunkdata in iter(source.filelogheader, {}):
977 985 files += 1
978 986 f = chunkdata["filename"]
979 987 repo.ui.debug("adding %s revisions\n" % f)
980 988 repo.ui.progress(_('files'), files, unit=_('files'),
981 989 total=expectedfiles)
982 990 fl = repo.file(f)
983 991 o = len(fl)
984 992 try:
985 993 deltas = source.deltaiter()
986 994 if not fl.addgroup(deltas, revmap, trp):
987 995 raise error.Abort(_("received file revlog group is empty"))
988 996 except error.CensoredBaseError as e:
989 997 raise error.Abort(_("received delta base is censored: %s") % e)
990 998 revisions += len(fl) - o
991 999 if f in needfiles:
992 1000 needs = needfiles[f]
993 1001 for new in xrange(o, len(fl)):
994 1002 n = fl.node(new)
995 1003 if n in needs:
996 1004 needs.remove(n)
997 1005 else:
998 1006 raise error.Abort(
999 1007 _("received spurious file revlog entry"))
1000 1008 if not needs:
1001 1009 del needfiles[f]
1002 1010 repo.ui.progress(_('files'), None)
1003 1011
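# Anything still left in needfiles was referenced by the incoming
# manifests but absent from the stream; verify that each such node
# already exists locally, aborting if it does not.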
1004 1012 for f, needs in needfiles.iteritems():
1005 1013 fl = repo.file(f)
1006 1014 for n in needs:
1007 1015 try:
1008 1016 fl.rev(n)
1009 1017 except error.LookupError:
1010 1018 raise error.Abort(
1011 1019 _('missing file data for %s:%s - run hg verify') %
1012 1020 (f, hex(n)))
1013 1021
1014 1022 return revisions, files