test-parseindex.t
revlog.parseindex must be able to parse the index file even if
an index entry is split between two 64k blocks. The ideal test
would be to create an index file with inline data where
64k < size < 64k + 64 (64k is the size of the read buffer, 64 is
the size of an index entry) and with an index entry starting right
before the 64k block boundary, and try to read it.
We approximate that by reducing the read buffer to 1 byte.
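
As a rough illustration of the boundary condition described above, here is a
standalone sketch (not part of the test; the helper name `straddles` is
illustrative, while the 64k buffer and 64-byte entry sizes are the ones quoted
in the note):

BUFSIZE = 64 * 1024   # assumed read buffer size (64k)
ENTRYSIZE = 64        # assumed size of one index entry

def straddles(start):
    # an entry starting at `start` is split across two reads when its
    # first and last bytes fall in different 64k blocks
    return start // BUFSIZE != (start + ENTRYSIZE - 1) // BUFSIZE

assert straddles(BUFSIZE - 10)    # starts 10 bytes before the boundary
assert not straddles(BUFSIZE)     # starts exactly on the boundary
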
  $ hg init a
  $ cd a
  $ echo abc > foo
  $ hg add foo
  $ hg commit -m 'add foo'
  $ echo >> foo
  $ hg commit -m 'change foo'
  $ hg log -r 0:
  changeset:   0:7c31755bf9b5
  user:        test
  date:        Thu Jan 01 00:00:00 1970 +0000
  summary:     add foo

  changeset:   1:26333235a41c
  tag:         tip
  user:        test
  date:        Thu Jan 01 00:00:00 1970 +0000
  summary:     change foo

  $ cat >> test.py << EOF
  > from mercurial import changelog, scmutil
  > from mercurial.node import *
  >
  > # wrap a real file object so that a 64k read request returns a
  > # single byte, forcing index entries to straddle read boundaries
  > class singlebyteread(object):
  >     def __init__(self, real):
  >         self.real = real
  >
  >     def read(self, size=-1):
  >         if size == 65536:
  >             size = 1
  >         return self.real.read(size)
  >
  >     def __getattr__(self, key):
  >         return getattr(self.real, key)
  >
  > # an opener whose file objects only hand out one byte per buffered read
  > def opener(*args):
  >     o = scmutil.opener(*args)
  >     def wrapper(*a):
  >         f = o(*a)
  >         return singlebyteread(f)
  >     return wrapper
  >
  > cl = changelog.changelog(opener('.hg/store'))
  > print len(cl), 'revisions:'
  > for r in cl:
  >     print short(cl.node(r))
  > EOF
  $ python test.py
  2 revisions:
  7c31755bf9b5
  26333235a41c

  $ cd ..
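
Outside the test harness, the single-byte wrapper trick is easy to sanity-check
on an ordinary in-memory file. This standalone sketch (io.BytesIO and the byte
counts are illustrative assumptions, not part of the test) shows that only
64k-sized requests are shrunk to one byte:

import io

class singlebyteread(object):
    def __init__(self, real):
        self.real = real
    def read(self, size=-1):
        if size == 65536:   # shrink only the buffer-sized reads
            size = 1
        return self.real.read(size)
    def __getattr__(self, key):
        return getattr(self.real, key)

f = singlebyteread(io.BytesIO(b'x' * 70000))
assert len(f.read(65536)) == 1    # a 64k request comes back one byte at a time
assert len(f.read(100)) == 100    # other sizes are passed through unchanged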