test-largefiles-small-disk.t

Test how largefiles abort in case the disk runs full

  $ cat > criple.py <<EOF
  > from __future__ import absolute_import
  > import errno
  > import os
  > import shutil
  > from mercurial import util
  > #
  > # this makes the original largefiles code abort:
  > _origcopyfileobj = shutil.copyfileobj
  > def copyfileobj(fsrc, fdst, length=16*1024):
  >     # allow journal files (used by transaction) to be written
  >     if b'journal.' in fdst.name:
  >         return _origcopyfileobj(fsrc, fdst, length)
  >     fdst.write(fsrc.read(4))
  >     raise IOError(errno.ENOSPC, os.strerror(errno.ENOSPC))
  > shutil.copyfileobj = copyfileobj
  > #
  > # this makes the rewritten code abort:
  > def filechunkiter(f, size=131072, limit=None):
  >     yield f.read(4)
  >     raise IOError(errno.ENOSPC, os.strerror(errno.ENOSPC))
  > util.filechunkiter = filechunkiter
  > #
  > def oslink(src, dest):
  >     raise OSError("no hardlinks, try copying instead")
  > util.oslink = oslink
  > EOF
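
criple.py works purely by monkeypatching: it is loaded as an extension, so its
module-level assignments run before largefiles touches the store, and every
later call to shutil.copyfileobj, util.filechunkiter or util.oslink hits the
injected failure. A minimal standalone sketch of that technique, with
hypothetical names and outside the test harness, looks like this:

  import errno
  import io
  import os
  import shutil

  # Illustrative only: rebinding a module attribute changes the behaviour seen
  # by every caller that looks the function up through the module, which is
  # how criple.py makes largefiles believe the disk is full.
  def _enospc_copyfileobj(fsrc, fdst, length=16 * 1024):
      raise IOError(errno.ENOSPC, os.strerror(errno.ENOSPC))

  shutil.copyfileobj = _enospc_copyfileobj

  try:
      shutil.copyfileobj(io.BytesIO(b'data'), io.BytesIO())
  except IOError as exc:
      print(exc.strerror)  # No space left on device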
$ echo "[extensions]" >> $HGRCPATH
$ echo "largefiles =" >> $HGRCPATH
$ hg init alice
$ cd alice
$ echo "this is a very big file" > big
$ hg add --large big
$ hg commit --config extensions.criple=$TESTTMP/criple.py -m big
abort: No space left on device
[255]
The largefile is not created in .hg/largefiles:
$ ls .hg/largefiles
dirstate
The user cache is not even created:
>>> import os; os.path.exists("$HOME/.cache/largefiles/")
False

Make the commit with space on the device:

  $ hg commit -m big

Now make a clone with a full disk, and make sure lfutil.link function
makes copies instead of hardlinks:

  $ cd ..
  $ hg --config extensions.criple=$TESTTMP/criple.py clone --pull alice bob
  requesting all changes
  adding changesets
  adding manifests
  adding file changes
  added 1 changesets with 1 changes to 1 files
  new changesets 390cf214e9ac
  updating to branch default
  getting changed largefiles
  abort: No space left on device
  [255]

The largefile is not created in .hg/largefiles:

  $ ls bob/.hg/largefiles
  dirstate
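
The clone above relies on lfutil.link giving up on hardlinks (the patched
util.oslink raises) and copying the file data instead, which is what finally
trips the patched filechunkiter. A minimal sketch of that "hardlink, else
copy" fallback pattern, using hypothetical names rather than the real lfutil
code, is:

  import os
  import shutil

  def link_or_copy(src, dest):
      # Sketch of the fallback this test exercises: prefer a cheap hardlink,
      # fall back to copying the file contents when hardlinking is refused.
      try:
          os.link(src, dest)
      except OSError:
          shutil.copy(src, dest)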