test-clone-uncompressed.t

#require serve

Initialize repository
the status call is to check for issue5130
$ hg init server
$ cd server
$ touch foo
$ hg -q commit -A -m initial
>>> for i in range(1024):
...     with open(str(i), 'wb') as fh:
...         fh.write(str(i))
$ hg -q commit -A -m 'add a lot of files'
$ hg st
$ hg serve -p $HGPORT -d --pid-file=hg.pid
$ cat hg.pid >> $DAEMON_PIDS
$ cd ..
Basic clone
(the 1027 files reported below are the 1025 filelogs, one for foo and one for
each of the 1024 generated files, plus the changelog and the manifest)
$ hg clone --uncompressed -U http://localhost:$HGPORT clone1
streaming all changes
1027 files to transfer, 96.3 KB of data
transferred 96.3 KB in * seconds (*/sec) (glob)
searching for changes
no changes found
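Stream clone is only offered when the serving side allows it. A minimal
server-side configuration sketch, assuming the long-standing
server.uncompressed option is the switch (when it is disabled, a client
asking for --uncompressed falls back to a regular clone):

  [server]
  uncompressed = False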
Clone with background file closing enabled
$ hg --debug --config worker.backgroundclose=true --config worker.backgroundcloseminfilecount=1 clone --uncompressed -U http://localhost:$HGPORT clone-background | grep -v adding
using http://localhost:$HGPORT/
sending capabilities command
sending branchmap command
streaming all changes
sending stream_out command
1027 files to transfer, 96.3 KB of data
starting 4 threads for background file closing
transferred 96.3 KB in * seconds (*/sec) (glob)
query 1; heads
sending batch command
searching for changes
all remote heads known locally
no changes found
sending getbundle command
bundle2-input-bundle: with-transaction
bundle2-input-part: "listkeys" (params: 1 mandatory) supported
bundle2-input-part: total payload size 58
bundle2-input-part: "listkeys" (params: 1 mandatory) supported
bundle2-input-bundle: 1 parts total
checking for updated bookmarks
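The worker settings passed with --config above could presumably also live in
a configuration file; a minimal hgrc sketch using only the options exercised
by this test (the min-file-count of 1 simply forces background closing on for
this tiny repository):

  [worker]
  backgroundclose = True
  backgroundcloseminfilecount = 1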
Stream clone while repo is changing:
$ mkdir changing
$ cd changing
extension for delaying the server process so we can reliably modify the repo
while cloning
$ cat > delayer.py <<EOF
> import time
> from mercurial import extensions, vfs
> def __call__(orig, self, path, *args, **kwargs):
>     if path == 'data/f1.i':
>         time.sleep(2)
>     return orig(self, path, *args, **kwargs)
> extensions.wrapfunction(vfs.vfs, '__call__', __call__)
> EOF
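The delayer builds on extensions.wrapfunction(): the wrapper is installed in
place of the named attribute and receives the original implementation as its
first argument. A minimal standalone sketch of the same pattern, with an
illustrative logging wrapper and module name rather than anything this test
uses:

  # tracingvfs.py - illustrative only
  from mercurial import extensions, vfs

  def _tracedcall(orig, self, path, *args, **kwargs):
      # 'orig' is the unwrapped vfs.vfs.__call__; do extra work, then delegate
      print('opening %s' % path)
      return orig(self, path, *args, **kwargs)

  def uisetup(ui):
      # install the wrapper; existing callers of vfs.vfs(...) are unaffected
      extensions.wrapfunction(vfs.vfs, '__call__', _tracedcall)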
prepare repo with small and big file to cover both code paths in emitrevlogdata
$ hg init repo
$ touch repo/f1
$ $TESTDIR/seq.py 50000 > repo/f2
$ hg -R repo ci -Aqm "0"
$ hg -R repo serve -p $HGPORT1 -d --pid-file=hg.pid --config extensions.delayer=delayer.py
$ cat hg.pid >> $DAEMON_PIDS
clone while modifying the repo between stat()ing the files with the write lock
held and actually serving the file content
$ hg clone -q --uncompressed -U http://localhost:$HGPORT1 clone &
$ sleep 1
$ echo >> repo/f1
$ echo >> repo/f2
$ hg -R repo ci -m "1"
$ wait
$ hg -R clone id
000000000000

The clone succeeds even though the store files grew while they were being
streamed; because -U skips the update, the working directory of the clone
stays at the null revision, which is what "hg id" reports.