merge: mark file gets as not thread safe (issue5933)

In default installs, this has the effect of disabling the thread-based worker
on Windows when manifesting files in the working directory.

My measurements have shown that with revlog-based repositories, Mercurial
spends a lot of CPU time in revlog code resolving file data. This ends up
incurring a lot of context switching across threads and slows down `hg update`
operations when going from an empty working directory to the tip of the repo.

On mozilla-unified (246,351 files) on an i7-6700K (4+4 CPUs):

    before: 487s wall
    after:  360s wall (equivalent to worker.enabled=false)
    cpus=2: 379s wall

Even with only 2 threads, the thread pool is still slower.

The introduction of the thread-based worker (02b36e860e0b) states that it
resulted in a "~50%" speedup for `hg sparse --enable-profile` and
`hg sparse --disable-profile`. This disagrees with my measurements above.
I theorize a few reasons for this:

1) Removal of files from the working directory is I/O - not CPU - bound and
   should benefit from a thread pool (unless I/O is insanely fast and the GIL
   release is near instantaneous). So tests like `hg sparse --enable-profile`
   may exercise deletion throughput and aren't good benchmarks for worker
   tasks that are CPU heavy.

2) The patch was authored by someone at Facebook. The results were likely
   measured against a repository using remotefilelog. And I believe that
   revision retrieval during working directory updates with remotefilelog will
   often use a remote store, thus being I/O and not CPU bound. This probably
   resulted in an overstated performance gain.

Since there appears to be a need to enable the thread-based worker with some
stores, I've made the flagging of file gets as thread safe configurable. I've
made it experimental because I don't want to formalize a boolean flag for this
option and because this attribute is best captured against the store
implementation. But we don't have a proper store API for this yet. I'd rather
cross this bridge later.

It is possible there are revlog-based repositories that do benefit from a
thread-based worker. I didn't do very comprehensive testing. If there are, we
may want to devise a more proper algorithm for whether to use the thread-based
worker, including possibly config options to limit the number of threads to
use. But until I see evidence that justifies complexity, simplicity wins.

Differential Revision: https://phab.mercurial-scm.org/D3963
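For context, the trade-off described above is driven by Mercurial's worker
configuration. A minimal hgrc sketch follows, showing the two knobs side by
side for illustration: `worker.enabled` is the long-standing option mentioned
in the message, while the experimental option name is assumed here to be
`experimental.worker.wdir-get-thread-safe` (the boolean this change describes)
and should be verified against your Mercurial version:

    [worker]
    # Disable the worker pool entirely; per the measurements above, this is
    # roughly what the new default behavior on Windows amounts to.
    enabled = false

    [experimental]
    # Assumed name of the opt-in this change adds: marks file gets as thread
    # safe again, re-enabling the thread-based worker for stores that are
    # I/O rather than CPU bound (e.g. remotefilelog-style remote stores).
    worker.wdir-get-thread-safe = true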

hg-docker
#!/usr/bin/env python3
#
# Copyright 2018 Gregory Szorc <gregory.szorc@gmail.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
import argparse
import pathlib
import shutil
import subprocess
import sys

def get_docker() -> str:
    """Resolve the path to a working Docker client executable, exiting on failure."""
    docker = shutil.which('docker.io') or shutil.which('docker')
    if not docker:
        print('could not find docker executable')
        sys.exit(1)

    try:
        out = subprocess.check_output([docker, '-h'], stderr=subprocess.STDOUT)

        if b'Jansens' in out:
            print('%s is the Docking System Tray; try installing docker.io' %
                  docker)
            sys.exit(1)
    except subprocess.CalledProcessError as e:
        print('error calling `%s -h`: %s' % (docker, e.output))
        sys.exit(1)

    out = subprocess.check_output([docker, 'version'],
                                  stderr=subprocess.STDOUT)

    lines = out.splitlines()
    if not any(l.startswith((b'Client:', b'Client version:')) for l in lines):
        print('`%s version` does not look like Docker' % docker)
        sys.exit(1)

    if not any(l.startswith((b'Server:', b'Server version:')) for l in lines):
        print('`%s version` does not look like Docker' % docker)
        sys.exit(1)

    return docker

def get_dockerfile(path: pathlib.Path, args: list) -> bytes:
    with path.open('rb') as fh:
        df = fh.read()

    for k, v in args:
        df = df.replace(b'%%%s%%' % k, v)

    return df
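
# The helper above performs simple %%KEY%% placeholder substitution on the raw
# Dockerfile bytes. A sketch of the expected behavior, using a hypothetical
# template file and substitution key:
#
#   # Dockerfile.template contains: FROM ubuntu:%%CODENAME%%
#   get_dockerfile(pathlib.Path('Dockerfile.template'),
#                  [(b'CODENAME', b'bionic')])
#   # -> b'FROM ubuntu:bionic'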

def build_docker_image(dockerfile: pathlib.Path, params: list, tag: str):
    """Build a Docker image from a templatized Dockerfile."""
    docker = get_docker()

    dockerfile_path = pathlib.Path(dockerfile)

    dockerfile = get_dockerfile(dockerfile_path, params)

    print('building Dockerfile:')
    print(dockerfile.decode('utf-8', 'replace'))

    args = [
        docker,
        'build',
        '--build-arg', 'http_proxy',
        '--build-arg', 'https_proxy',
        '--tag', tag,
        '-',
    ]

    print('executing: %r' % args)
    subprocess.run(args, input=dockerfile, check=True)
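
# A note on the invocation above: using '-' as the build context makes
# `docker build` read the Dockerfile from stdin (with no other context files),
# which is why the rendered template is passed to the process via
# subprocess.run(..., input=dockerfile). A rough shell equivalent, with an
# illustrative tag and file name:
#
#   docker build --build-arg http_proxy --build-arg https_proxy \
#       --tag hg-example - < rendered.Dockerfile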

def command_build(args):
    build_args = []
    for arg in args.build_arg:
        k, v = arg.split('=', 1)
        build_args.append((k.encode('utf-8'), v.encode('utf-8')))

    build_docker_image(pathlib.Path(args.dockerfile),
                       build_args,
                       args.tag)

def command_docker(args):
    print(get_docker())

def main() -> int:
    parser = argparse.ArgumentParser()

    subparsers = parser.add_subparsers(title='subcommands')

    build = subparsers.add_parser('build', help='Build a Docker image')
    build.set_defaults(func=command_build)
    build.add_argument('--build-arg', action='append', default=[],
                       help='Substitution to perform in Dockerfile; '
                            'format: key=value')
    build.add_argument('dockerfile', help='path to Dockerfile to use')
    build.add_argument('tag', help='Tag to apply to created image')

    docker = subparsers.add_parser('docker-path',
                                   help='Resolve path to Docker')
    docker.set_defaults(func=command_docker)

    args = parser.parse_args()

    return args.func(args)

if __name__ == '__main__':
    sys.exit(main())
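
For reference, given the argparse setup above, invocations of this script look
roughly like the following (the template path, substitution key, and image tag
are illustrative, not taken from the repository):

    hg-docker docker-path
    hg-docker build --build-arg CODENAME=bionic \
        path/to/Dockerfile.template hg-example-image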