# worker.py - master-slave parallelism support
#
# Copyright 2013 Facebook, Inc.
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

import errno
import os
import signal
import sys
import threading
import time

try:
    import selectors
    selectors.BaseSelector
except ImportError:
    from .thirdparty import selectors2 as selectors

from .i18n import _
from . import (
    encoding,
    error,
    pycompat,
    scmutil,
    util,
)

def countcpus():
    '''try to count the number of CPUs on the system'''

    # posix
    try:
        n = int(os.sysconf(r'SC_NPROCESSORS_ONLN'))
        if n > 0:
            return n
    except (AttributeError, ValueError):
        pass

    # windows
    try:
        n = int(encoding.environ['NUMBER_OF_PROCESSORS'])
        if n > 0:
            return n
    except (KeyError, ValueError):
        pass

    return 1
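
# Editorial note (not in the original file): on Python >= 3.4 both probes
# above are subsumed by os.cpu_count(); the manual posix/windows fallbacks
# predate that API and keep the code working on Python 2.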

def _numworkers(ui):
    s = ui.config('worker', 'numcpus')
    if s:
        try:
            n = int(s)
            if n >= 1:
                return n
        except ValueError:
            raise error.Abort(_('number of cpus must be an integer'))
    return min(max(countcpus(), 4), 32)
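
# Illustrative behavior of the clamp above (editorial note): with
# 'worker.numcpus' unset, a 2-CPU host still gets max(2, 4) == 4 workers,
# while a 64-CPU host is capped at min(64, 32) == 32.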

if pycompat.isposix or pycompat.iswindows:
    _STARTUP_COST = 0.01
    # The Windows worker is thread based. If tasks are CPU bound, threads
    # in the presence of the GIL result in excessive context switching and
    # this overhead can slow down execution.
    _DISALLOW_THREAD_UNSAFE = pycompat.iswindows
else:
    _STARTUP_COST = 1e30
    _DISALLOW_THREAD_UNSAFE = False
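
# Editorial note: the effectively infinite _STARTUP_COST assigned on
# unsupported platforms drives the benefit computed in worthwhile() below
# far below zero, so parallelism is never considered worthwhile there.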
Bryan O'Sullivan
worker: estimate whether it's worth running a task in parallel...
r18636
Gregory Szorc
worker: ability to disable thread unsafe tasks...
r38754 def worthwhile(ui, costperop, nops, threadsafe=True):
Bryan O'Sullivan
worker: estimate whether it's worth running a task in parallel...
r18636 '''try to determine whether the benefit of multiple processes can
outweigh the cost of starting them'''
Gregory Szorc
worker: ability to disable thread unsafe tasks...
r38754
if not threadsafe and _DISALLOW_THREAD_UNSAFE:
return False
Bryan O'Sullivan
worker: estimate whether it's worth running a task in parallel...
r18636 linear = costperop * nops
workers = _numworkers(ui)
Gregory Szorc
worker: rename variable to reflect constant...
r38753 benefit = linear - (_STARTUP_COST * workers + linear / workers)
Bryan O'Sullivan
worker: estimate whether it's worth running a task in parallel...
r18636 return benefit >= 0.15
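
# Worked example for the heuristic above (editorial; the numbers are
# hypothetical): with the posix _STARTUP_COST of 0.01, 4 workers, and 1000
# items costing 0.001 each, linear = 1.0 and
#     benefit = 1.0 - (0.01 * 4 + 1.0 / 4) = 0.71 >= 0.15
# so the work is parallelized. With only 100 items, linear = 0.1 and
# benefit = 0.1 - (0.04 + 0.025) = 0.035, so it stays single-process.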

def worker(ui, costperarg, func, staticargs, args, hasretval=False,
           threadsafe=True):
    '''run a function, possibly in parallel in multiple worker
    processes.

    returns a progress iterator

    costperarg - cost of a single task

    func - function to run. It is expected to return a progress iterator.

    staticargs - arguments to pass to every invocation of the function

    args - arguments to split into chunks, to pass to individual
    workers

    hasretval - when True, func and the current function return a progress
    iterator and then a list (encoded as an iterator that yields many
    (False, ..) pairs and then a single (True, list)). The resulting list
    is in the natural order.

    threadsafe - whether work items are thread safe and can be executed using
    a thread-based worker. Should be disabled for CPU heavy tasks that don't
    release the GIL.
    '''
    enabled = ui.configbool('worker', 'enabled')
    if enabled and worthwhile(ui, costperarg, len(args), threadsafe=threadsafe):
        return _platformworker(ui, func, staticargs, args, hasretval)
    return func(*staticargs + (args,))
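
# Minimal usage sketch (editorial; 'updatefiles', 'repo', and 'files' are
# hypothetical placeholders, not Mercurial API):
#
#     def updatefiles(ui, repo, files):
#         for f in files:
#             ...apply the change to f...
#             yield 1, f  # (progress increment, item) pairs
#
#     for progress, f in worker(ui, 0.001, updatefiles, (ui, repo), files):
#         ...advance a progress bar by 'progress' units...
#
# With hasretval=True, the same iteration additionally ends with a single
# (True, list) pair carrying the concatenated per-chunk return values.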

def _posixworker(ui, func, staticargs, args, hasretval):
    workers = _numworkers(ui)
    oldhandler = signal.getsignal(signal.SIGINT)
    signal.signal(signal.SIGINT, signal.SIG_IGN)
    pids, problem = set(), [0]
    def killworkers():
        # unregister SIGCHLD handler as all children will be killed. This
        # function shouldn't be interrupted by another SIGCHLD; otherwise pids
        # could be updated while iterating, which would cause inconsistency.
        signal.signal(signal.SIGCHLD, oldchldhandler)
        # if one worker bails, there's no good reason to wait for the rest
        for p in pids:
            try:
                os.kill(p, signal.SIGTERM)
            except OSError as err:
                if err.errno != errno.ESRCH:
                    raise
    def waitforworkers(blocking=True):
        for pid in pids.copy():
            p = st = 0
            while True:
                try:
                    p, st = os.waitpid(pid, (0 if blocking else os.WNOHANG))
                    break
                except OSError as e:
                    if e.errno == errno.EINTR:
                        continue
                    elif e.errno == errno.ECHILD:
                        # the child is already reaped, but pids hasn't been
                        # updated yet (maybe we were interrupted just after
                        # waitpid)
                        pids.discard(pid)
                        break
                    else:
                        raise
            if not p:
                # skip subsequent steps, because the child process should
                # still be running in this case
                continue
            pids.discard(p)
            st = _exitstatus(st)
            if st and not problem[0]:
                problem[0] = st
    def sigchldhandler(signum, frame):
        waitforworkers(blocking=False)
        if problem[0]:
            killworkers()
    oldchldhandler = signal.signal(signal.SIGCHLD, sigchldhandler)
    ui.flush()
    parentpid = os.getpid()
    pipes = []
    retvals = []
    for i, pargs in enumerate(partition(args, workers)):
        # Every worker gets its own pipe to send results on, so we don't
        # have to implement atomic writes larger than PIPE_BUF. Each forked
        # process has its own pipe's descriptors in the local variables, and
        # the parent process has the full list of pipe descriptors (and it
        # doesn't really care what order they're in).
        rfd, wfd = os.pipe()
        pipes.append((rfd, wfd))
        retvals.append(None)
        # make sure we use os._exit in all worker code paths. otherwise the
        # worker may do some clean-ups which could cause surprises like
        # deadlock. see sshpeer.cleanup for example.
        # override error handling *before* fork. this is necessary because
        # exception (signal) may arrive after fork, before "pid =" assignment
        # completes, and other exception handler (dispatch.py) can lead to
        # unexpected code path without os._exit.
        ret = -1
        try:
            pid = os.fork()
            if pid == 0:
                signal.signal(signal.SIGINT, oldhandler)
                signal.signal(signal.SIGCHLD, oldchldhandler)

                def workerfunc():
                    for r, w in pipes[:-1]:
                        os.close(r)
                        os.close(w)
                    os.close(rfd)
                    for result in func(*(staticargs + (pargs,))):
                        os.write(wfd, util.pickle.dumps((i, result)))
                    return 0

                ret = scmutil.callcatch(ui, workerfunc)
        except: # parent re-raises, child never returns
            if os.getpid() == parentpid:
                raise
            exctype = sys.exc_info()[0]
            force = not issubclass(exctype, KeyboardInterrupt)
            ui.traceback(force=force)
        finally:
            if os.getpid() != parentpid:
                try:
                    ui.flush()
                except: # never returns, no re-raises
                    pass
                finally:
                    os._exit(ret & 255)
        pids.add(pid)
    selector = selectors.DefaultSelector()
    for rfd, wfd in pipes:
        os.close(wfd)
        selector.register(os.fdopen(rfd, r'rb', 0), selectors.EVENT_READ)
    def cleanup():
        signal.signal(signal.SIGINT, oldhandler)
        waitforworkers()
        signal.signal(signal.SIGCHLD, oldchldhandler)
        selector.close()
        return problem[0]
    try:
        openpipes = len(pipes)
        while openpipes > 0:
            for key, events in selector.select():
                try:
                    i, res = util.pickle.load(key.fileobj)
                    if hasretval and res[0]:
                        retvals[i] = res[1]
                    else:
                        yield res
                except EOFError:
                    selector.unregister(key.fileobj)
                    key.fileobj.close()
                    openpipes -= 1
                except IOError as e:
                    if e.errno == errno.EINTR:
                        continue
                    raise
    except: # re-raises
        killworkers()
        cleanup()
        raise
    status = cleanup()
    if status:
        if status < 0:
            os.kill(os.getpid(), -status)
        sys.exit(status)
    if hasretval:
        yield True, sum(retvals, [])

def _posixexitstatus(code):
    '''convert a posix exit status into the same form returned by
    os.spawnv

    returns None if the process was stopped instead of exiting'''
    if os.WIFEXITED(code):
        return os.WEXITSTATUS(code)
    elif os.WIFSIGNALED(code):
        return -os.WTERMSIG(code)
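
# Illustrative conversions (editorial): a child that exits normally with
# code 1 maps to 1; a child killed by SIGTERM (signal 15) maps to -15,
# matching os.spawnv; a child that was merely stopped matches neither
# branch and maps to None.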

def _windowsworker(ui, func, staticargs, args, hasretval):
    class Worker(threading.Thread):
        def __init__(self, taskqueue, resultqueue, func, staticargs, *args,
                     **kwargs):
            threading.Thread.__init__(self, *args, **kwargs)
            self._taskqueue = taskqueue
            self._resultqueue = resultqueue
            self._func = func
            self._staticargs = staticargs
            self._interrupted = False
            self.daemon = True
            self.exception = None

        def interrupt(self):
            self._interrupted = True

        def run(self):
            try:
                while not self._taskqueue.empty():
                    try:
                        i, args = self._taskqueue.get_nowait()
                        for res in self._func(*self._staticargs + (args,)):
                            self._resultqueue.put((i, res))
                            # threading doesn't provide a native way to
                            # interrupt execution. handle it manually at every
                            # iteration.
                            if self._interrupted:
                                return
                    except pycompat.queue.Empty:
                        break
            except Exception as e:
                # store the exception such that the main thread can resurface
                # it as if the func was running without workers.
                self.exception = e
                raise
    threads = []
    def trykillworkers():
        # Allow up to 1 second to clean worker threads nicely
        cleanupend = time.time() + 1
        for t in threads:
            t.interrupt()
        for t in threads:
            remainingtime = cleanupend - time.time()
            t.join(remainingtime)
            if t.is_alive():
                # pass over the workers joining failure. it is more
                # important to surface the initial exception than the
                # fact that one of the workers may be processing a large
                # task and does not get to handle the interruption.
                ui.warn(_("failed to kill worker threads while "
                          "handling an exception\n"))
                return

    workers = _numworkers(ui)
    resultqueue = pycompat.queue.Queue()
    taskqueue = pycompat.queue.Queue()
    retvals = []
    # partition work to more pieces than workers to minimize the chance
    # of uneven distribution of large tasks between the workers
    for pargs in enumerate(partition(args, workers * 20)):
        retvals.append(None)
        taskqueue.put(pargs)
    for _i in range(workers):
        t = Worker(taskqueue, resultqueue, func, staticargs)
        threads.append(t)
        t.start()
    try:
        while len(threads) > 0:
            while not resultqueue.empty():
                (i, res) = resultqueue.get()
                if hasretval and res[0]:
                    retvals[i] = res[1]
                else:
                    yield res
            threads[0].join(0.05)
            finishedthreads = [_t for _t in threads if not _t.is_alive()]
            for t in finishedthreads:
                if t.exception is not None:
                    raise t.exception
                threads.remove(t)
    except (Exception, KeyboardInterrupt): # re-raises
        trykillworkers()
        raise
    while not resultqueue.empty():
        (i, res) = resultqueue.get()
        if hasretval and res[0]:
            retvals[i] = res[1]
        else:
            yield res
    if hasretval:
        yield True, sum(retvals, [])

if pycompat.iswindows:
    _platformworker = _windowsworker
else:
    _platformworker = _posixworker
    _exitstatus = _posixexitstatus

def partition(lst, nslices):
    '''partition a list into N slices of roughly equal size

    The current strategy takes every Nth element from the input. If
    we ever write workers that need to preserve grouping in input
    we should consider allowing callers to specify a partition strategy.

    mpm is not a fan of this partitioning strategy when files are involved.
    In his words:

        Single-threaded Mercurial makes a point of creating and visiting
        files in a fixed order (alphabetical). When creating files in order,
        a typical filesystem is likely to allocate them on nearby regions on
        disk. Thus, when revisiting in the same order, locality is maximized
        and various forms of OS and disk-level caching and read-ahead get a
        chance to work.

        This effect can be quite significant on spinning disks. I discovered
        it circa Mercurial v0.4 when revlogs were named by hashes of
        filenames. Tarring a repo and copying it to another disk effectively
        randomized the revlog ordering on disk by sorting the revlogs by
        hash, and suddenly performance of my kernel checkout benchmark
        dropped by ~10x because the "working set" of sectors visited no
        longer fit in the drive's cache and the workload switched from
        streaming to random I/O.

        What we should really be doing is have workers read filenames from
        an ordered queue. This preserves locality and also keeps any worker
        from getting more than one file out of balance.
    '''
    for i in range(nslices):
        yield lst[i::nslices]
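
# Example of the every-Nth slicing above (editorial):
#     list(partition([1, 2, 3, 4, 5, 6], 3)) == [[1, 4], [2, 5], [3, 6]]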