revlog: don't flush data file after every added revision

The current behavior of revlogs is to flush the data file when writing
data to it. Tracing system calls revealed that changegroup processing
incurred numerous write(2) calls for values much smaller than the
default buffer size (Python defaults to 4096, but it can be adjusted
based on detected block size at run time by CPython).
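
To make the buffering behavior concrete, here is a minimal, standalone
Python sketch (illustrative only, not Mercurial code) showing how a
flush after every small write defeats the interpreter's buffering:

    # Flushing after each record drains the userspace buffer, so every
    # 100-byte record costs its own write(2).
    with open('flushed.dat', 'wb') as fh:
        for _ in range(1000):
            fh.write(b'x' * 100)
            fh.flush()

    # Without the per-record flush, records accumulate in the buffer
    # and write(2) fires only when the buffer (typically 4096 bytes)
    # fills or the file is closed.
    with open('buffered.dat', 'wb') as fh:
        for _ in range(1000):
            fh.write(b'x' * 100)

Running both loops under strace -e write shows on the order of a 40x
difference in write(2) calls for this workload.
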
The reason we flush revlogs is so readers have all data available.
For example, the current code in revlog.py will re-open the revlog
file (instead of seeking an existing file handle) to read the text
of a revision. This happens, for instance, when starting a new delta
chain while adding several revisions from a changegroup. Yes, this
is likely sub-optimal (we should probably be sharing file descriptors
between readers and writers to avoid the flushing and associated
overhead of re-opening files).
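
As a simplified sketch of that pattern (hypothetical file name, not the
actual revlog.py code), the writer must flush before an independent
reader can see the appended bytes:

    writer = open('data.rl', 'ab')
    writer.write(b'revision data')

    # A freshly opened handle reads through the OS, so it cannot see
    # bytes still sitting in the writer's userspace buffer; the flush
    # publishes them.
    writer.flush()
    with open('data.rl', 'rb') as reader:
        text = reader.read()
    writer.close()
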
While flushing revlogs is necessary, it appears all callers are
diligent about flushing files before a read is performed (see
buildtext() in _addrevision()), making the flush in
_writeentry() redundant and unnecessary. So, we remove it. In practice,
this means we incur a write(2) (a) when the buffer is full (typically
4096 bytes) or (b) when a new delta chain is created, rather than after
every added revision. This applies to every revlog, but by volume it
mostly impacts filelogs.
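
The resulting pattern, as a hedged and much-simplified sketch (a
hypothetical helper, not the actual _writeentry() code): writes stay
buffered, and a flush happens only when a reader actually needs the
data, such as at the start of a new delta chain:

    def append_revisions(path, revisions, new_chain_every=100):
        # plays the role of revlog's buffered data file handle
        dfh = open(path, 'ab')
        for i, data in enumerate(revisions):
            dfh.write(data)  # buffered; no per-revision flush
            if i % new_chain_every == 0:
                # a new delta chain needs base text, so publish the
                # buffered bytes before re-opening the file to read
                dfh.flush()
                with open(path, 'rb') as rfh:
                    text = rfh.read()  # sees everything written so far
        dfh.close()
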
Removing the redundant flush from _writeentry() significantly
reduces the number of write(2) calls during changegroup processing on
my Linux machine. When applying a changegroup of the hg repo based on
my local repo, the total number of write(2) calls during application
of the mercurial/localrepo.py revlogs dropped from 1,320 to 217 with
this patch applied. Total I/O related system calls dropped from 1,577
to 474.

When unbundling a mozilla-central gzipped bundle (264,403 changesets
with 1,492,215 changes to 222,507 files), total write(2) calls dropped
from 1,252,881 to 827,106 (a reduction of 425,775) and total system
calls dropped from 3,601,259 to 3,178,636 (a reduction of 422,623).

While the system call reduction is significant, it appears
to have no impact on wall time on my Linux and Windows machines. Still,
fewer syscalls is fewer syscalls. Surely this can't hurt. If nothing
else, it makes examining remaining system call usage simpler and opens
the door to experimenting with the performance impact of different
buffer sizes.
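
For instance, since Python's built-in open() accepts an explicit buffer
size, a follow-up experiment could open the data file with a larger
buffer and compare write(2) counts under strace (hypothetical tuning,
not part of this patch):

    datafile_path = 'data.rl'  # hypothetical path, for illustration
    # 1 MiB userspace buffer instead of the default (typically 4096
    # bytes): fewer, larger write(2) calls at the cost of more memory.
    dfh = open(datafile_path, 'ab', 1048576)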