context: write dirstate out explicitly after marking files as clean

To detect a change of a file without redundant comparison of file content,
dirstate recognizes a file as certainly clean if:

  (1) it is already known as "normal",
  (2) the dirstate entry for it has a valid (= not "-1") timestamp, and
  (3) its mode, size and timestamp on the filesystem are the same as the
      ones expected in dirstate

This works as expected in many cases, but doesn't in the corner case where
changing a file keeps its mode, size and timestamp on the filesystem.

The timetable below shows the steps in one typical such situation:

  ---- ----------------------------------- ----------------
                                           timestamp of "f"
                                           ----------------
                                           dirstate   file-
  time   action                            mem  file  system
  ---- ----------------------------------- ---- ----- ------
  N                                              -1    ***
       - make file "f" clean                            N

       - execute 'hg foobar'
         - instantiate 'dirstate'          -1   -1
         - 'dirstate.normal("f")'          N    -1
           (e.g. via dirty check)

       - change "f", but keep size                      N
  N+1
       - release wlock
         - 'dirstate.write()'              N    N
       - 'hg status' shows "f" as "clean"  N    N       N
  ---- ----------------------------------- ---- ----- ------

The most important point is that 'dirstate.write()' is executed at N+1 or
later, which causes the dirstate timestamp N of "f" to be written out
successfully. If it were executed at N, 'parsers.pack_dirstate()' would
replace timestamp N with "-1" before actually writing the dirstate out.

Occasional test failures due to unexpected file status are a typical
example of this corner case: batch execution with a small working
directory finishes in no time and rarely satisfies condition (2) above.

This issue can occur in the cases below:

  - 'hg revert --rev REV' for revisions other than the parent
  - failure of 'merge.update()' before 'merge.recordupdates()'

The root cause of this issue is that files are changed without flushing
in-memory dirstate changes via 'repo.commit()' (even though omitting
'dirstate.normallookup()' on changed files also causes this issue).

To detect changes of files correctly, this patch writes in-memory dirstate
changes out explicitly after marking files as clean in
'workingctx._checklookup()', which is invoked via 'repo.status()'.

After this change, the timetable becomes:

  ---- ----------------------------------- ----------------
                                           timestamp of "f"
                                           ----------------
                                           dirstate   file-
  time   action                            mem  file  system
  ---- ----------------------------------- ---- ----- ------
  N                                              -1    ***
       - make file "f" clean                            N

       - execute 'hg foobar'
         - instantiate 'dirstate'          -1   -1
         - 'dirstate.normal("f")'          N    -1
           (e.g. via dirty check)
       ----------------------------------- ---- ----- ------
         - 'dirstate.write()'              -1   -1
       ----------------------------------- ---- ----- ------
       - change "f", but keep size                      N
  N+1
       - release wlock
         - 'dirstate.write()'              -1   -1
       - 'hg status'                       -1   -1      N
  ---- ----------------------------------- ---- ----- ------

To reproduce this issue reliably in tests, this patch emulates some timing
critical actions as below:

  - timestamp of "f" in '.hg/dirstate' is -1 at the beginning

    'hg debugrebuildstate' before command invocation ensures it.

  - make file "f" clean at N, and change "f" at N

    'touch -t 200001010000' before and after command invocation changes
    the mtime of "f" to "2000-01-01 00:00" (= N).

  - invoke 'dirstate.write()' via 'repo.status()' at N

    'fakedirstatewritetime.py' forces 'pack_dirstate()' to use
    "2000-01-01 00:00" as "now", but only if 'pack_dirstate()' is invoked
    via 'workingctx._checklookup()'.

  - invoke 'dirstate.write()' via releasing wlock at N+1 (or "not at N")

    'pack_dirstate()' invoked via releasing wlock uses the actual
    timestamp at runtime as "now", which should differ from the
    "2000-01-01 00:00" of "f".

BTW, this patch also changes 'test-largefiles-misc.t', because adding
'dirstate.write()' makes recent dirstate changes visible to external
processes.
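The timestamp invalidation that both timetables rely on can be illustrated
with a small sketch. The function below is not Mercurial's actual
'parsers.pack_dirstate()'; the name and signature are invented for
illustration. It only shows the rule described above: a "normal" entry whose
timestamp equals the time the dirstate is written out is recorded as "-1",
so the next 'hg status' falls back to comparing file content.

  # Illustrative sketch only -- not the real 'parsers.pack_dirstate()'.
  def pack_entry_sketch(state, mode, size, mtime, now):
      """Return the values to record in the dirstate file for one entry."""
      if state == 'n' and mtime == now:
          # The file could still be rewritten within this very second
          # without changing mode, size or mtime, so the timestamp cannot
          # be trusted; record -1 to force a content comparison on the
          # next 'hg status'.
          mtime = -1
      return (state, mode, size, mtime)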

relink.py
# Mercurial extension to provide 'hg relink' command
#
# Copyright (C) 2007 Brendan Cully <brendan@kublai.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.
"""recreates hardlinks between repository clones"""

from mercurial import cmdutil, hg, util
from mercurial.i18n import _
import os, stat
cmdtable = {}
command = cmdutil.command(cmdtable)
# Note for extension authors: ONLY specify testedwith = 'internal' for
# extensions which SHIP WITH MERCURIAL. Non-mainline extensions should
# be specifying the version(s) of Mercurial they are tested with, or
# leave the attribute unspecified.
testedwith = 'internal'

@command('relink', [], _('[ORIGIN]'))
def relink(ui, repo, origin=None, **opts):
"""recreate hardlinks between two repositories
When repositories are cloned locally, their data files will be
hardlinked so that they only use the space of a single repository.
Unfortunately, subsequent pulls into either repository will break
hardlinks for any files touched by the new changesets, even if
both repositories end up pulling the same changes.
Similarly, passing --rev to "hg clone" will fail to use any
hardlinks, falling back to a complete copy of the source
repository.
This command lets you recreate those hardlinks and reclaim that
wasted space.
This repository will be relinked to share space with ORIGIN, which
must be on the same local disk. If ORIGIN is omitted, looks for
"default-relink", then "default", in [paths].
Do not attempt any read operations on this repository while the
command is running. (Both repositories will be locked against
writes.)
"""
if (not util.safehasattr(util, 'samefile') or
not util.safehasattr(util, 'samedevice')):
raise util.Abort(_('hardlinks are not supported on this system'))
src = hg.repository(repo.baseui, ui.expandpath(origin or 'default-relink',
origin or 'default'))
ui.status(_('relinking %s to %s\n') % (src.store.path, repo.store.path))
if repo.root == src.root:
ui.status(_('there is nothing to relink\n'))
return
if not util.samedevice(src.store.path, repo.store.path):
# No point in continuing
raise util.Abort(_('source and destination are on different devices'))
locallock = repo.lock()
try:
remotelock = src.lock()
try:
candidates = sorted(collect(src, ui))
targets = prune(candidates, src.store.path, repo.store.path, ui)
do_relink(src.store.path, repo.store.path, targets, ui)
finally:
remotelock.release()
finally:
locallock.release()

def collect(src, ui):
seplen = len(os.path.sep)
candidates = []
live = len(src['tip'].manifest())
# Your average repository has some files which were deleted before
# the tip revision. We account for that by assuming that there are
# 3 tracked files for every 2 live files as of the tip version of
# the repository.
#
# mozilla-central as of 2010-06-10 had a ratio of just over 7:5.
total = live * 3 // 2
src = src.store.path
pos = 0
ui.status(_("tip has %d files, estimated total number of files: %s\n")
% (live, total))
for dirpath, dirnames, filenames in os.walk(src):
dirnames.sort()
relpath = dirpath[len(src) + seplen:]
for filename in sorted(filenames):
if filename[-2:] not in ('.d', '.i'):
continue
st = os.stat(os.path.join(dirpath, filename))
if not stat.S_ISREG(st.st_mode):
continue
pos += 1
candidates.append((os.path.join(relpath, filename), st))
ui.progress(_('collecting'), pos, filename, _('files'), total)
ui.progress(_('collecting'), None)
ui.status(_('collected %d candidate storage files\n') % len(candidates))
return candidates

def prune(candidates, src, dst, ui):
def linkfilter(src, dst, st):
try:
ts = os.stat(dst)
except OSError:
# Destination doesn't have this file?
return False
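        # src and dst already refer to the same file (hardlinked); nothing
        # to reclaim here.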
if util.samefile(src, dst):
return False
if not util.samedevice(src, dst):
# No point in continuing
raise util.Abort(
_('source and destination are on different devices'))
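        # Differing sizes imply differing contents, so this file cannot be
        # relinked.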
if st.st_size != ts.st_size:
return False
return st

    targets = []
total = len(candidates)
pos = 0
for fn, st in candidates:
pos += 1
srcpath = os.path.join(src, fn)
tgt = os.path.join(dst, fn)
ts = linkfilter(srcpath, tgt, st)
if not ts:
ui.debug('not linkable: %s\n' % fn)
continue
targets.append((fn, ts.st_size))
ui.progress(_('pruning'), pos, fn, _('files'), total)
ui.progress(_('pruning'), None)
ui.status(_('pruned down to %d probably relinkable files\n') % len(targets))
return targets

def do_relink(src, dst, files, ui):
def relinkfile(src, dst):
bak = dst + '.bak'
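        # Move the destination aside first so it can be restored if
        # hardlinking fails.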
os.rename(dst, bak)
try:
util.oslink(src, dst)
except OSError:
os.rename(bak, dst)
raise
os.remove(bak)

    CHUNKLEN = 65536
relinked = 0
savedbytes = 0
pos = 0
total = len(files)
for f, sz in files:
pos += 1
source = os.path.join(src, f)
tgt = os.path.join(dst, f)
# Binary mode, so that read() works correctly, especially on Windows
sfp = file(source, 'rb')
dfp = file(tgt, 'rb')
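        # Compare the two files chunk by chunk; only byte-identical files
        # are relinked.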
sin = sfp.read(CHUNKLEN)
while sin:
din = dfp.read(CHUNKLEN)
if sin != din:
break
sin = sfp.read(CHUNKLEN)
sfp.close()
dfp.close()
if sin:
ui.debug('not linkable: %s\n' % f)
continue
try:
relinkfile(source, tgt)
ui.progress(_('relinking'), pos, f, _('files'), total)
relinked += 1
savedbytes += sz
except OSError as inst:
ui.warn('%s: %s\n' % (tgt, str(inst)))
ui.progress(_('relinking'), None)
ui.status(_('relinked %d files (%s reclaimed)\n') %
(relinked, util.bytecount(savedbytes)))