help: document bundle specifications

I softly formalized the concept of a "bundle specification" a while ago
when I was working on clone bundles and stream clone bundles and wanted
a more robust way to define what exactly is in a bundle file.

The concept has existed for a while. Since it is part of the clone
bundles feature and exposed to the user via the "-t" argument to
`hg bundle`, it is something we need to support for the long haul.

After the 4.1 release, I heard a few people comment that they didn't
realize you could generate zstd bundles with `hg bundle`. I'm partially
to blame for not documenting it in bundle's docstring.

Additionally, I added a hacky, experimental feature for controlling the
compression level of bundles in 76104a4899ad. As the commit message
says, I went with a quick and dirty solution out of time constraints.
Furthermore, I wanted to eventually store this configuration in the
"bundlespec" so it could be made more flexible.

Given:

a) bundlespecs are here to stay
b) we don't have great documentation of what they are, despite being a
   user-facing feature
c) the list of available compression engines and their behavior isn't
   exposed
d) we need an extensible place to modify behavior of compression
   engines

I want to move forward with formalizing bundlespecs as a user-facing
feature. This commit does that by introducing a "bundlespec" help page.
Leaning on the just-added compression engine documentation and API, the
topic also conveniently lists available compression engines and details
about them.

This makes features like zstd bundle compression more discoverable.
e.g. you can now run `hg help -k zstd` and it lists the "bundlespec"
topic.
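For context, the bundlespec passed to `hg bundle -t` names a compression
engine and a content type joined by a hyphen (e.g. `zstd-v2` or
`gzip-v2`), optionally followed by `;key=value` parameters. Below is a
minimal, self-contained sketch of that shape; it is illustrative only
(the real parser is `mercurial.exchange.parsebundlespec()`), and the
`level` parameter shown is hypothetical, along the lines planned above.

# Illustrative sketch only; Mercurial's real parser is
# mercurial.exchange.parsebundlespec(). The "level" parameter is
# hypothetical, per the compression-level plans described above.
def parsespec(spec):
    # Split off optional ";key=value" parameters.
    base, _sep, rest = spec.partition(';')
    params = {}
    if rest:
        for pair in rest.split(';'):
            key, _sep, value = pair.partition('=')
            params[key] = value
    # The base is "<compression>-<contenttype>", e.g. "zstd-v2".
    compression, _sep, version = base.partition('-')
    return compression, version, params

assert parsespec('zstd-v2') == ('zstd', 'v2', {})
assert parsespec('gzip-v2;level=9') == ('gzip', 'v2', {'level': '9'})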

# similar.py - mechanisms for finding similar files
#
# Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
#
# This software may be used and distributed according to the terms of the
# GNU General Public License version 2 or any later version.

from __future__ import absolute_import

from .i18n import _
from . import (
    bdiff,
    mdiff,
)

def _findexactmatches(repo, added, removed):
    '''find renamed files that have no changes

    Takes a list of new filectxs and a list of removed filectxs, and yields
    (before, after) tuples of exact matches.
    '''
    numfiles = len(added) + len(removed)

    # Build table of removed files: {hash(fctx.data()): [fctx, ...]}.
    # We use hash() to discard fctx.data() from memory.
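    # hash() values can collide, so each bucket is a list of candidates
    # that are confirmed byte-for-byte below before being reported.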
hashes = {}
for i, fctx in enumerate(removed):
repo.ui.progress(_('searching for exact renames'), i, total=numfiles,
unit=_('files'))
h = hash(fctx.data())
if h not in hashes:
hashes[h] = [fctx]
else:
            hashes[h].append(fctx)

    # For each added file, see if it corresponds to a removed file.
for i, fctx in enumerate(added):
repo.ui.progress(_('searching for exact renames'), i + len(removed),
total=numfiles, unit=_('files'))
adata = fctx.data()
h = hash(adata)
for rfctx in hashes.get(h, []):
# compare between actual file contents for exact identity
if adata == rfctx.data():
yield (rfctx, fctx)
break
# Done
    repo.ui.progress(_('searching for exact renames'), None)

def _ctxdata(fctx):
# lazily load text
orig = fctx.data()
    return orig, mdiff.splitnewlines(orig)

def _score(fctx, otherdata):
orig, lines = otherdata
text = fctx.data()
# bdiff.blocks() returns blocks of matching lines
# count the number of bytes in each
equal = 0
matches = bdiff.blocks(text, orig)
for x1, x2, y1, y2 in matches:
for line in lines[y1:y2]:
equal += len(line)
lengths = len(text) + len(orig)
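    # For example: identical n-byte files match on every line, so
    # equal == n and the score is 2n / 2n == 1.0, while files sharing
    # no lines score 0.0.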
    return equal * 2.0 / lengths

def score(fctx1, fctx2):
    return _score(fctx1, _ctxdata(fctx2))

def _findsimilarmatches(repo, added, removed, threshold):
    '''find potentially renamed files based on similar file content

    Takes a list of new filectxs and a list of removed filectxs, and yields
    (before, after, score) tuples of partial matches.
    '''
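    # Each added file keeps only its best-scoring removed counterpart,
    # and only when that score beats the caller-supplied threshold
    # (which seeds bestscore below).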
copies = {}
for i, r in enumerate(removed):
repo.ui.progress(_('searching for similar files'), i,
total=len(removed), unit=_('files'))
data = None
for a in added:
bestscore = copies.get(a, (None, threshold))[1]
if data is None:
data = _ctxdata(r)
myscore = _score(a, data)
if myscore > bestscore:
copies[a] = (r, myscore)
    # Use the same topic as above so the progress bar is actually cleared.
    repo.ui.progress(_('searching for similar files'), None)

    for dest, v in copies.iteritems():
source, bscore = v
        yield source, dest, bscore

def _dropempty(fctxs):
    return [x for x in fctxs if x.size() > 0]

def findrenames(repo, added, removed, threshold):
'''find renamed files -- yields (before, after, score) tuples'''
wctx = repo[None]
    pctx = wctx.p1()

    # Zero length files will be frequently unrelated to each other, and
# tracking the deletion/addition of such a file will probably cause more
# harm than good. We strip them out here to avoid matching them later on.
addedfiles = _dropempty(wctx[fp] for fp in sorted(added))
    removedfiles = _dropempty(pctx[fp] for fp in sorted(removed) if fp in pctx)

    # Find exact matches.
matchedfiles = set()
for (a, b) in _findexactmatches(repo, addedfiles, removedfiles):
matchedfiles.add(b)
        yield (a.path(), b.path(), 1.0)

    # If the user requested similar files to be matched, search for them also.
if threshold < 1.0:
addedfiles = [x for x in addedfiles if x not in matchedfiles]
for (a, b, score) in _findsimilarmatches(repo, addedfiles,
removedfiles, threshold):
yield (a.path(), b.path(), score)
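As a usage sketch (illustrative only; in Mercurial itself findrenames is
driven by `mercurial.scmutil` when running `hg addremove -s`), a caller
might report detected renames like this:

# Hypothetical driver, not part of similar.py: `repo` is a
# localrepository and `added`/`removed` are lists of repo-relative
# paths, as expected by findrenames() above.
def printrenames(repo, added, removed, similarity=0.75):
    for old, new, score in findrenames(repo, added, removed, similarity):
        repo.ui.write(_('%s -> %s (%d%% similar)\n')
                      % (old, new, int(score * 100)))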