encoding: fix trim() to be O(n) instead of O(n^2)

`encoding.trim()` iterated over the possible lengths smaller than the input and created a slice for each. It then calculated the column width of the result, which is of course O(n), so the overall algorithm was O(n^2). This patch rewrites it to iterate over the unicode characters, keeping track of the width so far.

Also, the old algorithm started from the end of the string, which made it much worse when the input is large and the limit is small (such as the typical 72 we pass to it).

You can time it by running something like this:

```
time python3 -c 'from mercurial.utils import stringutil; print(stringutil.ellipsis(b"0123456789" * 1000, 5))'
```

That drops from 4.05 s to 83 ms with this patch (and most of that is of course startup time).

Differential Revision: https://phab.mercurial-scm.org/D12089
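The single-pass idea described in the commit message reads roughly like the sketch below. This is a minimal illustration, not Mercurial's actual `encoding.trim()`: the function name `trim_to_width` and the `unicodedata`-based width heuristic are assumptions made for the example.

```python
import unicodedata


def _char_width(ch):
    # Rough column-width heuristic: East Asian wide/fullwidth characters
    # take two columns, everything else one. (Mercurial's real width logic
    # is more involved; this is only a stand-in for the sketch.)
    return 2 if unicodedata.east_asian_width(ch) in ("W", "F") else 1


def trim_to_width(text, maxwidth, ellipsis="..."):
    # If the whole string already fits, return it unchanged.
    if sum(_char_width(c) for c in text) <= maxwidth:
        return text
    # Otherwise walk the characters once, accumulating the width so far and
    # stopping as soon as the next character would exceed the budget left
    # after the ellipsis. One pass over the input keeps this O(n).
    budget = maxwidth - len(ellipsis)
    width = 0
    out = []
    for ch in text:
        w = _char_width(ch)
        if width + w > budget:
            break
        width += w
        out.append(ch)
    return "".join(out) + ellipsis
```

The old approach effectively measured a fresh slice for every candidate length, which is where the quadratic cost came from; tracking the accumulated width removes the repeated re-measurement.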

File last commit: r44296:b7af8a02 default
dirs_corpus.py
29 lines | 717 B | text/x-python
from __future__ import absolute_import, print_function

import argparse
import zipfile

# The output zip path is taken from the command line.
ap = argparse.ArgumentParser()
ap.add_argument("out", metavar="some.zip", type=str, nargs=1)
args = ap.parse_args()

# Write the seed corpus as a single zip entry: a newline-separated list of
# paths laid out like the classic Subversion "greek tree".
with zipfile.ZipFile(args.out[0], "w", zipfile.ZIP_STORED) as zf:
    zf.writestr(
        "greek-tree",
        "\n".join(
            [
                "iota",
                "A/mu",
                "A/B/lambda",
                "A/B/E/alpha",
                "A/B/E/beta",
                "A/D/gamma",
                "A/D/G/pi",
                "A/D/G/rho",
                "A/D/G/tau",
                "A/D/H/chi",
                "A/D/H/omega",
                "A/D/H/psi",
            ]
        ),
    )
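As a usage sketch, assuming the script is saved as `dirs_corpus.py` and run as `python3 dirs_corpus.py dirs.zip` (the output filename is illustrative, not prescribed by the script), the resulting archive can be inspected like this:

```python
# Usage sketch: "dirs.zip" is an assumed output filename produced by running
# the generator above with that argument.
import zipfile

with zipfile.ZipFile("dirs.zip") as zf:
    paths = zf.read("greek-tree").decode("ascii").splitlines()
print(paths[:3])  # ['iota', 'A/mu', 'A/B/lambda']
```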