@@ -0,0 +1,69 b'' | |||||
|
1 | = Mercurial 6.2rc0 = | |||
|
2 | ||||
|
3 | '''This is the first release to support Python 3.6+ ''only''''' | |||
|
4 | ||||
|
5 | == New Features == | |||
|
6 | * Introduce a way to auto-upgrade a repo for certain requirements (see `hg help config.format`) | |||
|
7 | * filemerge: add support for partial conflict resolution by external tool | |||
|
8 | * contrib: add a partial-merge tool for sorted lists (such as Python imports) | |||
|
9 | * revlog: reorder p1 and p2 when p1 is null and p2 is not, while respecting issue6528 | |||
|
10 | * rhg: add support for ignoring all extensions | |||
|
11 | * completion: install completers to conventional locations | |||
|
12 | * revert: ask user to confirm before tracking new file when interactive | |||
|
13 | * Rust implementation now uses the new dirstate API | |||
|
14 | * sslutil: be less strict about which ciphers are allowed when using --insecure | |||
|
15 | * sslutil: support TLSV1_ALERT_PROTOCOL_VERSION reason code | |||
|
16 | * absorb: make `--edit-lines` imply `--apply-changes` | |||
|
17 | * diff: add help text to highlight the ability to do merge diffs | |||
|
18 | * censor: make rhg fall back to python when encountering a censored node | |||
|
19 | * clone: use better names for temp files | |||
|
20 | * debuglock: make the command more useful in non-interactive mode | |||
|
21 | * debugdeltachain: distinguish between snapshot and other diffs | |||
|
22 | * debugindex: rename debugindex to debug-revlog-index | |||
|
23 | * Make debug-revlog-index give out more information | |||
|
24 | * sparse: use the rust code even when sparse is present | |||
|
25 | ||||
|
26 | == Bug Fixes == | |||
|
27 | * Python 3 bugfixes | |||
|
28 | * Windows bugfixes | |||
|
29 | * templates: make `firstline` filter not keep '\v', '\f' and similar | |||
|
30 | * rhg: sort unsupported extensions in error message | |||
|
31 | * Improve performance of all functions that extract the first line of a text | |||
|
32 | * crecord: avoid duplicating lines when reverting noeol->eol change | |||
|
33 | * Some config.path options are now discoverable via config | |||
|
34 | * mail: don't complain about a multi-word email.method | |||
|
35 | * bundlespec: do not overwrite bundlespec value with the config one | |||
|
36 | * bundlespec: do not check for `-` in the params portion of the bundlespec | |||
|
37 | * bundlespec: handle the presence of obsmarker part | |||
|
38 | * sparse: start moving away from the global variable for detection of usage | |||
|
39 | * rust-changelog: don't skip empty lines when iterating over changeset lines | |||
|
40 | * narrow: support debugupgraderepo | |||
|
41 | * bundle: quick fix to ludicrous performance penalty | |||
|
42 | * followlines: don't put Unicode directly into the .js file (issue6559) | |||
|
43 | * manifest: improve error message in case of tree manifest | |||
|
44 | * revlog: use %d to format int instead of %lu (issue6565) | |||
|
45 | * revlog: use appropriate format char for int ("i" instead of "I") | |||
|
46 | * worker: stop relying on garbage collection to release memoryview | |||
|
47 | * worker: implement _blockingreader.readinto() (issue6444) | |||
|
48 | * worker: avoid potential partial write of pickled data | |||
|
49 | ||||
|
50 | == Backwards Compatibility Changes == | |||
|
51 | * '''Removed Python 2 support''': this includes a lot of cleanup in our codebase, automation, testing, etc. | |||
|
52 | * debugindex: rename debugindex to debug-revlog-index | |||
|
53 | ||||
|
54 | == Miscellaneous == | |||
|
55 | ||||
|
56 | * Fix typos and add missing items from documentation | |||
|
57 | * dirstate-tree: optimize HashMap lookups with raw_entry_mut | |||
|
58 | * Rust dependencies have been upgraded | |||
|
59 | * revlog: rank computation is done by Rust when available | |||
|
60 | * Improve discovery test tooling | |||
|
61 | * Audit the number of queries done in discovery | |||
|
62 | * Improved .hgignore of the mercurial-devel repository itself | |||
|
63 | * Improve test coverage of dirstate-v2 | |||
|
64 | * rust-requirements: allow loading repos with `bookmarksinstore` requirement | |||
|
65 | * Various Rust refactorings to help with revlog management | |||
|
66 | * Improve debuggability of Rust structs | |||
|
67 | * Improve unit testing of the Rust dirstatemap | |||
|
68 | * Improve robustness of the Rust dirstatemap to corruption | |||
|
69 | * Improve changelog-v2 upgrade system |
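The `firstline` bullet in the Bug Fixes list above hinges on Python's notion of line boundaries: `str.splitlines()` treats `\v`, `\f` and similar control characters as line breaks, while splitting on `\n` keeps them. A small illustrative sketch (not Mercurial code) of the distinction:

```python
# splitlines() stops at \v (vertical tab), \f (form feed) and other
# Unicode line boundaries; split('\n') only stops at newlines.
text = "first\x0bstill first\nsecond"

print(text.split('\n')[0])    # 'first\x0bstill first' -- keeps \v
print(text.splitlines()[0])   # 'first' -- truncates at \v
```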
@@ -229,3 +229,5 b' 5bd6bcd31dd1ebb63b8914b00064f96297267af7' | |||||
229 | 0ddd5e1f5f67438af85d12e4ce6c39021dde9916 0 iQHNBAABCgA3FiEEH2b4zfZU6QXBHaBhoR4BzQ4F2VYFAmJyo/kZHGFscGhhcmVAcmFwaGFlbGdvbWVzLmRldgAKCRChHgHNDgXZVsTVDACmg+uABE36kJcVJewoVK2I2JAdrO2llq3QbvzNb0eRL7bGy5UKJvF7fy/1FfayZT9/YTc6kGcRIeG+jUUiGRxMr0fOP9RixG78OyV14MmN1vkNTfMbk6BBrkYRbJJioLyk9qsXU6HbfRUdaCkOqwOKXKHm/4lzG/JFvL4JL6v++idx8W/7sADKILNy2DtP22YaRMgz38iM3ejgZghw7ie607C6lYq4wMs39jTZdZ3s6XoN+VgsLJWsI1LFnIADU5Zry8EAFERsvphiM2zG8lkrbPjpvwtidBz999TYnnGLvTMZA5ubspQRERc/eNDRbKdA55cCWNg3DhTancOiu3bQXdYCjF1MCN9g5Q11zbEzdwrbrY0NF7AUq1VW4kGFgChIJ0IuTQ/YETbcbih2Xs4nkAGt64YPtHzmOffF1a2/SUzH3AwgMmhBQBqxa02YTqyKJDHHqgTyFrZIkH/jb+rdfIskaOZZo6JcGUoacFOUhFfhSxxB1kN2HEHvEAQPMkc= |
|
229 | 0ddd5e1f5f67438af85d12e4ce6c39021dde9916 0 iQHNBAABCgA3FiEEH2b4zfZU6QXBHaBhoR4BzQ4F2VYFAmJyo/kZHGFscGhhcmVAcmFwaGFlbGdvbWVzLmRldgAKCRChHgHNDgXZVsTVDACmg+uABE36kJcVJewoVK2I2JAdrO2llq3QbvzNb0eRL7bGy5UKJvF7fy/1FfayZT9/YTc6kGcRIeG+jUUiGRxMr0fOP9RixG78OyV14MmN1vkNTfMbk6BBrkYRbJJioLyk9qsXU6HbfRUdaCkOqwOKXKHm/4lzG/JFvL4JL6v++idx8W/7sADKILNy2DtP22YaRMgz38iM3ejgZghw7ie607C6lYq4wMs39jTZdZ3s6XoN+VgsLJWsI1LFnIADU5Zry8EAFERsvphiM2zG8lkrbPjpvwtidBz999TYnnGLvTMZA5ubspQRERc/eNDRbKdA55cCWNg3DhTancOiu3bQXdYCjF1MCN9g5Q11zbEzdwrbrY0NF7AUq1VW4kGFgChIJ0IuTQ/YETbcbih2Xs4nkAGt64YPtHzmOffF1a2/SUzH3AwgMmhBQBqxa02YTqyKJDHHqgTyFrZIkH/jb+rdfIskaOZZo6JcGUoacFOUhFfhSxxB1kN2HEHvEAQPMkc= | |
230 | 6b10151b962108f65bfa12b3918b1021ca334f73 0 iQHNBAABCgA3FiEEH2b4zfZU6QXBHaBhoR4BzQ4F2VYFAmKYxvUZHGFscGhhcmVAcmFwaGFlbGdvbWVzLmRldgAKCRChHgHNDgXZVqsDC/9EKBjkHvQeY55bqhqqyf5Mccw8cXH5/WBsyJYtEl+W6ykFRlTUUukY0MKzc1xCGG4sryTwqf8qxW92Yqt4bwoFIKIEpOa6CGsf18Ir/fMVNaOmYABtbbLqFgkuarNLz5wIMkGXugqZ4RUhs7HvL0Rsgb24mWpS5temzb2f0URP5uKFCY4MMC+oBFHKFfkn9MwAVIkX+iAakDR4x6dbSPKPNRwRqILKSnGosDZ+dnvvjJTbqZdLowU5OBXdUoa57j9xxcSzCme0hQ0VNuPcn4DQ/N2yZrCsJvvv3soE94jMkhbnfLZ3/EulQAVZZs9Hjur4w/Hk9g8+YK5lIvJDUSX3cBRiYKuGojxDMnXP5f1hW4YdDVCFhnwczeG7Q20fybjwWvB+QgYUkHzGbdCYSHCWE7f/HhTivEPSudYP4SdMnEdWNx2Rqvs+QsgFAEiIgc6lhupyZwyfIdhgxPJ/BAsjUDJnFR0dj86yVoWjoQfkEyf6toK3OjrHNLPEPfWX4Ac= |
|
230 | 6b10151b962108f65bfa12b3918b1021ca334f73 0 iQHNBAABCgA3FiEEH2b4zfZU6QXBHaBhoR4BzQ4F2VYFAmKYxvUZHGFscGhhcmVAcmFwaGFlbGdvbWVzLmRldgAKCRChHgHNDgXZVqsDC/9EKBjkHvQeY55bqhqqyf5Mccw8cXH5/WBsyJYtEl+W6ykFRlTUUukY0MKzc1xCGG4sryTwqf8qxW92Yqt4bwoFIKIEpOa6CGsf18Ir/fMVNaOmYABtbbLqFgkuarNLz5wIMkGXugqZ4RUhs7HvL0Rsgb24mWpS5temzb2f0URP5uKFCY4MMC+oBFHKFfkn9MwAVIkX+iAakDR4x6dbSPKPNRwRqILKSnGosDZ+dnvvjJTbqZdLowU5OBXdUoa57j9xxcSzCme0hQ0VNuPcn4DQ/N2yZrCsJvvv3soE94jMkhbnfLZ3/EulQAVZZs9Hjur4w/Hk9g8+YK5lIvJDUSX3cBRiYKuGojxDMnXP5f1hW4YdDVCFhnwczeG7Q20fybjwWvB+QgYUkHzGbdCYSHCWE7f/HhTivEPSudYP4SdMnEdWNx2Rqvs+QsgFAEiIgc6lhupyZwyfIdhgxPJ/BAsjUDJnFR0dj86yVoWjoQfkEyf6toK3OjrHNLPEPfWX4Ac= | |
231 | 0cc5f74ff7f0f4ac2427096bddbe102dbc2453ae 0 iQHNBAABCgA3FiEEH2b4zfZU6QXBHaBhoR4BzQ4F2VYFAmKrK5wZHGFscGhhcmVAcmFwaGFlbGdvbWVzLmRldgAKCRChHgHNDgXZVvSmC/93B3If9OY0eqbzScqY4S6XgtC1mR3tkQirYaUujCrrt75P8jlFABn1UdrOgXwjHhm+eVxxvlg/JoexSfro89j8UFFqlVzxvDXipVFFGj/n8AeRctkNiaLpDT8ejDQic7ED566gLSeAWlZ6TA14c4+O6SC1vQxr5BCEiQjBVM7bc91O4GB/VTf/31teCtdmjScv0wsISKMJdVBIOcjOaDM1dzSlWE2wNzK551hHr7D3T5v78NJ7+5NbgqzOScRpFxzO8ndDa9YCqVdpixOVbCt1PruxUc9gYjbHbCUnm+3iZ+MnGtSZdyM7XC6BLhg3IGBinzCxff3+K/1p0VR3pr53TGXdQLfkpkRiWVQlWxQUl2MFbGhpFtvqNACMKJrL/tyTFjC+2GWBTetju8OWeqpVKWmLroL6RZaotMQzNG3sRnNwDrVL9VufT1abP9LQm71Rj1c1SsvRNaFhgBannTnaQoz6UQXvM0Rr1foUESJudU5rKr4kiJdSGMqIAsH15z8= |
|
231 | 0cc5f74ff7f0f4ac2427096bddbe102dbc2453ae 0 iQHNBAABCgA3FiEEH2b4zfZU6QXBHaBhoR4BzQ4F2VYFAmKrK5wZHGFscGhhcmVAcmFwaGFlbGdvbWVzLmRldgAKCRChHgHNDgXZVvSmC/93B3If9OY0eqbzScqY4S6XgtC1mR3tkQirYaUujCrrt75P8jlFABn1UdrOgXwjHhm+eVxxvlg/JoexSfro89j8UFFqlVzxvDXipVFFGj/n8AeRctkNiaLpDT8ejDQic7ED566gLSeAWlZ6TA14c4+O6SC1vQxr5BCEiQjBVM7bc91O4GB/VTf/31teCtdmjScv0wsISKMJdVBIOcjOaDM1dzSlWE2wNzK551hHr7D3T5v78NJ7+5NbgqzOScRpFxzO8ndDa9YCqVdpixOVbCt1PruxUc9gYjbHbCUnm+3iZ+MnGtSZdyM7XC6BLhg3IGBinzCxff3+K/1p0VR3pr53TGXdQLfkpkRiWVQlWxQUl2MFbGhpFtvqNACMKJrL/tyTFjC+2GWBTetju8OWeqpVKWmLroL6RZaotMQzNG3sRnNwDrVL9VufT1abP9LQm71Rj1c1SsvRNaFhgBannTnaQoz6UQXvM0Rr1foUESJudU5rKr4kiJdSGMqIAsH15z8= | |
|
232 | 288de6f5d724bba7bf1669e2838f196962bb7528 0 iQHNBAABCgA3FiEEH2b4zfZU6QXBHaBhoR4BzQ4F2VYFAmKrVSEZHGFscGhhcmVAcmFwaGFlbGdvbWVzLmRldgAKCRChHgHNDgXZVqfUDACWYt2x2yNeb3SgCQsMhntFoKgwZ/CKFpiaz8W6jYij4mnwwWNAcflJAG3NJPK1I4RJrQky+omTmoc7dTAxfbjds7kA8AsXrVIFyP7HV5OKLEACWEAlCrtBLoj+gSYwO+yHQD7CnWqcMqYocHzsfVIr6qT9QQMlixP4lCiKh8ZrwPRGameONVfDBdL+tzw/WnkA5bVeRIlGpHoPe1y7xjP1kfj0a39aDezOcNqzxnzCuhpi+AC1xOpGi9ZqYhF6CmcDVRW6m7NEonbWasYpefpxtVa1xVreI1OIeBO30l7OsPI4DNn+dUpA4tA2VvvU+4RMsHPeT5R2VadXjF3xoH1LSdxv5fSKmRDr98GSwC5MzvTgMzskfMJ3n4Z7jhfPUz4YW4DBr71H27b1Mfdnl2cwXyT/0fD9peBWXe4ZBJ6VegPBUOjuIu0lUyfk7Zj9zb6l1AZC536Q1KolJPswQm9VyrX9Mtk70s0e1Fp3q1oohZVxdLPQvpR4empP0WMdPgg= | |||
|
233 | 094a5fa3cf52f936e0de3f1e507c818bee5ece6b 0 iQHNBAABCgA3FiEEH2b4zfZU6QXBHaBhoR4BzQ4F2VYFAmLL1jYZHGFscGhhcmVAcmFwaGFlbGdvbWVzLmRldgAKCRChHgHNDgXZVn4gC/9Ls9JQEQrJPVfqp9+VicJIUUww/aKYWedlQJOlv4oEQJzYQQU9WfJq2d9OAuX2+cXCo7BC+NdjhjKjv7n0+gK0HuhfYYUoXiJvcfa4GSeEyxxnDf55lBCDxURstVrExU7c5OKiG+dPcsTPdvRdkpeAT/4gaewZ1cR0yZILNjpUeSWzQ7zhheXqfooyVkubdZY60XCNo9cSosOl1beNdNB/K5OkCNcYOa2AbiBY8XszQTCc+OU8tj7Ti8LGLZTW2vGD1QdVmqEPhtSQzRvcjbcRPoqXy/4duhN5V6QQ/O57hEF/6m3lXbCzNUDTqBw14Q3+WyLBR8npVwG7LXTCPuTtgv8Pk1ZBqY1UPf67xQu7WZN3EGWc9yuRKGkdetjZ09PJL7dcxctBkje3kQKmv7sdtCEo2DTugw38WN4beQA2hBKgqdUQVjfL+BbD48V+RnTdB4N0Hp7gw0gQdYsI14ZNe5wWhw98COi443dlVgKFl4jriVNM8aS1TQVOy15xyxA= |
@@ -242,3 +242,5 b' 5bd6bcd31dd1ebb63b8914b00064f96297267af7' | |||||
242 | 0ddd5e1f5f67438af85d12e4ce6c39021dde9916 6.1.2 |
|
242 | 0ddd5e1f5f67438af85d12e4ce6c39021dde9916 6.1.2 | |
243 | 6b10151b962108f65bfa12b3918b1021ca334f73 6.1.3 |
|
243 | 6b10151b962108f65bfa12b3918b1021ca334f73 6.1.3 | |
244 | 0cc5f74ff7f0f4ac2427096bddbe102dbc2453ae 6.1.4 |
|
244 | 0cc5f74ff7f0f4ac2427096bddbe102dbc2453ae 6.1.4 | |
|
245 | 288de6f5d724bba7bf1669e2838f196962bb7528 6.2rc0 | |||
|
246 | 094a5fa3cf52f936e0de3f1e507c818bee5ece6b 6.2 |
@@ -47,6 +47,7 b' allowsymbolimports = (' | |||||
47 | 'mercurial.thirdparty.zope', |
|
47 | 'mercurial.thirdparty.zope', | |
48 | 'mercurial.thirdparty.zope.interface', |
|
48 | 'mercurial.thirdparty.zope.interface', | |
49 | 'typing', |
|
49 | 'typing', | |
|
50 | 'xml.etree.ElementTree', | |||
50 | ) |
|
51 | ) | |
51 |
|
52 | |||
52 | # Allow list of symbols that can be directly imported. |
|
53 | # Allow list of symbols that can be directly imported. |
@@ -8,6 +8,10 b'' | |||||
8 | import os |
|
8 | import os | |
9 | import re |
|
9 | import re | |
10 | import shutil |
|
10 | import shutil | |
|
11 | from xml.etree.ElementTree import ( | |||
|
12 | ElementTree, | |||
|
13 | XMLParser, | |||
|
14 | ) | |||
11 |
|
15 | |||
12 | from mercurial.i18n import _ |
|
16 | from mercurial.i18n import _ | |
13 | from mercurial import ( |
|
17 | from mercurial import ( | |
@@ -20,26 +24,6 b' from . import common' | |||||
20 |
|
24 | |||
21 | NoRepo = common.NoRepo |
|
25 | NoRepo = common.NoRepo | |
22 |
|
26 | |||
23 | # The naming drift of ElementTree is fun! |
|
|||
24 |
|
||||
25 | try: |
|
|||
26 | import xml.etree.cElementTree.ElementTree as ElementTree |
|
|||
27 | import xml.etree.cElementTree.XMLParser as XMLParser |
|
|||
28 | except ImportError: |
|
|||
29 | try: |
|
|||
30 | import xml.etree.ElementTree.ElementTree as ElementTree |
|
|||
31 | import xml.etree.ElementTree.XMLParser as XMLParser |
|
|||
32 | except ImportError: |
|
|||
33 | try: |
|
|||
34 | import elementtree.cElementTree.ElementTree as ElementTree |
|
|||
35 | import elementtree.cElementTree.XMLParser as XMLParser |
|
|||
36 | except ImportError: |
|
|||
37 | try: |
|
|||
38 | import elementtree.ElementTree.ElementTree as ElementTree |
|
|||
39 | import elementtree.ElementTree.XMLParser as XMLParser |
|
|||
40 | except ImportError: |
|
|||
41 | pass |
|
|||
42 |
|
||||
43 |
|
27 | |||
44 | class darcs_source(common.converter_source, common.commandline): |
|
28 | class darcs_source(common.converter_source, common.commandline): | |
45 | def __init__(self, ui, repotype, path, revs=None): |
|
29 | def __init__(self, ui, repotype, path, revs=None): | |
@@ -58,7 +42,7 b' class darcs_source(common.converter_sour' | |||||
58 | _(b'darcs version 2.1 or newer needed (found %r)') % version |
|
42 | _(b'darcs version 2.1 or newer needed (found %r)') % version | |
59 | ) |
|
43 | ) | |
60 |
|
44 | |||
61 |
if |
|
45 | if "ElementTree" not in globals(): | |
62 | raise error.Abort(_(b"Python ElementTree module is not available")) |
|
46 | raise error.Abort(_(b"Python ElementTree module is not available")) | |
63 |
|
47 | |||
64 | self.path = os.path.realpath(path) |
|
48 | self.path = os.path.realpath(path) | |
@@ -94,9 +78,9 b' class darcs_source(common.converter_sour' | |||||
94 | ) |
|
78 | ) | |
95 | tagname = None |
|
79 | tagname = None | |
96 | child = None |
|
80 | child = None | |
97 | for elt in tree.findall(b'patch'): |
|
81 | for elt in tree.findall('patch'): | |
98 | node = elt.get(b'hash') |
|
82 | node = self.recode(elt.get('hash')) | |
99 | name = elt.findtext(b'name', b'') |
|
83 | name = self.recode(elt.findtext('name', '')) | |
100 | if name.startswith(b'TAG '): |
|
84 | if name.startswith(b'TAG '): | |
101 | tagname = name[4:].strip() |
|
85 | tagname = name[4:].strip() | |
102 | elif tagname is not None: |
|
86 | elif tagname is not None: | |
@@ -126,7 +110,7 b' class darcs_source(common.converter_sour' | |||||
126 | # While we are decoding the XML as latin-1 to be as liberal as |
|
110 | # While we are decoding the XML as latin-1 to be as liberal as | |
127 | # possible, etree will still raise an exception if any |
|
111 | # possible, etree will still raise an exception if any | |
128 | # non-printable characters are in the XML changelog. |
|
112 | # non-printable characters are in the XML changelog. | |
129 | parser = XMLParser(encoding=b'latin-1') |
|
113 | parser = XMLParser(encoding='latin-1') | |
130 | p = self._run(cmd, **kwargs) |
|
114 | p = self._run(cmd, **kwargs) | |
131 | etree.parse(p.stdout, parser=parser) |
|
115 | etree.parse(p.stdout, parser=parser) | |
132 | p.wait() |
|
116 | p.wait() | |
@@ -136,7 +120,7 b' class darcs_source(common.converter_sour' | |||||
136 | def format(self): |
|
120 | def format(self): | |
137 | output, status = self.run(b'show', b'repo', repodir=self.path) |
|
121 | output, status = self.run(b'show', b'repo', repodir=self.path) | |
138 | self.checkexit(status) |
|
122 | self.checkexit(status) | |
139 | m = re.search(r'^\s*Format:\s*(.*)$', output, re.MULTILINE) |
|
123 | m = re.search(br'^\s*Format:\s*(.*)$', output, re.MULTILINE) | |
140 | if not m: |
|
124 | if not m: | |
141 | return None |
|
125 | return None | |
142 | return b','.join(sorted(f.strip() for f in m.group(1).split(b','))) |
|
126 | return b','.join(sorted(f.strip() for f in m.group(1).split(b','))) | |
@@ -159,13 +143,13 b' class darcs_source(common.converter_sour' | |||||
159 | def getcommit(self, rev): |
|
143 | def getcommit(self, rev): | |
160 | elt = self.changes[rev] |
|
144 | elt = self.changes[rev] | |
161 | dateformat = b'%a %b %d %H:%M:%S %Z %Y' |
|
145 | dateformat = b'%a %b %d %H:%M:%S %Z %Y' | |
162 | date = dateutil.strdate(elt.get(b'local_date'), dateformat) |
|
146 | date = dateutil.strdate(elt.get('local_date'), dateformat) | |
163 | desc = elt.findtext(b'name') + b'\n' + elt.findtext(b'comment', b'') |
|
147 | desc = elt.findtext('name') + '\n' + elt.findtext('comment', '') | |
164 | # etree can return unicode objects for name, comment, and author, |
|
148 | # etree can return unicode objects for name, comment, and author, | |
165 | # so recode() is used to ensure str objects are emitted. |
|
149 | # so recode() is used to ensure str objects are emitted. | |
166 | newdateformat = b'%Y-%m-%d %H:%M:%S %1%2' |
|
150 | newdateformat = b'%Y-%m-%d %H:%M:%S %1%2' | |
167 | return common.commit( |
|
151 | return common.commit( | |
168 | author=self.recode(elt.get(b'author')), |
|
152 | author=self.recode(elt.get('author')), | |
169 | date=dateutil.datestr(date, newdateformat), |
|
153 | date=dateutil.datestr(date, newdateformat), | |
170 | desc=self.recode(desc).strip(), |
|
154 | desc=self.recode(desc).strip(), | |
171 | parents=self.parents[rev], |
|
155 | parents=self.parents[rev], | |
@@ -176,7 +160,7 b' class darcs_source(common.converter_sour' | |||||
176 | b'pull', |
|
160 | b'pull', | |
177 | self.path, |
|
161 | self.path, | |
178 | all=True, |
|
162 | all=True, | |
179 | match=b'hash %s' % rev, |
|
163 | match=b'hash %s' % self.recode(rev), | |
180 | no_test=True, |
|
164 | no_test=True, | |
181 | no_posthook=True, |
|
165 | no_posthook=True, | |
182 | external_merge=b'/bin/false', |
|
166 | external_merge=b'/bin/false', | |
@@ -194,13 +178,14 b' class darcs_source(common.converter_sour' | |||||
194 | copies = {} |
|
178 | copies = {} | |
195 | changes = [] |
|
179 | changes = [] | |
196 | man = None |
|
180 | man = None | |
197 | for elt in self.changes[rev].find(b'summary'): |
|
181 | for elt in self.changes[rev].find('summary'): | |
198 | if elt.tag in (b'add_directory', b'remove_directory'): |
|
182 | if elt.tag in ('add_directory', 'remove_directory'): | |
199 | continue |
|
183 | continue | |
200 | if elt.tag == b'move': |
|
184 | if elt.tag == 'move': | |
201 | if man is None: |
|
185 | if man is None: | |
202 | man = self.manifest() |
|
186 | man = self.manifest() | |
203 | source, dest = elt.get(b'from'), elt.get(b'to') |
|
187 | source = self.recode(elt.get('from')) | |
|
188 | dest = self.recode(elt.get('to')) | |||
204 | if source in man: |
|
189 | if source in man: | |
205 | # File move |
|
190 | # File move | |
206 | changes.append((source, rev)) |
|
191 | changes.append((source, rev)) | |
@@ -217,7 +202,7 b' class darcs_source(common.converter_sour' | |||||
217 | changes.append((fdest, rev)) |
|
202 | changes.append((fdest, rev)) | |
218 | copies[fdest] = f |
|
203 | copies[fdest] = f | |
219 | else: |
|
204 | else: | |
220 | changes.append((elt.text.strip(), rev)) |
|
205 | changes.append((self.recode(elt.text.strip()), rev)) | |
221 | self.pull(rev) |
|
206 | self.pull(rev) | |
222 | self.lastrev = rev |
|
207 | self.lastrev = rev | |
223 | return sorted(changes), copies, set() |
|
208 | return sorted(changes), copies, set() |
@@ -683,7 +683,11 b' def determine_upgrade_actions(' | |||||
683 |
|
683 | |||
684 | newactions.append(d) |
|
684 | newactions.append(d) | |
685 |
|
685 | |||
686 | newactions.extend(o for o in sorted(optimizations) if o not in newactions) |
|
686 | newactions.extend( | |
|
687 | o | |||
|
688 | for o in sorted(optimizations, key=(lambda x: x.name)) | |||
|
689 | if o not in newactions | |||
|
690 | ) | |||
687 |
|
691 | |||
688 | # FUTURE consider adding some optimizations here for certain transitions. |
|
692 | # FUTURE consider adding some optimizations here for certain transitions. | |
689 | # e.g. adding generaldelta could schedule parent redeltas. |
|
693 | # e.g. adding generaldelta could schedule parent redeltas. |
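The `key=(lambda x: x.name)` added above matters because the optimization entries are objects, which do not define an ordering among themselves, so `sorted()` needs an explicit key; the `not in newactions` membership test still deduplicates. A hedged sketch with a hypothetical `Improvement` class (the names are invented):

```python
# Minimal stand-in for the upgrade-action objects sorted in the hunk above.
class Improvement:
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        return self.name == other.name

    def __hash__(self):
        return hash(self.name)

newactions = [Improvement('re-delta-parent')]
optimizations = [Improvement('re-delta-parent'), Improvement('compact-ids')]

# Sort deterministically by name, then append only the ones not already chosen.
newactions.extend(
    o for o in sorted(optimizations, key=lambda x: x.name)
    if o not in newactions
)
print([a.name for a in newactions])  # ['re-delta-parent', 'compact-ids']
```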
@@ -80,12 +80,32 b' io.BufferedIOBase.register(LineBufferedW' | |||||
80 |
|
80 | |||
81 |
|
81 | |||
82 | def make_line_buffered(stream): |
|
82 | def make_line_buffered(stream): | |
83 | if not isinstance(stream, io.BufferedIOBase): |
|
83 | # First, check if we need to wrap the stream. | |
84 | # On Python 3, buffered streams can be expected to subclass |
|
84 | check_stream = stream | |
85 | # BufferedIOBase. This is definitively the case for the streams |
|
85 | while True: | |
86 | # initialized by the interpreter. For unbuffered streams, we don't need |
|
86 | if isinstance(check_stream, WriteAllWrapper): | |
87 | # to emulate line buffering. |
|
87 | check_stream = check_stream.orig | |
|
88 | elif pycompat.iswindows and isinstance( | |||
|
89 | check_stream, | |||
|
90 | # pytype: disable=module-attr | |||
|
91 | platform.winstdout | |||
|
92 | # pytype: enable=module-attr | |||
|
93 | ): | |||
|
94 | check_stream = check_stream.fp | |||
|
95 | else: | |||
|
96 | break | |||
|
97 | if isinstance(check_stream, io.RawIOBase): | |||
|
98 | # The stream is unbuffered, we don't need to emulate line buffering. | |||
88 | return stream |
|
99 | return stream | |
|
100 | elif isinstance(check_stream, io.BufferedIOBase): | |||
|
101 | # The stream supports some kind of buffering. We can't assume that | |||
|
102 | # lines are flushed. Fall back to wrapping the stream. | |||
|
103 | pass | |||
|
104 | else: | |||
|
105 | raise NotImplementedError( | |||
|
106 | "can't determine whether stream is buffered or not" | |||
|
107 | ) | |||
|
108 | ||||
89 | if isinstance(stream, LineBufferedWrapper): |
|
109 | if isinstance(stream, LineBufferedWrapper): | |
90 | return stream |
|
110 | return stream | |
91 | return LineBufferedWrapper(stream) |
|
111 | return LineBufferedWrapper(stream) |
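The new detection logic above peels known wrapper layers, then classifies the innermost stream as raw (unbuffered, no emulation needed) or buffered (wrap it). A simplified standalone sketch; `StdoutWrapper` is a hypothetical stand-in for the `WriteAllWrapper`/`platform.winstdout` layers in the hunk:

```python
import io

class StdoutWrapper:
    """Hypothetical wrapper layer exposing the wrapped stream as .orig."""
    def __init__(self, orig):
        self.orig = orig

def is_line_buffering_needed(stream):
    # Peel wrapper layers to find the underlying stream.
    check = stream
    while isinstance(check, StdoutWrapper):
        check = check.orig
    if isinstance(check, io.RawIOBase):
        return False  # unbuffered: every write goes out directly
    if isinstance(check, io.BufferedIOBase):
        return True   # buffered: lines may sit in the buffer until flushed
    raise NotImplementedError("can't determine whether stream is buffered")

raw = io.FileIO('/dev/null', 'wb')  # POSIX-only path, for illustration
print(is_line_buffering_needed(StdoutWrapper(raw)))      # False
print(is_line_buffering_needed(io.BufferedWriter(raw)))  # True
```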
@@ -83,7 +83,7 b' impl TruncatedTimestamp {' | |||||
83 | second_ambiguous, |
|
83 | second_ambiguous, | |
84 | }) |
|
84 | }) | |
85 | } else { |
|
85 | } else { | |
86 | Err(DirstateV2ParseError) |
|
86 | Err(DirstateV2ParseError::new("when reading datetime")) | |
87 | } |
|
87 | } | |
88 | } |
|
88 | } | |
89 |
|
89 |
@@ -463,7 +463,7 b" impl<'on_disk> DirstateMap<'on_disk> {" | |||||
463 | if let Some(data) = on_disk.get(..data_size) { |
|
463 | if let Some(data) = on_disk.get(..data_size) { | |
464 | Ok(on_disk::read(data, metadata)?) |
|
464 | Ok(on_disk::read(data, metadata)?) | |
465 | } else { |
|
465 | } else { | |
466 | Err(DirstateV2ParseError.into()) |
|
466 | Err(DirstateV2ParseError::new("not enough bytes on disk").into()) | |
467 | } |
|
467 | } | |
468 | } |
|
468 | } | |
469 |
|
469 |
@@ -175,11 +175,21 b' type OptPathSlice = PathSlice;' | |||||
175 | /// |
|
175 | /// | |
176 | /// This should only happen if Mercurial is buggy or a repository is corrupted. |
|
176 | /// This should only happen if Mercurial is buggy or a repository is corrupted. | |
177 | #[derive(Debug)] |
|
177 | #[derive(Debug)] | |
178 | pub struct DirstateV2ParseError; |
|
178 | pub struct DirstateV2ParseError { | |
|
179 | message: String, | |||
|
180 | } | |||
|
181 | ||||
|
182 | impl DirstateV2ParseError { | |||
|
183 | pub fn new<S: Into<String>>(message: S) -> Self { | |||
|
184 | Self { | |||
|
185 | message: message.into(), | |||
|
186 | } | |||
|
187 | } | |||
|
188 | } | |||
179 |
|
189 | |||
180 | impl From<DirstateV2ParseError> for HgError { |
|
190 | impl From<DirstateV2ParseError> for HgError { | |
181 | fn from(_: DirstateV2ParseError) -> Self { |
|
191 | fn from(e: DirstateV2ParseError) -> Self { | |
182 | HgError::corrupted("dirstate-v2 parse error") |
|
192 | HgError::corrupted(format!("dirstate-v2 parse error: {}", e.message)) | |
183 | } |
|
193 | } | |
184 | } |
|
194 | } | |
185 |
|
195 | |||
@@ -262,13 +272,16 b" impl<'on_disk> Docket<'on_disk> {" | |||||
262 | pub fn read_docket( |
|
272 | pub fn read_docket( | |
263 | on_disk: &[u8], |
|
273 | on_disk: &[u8], | |
264 | ) -> Result<Docket<'_>, DirstateV2ParseError> { |
|
274 | ) -> Result<Docket<'_>, DirstateV2ParseError> { | |
265 | let (header, uuid) = |
|
275 | let (header, uuid) = DocketHeader::from_bytes(on_disk).map_err(|e| { | |
266 | DocketHeader::from_bytes(on_disk).map_err(|_| DirstateV2ParseError)?; |
|
276 | DirstateV2ParseError::new(format!("when reading docket, {}", e)) | |
|
277 | })?; | |||
267 | let uuid_size = header.uuid_size as usize; |
|
278 | let uuid_size = header.uuid_size as usize; | |
268 | if header.marker == *V2_FORMAT_MARKER && uuid.len() == uuid_size { |
|
279 | if header.marker == *V2_FORMAT_MARKER && uuid.len() == uuid_size { | |
269 | Ok(Docket { header, uuid }) |
|
280 | Ok(Docket { header, uuid }) | |
270 | } else { |
|
281 | } else { | |
271 | Err(DirstateV2ParseError) |
|
282 | Err(DirstateV2ParseError::new( | |
|
283 | "invalid format marker or uuid size", | |||
|
284 | )) | |||
272 | } |
|
285 | } | |
273 | } |
|
286 | } | |
274 |
|
287 | |||
@@ -281,14 +294,17 b" pub(super) fn read<'on_disk>(" | |||||
281 | map.dirstate_version = DirstateVersion::V2; |
|
294 | map.dirstate_version = DirstateVersion::V2; | |
282 | return Ok(map); |
|
295 | return Ok(map); | |
283 | } |
|
296 | } | |
284 | let (meta, _) = TreeMetadata::from_bytes(metadata) |
|
297 | let (meta, _) = TreeMetadata::from_bytes(metadata).map_err(|e| { | |
285 | .map_err(|_| DirstateV2ParseError)?; |
|
298 | DirstateV2ParseError::new(format!("when parsing tree metadata, {}", e)) | |
|
299 | })?; | |||
286 | let dirstate_map = DirstateMap { |
|
300 | let dirstate_map = DirstateMap { | |
287 | on_disk, |
|
301 | on_disk, | |
288 | root: dirstate_map::ChildNodes::OnDisk(read_nodes( |
|
302 | root: dirstate_map::ChildNodes::OnDisk( | |
289 | on_disk, |
|
303 | read_nodes(on_disk, meta.root_nodes).map_err(|mut e| { | |
290 | meta.root_nodes, |
|
304 | e.message = format!("{}, when reading root nodes", e.message); | |
291 | )?), |
|
305 | e | |
|
306 | })?, | |||
|
307 | ), | |||
292 | nodes_with_entry_count: meta.nodes_with_entry_count.get(), |
|
308 | nodes_with_entry_count: meta.nodes_with_entry_count.get(), | |
293 | nodes_with_copy_source_count: meta.nodes_with_copy_source_count.get(), |
|
309 | nodes_with_copy_source_count: meta.nodes_with_copy_source_count.get(), | |
294 | ignore_patterns_hash: meta.ignore_patterns_hash, |
|
310 | ignore_patterns_hash: meta.ignore_patterns_hash, | |
@@ -317,7 +333,7 b' impl Node {' | |||||
317 | .expect("dirstate-v2 base_name_start out of bounds"); |
|
333 | .expect("dirstate-v2 base_name_start out of bounds"); | |
318 | Ok(start) |
|
334 | Ok(start) | |
319 | } else { |
|
335 | } else { | |
320 | Err(DirstateV2ParseError) |
|
336 | Err(DirstateV2ParseError::new("not enough bytes for base name")) | |
321 | } |
|
337 | } | |
322 | } |
|
338 | } | |
323 |
|
339 | |||
@@ -571,11 +587,19 b' where' | |||||
571 | // `&[u8]` cannot occupy the entire address space. |
|
587 | // `&[u8]` cannot occupy the entire address space. | |
572 | let start = start.get().try_into().unwrap_or(std::usize::MAX); |
|
588 | let start = start.get().try_into().unwrap_or(std::usize::MAX); | |
573 | let len = len.try_into().unwrap_or(std::usize::MAX); |
|
589 | let len = len.try_into().unwrap_or(std::usize::MAX); | |
574 | on_disk |
|
590 | let bytes = match on_disk.get(start..) { | |
575 | .get(start..) |
|
591 | Some(bytes) => bytes, | |
576 | .and_then(|bytes| T::slice_from_bytes(bytes, len).ok()) |
|
592 | None => { | |
|
593 | return Err(DirstateV2ParseError::new( | |||
|
594 | "not enough bytes from disk", | |||
|
595 | )) | |||
|
596 | } | |||
|
597 | }; | |||
|
598 | T::slice_from_bytes(bytes, len) | |||
|
599 | .map_err(|e| { | |||
|
600 | DirstateV2ParseError::new(format!("when reading a slice, {}", e)) | |||
|
601 | }) | |||
577 | .map(|(slice, _rest)| slice) |
|
602 | .map(|(slice, _rest)| slice) | |
578 | .ok_or_else(|| DirstateV2ParseError) |
|
|||
579 | } |
|
603 | } | |
580 |
|
604 | |||
581 | pub(crate) fn for_each_tracked_path<'on_disk>( |
|
605 | pub(crate) fn for_each_tracked_path<'on_disk>( | |
@@ -583,8 +607,9 b" pub(crate) fn for_each_tracked_path<'on_" | |||||
583 | metadata: &[u8], |
|
607 | metadata: &[u8], | |
584 | mut f: impl FnMut(&'on_disk HgPath), |
|
608 | mut f: impl FnMut(&'on_disk HgPath), | |
585 | ) -> Result<(), DirstateV2ParseError> { |
|
609 | ) -> Result<(), DirstateV2ParseError> { | |
586 | let (meta, _) = TreeMetadata::from_bytes(metadata) |
|
610 | let (meta, _) = TreeMetadata::from_bytes(metadata).map_err(|e| { | |
587 | .map_err(|_| DirstateV2ParseError)?; |
|
611 | DirstateV2ParseError::new(format!("when parsing tree metadata, {}", e)) | |
|
612 | })?; | |||
588 | fn recur<'on_disk>( |
|
613 | fn recur<'on_disk>( | |
589 | on_disk: &'on_disk [u8], |
|
614 | on_disk: &'on_disk [u8], | |
590 | nodes: ChildNodes, |
|
615 | nodes: ChildNodes, |
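The bounds checks these hunks make explicit all follow one pattern: refuse a short read with a contextual error instead of a bare one. The same idea rendered in Python terms (illustrative only, not Mercurial code):

```python
def read_slice(on_disk, start, length):
    """Read `length` bytes at `start`, failing loudly on a short buffer."""
    chunk = on_disk[start:start + length]
    if len(chunk) < length:
        # Mirrors DirstateV2ParseError::new("not enough bytes from disk").
        raise ValueError("not enough bytes from disk")
    return chunk

print(read_slice(b"abcdefgh", 2, 3))  # b'cde'
```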
@@ -5,7 +5,6 b' use std::ops::Deref;' | |||||
5 | use std::path::Path; |
|
5 | use std::path::Path; | |
6 |
|
6 | |||
7 | use flate2::read::ZlibDecoder; |
|
7 | use flate2::read::ZlibDecoder; | |
8 | use micro_timer::timed; |
|
|||
9 | use sha1::{Digest, Sha1}; |
|
8 | use sha1::{Digest, Sha1}; | |
10 | use zstd; |
|
9 | use zstd; | |
11 |
|
10 | |||
@@ -49,18 +48,20 b' impl From<NodeMapError> for RevlogError ' | |||||
49 | fn from(error: NodeMapError) -> Self { |
|
48 | fn from(error: NodeMapError) -> Self { | |
50 | match error { |
|
49 | match error { | |
51 | NodeMapError::MultipleResults => RevlogError::AmbiguousPrefix, |
|
50 | NodeMapError::MultipleResults => RevlogError::AmbiguousPrefix, | |
52 | NodeMapError::RevisionNotInIndex(_) => RevlogError::corrupted(), |
|
51 | NodeMapError::RevisionNotInIndex(rev) => RevlogError::corrupted( | |
|
52 | format!("nodemap points to revision {} not in index", rev), | |||
|
53 | ), | |||
53 | } |
|
54 | } | |
54 | } |
|
55 | } | |
55 | } |
|
56 | } | |
56 |
|
57 | |||
57 | fn corrupted() -> HgError { |
|
58 | fn corrupted<S: AsRef<str>>(context: S) -> HgError { | |
58 | HgError::corrupted("corrupted revlog") |
|
59 | HgError::corrupted(format!("corrupted revlog, {}", context.as_ref())) | |
59 | } |
|
60 | } | |
60 |
|
61 | |||
61 | impl RevlogError { |
|
62 | impl RevlogError { | |
62 | fn corrupted() -> Self { |
|
63 | fn corrupted<S: AsRef<str>>(context: S) -> Self { | |
63 | RevlogError::Other(corrupted()) |
|
64 | RevlogError::Other(corrupted(context)) | |
64 | } |
|
65 | } | |
65 | } |
|
66 | } | |
66 |
|
67 | |||
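The hunk above turns the bare `corrupted()` constructor into one that threads a human-readable context string into the error. The same pattern in Python terms, with hypothetical names:

```python
class CorruptedError(Exception):
    """Stand-in for HgError::corrupted from the Rust hunk above."""

def corrupted(context):
    # Prefix the fixed message with the caller-supplied context,
    # as the Rust version does with format!("corrupted revlog, {}", ...).
    return CorruptedError("corrupted revlog, %s" % context)

err = corrupted("unknown compression header '%s'" % 0x28)
print(err)  # corrupted revlog, unknown compression header '40'
```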
@@ -81,7 +82,6 b' impl Revlog {' | |||||
81 | /// |
|
82 | /// | |
82 | /// It will also open the associated data file if index and data are not |
|
83 | /// It will also open the associated data file if index and data are not | |
83 | /// interleaved. |
|
84 | /// interleaved. | |
84 | #[timed] |
|
|||
85 | pub fn open( |
|
85 | pub fn open( | |
86 | store_vfs: &Vfs, |
|
86 | store_vfs: &Vfs, | |
87 | index_path: impl AsRef<Path>, |
|
87 | index_path: impl AsRef<Path>, | |
@@ -155,7 +155,6 b' impl Revlog {' | |||||
155 |
|
155 | |||
156 | /// Return the revision number for the given node ID, if it exists in this |
|
156 | /// Return the revision number for the given node ID, if it exists in this | |
157 | /// revlog |
|
157 | /// revlog | |
158 | #[timed] |
|
|||
159 | pub fn rev_from_node( |
|
158 | pub fn rev_from_node( | |
160 | &self, |
|
159 | &self, | |
161 | node: NodePrefix, |
|
160 | node: NodePrefix, | |
@@ -205,7 +204,6 b' impl Revlog {' | |||||
205 | /// All entries required to build the final data out of deltas will be |
|
204 | /// All entries required to build the final data out of deltas will be | |
206 | /// retrieved as needed, and the deltas will be applied to the initial |
|
205 | /// retrieved as needed, and the deltas will be applied to the initial | |
207 | /// snapshot to rebuild the final data. |
|
206 | /// snapshot to rebuild the final data. | |
208 | #[timed] |
|
|||
209 | pub fn get_rev_data( |
|
207 | pub fn get_rev_data( | |
210 | &self, |
|
208 | &self, | |
211 | rev: Revision, |
|
209 | rev: Revision, | |
@@ -240,7 +238,6 b' impl Revlog {' | |||||
240 |
|
238 | |||
241 | /// Build the full data of a revision out of its snapshot |
|
239 | /// Build the full data of a revision out of its snapshot | |
242 | /// and its deltas. |
|
240 | /// and its deltas. | |
243 | #[timed] |
|
|||
244 | fn build_data_from_deltas( |
|
241 | fn build_data_from_deltas( | |
245 | snapshot: RevlogEntry, |
|
242 | snapshot: RevlogEntry, | |
246 | deltas: &[RevlogEntry], |
|
243 | deltas: &[RevlogEntry], | |
@@ -329,7 +326,8 @@ impl Revlog {
         &self,
         rev: Revision,
     ) -> Result<RevlogEntry, HgError> {
-        self.get_entry(rev).map_err(|_| corrupted())
+        self.get_entry(rev)
+            .map_err(|_| corrupted(format!("revision {} out of range", rev)))
     }
 }
 
@@ -449,7 +447,10 @@ impl<'a> RevlogEntry<'a> {
         ) {
             Ok(data)
         } else {
-            Err(corrupted())
+            Err(corrupted(format!(
+                "hash check failed for revision {}",
+                self.rev
+            )))
         }
     }
 
@@ -478,7 +479,10 @@ impl<'a> RevlogEntry<'a> {
             // zstd data.
             b'\x28' => Ok(Cow::Owned(self.uncompressed_zstd_data()?)),
             // A proper new format should have had a repo/store requirement.
-            _format_type => Err(corrupted()),
+            format_type => Err(corrupted(format!(
+                "unknown compression header '{}'",
+                format_type
+            ))),
         }
     }
 
@@ -486,12 +490,16 @@ impl<'a> RevlogEntry<'a> {
         let mut decoder = ZlibDecoder::new(self.bytes);
         if self.is_delta() {
             let mut buf = Vec::with_capacity(self.compressed_len as usize);
-            decoder.read_to_end(&mut buf).map_err(|_| corrupted())?;
+            decoder
+                .read_to_end(&mut buf)
+                .map_err(|e| corrupted(e.to_string()))?;
             Ok(buf)
         } else {
             let cap = self.uncompressed_len.max(0) as usize;
             let mut buf = vec![0; cap];
-            decoder.read_exact(&mut buf).map_err(|_| corrupted())?;
+            decoder
+                .read_exact(&mut buf)
+                .map_err(|e| corrupted(e.to_string()))?;
             Ok(buf)
         }
     }
@@ -500,15 +508,15 @@ impl<'a> RevlogEntry<'a> {
         if self.is_delta() {
             let mut buf = Vec::with_capacity(self.compressed_len as usize);
             zstd::stream::copy_decode(self.bytes, &mut buf)
-                .map_err(|_| corrupted())?;
+                .map_err(|e| corrupted(e.to_string()))?;
             Ok(buf)
         } else {
             let cap = self.uncompressed_len.max(0) as usize;
             let mut buf = vec![0; cap];
             let len = zstd::block::decompress_to_buffer(self.bytes, &mut buf)
-                .map_err(|_| corrupted())?;
+                .map_err(|e| corrupted(e.to_string()))?;
             if len != self.uncompressed_len as usize {
-                Err(corrupted())
+                Err(corrupted("uncompressed length does not match"))
             } else {
                 Ok(buf)
             }
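The hunks above all follow one pattern: `corrupted` grows a context argument so call sites can report what actually failed instead of a bare "corrupted" error. A minimal standalone sketch of that pattern (the `HgError`, `corrupted`, and `check_len` names here are illustrative, not hg's real types):

```rust
// Sketch only: a message-carrying error constructor, mirroring the
// change from `corrupted()` to `corrupted(context)` in this series.
#[derive(Debug)]
struct HgError(String);

fn corrupted(context: impl Into<String>) -> HgError {
    HgError(format!("corrupted revlog ({})", context.into()))
}

// Example call site, analogous to the zstd length check above.
fn check_len(expected: usize, actual: usize) -> Result<(), HgError> {
    if actual != expected {
        return Err(corrupted("uncompressed length does not match"));
    }
    Ok(())
}

fn main() {
    let err = check_len(10, 8).unwrap_err();
    // The context string survives into the error value.
    assert!(err.0.contains("uncompressed length"));
    println!("{}", err.0);
}
```

Taking `impl Into<String>` lets callers pass either a `&str` literal or a `format!` result, which is exactly the mix of call sites this diff produces.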
@@ -107,7 +107,10 @@ py_class!(pub class MixedIndex |py| {
             String::from_utf8_lossy(node.data(py)).to_string()
         };
 
-        let prefix = NodePrefix::from_hex(&node_as_string)
+        let prefix = NodePrefix::from_hex(&node_as_string)
+            .map_err(|_| PyErr::new::<ValueError, _>(
+                py, format!("Invalid node or prefix '{}'", node_as_string))
+            )?;
 
         nt.find_bin(idx, prefix)
         // TODO make an inner API returning the node directly
@@ -67,10 +67,19 @@ pub fn run(invocation: &crate::CliInvoca
             let message = "`..` or `.` path segment";
             return Err(CommandError::unsupported(message));
         }
+        let relative_path = working_directory
+            .strip_prefix(&cwd)
+            .unwrap_or(&working_directory);
         let stripped = normalized
             .strip_prefix(&working_directory)
-            // TODO: error message for path arguments outside of the repo
-            .map_err(|_| CommandError::abort(""))?;
+            .map_err(|_| {
+                CommandError::abort(format!(
+                    "abort: {} not under root '{}'\n(consider using '--cwd {}')",
+                    file,
+                    working_directory.display(),
+                    relative_path.display(),
+                ))
+            })?;
         let hg_file = HgPathBuf::try_from(stripped.to_path_buf())
            .map_err(|e| CommandError::abort(e.to_string()))?;
        files.push(hg_file);
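The `relative_path` computation added above uses `strip_prefix` with a fallback: when the repository root is not under the process cwd, the `--cwd` hint falls back to the absolute path. A self-contained sketch of just that fallback (the `hint_path` helper is hypothetical, not rhg's API):

```rust
use std::path::Path;

// Sketch: compute the path to suggest in the `--cwd` hint. If `cwd` is
// a prefix of the working directory, suggest the relative remainder;
// otherwise keep the absolute working directory.
fn hint_path<'a>(working_directory: &'a Path, cwd: &'a Path) -> &'a Path {
    working_directory
        .strip_prefix(cwd)
        .unwrap_or(working_directory)
}

fn main() {
    let wd = Path::new("/repo/sub");
    // cwd is a prefix: the hint is the relative segment.
    assert_eq!(hint_path(wd, Path::new("/repo")), Path::new("sub"));
    // cwd is unrelated: fall back to the absolute path.
    assert_eq!(hint_path(wd, Path::new("/elsewhere")), wd);
}
```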
@@ -517,10 +517,13 @@ fn unsure_is_modified(
         }
         let filelog = repo.filelog(hg_path)?;
         let fs_len = fs_metadata.len();
-        let filelog_entry =
-            filelog.entry_for_node(entry.node_id()?).map_err(|_| {
-                HgError::corrupted("filelog missing node from manifest")
-            })?;
+        let file_node = entry.node_id()?;
+        let filelog_entry = filelog.entry_for_node(file_node).map_err(|_| {
+            HgError::corrupted(format!(
+                "filelog missing node {:?} from manifest",
+                file_node
+            ))
+        })?;
         if filelog_entry.file_data_len_not_equal_to(fs_len) {
             // No need to read file contents:
             // it cannot be equal if it has a different length.
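The context lines above spell out the optimization in `unsure_is_modified`: if the stored length differs from the on-disk length, the file is modified and its contents never need to be read. A trivial sketch of that short-circuit (the `definitely_modified` name is illustrative):

```rust
// Sketch: compare lengths before touching file contents. Unequal
// lengths prove modification; equal lengths are inconclusive and
// would still require a content comparison.
fn definitely_modified(stored_len: u64, fs_len: u64) -> bool {
    stored_len != fs_len
}

fn main() {
    assert!(definitely_modified(10, 12)); // different size: modified
    assert!(!definitely_modified(10, 10)); // same size: must read contents
}
```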
@@ -467,6 +467,7 @@ modern form of the option
   re-delta-fulladd
       every revision will be re-added as if it was new content. It will go through the full storage mechanism giving extensions a chance to process it (eg. lfs). This is similar to "re-delta-all" but even slower since more logic is involved.
 
+
   $ hg debugupgrade --optimize re-delta-parent --quiet
   requirements
      preserved: dotencode, fncache, generaldelta, revlogv1, share-safe, sparserevlog, store (no-rust !)
@@ -480,6 +481,20 @@ modern form of the option
     - manifest
 
 
+passing multiple optimization:
+
+  $ hg debugupgrade --optimize re-delta-parent --optimize re-delta-multibase --quiet
+  requirements
+     preserved: * (glob)
+
+  optimisations: re-delta-multibase, re-delta-parent
+
+  processed revlogs:
+    - all-filelogs
+    - changelog
+    - manifest
+
+
 unknown optimization:
 
   $ hg debugupgrade --optimize foobar