branching: merge stable into default
Raphaël Gomès -
r50480:18282cf1 merge default
@@ -0,0 +1,98 b''
1 = Mercurial 6.3 =
2
3 == New Features ==
4
5 * testlib: add `--raw-sha1` option to `f`
6 * rhg: add `config.rhg` helptext
7 * config: add alias from `hg help rhg` to `hg help rust`
8 * rhg: add a config option to fall back immediately
9 * bundle: introduce a --exact option
10 * perf-bundle: add a new command to benchmark bundle creation time
11 * perf-bundle: accept --rev arguments
12 * perf-bundle: accept --type argument
13 * perf-unbundle: add a perf command to time the unbundle operation
14 * perf: introduce a benchmark for delta-find
15 * contrib: add support for rhel9
16 * phase-shelve: Implement a 'shelve.store' experimental config
17 * debug-delta-find: introduce a quiet mode
18 * sort-revset: introduce a `random` variant
19 * phase: introduce a dedicated requirement for the `archived` phase
20 * rebase: add boolean config item rebase.store-source
21 * rhg: make [rhg status -v] work when it needs no extra output
22 * rhg: support "!" syntax for disabling extensions
23 * rhg: add debugrhgsparse command to help figure out bugs in rhg
24 * rhg: add sparse support
25 * rhg-status: add support for narrow clones
26 * templates: add filter to reverse list
27 * contrib: add pull_logger extension
28 * revset: handle wdir() in `roots()`
29 * revset: handle wdir() in `sort(..., -topo)`
30 * rhg: support tweakdefaults
31 * rhg: parallelize computation of [unsure_is_modified]
32
33 == Default Format Change ==
34
35 These changes affect newly created repositories (or new clones) made with
36 Mercurial 6.3.
37
38 == New Experimental Features ==
39
40 == Bug Fixes ==
41
42 * shelve: demonstrate that the state is different across platforms (issue6735)
43 * shelve: in test for trailing whitespace, strip commit (issue6735)
44 * shelve: remove strip and rely on prior state (issue6735)
45 * tests: fix http-bad-server expected errors for python 3.10 (issue6643)
46 * status: let `--no-copies` override `ui.statuscopies`
47 * releasenotes: use re.MULTILINE mode when checking admonitions
48 * rhg: fallback to slow path on invalid patterns in hgignore
49 * Fix a bunch of leftover str/bytes issues from Python 3 migration
50 * keepalive: ensure `close_all()` actually closes all cached connections
51 * lfs: fix blob corruption when transferring with workers on posix
52 * lfs: avoid closing connections when the worker doesn't fork
53 * dirstate-v2: update constant that wasn't kept in sync
54 * dirstate-v2: fix edge case where entries aren't sorted
55 * upgrade: no longer keep all revlogs in memory at any point
56 * rust-status: save new dircache even if just invalidated
57 * dirstate-v2: hash the source of the ignore patterns as well
58 * rhg: fallback when encountering ellipsis revisions
59 * shelve: handle empty parents and nodestoremove in shelvedstate (issue6748)
60 * profile: prevent a crash when line number is unknown
61 * tags-fnode-cache: do not repeatedly open the filelog in a loop
62 * tags-fnode-cache: skip building a changectx in getfnode
63 * rust: create wrapper struct to reduce `regex` contention issues
64
65 == Backwards Compatibility Changes ==
66
67 * chg worker processes will now correctly load per-repository configuration
68 when given both a relative `--repository` path and an alternate working
69 directory via `--cwd`. A side-effect of this change is that these workers
70 will now return an error if hg cannot find the current working directory,
71 even when a different directory is specified via `--cwd`.
72 * phase: rename the requirement for internal-phase from `internal-phase` to `use-internal-phase` (see 74fb1842f8b962cf03d7cd5b841dbcf2ae065587)
73
74 == Internal API Changes ==
75
76 == Miscellaneous ==
77
78 * sslutil: use proper attribute to select python 3.7+
79 * typing: suppress a few pyi-errors with more recent pytype
80 * ci: bump pytype to 2022.03.29
81 * bundlespec: add documentation about existing option
82 * subrepo: avoid opening console window for non-native subrepos on Windows
83 * setup: unconditionally enable the `long-paths-support` option on Windows
84 * setup: use the full executable manifest from `python.exe`
85 * tests: work around libmagic bug in svn subrepo tests
86 * packagelib: use python3 by default
87 * Improve `hg bisect` performance
88 * perf: properly process formatter option in perf::unbundle
89 * compare-disco: miscellaneous display improvements
90 * fsmonitor: better compatibility with newer Pythons
91 * revlog: finer computation of "issnapshot"
92 * rhg: don't fallback if `strip` or `rebase` are activated
93 * perf: make perf::bundle compatible before 61ba04693d65
94 * perf: make perf::bundle compatible down to 5.2
95 * perf-unbundle: improve compatibility
96 * run-tests: display the time it took to install Mercurial
97 * mergetools: don't let meld open all changed files on startup
98 * dirstate-v2: skip evaluation of hgignore regex on cached directories
@@ -234,3 +234,5 b' 094a5fa3cf52f936e0de3f1e507c818bee5ece6b'
234 f69bffd00abe3a1b94d1032eb2c92e611d16a192 0 iQHNBAABCgA3FiEEH2b4zfZU6QXBHaBhoR4BzQ4F2VYFAmLifPsZHGFscGhhcmVAcmFwaGFlbGdvbWVzLmRldgAKCRChHgHNDgXZVukEC/oCa6AzaJlWh6G45Ap7BCWyB3EDWmcep07W8zRTfHQuuXslNFxRfj8O1DLVP05nDa1Uo2u1nkDxTH+x1fX0q4G8U/yLzCNsiBkCWSeEM8IeolarzzzvFe9Zk+UoRoRlc+vKAjxChtYTEnggQXjLdK+EdbXfEz2kJwdYlGX3lLr0Q2BKnBjSUvFe1Ma/1wxEjZIhDr6t7o8I/49QmPjK7RCYW1WBv77gnml0Oo8cxjDUR9cjqfeKtXKbMJiCsoXCS0hx3vJkBOzcs4ONEIw934is38qPNBBsaUjMrrqm0Mxs6yFricYqGVpmtNijsSRsfS7ZgNfaGaC2Bnu1E7P0A+AzPMPf/BP4uW9ixMbP1hNdr/6N41n19lkdjyQXVWGhB8RM+muf3jc6ZVvgZPMlxvFiz4/rP9nVOdrB96ssFZ9V2Ca/j2tU40AOgjI6sYsAR8pSSgmIdqe+DZQISHTT8D+4uVbtwYD49VklBcxudlbd3dAc5z9rVI3upsyByfRMROc=
235 b5c8524827d20fe2e0ca8fb1234a0fe35a1a36c7 0 iQHNBAABCgA3FiEEH2b4zfZU6QXBHaBhoR4BzQ4F2VYFAmMQxRoZHGFscGhhcmVAcmFwaGFlbGdvbWVzLmRldgAKCRChHgHNDgXZVm2gC/9HikIaOE49euIoLj6ctYsJY9PSQK4Acw7BXvdsTVMmW27o87NxH75bGBbmPQ57X1iuKLCQ1RoU3p2Eh1gPbkIsouWO3enBIfsFmkPtWQz28zpCrI9CUXg2ug4PGFPN9XyxNmhJ7vJ4Cst2tRxz9PBKUBO2EXJN1UKIdMvurIeT2sQrDQf1ePc85QkXx79231wZyF98smnV7UYU9ZPFnAzfcuRzdFn7UmH3KKxHTZQ6wAevj/fJXf5NdTlqbeNmq/t75/nGKXSFPWtRGfFs8JHGkkLgBiTJVsHYSqcnKNdVldIFUoJP4c2/SPyoBkqNvoIrr73XRo8tdDF1iY4ddmhHMSmKgSRqLnIEgew3Apa/IwPdolg+lMsOtcjgz4CB9agJ+O0+rdZd2ZUBNMN0nBSUh+lrkMjat8TJAlvut9h/6HAe4Dz8WheoWol8f8t1jLOJvbdvsMYi+Hf9CZjp7PlHT9y/TnDarcw2YIrf6Bv+Fm14ZDelu9VlF2zR1X8cofY=
236 dbdee8ac3e3fcdda1fa55b90c0a235125b7f8e6f 0 iQHNBAABCgA3FiEEH2b4zfZU6QXBHaBhoR4BzQ4F2VYFAmM77dQZHGFscGhhcmVAcmFwaGFlbGdvbWVzLmRldgAKCRChHgHNDgXZViOTC/sEPicecV3h3v47VAIUigyKNWpcJ+epbRRaH6gqHTkexvULOPL6nJrdfBHkNry1KRtOcjaxQvtWZM+TRCfqsE++Q3ZYakRpWKontb/8xQSbmENvbnElLh6k0STxN/JVc480us7viDG5pHS9DLsgbkHmdCv5KdmSE0hphRrWX+5X7RTqpAfCgdwTkacB5Geu9QfRnuYjz6lvqbs5ITKtBGUYbg3hKzw2894FHtMqV6qa5rk1ZMmVDbQfKQaMVG41UWNoN7bLESi69EmF4q5jsXdIbuBy0KtNXmB+gdAaHN03B5xtc+IsQZOTHEUNlMgov3yEVTcA6fSG9/Z+CMsdCbyQxqkwakbwWS1L2WcAsrkHyafvbNdR2FU34iYRWOck8IUg2Ffv7UFrHabJDy+nY7vcTLb0f7lV4jLXMWEt1hvXWMYek6Y4jtWahg6fjmAdD3Uf4BMfsTdnQKPvJpWXx303jnST3xvFvuqbbbDlhLfAB9M6kxVntvCVkMlMpe39+gM=
237 a3356ab610fc50000cf0ba55c424a4d96da11db7 0 iQHNBAABCgA3FiEEH2b4zfZU6QXBHaBhoR4BzQ4F2VYFAmNWr44ZHGFscGhhcmVAcmFwaGFlbGdvbWVzLmRldgAKCRChHgHNDgXZVjalC/9ddIeZ1qc3ykUZb+vKw+rZ6WS0rnDgrfFYBQFooK106lB+IC2PlghXSrY2hXn/7Dk95bK90S9AO4TFidDPiRYuBYdXR+G+CzmYFtCQzGBgGyrWgpUYsZUeA3VNqZ+Zbwn/vRNiFVNDsrFudjE6xEwaYdepmoXJsv3NdgZME7T0ZcDIujIa7ihiXvGFPVzMyF/VZg4QvdmerC4pvkeKC3KRNjhBkMQbf0GtQ4kpgMFBj5bmgXbq9rftL5yYy+rDiRQ0qzpOMHbdxvSZjPhK/do5M3rt2cjPxtF+7R3AHxQ6plOf0G89BONYebopY92OIyA3Qg9d/zIKDmibhgyxj4G9YU3+38gPEpsNeEw0fkyxhQbCY3QpNX4JGFaxq5GVCUywvVIuqoiOcQeXlTDN70zhAQHUx0rcGe1Lc6I+rT6Y2lNjJIdiCiMAWIl0D+4SVrLqdMYdSMXcBajTxOudb9KZnu03zNMXuLb8FFk1lFzkY7AcWA++d02f15P3sVZsDXE=
238 04f1dba53c961dfdb875c8469adc96fa999cfbed 0 iQHNBAABCgA3FiEEH2b4zfZU6QXBHaBhoR4BzQ4F2VYFAmNyC5sZHGFscGhhcmVAcmFwaGFlbGdvbWVzLmRldgAKCRChHgHNDgXZVqF+C/4uLaV/4nizZkWD3PjU1WyFYDg4bWDFOHb+PWuQ/3uoHXu1/EaYRnqmcDyOSJ99aXZBQ78rm9xhjxdmbklZ4ll1EGkqfTiYH+ld+rqE8iaqlc/DVy7pFXaenYwxletzO1OezzwF4XDLi6hcqzY9CXA3NM40vf6W4Rs5bEIi4eSbgJSNB1ll6ZzjvkU5bWTUoxSH+fxIJUuo27El2etdlKFQkS3/oTzWHejpVn6SQ1KyojTHMQBDRK4rqJBISp3gTf4TEezb0q0HTutJYDFdQNIRqx7V1Ao4Ei+YNbenJzcWJOA/2uk4V0AvZ4tnjgAzBYKwvIL1HfoQ0OmILeXjlVzV7Xu0G57lavum0sKkz/KZLKyYhKQHjYQLE7YMSM2y6/UEoFNN577vB47CHUq446PSMb8dGs2rmj66rj4iz5ml0yX+V9O2PpmIKoPAu1Y5/6zB9rCL76MRx182IW2m3rm4lsTfXPBPtea/OFt6ylxqCJRxaA0pht4FiAOvicPKXh4=
@@ -247,3 +247,5 b' 094a5fa3cf52f936e0de3f1e507c818bee5ece6b'
247 f69bffd00abe3a1b94d1032eb2c92e611d16a192 6.2.1
248 b5c8524827d20fe2e0ca8fb1234a0fe35a1a36c7 6.2.2
249 dbdee8ac3e3fcdda1fa55b90c0a235125b7f8e6f 6.2.3
250 a3356ab610fc50000cf0ba55c424a4d96da11db7 6.3rc0
251 04f1dba53c961dfdb875c8469adc96fa999cfbed 6.3.0
@@ -2676,11 +2676,34 b' def perf_unbundle(ui, repo, fname, **opt'
2676 """benchmark application of a bundle in a repository.
2677
2678 This does not include the final transaction processing"""
2679
2679 from mercurial import exchange
2680 from mercurial import exchange
2680 from mercurial import bundle2
2681 from mercurial import bundle2
2682 from mercurial import transaction
2681
2683
2682 opts = _byteskwargs(opts)
2684 opts = _byteskwargs(opts)
2683
2685
2686 ### some compatibility hotfix
2687 #
2688 # the data attribute is dropped in 63edc384d3b7 a changeset introducing a
2689 # critical regression that breaks transaction rollback for files that are
2690 # de-inlined.
2691 method = transaction.transaction._addentry
2692 pre_63edc384d3b7 = "data" in getargspec(method).args
2693 # the `detailed_exit_code` attribute is introduced in 33c0c25d0b0f
2694 # a changeset that is a close descendant of 18415fc918a1, the changeset
2695 # that concludes the fix run for the bug introduced in 63edc384d3b7.
2696 args = getargspec(error.Abort.__init__).args
2697 post_18415fc918a1 = "detailed_exit_code" in args
2698
2699 old_max_inline = None
2700 try:
2701 if not (pre_63edc384d3b7 or post_18415fc918a1):
2702 # disable inlining
2703 old_max_inline = mercurial.revlog._maxinline
2704 # large enough to never happen
2705 mercurial.revlog._maxinline = 2 ** 50
2706
2684 with repo.lock():
2707 with repo.lock():
2685 bundle = [None, None]
2708 bundle = [None, None]
2686 orig_quiet = repo.ui.quiet
2709 orig_quiet = repo.ui.quiet
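The hotfix above detects older and newer Mercurial internals by inspecting function signatures rather than comparing version numbers. A minimal sketch of that detection pattern, using a hypothetical function in place of `transaction._addentry`:

```python
import inspect

def take(items, count=10, data=None):
    """Stand-in for an internal API whose arguments changed across releases."""
    return list(items)[:count]

# Feature-detect by looking at the signature, not at a version number --
# the same trick the hotfix applies to transaction._addentry and
# error.Abort.__init__.
has_data_param = "data" in inspect.getfullargspec(take).args
```

The advantage over version checks is that the probe keeps working on backports and vendored forks where version numbers lie.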
@@ -2699,7 +2722,8 b' def perf_unbundle(ui, repo, fname, **opt'
2699 f.seek(0)
2722 f.seek(0)
2700 bundle[0] = exchange.readbundle(ui, f, fname)
2723 bundle[0] = exchange.readbundle(ui, f, fname)
2701 bundle[1] = repo.transaction(b'perf::unbundle')
2724 bundle[1] = repo.transaction(b'perf::unbundle')
2702 bundle[1]._report = noop_report # silence the transaction
2725 # silence the transaction
2726 bundle[1]._report = noop_report
2703
2727
2704 def apply():
2728 def apply():
2705 gen, tr = bundle
2729 gen, tr = bundle
@@ -2719,6 +2743,9 b' def perf_unbundle(ui, repo, fname, **opt'
2719 gen, tr = bundle
2743 gen, tr = bundle
2720 if tr is not None:
2744 if tr is not None:
2721 tr.abort()
2745 tr.abort()
2746 finally:
2747 if old_max_inline is not None:
2748 mercurial.revlog._maxinline = old_max_inline
2722
2749
2723
2750
2724 @command(
2751 @command(
@@ -23,9 +23,9 b' from . import common'
23 # these do not work with demandimport, blacklist
24 demandimport.IGNORES.update(
25 [
26 b'breezy.transactions',
26 'breezy.transactions',
27 b'breezy.urlutils',
27 'breezy.urlutils',
28 b'ElementPath',
28 'ElementPath',
29 ]
30 )
31
@@ -11,7 +11,7 b''
11
12 from mercurial import demandimport
13
14 demandimport.IGNORES.update([b'pkgutil', b'pkg_resources', b'__main__'])
14 demandimport.IGNORES.update(['pkgutil', 'pkg_resources', '__main__'])
15
16 from mercurial import (
17 encoding,
@@ -601,14 +601,30 b' class _gitlfsremote:'
601 continue
602 raise
603
604 # Until https multiplexing gets sorted out
604 # Until https multiplexing gets sorted out. It's not clear if
605 # ConnectionManager.set_ready() is externally synchronized for thread
606 # safety with Windows workers.
605 if self.ui.configbool(b'experimental', b'lfs.worker-enable'):
607 if self.ui.configbool(b'experimental', b'lfs.worker-enable'):
608 # The POSIX workers are forks of this process, so before spinning
609 # them up, close all pooled connections. Otherwise, there's no way
610 # to coordinate between them about who is using what, and the
611 # transfers will get corrupted.
612 #
613 # TODO: add a function to keepalive.ConnectionManager to mark all
614 # ready connections as in use, and roll that back after the fork?
615 # That would allow the existing pool of connections in this process
616 # to be preserved.
617 def prefork():
618 for h in self.urlopener.handlers:
619 getattr(h, "close_all", lambda: None)()
620
606 oids = worker.worker(
621 oids = worker.worker(
607 self.ui,
622 self.ui,
608 0.1,
623 0.1,
609 transfer,
624 transfer,
610 (),
625 (),
611 sorted(objects, key=lambda o: o.get(b'oid')),
626 sorted(objects, key=lambda o: o.get(b'oid')),
627 prefork=prefork,
612 )
628 )
613 else:
629 else:
614 oids = transfer(sorted(objects, key=lambda o: o.get(b'oid')))
630 oids = transfer(sorted(objects, key=lambda o: o.get(b'oid')))
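The `prefork` hook exists so the POSIX fork-based workers never inherit pooled connections. A small sketch of the pattern with stand-ins (`FakePool` and `run_workers` are illustrative, not Mercurial APIs):

```python
class FakePool:
    """Hypothetical stand-in for keepalive.ConnectionManager."""

    def __init__(self):
        self.connections = ["conn-a", "conn-b"]

    def close_all(self):
        # Drop pooled connections so forked children cannot share sockets.
        self.connections.clear()

def run_workers(transfer, items, prefork=None):
    # Mirrors worker.worker(): the hook runs before any fork would happen.
    if prefork is not None:
        prefork()
    return [transfer(i) for i in items]  # sequential stand-in for the workers

pool = FakePool()
results = run_workers(lambda o: o.upper(), ["a", "b"], prefork=pool.close_all)
```

After the hook runs, each child rebuilds its own connections instead of two processes writing to one socket, which is the corruption the fix describes.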
@@ -56,11 +56,11 b' assert TREE_METADATA_SIZE == TREE_METADA'
56 assert NODE_SIZE == NODE.size
57
58 # match constant in mercurial/pure/parsers.py
59 DIRSTATE_V2_DIRECTORY = 1 << 5
59 DIRSTATE_V2_DIRECTORY = 1 << 13
60
61
62 def parse_dirstate(map, copy_map, data, tree_metadata):
63 """parse a full v2-dirstate from a binary data into dictionnaries:
63 """parse a full v2-dirstate from a binary data into dictionaries:
64
65 - map: a {path: entry} mapping that will be filled
66 - copy_map: a {path: copy-source} mapping that will be filled
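`DIRSTATE_V2_DIRECTORY` is a bit flag, so the constant that fell out of sync changes which bit gets tested; a quick sketch of the check:

```python
# Flag value matching mercurial/pure/parsers.py after the sync fix.
DIRSTATE_V2_DIRECTORY = 1 << 13

def is_directory(flags):
    # The node describes a directory iff this bit is set in its flags field.
    return bool(flags & DIRSTATE_V2_DIRECTORY)
```

With the old `1 << 5` value, the pure parser tested a bit the Rust writer never set, which is exactly the kind of drift the fix removes.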
@@ -176,7 +176,7 b' class Node:'
176 def pack_dirstate(map, copy_map):
177 """
178 Pack `map` and `copy_map` into the dirstate v2 binary format and return
179 the bytearray.
179 the tuple of (data, metadata) bytearrays.
180
181 The on-disk format expects a tree-like structure where the leaves are
182 written first (and sorted per-directory), going up levels until the root
@@ -191,7 +191,7 b' def pack_dirstate(map, copy_map):'
191 # Algorithm explanation
192
193 This explanation does not talk about the different counters for tracked
194 descendents and storing the copies, but that work is pretty simple once this
194 descendants and storing the copies, but that work is pretty simple once this
195 algorithm is in place.
196
197 ## Building a subtree
@@ -272,9 +272,9 b' def pack_dirstate(map, copy_map):'
272 )
273 return data, tree_metadata
274
275 sorted_map = sorted(map.items(), key=lambda x: x[0])
275 sorted_map = sorted(map.items(), key=lambda x: x[0].split(b"/"))
276
277 # Use a stack to not have to only remember the nodes we currently need
277 # Use a stack to have to only remember the nodes we currently need
278 # instead of building the entire tree in memory
279 stack = []
280 current_node = Node(b"", None)
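Sorting by path components rather than raw bytes matters because some bytes that may appear in file names (such as `-`, 0x2d) sort before the separator `/` (0x2f), so a plain byte sort can put a sibling's child before a directory's own entries. A short demonstration with hypothetical paths:

```python
paths = [b"a-b/c", b"a/d"]

byte_order = sorted(paths)
component_order = sorted(paths, key=lambda p: p.split(b"/"))

# Byte-wise, b"-" (0x2d) sorts before b"/" (0x2f), so b"a-b/c" comes first;
# component-wise, [b"a", b"d"] sorts before [b"a-b", b"c"], so everything
# under directory b"a" precedes its sibling b"a-b" -- the order the
# tree-building stack relies on.
```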
@@ -486,6 +486,7 b' helptable = sorted('
486 [
487 b'rust',
488 b'rustext',
489 b'rhg',
489 ],
490 ],
490 _(b'Rust in Mercurial'),
491 _(b'Rust in Mercurial'),
491 loaddoc(b'rust'),
492 loaddoc(b'rust'),
@@ -2156,6 +2156,51 b' Alias definitions for revsets. See :hg:`'
2156 Currently, only the rebase and absorb commands consider this configuration.
2157 (EXPERIMENTAL)
2158
2159 ``rhg``
2160 -------
2161
2162 The pure Rust fast-path for Mercurial. See `rust/README.rst` in the Mercurial repository.
2163
2164 ``fallback-executable``
2165 Path to the executable to run in a sub-process when falling back to
2166 another implementation of Mercurial.
2167
2168 ``fallback-immediately``
2169 Fall back to ``fallback-executable`` as soon as possible, regardless of
2170 the `rhg.on-unsupported` configuration. Useful for debugging, for example to
2171 bypass `rhg` if the default `hg` points to `rhg`.
2172
2173 Note that because this requires loading the configuration, it is possible
2174 that `rhg` errors out before being able to fall back.
2175
2176 ``ignored-extensions``
2177 Controls which extensions should be ignored by `rhg`. By default, `rhg`
2178 triggers the `rhg.on-unsupported` behavior for any unsupported extension.
2179 Users can disable that behavior when they know that a given extension
2180 does not need support from `rhg`.
2181
2182 Expects a list of extension names, or ``*`` to ignore all extensions.
2183
2184 Note: ``*:<suboption>`` is also a valid extension name for this
2185 configuration option.
2186 As of this writing, the only valid "global" suboption is ``required``.
2187
2188 ``on-unsupported``
2189 Controls the behavior of `rhg` when detecting unsupported features.
2190
2191 Possible values are `abort` (default), `abort-silent` and `fallback`.
2192
2193 ``abort``
2194 Print an error message describing what feature is not supported,
2195 and exit with code 252
2196
2197 ``abort-silent``
2198 Silently exit with code 252
2199
2200 ``fallback``
2201 Try running the fallback executable with the same parameters
2202 (the fallback reason is traced; use `RUST_LOG=trace` to see it).
2203
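The options above are set in the ``[rhg]`` section of a configuration file. A hypothetical hgrc fragment combining them (the path and extension names are illustrative):

```
[rhg]
on-unsupported = fallback
fallback-executable = /usr/bin/hg
ignored-extensions = churn, gpg
```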
2159 ``share``
2204 ``share``
2160 ---------
2205 ---------
2161
2206
@@ -2171,9 +2216,9 b' Alias definitions for revsets. See :hg:`'
2171 ``allow``
2216 ``allow``
2172 Respects the feature presence in the share source
2217 Respects the feature presence in the share source
2173 ``upgrade-abort``
2218 ``upgrade-abort``
2174 tries to upgrade the share to use share-safe; if it fails, aborts
2219 Tries to upgrade the share to use share-safe; if it fails, aborts
2175 ``upgrade-allow``
2220 ``upgrade-allow``
2176 tries to upgrade the share; if it fails, continue by
2221 Tries to upgrade the share; if it fails, continue by
2177 respecting the share source setting
2222 respecting the share source setting
2178
2223
2179 Check :hg:`help config.format.use-share-safe` for details about the
2224 Check :hg:`help config.format.use-share-safe` for details about the
@@ -2199,9 +2244,9 b' Alias definitions for revsets. See :hg:`'
2199 ``allow``
2244 ``allow``
2200 Respects the feature presence in the share source
2245 Respects the feature presence in the share source
2201 ``downgrade-abort``
2246 ``downgrade-abort``
2202 tries to downgrade the share to not use share-safe; if it fails, aborts
2247 Tries to downgrade the share to not use share-safe; if it fails, aborts
2203 ``downgrade-allow``
2248 ``downgrade-allow``
2204 tries to downgrade the share to not use share-safe;
2249 Tries to downgrade the share to not use share-safe;
2205 if it fails, continue by respecting the shared source setting
2250 if it fails, continue by respecting the shared source setting
2206
2251
2207 Check :hg:`help config.format.use-share-safe` for details about the
2252 Check :hg:`help config.format.use-share-safe` for details about the
@@ -283,8 +283,16 b' We define:'
283 in inclusion order. This definition is recursive, as included files can
284 themselves include more files.
285
286 This hash is defined as the SHA-1 of the concatenation (in sorted
286 * "filepath" as the bytes of the ignore file path
287 order) of the "expanded contents" of each "root" ignore file.
287 relative to the root of the repository if inside the repository,
288 or the untouched path as defined in the configuration.
289
290 This hash is defined as the SHA-1 of the following line format:
291
292 <filepath> <sha1 of the "expanded contents">\n
293
294 for each "root" ignore file. (in sorted order)
295
288 (Note that computing this does not require actually concatenating
296 (Note that computing this does not require actually concatenating
289 into a single contiguous byte sequence.
297 into a single contiguous byte sequence.
290 Instead a SHA-1 hasher object can be created
298 Instead a SHA-1 hasher object can be created
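The per-file line format described above can be sketched as follows (the ignore file paths and contents are hypothetical):

```python
import hashlib

# Hypothetical "root" ignore files and their already-expanded contents.
expanded = {
    b".hgignore": b"*.pyc\nsyntax: glob\n",
    b"subdir/.hgignore": b"build/\n",
}

hasher = hashlib.sha1()
for filepath in sorted(expanded):
    content_sha1 = hashlib.sha1(expanded[filepath]).hexdigest().encode("ascii")
    # One '<filepath> <sha1 of the "expanded contents">\n' line per root
    # ignore file, fed incrementally -- no concatenated byte sequence needed.
    hasher.update(filepath + b" " + content_sha1 + b"\n")

ignore_hash = hasher.hexdigest()
```

Including the file path (not just the contents) is what lets a moved or renamed ignore file invalidate the cache, per the dirstate-v2 fix above.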
@@ -89,6 +89,8 b' execution of certain commands while addi'
89 The only way of trying it out is by building it from source. Please refer to
89 The only way of trying it out is by building it from source. Please refer to
90 `rust/README.rst` in the Mercurial repository.
90 `rust/README.rst` in the Mercurial repository.
91
91
92 See `hg help config.rhg` for configuration options.
93
92 Contributing
94 Contributing
93 ============
95 ============
94
96
@@ -166,7 +166,9 b' class ConnectionManager:'
166 if host:
167 return list(self._hostmap[host])
168 else:
169 return dict(self._hostmap)
169 return dict(
170 {h: list(conns) for (h, conns) in self._hostmap.items()}
171 )
170
172
171
173
172 class KeepAliveHandler:
174 class KeepAliveHandler:
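The fix matters because `dict(self._hostmap)` copies only the outer mapping, leaving callers aliased to the per-host connection lists. A minimal illustration of the difference (host and connection names are made up):

```python
hostmap = {"example.com": ["conn-1", "conn-2"]}

shallow = dict(hostmap)                                      # old behavior
snapshot = {h: list(conns) for h, conns in hostmap.items()}  # new behavior

shallow["example.com"].append("conn-3")   # mutates hostmap's own list too
snapshot["example.com"].append("conn-4")  # private to the snapshot
```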
@@ -700,6 +702,17 b' class HTTPConnection(httplib.HTTPConnect'
700 self.sentbytescount = 0
702 self.sentbytescount = 0
701 self.receivedbytescount = 0
703 self.receivedbytescount = 0
702
704
705 def __repr__(self):
706 base = super(HTTPConnection, self).__repr__()
707 local = "(unconnected)"
708 s = self.sock
709 if s:
710 try:
711 local = "%s:%d" % s.getsockname()
712 except OSError:
713 pass # Likely not connected
714 return "<%s: %s <--> %s:%d>" % (base, local, self.host, self.port)
715
703
716
704 #########################################################################
717 #########################################################################
705 ##### TEST FUNCTIONS
718 ##### TEST FUNCTIONS
@@ -3738,8 +3738,10 b' def newreporequirements(ui, createopts):'
3738
3739 if ui.configbool(b'format', b'use-dirstate-tracked-hint'):
3740 version = ui.configint(b'format', b'use-dirstate-tracked-hint.version')
3741 msg = _("ignoring unknown tracked key version: %d\n")
3741 msg = _(b"ignoring unknown tracked key version: %d\n")
3742 hint = _("see `hg help config.format.use-dirstate-tracked-hint-version")
3742 hint = _(
3743 b"see `hg help config.format.use-dirstate-tracked-hint-version"
3744 )
3743 if version != 1:
3745 if version != 1:
3744 ui.warn(msg % version, hint=hint)
3746 ui.warn(msg % version, hint=hint)
3745 else:
3747 else:
@@ -278,9 +278,9 b' class shelvedstate:'
278 try:
279 d[b'originalwctx'] = bin(d[b'originalwctx'])
280 d[b'pendingctx'] = bin(d[b'pendingctx'])
281 d[b'parents'] = [bin(h) for h in d[b'parents'].split(b' ')]
281 d[b'parents'] = [bin(h) for h in d[b'parents'].split(b' ') if h]
282 d[b'nodestoremove'] = [
283 bin(h) for h in d[b'nodestoremove'].split(b' ')
283 bin(h) for h in d[b'nodestoremove'].split(b' ') if h
284 ]
285 except (ValueError, KeyError) as err:
286 raise error.CorruptedState(stringutil.forcebytestr(err))
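The added `if h` guards handle how `bytes.split` treats an empty field; a quick demonstration:

```python
stored = b""  # e.g. a shelvedstate "parents" field with nothing recorded

chunks = stored.split(b" ")
filtered = [h for h in chunks if h]

# b"".split(b" ") yields [b""], not []: without the `if h` guard an empty
# field produces one bogus empty entry instead of no entries at all.
```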
@@ -236,8 +236,8 b' class CodeSite:'
236
237 def getsource(self, length):
238 if self.source is None:
239 lineno = self.lineno - 1
240 try:
239 try:
240 lineno = self.lineno - 1 # lineno can be None
241 with open(self.path, b'rb') as fp:
242 for i, line in enumerate(fp):
243 if i == lineno:
@@ -773,7 +773,7 b' def display_hotpath(data, fp, limit=0.05'
773 codestring = codepattern % (
774 prefix,
775 b'line'.rjust(spacing_len),
776 site.lineno,
776 site.lineno if site.lineno is not None else -1,
777 b''.ljust(max(0, 4 - len(str(site.lineno)))),
778 site.getsource(30),
779 )
@@ -491,11 +491,14 b' def _getfnodes(ui, repo, nodes):'
491 cachefnode = {}
492 validated_fnodes = set()
493 unknown_entries = set()
494
495 flog = None
494 for node in nodes:
496 for node in nodes:
495 fnode = fnodescache.getfnode(node)
497 fnode = fnodescache.getfnode(node)
496 flog = repo.file(b'.hgtags')
497 if fnode != repo.nullid:
498 if fnode != repo.nullid:
498 if fnode not in validated_fnodes:
499 if fnode not in validated_fnodes:
500 if flog is None:
501 flog = repo.file(b'.hgtags')
499 if flog.hasnode(fnode):
502 if flog.hasnode(fnode):
500 validated_fnodes.add(fnode)
503 validated_fnodes.add(fnode)
501 else:
504 else:
@@ -758,8 +761,7 b' class hgtagsfnodescache:'
758 if node == self._repo.nullid:
761 if node == self._repo.nullid:
759 return node
762 return node
760
763
761 ctx = self._repo[node]
764 rev = self._repo.changelog.rev(node)
762 rev = ctx.rev()
763
765
764 self.lookupcount += 1
766 self.lookupcount += 1
765
767
@@ -852,7 +852,7 b' class UpgradeOperation(BaseOperation):'
852
853 return False
854
855 def _write_labeled(self, l, label):
855 def _write_labeled(self, l, label: bytes):
856 """
857 Utility function to aid writing of a list under one label
858 """
@@ -867,19 +867,19 b' class UpgradeOperation(BaseOperation):'
867 self.ui.write(_(b'requirements\n'))
868 self.ui.write(_(b' preserved: '))
869 self._write_labeled(
870 self._preserved_requirements, "upgrade-repo.requirement.preserved"
870 self._preserved_requirements, b"upgrade-repo.requirement.preserved"
871 )
872 self.ui.write((b'\n'))
873 if self._removed_requirements:
874 self.ui.write(_(b' removed: '))
875 self._write_labeled(
876 self._removed_requirements, "upgrade-repo.requirement.removed"
876 self._removed_requirements, b"upgrade-repo.requirement.removed"
877 )
878 self.ui.write((b'\n'))
879 if self._added_requirements:
880 self.ui.write(_(b' added: '))
881 self._write_labeled(
882 self._added_requirements, "upgrade-repo.requirement.added"
882 self._added_requirements, b"upgrade-repo.requirement.added"
883 )
884 self.ui.write((b'\n'))
885 self.ui.write(b'\n')
@@ -893,7 +893,7 b' class UpgradeOperation(BaseOperation):'
893 self.ui.write(_(b'optimisations: '))
894 self._write_labeled(
895 [a.name for a in optimisations],
896 "upgrade-repo.optimisation.performed",
896 b"upgrade-repo.optimisation.performed",
897 )
898 self.ui.write(b'\n\n')
899
@@ -233,18 +233,18 b' def _clonerevlogs('
233
234 # This is for the separate progress bars.
235 if rl_type & store.FILEFLAGS_CHANGELOG:
236 changelogs[unencoded] = (rl_type, rl)
236 changelogs[unencoded] = rl_type
237 crevcount += len(rl)
238 csrcsize += datasize
239 crawsize += rawsize
240 elif rl_type & store.FILEFLAGS_MANIFESTLOG:
241 manifests[unencoded] = (rl_type, rl)
241 manifests[unencoded] = rl_type
242 mcount += 1
243 mrevcount += len(rl)
244 msrcsize += datasize
245 mrawsize += rawsize
246 elif rl_type & store.FILEFLAGS_FILELOG:
247 filelogs[unencoded] = (rl_type, rl)
247 filelogs[unencoded] = rl_type
248 fcount += 1
249 frevcount += len(rl)
250 fsrcsize += datasize
@@ -289,7 +289,9 b' def _clonerevlogs('
289 )
290 )
291 progress = srcrepo.ui.makeprogress(_(b'file revisions'), total=frevcount)
292 for unencoded, (rl_type, oldrl) in sorted(filelogs.items()):
292 for unencoded, rl_type in sorted(filelogs.items()):
293 oldrl = _revlogfrompath(srcrepo, rl_type, unencoded)
294
293 newrl = _perform_clone(
295 newrl = _perform_clone(
294 ui,
296 ui,
295 dstrepo,
297 dstrepo,
@@ -329,7 +331,8 b' def _clonerevlogs('
329 progress = srcrepo.ui.makeprogress(
331 progress = srcrepo.ui.makeprogress(
330 _(b'manifest revisions'), total=mrevcount
332 _(b'manifest revisions'), total=mrevcount
331 )
333 )
332 for unencoded, (rl_type, oldrl) in sorted(manifests.items()):
334 for unencoded, rl_type in sorted(manifests.items()):
335 oldrl = _revlogfrompath(srcrepo, rl_type, unencoded)
333 newrl = _perform_clone(
336 newrl = _perform_clone(
334 ui,
337 ui,
335 dstrepo,
338 dstrepo,
@@ -368,7 +371,8 b' def _clonerevlogs('
368 progress = srcrepo.ui.makeprogress(
371 progress = srcrepo.ui.makeprogress(
369 _(b'changelog revisions'), total=crevcount
372 _(b'changelog revisions'), total=crevcount
370 )
373 )
371 for unencoded, (rl_type, oldrl) in sorted(changelogs.items()):
374 for unencoded, rl_type in sorted(changelogs.items()):
375 oldrl = _revlogfrompath(srcrepo, rl_type, unencoded)
372 newrl = _perform_clone(
376 newrl = _perform_clone(
373 ui,
377 ui,
374 dstrepo,
378 dstrepo,
@@ -125,7 +125,14 b' def worthwhile(ui, costperop, nops, thre'
125
125
126
126
127 def worker(
127 def worker(
128 ui, costperarg, func, staticargs, args, hasretval=False, threadsafe=True
128 ui,
129 costperarg,
130 func,
131 staticargs,
132 args,
133 hasretval=False,
134 threadsafe=True,
135 prefork=None,
129 ):
136 ):
130 """run a function, possibly in parallel in multiple worker
137 """run a function, possibly in parallel in multiple worker
131 processes.
138 processes.
@@ -149,6 +156,10 b' def worker('
149 threadsafe - whether work items are thread safe and can be executed using
156 threadsafe - whether work items are thread safe and can be executed using
150 a thread-based worker. Should be disabled for CPU heavy tasks that don't
157 a thread-based worker. Should be disabled for CPU heavy tasks that don't
151 release the GIL.
158 release the GIL.
159
160 prefork - a parameterless Callable that is invoked prior to forking the
161 worker processes. fork() is only used on POSIX platforms, and even there
162 it is not called when the amount of work doesn't warrant a worker.
152 """
163 """
153 enabled = ui.configbool(b'worker', b'enabled')
164 enabled = ui.configbool(b'worker', b'enabled')
154 if enabled and _platformworker is _posixworker and not ismainthread():
165 if enabled and _platformworker is _posixworker and not ismainthread():
@@ -157,11 +168,13 b' def worker('
157 enabled = False
168 enabled = False
158
169
159 if enabled and worthwhile(ui, costperarg, len(args), threadsafe=threadsafe):
170 if enabled and worthwhile(ui, costperarg, len(args), threadsafe=threadsafe):
160 return _platformworker(ui, func, staticargs, args, hasretval)
171 return _platformworker(
172 ui, func, staticargs, args, hasretval, prefork=prefork
173 )
161 return func(*staticargs + (args,))
174 return func(*staticargs + (args,))
162
175
163
176
164 def _posixworker(ui, func, staticargs, args, hasretval):
177 def _posixworker(ui, func, staticargs, args, hasretval, prefork=None):
165 workers = _numworkers(ui)
178 workers = _numworkers(ui)
166 oldhandler = signal.getsignal(signal.SIGINT)
179 oldhandler = signal.getsignal(signal.SIGINT)
167 signal.signal(signal.SIGINT, signal.SIG_IGN)
180 signal.signal(signal.SIGINT, signal.SIG_IGN)
@@ -207,6 +220,10 b' def _posixworker(ui, func, staticargs, a'
207 parentpid = os.getpid()
220 parentpid = os.getpid()
208 pipes = []
221 pipes = []
209 retval = {}
222 retval = {}
223
224 if prefork:
225 prefork()
226
210 for pargs in partition(args, min(workers, len(args))):
227 for pargs in partition(args, min(workers, len(args))):
211 # Every worker gets its own pipe to send results on, so we don't have to
228 # Every worker gets its own pipe to send results on, so we don't have to
212 # implement atomic writes larger than PIPE_BUF. Each forked process has
229 # implement atomic writes larger than PIPE_BUF. Each forked process has
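The `prefork` hook threaded through the hunks above can be sketched in isolation. The helper below is a toy (it is not `mercurial.worker.worker`, and the name `run_with_worker` is invented for illustration); it only shows the intended ordering: the callback runs once, in the parent, strictly before any `fork()`, so state flushed there is never duplicated into children. POSIX only, like the `_posixworker` path it mimics.

```python
import os

def run_with_worker(func, args, prefork=None):
    # Hypothetical sketch of the new `prefork` hook: an optional
    # parameterless callable runs in the parent just before fork(),
    # e.g. to flush buffers or caches children must not inherit.
    if prefork is not None:
        prefork()
    read_fd, write_fd = os.pipe()
    pid = os.fork()
    if pid == 0:  # child: compute, report over the pipe, exit immediately
        os.close(read_fd)
        payload = b",".join(b"%d" % func(a) for a in args)
        os.write(write_fd, payload)
        os._exit(0)
    os.close(write_fd)  # parent: read results until EOF, then reap child
    with os.fdopen(read_fd, "rb") as pipe:
        data = pipe.read()
    os.waitpid(pid, 0)
    return [int(x) for x in data.split(b",")]
```

The real implementation partitions `args` across several children and multiplexes their pipes; the single-child version above keeps only the prefork-before-fork ordering that this change introduces.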
@@ -316,7 +333,7 b' def _posixexitstatus(code):'
316 return -(os.WTERMSIG(code))
333 return -(os.WTERMSIG(code))
317
334
318
335
319 def _windowsworker(ui, func, staticargs, args, hasretval):
336 def _windowsworker(ui, func, staticargs, args, hasretval, prefork=None):
320 class Worker(threading.Thread):
337 class Worker(threading.Thread):
321 def __init__(
338 def __init__(
322 self, taskqueue, resultqueue, func, staticargs, *args, **kwargs
339 self, taskqueue, resultqueue, func, staticargs, *args, **kwargs
@@ -13,12 +13,6 b' Mercurial XXX.'
13
13
14 == Backwards Compatibility Changes ==
14 == Backwards Compatibility Changes ==
15
15
16 * chg worker processes will now correctly load per-repository configuration
17 when given both a relative `--repository` path and an alternate working
18 directory via `--cwd`. A side-effect of this change is that these workers
19 will now return an error if hg cannot find the current working directory,
20 even when a different directory is specified via `--cwd`.
21
22 == Internal API Changes ==
16 == Internal API Changes ==
23
17
24 == Miscellaneous ==
18 == Miscellaneous ==
@@ -479,6 +479,7 b' dependencies = ['
479 "same-file",
479 "same-file",
480 "sha-1 0.10.0",
480 "sha-1 0.10.0",
481 "tempfile",
481 "tempfile",
482 "thread_local",
482 "twox-hash",
483 "twox-hash",
483 "zstd",
484 "zstd",
484 ]
485 ]
@@ -1120,6 +1121,15 b' dependencies = ['
1120 ]
1121 ]
1121
1122
1122 [[package]]
1123 [[package]]
1124 name = "thread_local"
1125 version = "1.1.4"
1126 source = "registry+https://github.com/rust-lang/crates.io-index"
1127 checksum = "5516c27b78311c50bf42c071425c560ac799b11c30b31f87e3081965fe5e0180"
1128 dependencies = [
1129 "once_cell",
1130 ]
1131
1132 [[package]]
1123 name = "time"
1133 name = "time"
1124 version = "0.1.44"
1134 version = "0.1.44"
1125 source = "registry+https://github.com/rust-lang/crates.io-index"
1135 source = "registry+https://github.com/rust-lang/crates.io-index"
@@ -29,6 +29,7 b' sha-1 = "0.10.0"'
29 twox-hash = "1.6.2"
29 twox-hash = "1.6.2"
30 same-file = "1.0.6"
30 same-file = "1.0.6"
31 tempfile = "3.1.0"
31 tempfile = "3.1.0"
32 thread_local = "1.1.4"
32 crossbeam-channel = "0.5.0"
33 crossbeam-channel = "0.5.0"
33 micro-timer = "0.4.0"
34 micro-timer = "0.4.0"
34 log = "0.4.8"
35 log = "0.4.8"
@@ -10,6 +10,7 b' use crate::dirstate_tree::on_disk::Dirst'
10 use crate::matchers::get_ignore_function;
10 use crate::matchers::get_ignore_function;
11 use crate::matchers::Matcher;
11 use crate::matchers::Matcher;
12 use crate::utils::files::get_bytes_from_os_string;
12 use crate::utils::files::get_bytes_from_os_string;
13 use crate::utils::files::get_bytes_from_path;
13 use crate::utils::files::get_path_from_bytes;
14 use crate::utils::files::get_path_from_bytes;
14 use crate::utils::hg_path::HgPath;
15 use crate::utils::hg_path::HgPath;
15 use crate::BadMatch;
16 use crate::BadMatch;
@@ -67,7 +68,7 b" pub fn status<'dirstate>("
67 let (ignore_fn, warnings) = get_ignore_function(
68 let (ignore_fn, warnings) = get_ignore_function(
68 ignore_files,
69 ignore_files,
69 &root_dir,
70 &root_dir,
70 &mut |_pattern_bytes| {},
71 &mut |_source, _pattern_bytes| {},
71 )?;
72 )?;
72 (ignore_fn, warnings, None)
73 (ignore_fn, warnings, None)
73 }
74 }
@@ -76,7 +77,24 b" pub fn status<'dirstate>("
76 let (ignore_fn, warnings) = get_ignore_function(
77 let (ignore_fn, warnings) = get_ignore_function(
77 ignore_files,
78 ignore_files,
78 &root_dir,
79 &root_dir,
79 &mut |pattern_bytes| hasher.update(pattern_bytes),
80 &mut |source, pattern_bytes| {
81 // If inside the repo, use the relative version to
82 // make it deterministic inside tests.
83 // The performance hit should be negligible.
84 let source = source
85 .strip_prefix(&root_dir)
86 .unwrap_or(source);
87 let source = get_bytes_from_path(source);
88
89 let mut subhasher = Sha1::new();
90 subhasher.update(pattern_bytes);
91 let patterns_hash = subhasher.finalize();
92
93 hasher.update(source);
94 hasher.update(b" ");
95 hasher.update(patterns_hash);
96 hasher.update(b"\n");
97 },
80 )?;
98 )?;
81 let new_hash = *hasher.finalize().as_ref();
99 let new_hash = *hasher.finalize().as_ref();
82 let changed = new_hash != dmap.ignore_patterns_hash;
100 let changed = new_hash != dmap.ignore_patterns_hash;
@@ -122,8 +140,8 b" pub fn status<'dirstate>("
122 ignore_fn,
140 ignore_fn,
123 outcome: Mutex::new(outcome),
141 outcome: Mutex::new(outcome),
124 ignore_patterns_have_changed: patterns_changed,
142 ignore_patterns_have_changed: patterns_changed,
125 new_cachable_directories: Default::default(),
143 new_cacheable_directories: Default::default(),
126 outated_cached_directories: Default::default(),
144 outdated_cached_directories: Default::default(),
127 filesystem_time_at_status_start,
145 filesystem_time_at_status_start,
128 };
146 };
129 let is_at_repo_root = true;
147 let is_at_repo_root = true;
@@ -147,12 +165,12 b" pub fn status<'dirstate>("
147 is_at_repo_root,
165 is_at_repo_root,
148 )?;
166 )?;
149 let mut outcome = common.outcome.into_inner().unwrap();
167 let mut outcome = common.outcome.into_inner().unwrap();
150 let new_cachable = common.new_cachable_directories.into_inner().unwrap();
168 let new_cacheable = common.new_cacheable_directories.into_inner().unwrap();
151 let outdated = common.outated_cached_directories.into_inner().unwrap();
169 let outdated = common.outdated_cached_directories.into_inner().unwrap();
152
170
153 outcome.dirty = common.ignore_patterns_have_changed == Some(true)
171 outcome.dirty = common.ignore_patterns_have_changed == Some(true)
154 || !outdated.is_empty()
172 || !outdated.is_empty()
155 || (!new_cachable.is_empty()
173 || (!new_cacheable.is_empty()
156 && dmap.dirstate_version == DirstateVersion::V2);
174 && dmap.dirstate_version == DirstateVersion::V2);
157
175
158 // Remove outdated mtimes before adding new mtimes, in case a given
176 // Remove outdated mtimes before adding new mtimes, in case a given
@@ -160,7 +178,7 b" pub fn status<'dirstate>("
160 for path in &outdated {
178 for path in &outdated {
161 dmap.clear_cached_mtime(path)?;
179 dmap.clear_cached_mtime(path)?;
162 }
180 }
163 for (path, mtime) in &new_cachable {
181 for (path, mtime) in &new_cacheable {
164 dmap.set_cached_mtime(path, *mtime)?;
182 dmap.set_cached_mtime(path, *mtime)?;
165 }
183 }
166
184
@@ -175,9 +193,11 b" struct StatusCommon<'a, 'tree, 'on_disk:"
175 matcher: &'a (dyn Matcher + Sync),
193 matcher: &'a (dyn Matcher + Sync),
176 ignore_fn: IgnoreFnType<'a>,
194 ignore_fn: IgnoreFnType<'a>,
177 outcome: Mutex<DirstateStatus<'on_disk>>,
195 outcome: Mutex<DirstateStatus<'on_disk>>,
178 new_cachable_directories:
196 /// New timestamps of directories to be used for caching their readdirs
197 new_cacheable_directories:
179 Mutex<Vec<(Cow<'on_disk, HgPath>, TruncatedTimestamp)>>,
198 Mutex<Vec<(Cow<'on_disk, HgPath>, TruncatedTimestamp)>>,
180 outated_cached_directories: Mutex<Vec<Cow<'on_disk, HgPath>>>,
199 /// Used to invalidate the readdir cache of directories
200 outdated_cached_directories: Mutex<Vec<Cow<'on_disk, HgPath>>>,
181
201
182 /// Whether ignore files like `.hgignore` have changed since the previous
202 /// Whether ignore files like `.hgignore` have changed since the previous
183 /// time a `status()` call wrote their hash to the dirstate. `None` means
203 /// time a `status()` call wrote their hash to the dirstate. `None` means
@@ -305,17 +325,18 b" impl<'a, 'tree, 'on_disk> StatusCommon<'"
305 fn check_for_outdated_directory_cache(
325 fn check_for_outdated_directory_cache(
306 &self,
326 &self,
307 dirstate_node: &NodeRef<'tree, 'on_disk>,
327 dirstate_node: &NodeRef<'tree, 'on_disk>,
308 ) -> Result<(), DirstateV2ParseError> {
328 ) -> Result<bool, DirstateV2ParseError> {
309 if self.ignore_patterns_have_changed == Some(true)
329 if self.ignore_patterns_have_changed == Some(true)
310 && dirstate_node.cached_directory_mtime()?.is_some()
330 && dirstate_node.cached_directory_mtime()?.is_some()
311 {
331 {
312 self.outated_cached_directories.lock().unwrap().push(
332 self.outdated_cached_directories.lock().unwrap().push(
313 dirstate_node
333 dirstate_node
314 .full_path_borrowed(self.dmap.on_disk)?
334 .full_path_borrowed(self.dmap.on_disk)?
315 .detach_from_tree(),
335 .detach_from_tree(),
316 )
336 );
337 return Ok(true);
317 }
338 }
318 Ok(())
339 Ok(false)
319 }
340 }
320
341
321 /// If this returns true, we can get accurate results by only using
342 /// If this returns true, we can get accurate results by only using
@@ -487,6 +508,7 b" impl<'a, 'tree, 'on_disk> StatusCommon<'"
487 dirstate_node: NodeRef<'tree, 'on_disk>,
508 dirstate_node: NodeRef<'tree, 'on_disk>,
488 has_ignored_ancestor: &'ancestor HasIgnoredAncestor<'ancestor>,
509 has_ignored_ancestor: &'ancestor HasIgnoredAncestor<'ancestor>,
489 ) -> Result<(), DirstateV2ParseError> {
510 ) -> Result<(), DirstateV2ParseError> {
511 let outdated_dircache =
490 self.check_for_outdated_directory_cache(&dirstate_node)?;
512 self.check_for_outdated_directory_cache(&dirstate_node)?;
491 let hg_path = &dirstate_node.full_path_borrowed(self.dmap.on_disk)?;
513 let hg_path = &dirstate_node.full_path_borrowed(self.dmap.on_disk)?;
492 let file_or_symlink = fs_entry.is_file() || fs_entry.is_symlink();
514 let file_or_symlink = fs_entry.is_file() || fs_entry.is_symlink();
@@ -522,6 +544,7 b" impl<'a, 'tree, 'on_disk> StatusCommon<'"
522 children_all_have_dirstate_node_or_are_ignored,
544 children_all_have_dirstate_node_or_are_ignored,
523 fs_entry,
545 fs_entry,
524 dirstate_node,
546 dirstate_node,
547 outdated_dircache,
525 )?
548 )?
526 } else {
549 } else {
527 if file_or_symlink && self.matcher.matches(&hg_path) {
550 if file_or_symlink && self.matcher.matches(&hg_path) {
@@ -561,11 +584,17 b" impl<'a, 'tree, 'on_disk> StatusCommon<'"
561 Ok(())
584 Ok(())
562 }
585 }
563
586
587 /// Save directory mtime if applicable.
588 ///
589 /// `outdated_directory_cache` is `true` if we've just invalidated the
590 /// cache for this directory in `check_for_outdated_directory_cache`,
591 /// which forces the update.
564 fn maybe_save_directory_mtime(
592 fn maybe_save_directory_mtime(
565 &self,
593 &self,
566 children_all_have_dirstate_node_or_are_ignored: bool,
594 children_all_have_dirstate_node_or_are_ignored: bool,
567 directory_entry: &DirEntry,
595 directory_entry: &DirEntry,
568 dirstate_node: NodeRef<'tree, 'on_disk>,
596 dirstate_node: NodeRef<'tree, 'on_disk>,
597 outdated_directory_cache: bool,
569 ) -> Result<(), DirstateV2ParseError> {
598 ) -> Result<(), DirstateV2ParseError> {
570 if !children_all_have_dirstate_node_or_are_ignored {
599 if !children_all_have_dirstate_node_or_are_ignored {
571 return Ok(());
600 return Ok(());
@@ -635,9 +664,10 b" impl<'a, 'tree, 'on_disk> StatusCommon<'"
635 // We deem this scenario (unlike the previous one) to be
664 // We deem this scenario (unlike the previous one) to be
636 // unlikely enough in practice.
665 // unlikely enough in practice.
637
666
638 let is_up_to_date =
667 let is_up_to_date = if let Some(cached) =
639 if let Some(cached) = dirstate_node.cached_directory_mtime()? {
668 dirstate_node.cached_directory_mtime()?
640 cached.likely_equal(directory_mtime)
669 {
670 !outdated_directory_cache && cached.likely_equal(directory_mtime)
641 } else {
671 } else {
642 false
672 false
643 };
673 };
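The condition rewritten above can be isolated: a directory's cached readdir mtime only counts as up to date when a cached value exists, it matches the mtime observed now, and the cache for that directory was not just invalidated earlier in the same `status` run. A small Python sketch of that decision (function and parameter names are invented for illustration):

```python
def needs_mtime_refresh(cached_mtime, directory_mtime,
                        outdated_directory_cache):
    # Mirrors the fixed Rust logic: trust the cached mtime only if one
    # exists, it matches the current directory mtime, and we did not
    # just invalidate this directory's cache (which forces a rewrite).
    if cached_mtime is None:
        is_up_to_date = False
    else:
        is_up_to_date = (
            not outdated_directory_cache
            and cached_mtime == directory_mtime
        )
    return not is_up_to_date
```

Without the `outdated_directory_cache` term, an invalidated entry whose mtime happened to still match would never be written back, leaving the dirstate-v2 cache stale.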
@@ -645,7 +675,7 b" impl<'a, 'tree, 'on_disk> StatusCommon<'"
645 let hg_path = dirstate_node
675 let hg_path = dirstate_node
646 .full_path_borrowed(self.dmap.on_disk)?
676 .full_path_borrowed(self.dmap.on_disk)?
647 .detach_from_tree();
677 .detach_from_tree();
648 self.new_cachable_directories
678 self.new_cacheable_directories
649 .lock()
679 .lock()
650 .unwrap()
680 .unwrap()
651 .push((hg_path, directory_mtime))
681 .push((hg_path, directory_mtime))
@@ -412,11 +412,11 b' pub fn parse_pattern_file_contents('
412 pub fn read_pattern_file(
412 pub fn read_pattern_file(
413 file_path: &Path,
413 file_path: &Path,
414 warn: bool,
414 warn: bool,
415 inspect_pattern_bytes: &mut impl FnMut(&[u8]),
415 inspect_pattern_bytes: &mut impl FnMut(&Path, &[u8]),
416 ) -> Result<(Vec<IgnorePattern>, Vec<PatternFileWarning>), PatternError> {
416 ) -> Result<(Vec<IgnorePattern>, Vec<PatternFileWarning>), PatternError> {
417 match std::fs::read(file_path) {
417 match std::fs::read(file_path) {
418 Ok(contents) => {
418 Ok(contents) => {
419 inspect_pattern_bytes(&contents);
419 inspect_pattern_bytes(file_path, &contents);
420 parse_pattern_file_contents(&contents, file_path, None, warn)
420 parse_pattern_file_contents(&contents, file_path, None, warn)
421 }
421 }
422 Err(e) if e.kind() == std::io::ErrorKind::NotFound => Ok((
422 Err(e) if e.kind() == std::io::ErrorKind::NotFound => Ok((
@@ -455,7 +455,7 b' pub type PatternResult<T> = Result<T, Pa'
455 pub fn get_patterns_from_file(
455 pub fn get_patterns_from_file(
456 pattern_file: &Path,
456 pattern_file: &Path,
457 root_dir: &Path,
457 root_dir: &Path,
458 inspect_pattern_bytes: &mut impl FnMut(&[u8]),
458 inspect_pattern_bytes: &mut impl FnMut(&Path, &[u8]),
459 ) -> PatternResult<(Vec<IgnorePattern>, Vec<PatternFileWarning>)> {
459 ) -> PatternResult<(Vec<IgnorePattern>, Vec<PatternFileWarning>)> {
460 let (patterns, mut warnings) =
460 let (patterns, mut warnings) =
461 read_pattern_file(pattern_file, true, inspect_pattern_bytes)?;
461 read_pattern_file(pattern_file, true, inspect_pattern_bytes)?;
@@ -573,6 +573,39 b' impl DifferenceMatcher {'
573 }
573 }
574 }
574 }
575
575
576 /// Wraps [`regex::bytes::Regex`] to improve performance in multithreaded
577 /// contexts.
578 ///
579 /// The `status` algorithm makes heavy use of threads, and calling `is_match`
580 /// from many threads at once is prone to contention, probably within the
581 /// scratch space needed as the regex DFA is built lazily.
582 ///
583 /// We are in the process of raising the issue upstream, but for now
584 /// the workaround used here is to store the `Regex` in a lazily populated
585 /// thread-local variable, sharing the initial read-only compilation, but
586 /// not the lazy DFA scratch space mentioned above.
587 ///
588 /// This reduces the contention observed with 16+ threads, but does not
589 /// completely remove it. Hopefully this can be addressed upstream.
590 struct RegexMatcher {
591 /// Compiled at the start of the status algorithm, used as a base for
592 /// cloning in each thread-local `self.local`, thus sharing the expensive
593 /// first compilation.
594 base: regex::bytes::Regex,
595 /// Thread-local variable that holds the `Regex` that is actually queried
596 /// from each thread.
597 local: thread_local::ThreadLocal<regex::bytes::Regex>,
598 }
599
600 impl RegexMatcher {
601 /// Returns whether the path matches the stored `Regex`.
602 pub fn is_match(&self, path: &HgPath) -> bool {
603 self.local
604 .get_or(|| self.base.clone())
605 .is_match(path.as_bytes())
606 }
607 }
608
576 /// Returns a function that matches an `HgPath` against the given regex
609 /// Returns a function that matches an `HgPath` against the given regex
577 /// pattern.
610 /// pattern.
578 ///
611 ///
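The thread-local workaround documented above has the following shape, sketched here in Python purely for illustration (CPython's `re` engine does not suffer from the Rust crate's lazy-DFA scratch contention, so this only demonstrates the caching pattern, not the performance fix): one shared base compilation, plus a lazily populated per-thread clone that each thread matches against.

```python
import re
import threading

class RegexMatcher:
    """Illustrative Python analogue of the Rust `RegexMatcher`: keep one
    shared base compilation, and lazily give each thread its own regex
    object so matching threads never share mutable scratch state."""

    def __init__(self, pattern):
        self._base = re.compile(pattern)  # expensive, done once
        self._local = threading.local()   # per-thread "clone" cache

    def is_match(self, path):
        local_re = getattr(self._local, "re", None)
        if local_re is None:
            # First use on this thread: clone the shared compilation.
            local_re = self._local.re = re.compile(self._base.pattern)
        return local_re.search(path) is not None
```

In the Rust version the per-thread clone is cheap because `Regex: Clone` shares the compiled program and only duplicates the mutable match-time state, which is exactly the part that contended.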
@@ -580,9 +613,7 b' impl DifferenceMatcher {'
580 /// underlying engine (the `regex` crate), for instance anything with
613 /// underlying engine (the `regex` crate), for instance anything with
581 /// back-references.
614 /// back-references.
582 #[timed]
615 #[timed]
583 fn re_matcher(
616 fn re_matcher(pattern: &[u8]) -> PatternResult<RegexMatcher> {
584 pattern: &[u8],
585 ) -> PatternResult<impl Fn(&HgPath) -> bool + Sync> {
586 use std::io::Write;
617 use std::io::Write;
587
618
588 // The `regex` crate adds `.*` to the start and end of expressions if there
619 // The `regex` crate adds `.*` to the start and end of expressions if there
@@ -611,7 +642,10 b' fn re_matcher('
611 .build()
642 .build()
612 .map_err(|e| PatternError::UnsupportedSyntax(e.to_string()))?;
643 .map_err(|e| PatternError::UnsupportedSyntax(e.to_string()))?;
613
644
614 Ok(move |path: &HgPath| re.is_match(path.as_bytes()))
645 Ok(RegexMatcher {
646 base: re,
647 local: Default::default(),
648 })
615 }
649 }
616
650
617 /// Returns the regex pattern and a function that matches an `HgPath` against
651 /// Returns the regex pattern and a function that matches an `HgPath` against
@@ -638,7 +672,7 b" fn build_regex_match<'a, 'b>("
638 let func = if !(regexps.is_empty()) {
672 let func = if !(regexps.is_empty()) {
639 let matcher = re_matcher(&full_regex)?;
673 let matcher = re_matcher(&full_regex)?;
640 let func = move |filename: &HgPath| {
674 let func = move |filename: &HgPath| {
641 exact_set.contains(filename) || matcher(filename)
675 exact_set.contains(filename) || matcher.is_match(filename)
642 };
676 };
643 Box::new(func) as IgnoreFnType
677 Box::new(func) as IgnoreFnType
644 } else {
678 } else {
@@ -838,7 +872,7 b" fn build_match<'a, 'b>("
838 pub fn get_ignore_matcher<'a>(
872 pub fn get_ignore_matcher<'a>(
839 mut all_pattern_files: Vec<PathBuf>,
873 mut all_pattern_files: Vec<PathBuf>,
840 root_dir: &Path,
874 root_dir: &Path,
841 inspect_pattern_bytes: &mut impl FnMut(&[u8]),
875 inspect_pattern_bytes: &mut impl FnMut(&Path, &[u8]),
842 ) -> PatternResult<(IncludeMatcher<'a>, Vec<PatternFileWarning>)> {
876 ) -> PatternResult<(IncludeMatcher<'a>, Vec<PatternFileWarning>)> {
843 let mut all_patterns = vec![];
877 let mut all_patterns = vec![];
844 let mut all_warnings = vec![];
878 let mut all_warnings = vec![];
@@ -871,7 +905,7 b" pub fn get_ignore_matcher<'a>("
871 pub fn get_ignore_function<'a>(
905 pub fn get_ignore_function<'a>(
872 all_pattern_files: Vec<PathBuf>,
906 all_pattern_files: Vec<PathBuf>,
873 root_dir: &Path,
907 root_dir: &Path,
874 inspect_pattern_bytes: &mut impl FnMut(&[u8]),
908 inspect_pattern_bytes: &mut impl FnMut(&Path, &[u8]),
875 ) -> PatternResult<(IgnoreFnType<'a>, Vec<PatternFileWarning>)> {
909 ) -> PatternResult<(IgnoreFnType<'a>, Vec<PatternFileWarning>)> {
876 let res =
910 let res =
877 get_ignore_matcher(all_pattern_files, root_dir, inspect_pattern_bytes);
911 get_ignore_matcher(all_pattern_files, root_dir, inspect_pattern_bytes);
@@ -447,6 +447,11 b" impl<'a> RevlogEntry<'a> {"
447 ) {
447 ) {
448 Ok(data)
448 Ok(data)
449 } else {
449 } else {
450 if (self.flags & REVISION_FLAG_ELLIPSIS) != 0 {
451 return Err(HgError::unsupported(
452 "ellipsis revisions are not supported by rhg",
453 ));
454 }
450 Err(corrupted(format!(
455 Err(corrupted(format!(
451 "hash check failed for revision {}",
456 "hash check failed for revision {}",
452 self.rev
457 self.rev
@@ -19,35 +19,9 b' The executable can then be found at `rus'
19
19
20 `rhg` reads Mercurial configuration from the usual sources:
20 `rhg` reads Mercurial configuration from the usual sources:
21 the user’s `~/.hgrc`, a repository’s `.hg/hgrc`, command line `--config`, etc.
21 the user’s `~/.hgrc`, a repository’s `.hg/hgrc`, command line `--config`, etc.
22 It has some specific configuration in the `[rhg]` section:
22 It has some specific configuration in the `[rhg]` section.
23
24 * `on-unsupported` governs the behavior of `rhg` when it encounters something
25 that it does not support but “full” `hg` possibly does.
26 This can be in configuration, on the command line, or in a repository.
27
28 - `abort`, the default value, makes `rhg` print a message to stderr
29 to explain what is not supported, then terminate with a 252 exit code.
30 - `abort-silent` makes it terminate with the same exit code,
31 but without printing anything.
32 - `fallback` makes it silently call a (presumably Python-based) `hg`
33 subprocess with the same command-line parameters.
34 The `rhg.fallback-executable` configuration must be set.
35
23
36 * `fallback-executable`: path to the executable to run in a sub-process
24 See `hg help config.rhg` for details.
37 when falling back to a Python implementation of Mercurial.
38
39 * `allowed-extensions`: a list of extension names that `rhg` can ignore.
40
41 Mercurial extensions can modify the behavior of existing `hg` sub-commands,
42 including those that `rhg` otherwise supports.
43 Because it cannot load Python extensions, finding them
44 enabled in configuration is considered “unsupported” (see above).
45 A few exceptions are made for extensions that `rhg` does know about,
46 with the Rust implementation duplicating their behavior.
47
48 This configuration makes additional exceptions: `rhg` will proceed even if
49 those extensions are enabled.
50
51
25
52 ## Installation and configuration example
26 ## Installation and configuration example
53
27
@@ -25,7 +25,7 b' pub fn run(invocation: &crate::CliInvoca'
25 let (ignore_matcher, warnings) = get_ignore_matcher(
25 let (ignore_matcher, warnings) = get_ignore_matcher(
26 vec![ignore_file],
26 vec![ignore_file],
27 &repo.working_directory_path().to_owned(),
27 &repo.working_directory_path().to_owned(),
28 &mut |_pattern_bytes| (),
28 &mut |_source, _pattern_bytes| (),
29 )
29 )
30 .map_err(|e| StatusError::from(e))?;
30 .map_err(|e| StatusError::from(e))?;
31
31
@@ -221,7 +221,12 b' impl From<(RevlogError, &str)> for Comma'
221
221
222 impl From<StatusError> for CommandError {
222 impl From<StatusError> for CommandError {
223 fn from(error: StatusError) -> Self {
223 fn from(error: StatusError) -> Self {
224 CommandError::abort(format!("{}", error))
224 match error {
225 StatusError::Pattern(_) => {
226 CommandError::unsupported(format!("{}", error))
227 }
228 _ => CommandError::abort(format!("{}", error)),
229 }
225 }
230 }
226 }
231 }
227
232
@@ -301,7 +301,7 b' fn rhg_main(argv: Vec<OsString>) -> ! {'
301 }
301 }
302 };
302 };
303
303
304 let exit =
304 let simple_exit =
305 |ui: &Ui, config: &Config, result: Result<(), CommandError>| -> ! {
305 |ui: &Ui, config: &Config, result: Result<(), CommandError>| -> ! {
306 exit(
306 exit(
307 &argv,
307 &argv,
@@ -317,7 +317,7 b' fn rhg_main(argv: Vec<OsString>) -> ! {'
317 )
317 )
318 };
318 };
319 let early_exit = |config: &Config, error: CommandError| -> ! {
319 let early_exit = |config: &Config, error: CommandError| -> ! {
320 exit(&Ui::new_infallible(config), &config, Err(error))
320 simple_exit(&Ui::new_infallible(config), &config, Err(error))
321 };
321 };
322 let repo_result = match Repo::find(&non_repo_config, repo_path.to_owned())
322 let repo_result = match Repo::find(&non_repo_config, repo_path.to_owned())
323 {
323 {
@@ -348,6 +348,24 b' fn rhg_main(argv: Vec<OsString>) -> ! {'
348 let config = config_cow.as_ref();
348 let config = config_cow.as_ref();
349 let ui = Ui::new(&config)
349 let ui = Ui::new(&config)
350 .unwrap_or_else(|error| early_exit(&config, error.into()));
350 .unwrap_or_else(|error| early_exit(&config, error.into()));
351
352 if let Ok(true) = config.get_bool(b"rhg", b"fallback-immediately") {
353 exit(
354 &argv,
355 &initial_current_dir,
356 &ui,
357 OnUnsupported::Fallback {
358 executable: config
359 .get(b"rhg", b"fallback-executable")
360 .map(ToOwned::to_owned),
361 },
362 Err(CommandError::unsupported(
363 "`rhg.fallback-immediately is true`",
364 )),
365 false,
366 )
367 }
368
351 let result = main_with_result(
369 let result = main_with_result(
352 argv.iter().map(|s| s.to_owned()).collect(),
370 argv.iter().map(|s| s.to_owned()).collect(),
353 &process_start_time,
371 &process_start_time,
@@ -355,7 +373,7 b' fn rhg_main(argv: Vec<OsString>) -> ! {'
355 repo_result.as_ref(),
373 repo_result.as_ref(),
356 config,
374 config,
357 );
375 );
358 exit(&ui, &config, result)
376 simple_exit(&ui, &config, result)
359 }
377 }
360
378
361 fn main() -> ! {
379 fn main() -> ! {
@@ -63,7 +63,16 b' def visit(opts, filenames, outfile):'
63 if isfile:
63 if isfile:
64 if opts.type:
64 if opts.type:
65 facts.append(b'file')
65 facts.append(b'file')
66 if any((opts.hexdump, opts.dump, opts.md5, opts.sha1, opts.sha256)):
66 needs_reading = (
67 opts.hexdump,
68 opts.dump,
69 opts.md5,
70 opts.sha1,
71 opts.raw_sha1,
72 opts.sha256,
73 )
74
75 if any(needs_reading):
67 with open(f, 'rb') as fobj:
76 with open(f, 'rb') as fobj:
68 content = fobj.read()
77 content = fobj.read()
69 elif islink:
78 elif islink:
@@ -101,6 +110,9 b' def visit(opts, filenames, outfile):'
101 if opts.md5 and content is not None:
110 if opts.md5 and content is not None:
102 h = hashlib.md5(content)
111 h = hashlib.md5(content)
103 facts.append(b'md5=%s' % binascii.hexlify(h.digest())[: opts.bytes])
112 facts.append(b'md5=%s' % binascii.hexlify(h.digest())[: opts.bytes])
113 if opts.raw_sha1 and content is not None:
114 h = hashlib.sha1(content)
115 facts.append(b'raw-sha1=%s' % h.digest()[: opts.bytes])
104 if opts.sha1 and content is not None:
116 if opts.sha1 and content is not None:
105 h = hashlib.sha1(content)
117 h = hashlib.sha1(content)
106 facts.append(
118 facts.append(
@@ -186,6 +198,12 b' if __name__ == "__main__":'
186 )
198 )
187 parser.add_option(
199 parser.add_option(
188 "",
200 "",
201 "--raw-sha1",
202 action="store_true",
203 help="show raw bytes of the sha1 hash of the content",
204 )
205 parser.add_option(
206 "",
189 "--sha256",
207 "--sha256",
190 action="store_true",
208 action="store_true",
191 help="show sha256 hash of the content",
209 help="show sha256 hash of the content",
@@ -245,3 +245,17 b' We should make sure all of it (docket + '
245
245
246 $ hg status
246 $ hg status
247 A foo
247 A foo
248 $ cd ..
249
250 Check dirstate ordering
251 (e.g. `src/dirstate/` and `src/dirstate.rs` shouldn't cause issues)
252
253 $ hg init repro
254 $ cd repro
255 $ mkdir src
256 $ mkdir src/dirstate
257 $ touch src/dirstate/file1 src/dirstate/file2 src/dirstate.rs
258 $ touch file1 file2
259 $ hg commit -Aqm1
260 $ hg st
261 $ cd ..
@@ -59,18 +59,24 b' Should display baz only:'
59 ? syntax
59 ? syntax
60
60
61 $ echo "*.o" > .hgignore
61 $ echo "*.o" > .hgignore
62 #if no-rhg
63 $ hg status
62 $ hg status
64 abort: $TESTTMP/ignorerepo/.hgignore: invalid pattern (relre): *.o (glob)
63 abort: $TESTTMP/ignorerepo/.hgignore: invalid pattern (relre): *.o (glob)
65 [255]
64 [255]
66 #endif
65
67 #if rhg
66 $ echo 're:^(?!a).*\.o$' > .hgignore
68 $ hg status
67 $ hg status
69 Unsupported syntax regex parse error:
68 A dir/b.o
70 ^(?:*.o)
69 ? .hgignore
71 ^
70 ? a.c
72 error: repetition operator missing expression
71 ? a.o
73 [255]
72 ? syntax
73 #if rhg
74 $ hg status --config rhg.on-unsupported=abort
75 unsupported feature: Unsupported syntax regex parse error:
76 ^(?:^(?!a).*\.o$)
77 ^^^
78 error: look-around, including look-ahead and look-behind, is not supported
79 [252]
74 #endif
80 #endif
75
81
76 Ensure given files are relative to cwd
82 Ensure given files are relative to cwd
@@ -415,17 +421,51 b' Windows paths are accepted on input'
 Check the hash of ignore patterns written in the dirstate
 This is an optimization that is only relevant when using the Rust extensions
 
+  $ cat_filename_and_hash () {
+  > for i in "$@"; do
+  > printf "$i "
+  > cat "$i" | "$TESTDIR"/f --raw-sha1 | sed 's/^raw-sha1=//'
+  > done
+  > }
   $ hg status > /dev/null
-  $ cat .hg/testhgignore .hg/testhgignorerel .hgignore dir2/.hgignore dir1/.hgignore dir1/.hgignoretwo | $TESTDIR/f --sha1
-  sha1=6e315b60f15fb5dfa02be00f3e2c8f923051f5ff
+  $ cat_filename_and_hash .hg/testhgignore .hg/testhgignorerel .hgignore dir2/.hgignore dir1/.hgignore dir1/.hgignoretwo | $TESTDIR/f --sha1
+  sha1=c0beb296395d48ced8e14f39009c4ea6e409bfe6
   $ hg debugstate --docket | grep ignore
-  ignore pattern hash: 6e315b60f15fb5dfa02be00f3e2c8f923051f5ff
+  ignore pattern hash: c0beb296395d48ced8e14f39009c4ea6e409bfe6
 
   $ echo rel > .hg/testhgignorerel
   $ hg status > /dev/null
-  $ cat .hg/testhgignore .hg/testhgignorerel .hgignore dir2/.hgignore dir1/.hgignore dir1/.hgignoretwo | $TESTDIR/f --sha1
-  sha1=dea19cc7119213f24b6b582a4bae7b0cb063e34e
+  $ cat_filename_and_hash .hg/testhgignore .hg/testhgignorerel .hgignore dir2/.hgignore dir1/.hgignore dir1/.hgignoretwo | $TESTDIR/f --sha1
+  sha1=b8e63d3428ec38abc68baa27631516d5ec46b7fa
   $ hg debugstate --docket | grep ignore
-  ignore pattern hash: dea19cc7119213f24b6b582a4bae7b0cb063e34e
+  ignore pattern hash: b8e63d3428ec38abc68baa27631516d5ec46b7fa
+  $ cd ..
+
+Check that the hash depends on the source of the hgignore patterns
+(otherwise the context is lost and things like subinclude are cached improperly)
+
+  $ hg init ignore-collision
+  $ cd ignore-collision
+  $ echo > .hg/testhgignorerel
+
+  $ mkdir dir1/ dir1/subdir
+  $ touch dir1/subdir/f dir1/subdir/ignored1
+  $ echo 'ignored1' > dir1/.hgignore
+
+  $ mkdir dir2 dir2/subdir
+  $ touch dir2/subdir/f dir2/subdir/ignored2
+  $ echo 'ignored2' > dir2/.hgignore
+  $ echo 'subinclude:dir2/.hgignore' >> .hgignore
+  $ echo 'subinclude:dir1/.hgignore' >> .hgignore
+
+  $ hg commit -Aqm_
+
+  $ > dir1/.hgignore
+  $ echo 'ignored' > dir2/.hgignore
+  $ echo 'ignored1' >> dir2/.hgignore
+  $ hg status
+  M dir1/.hgignore
+  M dir2/.hgignore
+  ? dir1/subdir/ignored1
 
 #endif
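The collision test added above exists because hashing pattern bytes alone cannot tell which file a pattern came from, which matters when `subinclude` results are cached. A sketch of the filename-aware hashing that the new `cat_filename_and_hash` helper verifies (the function name and structure here are mine, not Mercurial's internals):

```python
import hashlib

def ignore_hash(files):
    # Feed "<filename> <content digest>" pairs into one sha1, the way the
    # test helper pipes filename plus raw sha1 into `f --sha1`, so the same
    # patterns read from a different file yield a different overall hash.
    # Illustrative sketch only, not Mercurial's dirstate code.
    h = hashlib.sha1()
    for name, content in files:
        h.update(name.encode() + b" " + hashlib.sha1(content).digest())
    return h.hexdigest()

# Same pattern bytes, swapped between the two ignore files:
swapped_a = [("dir1/.hgignore", b"ignored1\n"), ("dir2/.hgignore", b"ignored2\n")]
swapped_b = [("dir1/.hgignore", b"ignored2\n"), ("dir2/.hgignore", b"ignored1\n")]
assert ignore_hash(swapped_a) != ignore_hash(swapped_b)
```

Binding each digest to its source filename is what keeps the two layouts from colliding, even though they contain identical pattern bytes overall.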
@@ -168,6 +168,10 b' Fallback to Python'
   $ rhg cat original --exclude="*.rs"
   original content
 
+Check that `fallback-immediately` overrides `$NO_FALLBACK`
+  $ $NO_FALLBACK rhg cat original --exclude="*.rs" --config rhg.fallback-immediately=1
+  original content
+
   $ (unset RHG_FALLBACK_EXECUTABLE; rhg cat original --exclude="*.rs")
   abort: 'rhg.on-unsupported=fallback' without 'rhg.fallback-executable' set.
   [255]
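The test above exercises `rhg.fallback-immediately`, one of the config options introduced in the 6.3 notes, alongside the `rhg.on-unsupported` and `rhg.fallback-executable` settings the abort message mentions. A minimal hgrc sketch combining them (the executable path is an example, not a required value):

```ini
[rhg]
# Hand off to the Python hg at startup instead of attempting rhg first
fallback-immediately = true
on-unsupported = fallback
fallback-executable = /usr/bin/hg
```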
@@ -944,7 +944,7 b' It is still not set when there are unkno'
   $ hg debugdirstate --all --no-dates | grep '^ '
     0 -1 unset subdir
 
-Now the directory is eligible for caching, so its mtime is save in the dirstate
+Now the directory is eligible for caching, so its mtime is saved in the dirstate
 
   $ rm subdir/unknown
   $ sleep 0.1 # ensure the kernel's internal clock for mtimes has ticked
@@ -976,4 +976,27 b' Removing a node from the dirstate resets'
   $ hg status
   ? subdir/a
 
+Changing the hgignore rules makes us recompute the status (and rewrite the dirstate).
+
+  $ rm subdir/a
+  $ mkdir another-subdir
+  $ touch another-subdir/something-else
+
+  $ cat > "$TESTDIR"/extra-hgignore <<EOF
+  > something-else
+  > EOF
+
+  $ hg status --config ui.ignore.global="$TESTDIR"/extra-hgignore
+  $ hg debugdirstate --all --no-dates | grep '^ '
+      0 -1 set subdir
+
+  $ hg status
+  ? another-subdir/something-else
+
+One invocation of status is enough to populate the cache even if it's invalidated
+in the same run.
+
+  $ hg debugdirstate --all --no-dates | grep '^ '
+      0 -1 set subdir
+
 #endif
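The invalidation cases above guard a directory-mtime cache: status records a directory's mtime and skips re-reading it while that mtime is unchanged, which is why changing the ignore rules must force a recompute. A simplified sketch of the caching idea (my own toy code, not Mercurial's dirstate implementation):

```python
import os
import tempfile
import time

_cache = {}  # directory path -> (mtime when listed, cached listing)

def cached_listdir(path):
    # Reuse the cached listing while the directory's mtime is unchanged,
    # mirroring the optimization the test exercises.
    mtime = os.stat(path).st_mtime_ns
    entry = _cache.get(path)
    if entry is not None and entry[0] == mtime:
        return entry[1]
    listing = sorted(os.listdir(path))
    _cache[path] = (mtime, listing)
    return listing

subdir = os.path.join(tempfile.mkdtemp(), "subdir")
os.mkdir(subdir)
first = cached_listdir(subdir)
time.sleep(0.1)  # as in the test: let the mtime clock tick
open(os.path.join(subdir, "unknown"), "w").close()
assert "unknown" not in first
assert "unknown" in cached_listdir(subdir)  # new mtime invalidates the cache
```

The `sleep` plays the same role as in the test script: without it, the create could land in the same mtime tick as the first listing and the stale cache would be reused.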
@@ -13,6 +13,7 b' Tests of the file helper tool'
       check if file is newer (or same)
   -r, --recurse        recurse into directories
   -S, --sha1           show sha1 hash of the content
+      --raw-sha1       show raw bytes of the sha1 hash of the content
       --sha256         show sha256 hash of the content
   -M, --md5            show md5 hash of the content
   -D, --dump           dump file content
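Per the help text above, the new `--raw-sha1` option prints the digest bytes rather than the hex string that `--sha1` produces. The difference is easy to see with Python's `hashlib`:

```python
import hashlib

h = hashlib.sha1(b"some file content")
hex_form = h.hexdigest()  # 40 hex characters, the form --sha1 prints
raw_form = h.digest()     # the same digest as 20 raw bytes, per --raw-sha1

# Both forms encode the identical digest.
assert bytes.fromhex(hex_form) == raw_form
```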