hgrc: introduce HGRCSKIPREPO to skip reading the repository's hgrc...
marmoute
r44583:d56a2d6f default
@@ -1,121 +1,124 b''
HG
    Path to the 'hg' executable, automatically passed when running
    hooks, extensions or external tools. If unset or empty, this is
    the hg executable's name if it's frozen, or an executable named
    'hg' (with %PATHEXT% [defaulting to COM/EXE/BAT/CMD] extensions on
    Windows) is searched.

HGEDITOR
    This is the name of the editor to run when committing. See EDITOR.

    (deprecated, see :hg:`help config.ui.editor`)

HGENCODING
    This overrides the default locale setting detected by Mercurial.
    This setting is used to convert data including usernames,
    changeset descriptions, tag names, and branches. This setting can
    be overridden with the --encoding command-line option.

HGENCODINGMODE
    This sets Mercurial's behavior for handling unknown characters
    while transcoding user input. The default is "strict", which
    causes Mercurial to abort if it can't map a character. Other
    settings include "replace", which replaces unknown characters, and
    "ignore", which drops them. This setting can be overridden with
    the --encodingmode command-line option.

HGENCODINGAMBIGUOUS
    This sets Mercurial's behavior for handling characters with
    "ambiguous" widths, like accented Latin characters with East Asian
    fonts. By default, Mercurial assumes ambiguous characters are
    narrow; set this variable to "wide" if such characters cause
    formatting problems.

HGMERGE
    An executable to use for resolving merge conflicts. The program
    will be executed with three arguments: local file, remote file,
    ancestor file.

    (deprecated, see :hg:`help config.ui.merge`)

HGRCPATH
    A list of files or directories to search for configuration
    files. Item separator is ":" on Unix, ";" on Windows. If HGRCPATH
    is not set, platform default search path is used. If empty, only
    the .hg/hgrc from the current repository is read.

    For each element in HGRCPATH:

    - if it's a directory, all files ending with .rc are added
    - otherwise, the file itself will be added

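
The expansion rules above can be sketched in Python. This is a simplified illustration only; ``hgrcpath_files`` is a hypothetical helper name, and Mercurial's real logic differs in details:

```python
import os

def hgrcpath_files(hgrcpath, is_windows=False):
    """Expand an HGRCPATH value into the list of config files to read.

    Sketch: directories contribute their *.rc files, other entries
    are taken verbatim. Not Mercurial's actual implementation.
    """
    sep = ';' if is_windows else ':'
    files = []
    for element in hgrcpath.split(sep):
        if not element:
            continue
        if os.path.isdir(element):
            # if it's a directory, all files ending with .rc are added
            for name in sorted(os.listdir(element)):
                if name.endswith('.rc'):
                    files.append(os.path.join(element, name))
        else:
            # otherwise, the file itself is added
            files.append(element)
    return files
```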
HGRCSKIPREPO
    When set, the .hg/hgrc files from repositories are not read.

HGPLAIN
    When set, this disables any configuration settings that might
    change Mercurial's default output. This includes encoding,
    defaults, verbose mode, debug mode, quiet mode, tracebacks, and
    localization. This can be useful when scripting against Mercurial
    in the face of existing user configuration.

    In addition to the features disabled by ``HGPLAIN=``, the following
    values can be specified to adjust behavior:

    ``+strictflags``
        Restrict parsing of command line flags.

        Equivalent options set via command line flags or environment
        variables are not overridden.

        See :hg:`help scripting` for details.

HGPLAINEXCEPT
    This is a comma-separated list of features to preserve when
    HGPLAIN is enabled. Currently the following values are supported:

    ``alias``
        Don't remove aliases.
    ``color``
        Don't disable colored output.
    ``i18n``
        Preserve internationalization.
    ``revsetalias``
        Don't remove revset aliases.
    ``templatealias``
        Don't remove template aliases.
    ``progress``
        Don't hide progress output.

    Setting HGPLAINEXCEPT to anything (even an empty string) will
    enable plain mode.
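
The interaction between HGPLAIN and HGPLAINEXCEPT can be sketched as below. ``plain_env`` is an illustrative helper, not a Mercurial API; the feature-name validation is a local convenience, not something Mercurial itself performs:

```python
import os

# Supported HGPLAINEXCEPT feature names, per the list above.
PLAIN_FEATURES = {
    'alias', 'color', 'i18n', 'revsetalias', 'templatealias', 'progress',
}

def plain_env(except_features=(), base=None):
    """Return a copy of the environment with plain mode enabled.

    Sketch only: validates feature names locally before placing them
    in HGPLAINEXCEPT as a comma-separated list.
    """
    unknown = set(except_features) - PLAIN_FEATURES
    if unknown:
        raise ValueError('unsupported features: %r' % sorted(unknown))
    env = dict(os.environ if base is None else base)
    env['HGPLAIN'] = '1'
    if except_features:
        env['HGPLAINEXCEPT'] = ','.join(sorted(except_features))
    return env
```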

HGUSER
    This is the string used as the author of a commit. If not set,
    available values will be considered in this order:

    - HGUSER (deprecated)
    - configuration files from the HGRCPATH
    - EMAIL
    - interactive prompt
    - LOGNAME (with ``@hostname`` appended)

    (deprecated, see :hg:`help config.ui.username`)

EMAIL
    May be used as the author of a commit; see HGUSER.

LOGNAME
    May be used as the author of a commit; see HGUSER.

VISUAL
    This is the name of the editor to use when committing. See EDITOR.

EDITOR
    Sometimes Mercurial needs to open a text file in an editor for a
    user to modify, for example when writing commit messages. The
    editor it uses is determined by looking at the environment
    variables HGEDITOR, VISUAL and EDITOR, in that order. The first
    non-empty one is chosen. If all of them are empty, the editor
    defaults to 'vi'.
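
The lookup order described above can be sketched as follows. ``choose_editor`` is an illustrative name, not a function Mercurial exposes:

```python
import os

def choose_editor(environ=None):
    """Pick the commit editor as described above: the first non-empty
    value among HGEDITOR, VISUAL, EDITOR wins; otherwise 'vi'.
    """
    environ = os.environ if environ is None else environ
    for var in ('HGEDITOR', 'VISUAL', 'EDITOR'):
        value = environ.get(var)
        if value:  # empty strings are skipped, per the help text
            return value
    return 'vi'
```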

PYTHONPATH
    This is used by Python to find imported modules and may need to be
    set appropriately if this Mercurial is not installed system-wide.
@@ -1,202 +1,210 b''
It is common for machines (as opposed to humans) to consume Mercurial.
This help topic describes some of the considerations for interfacing
machines with Mercurial.

Choosing an Interface
=====================

Machines have a choice of several methods to interface with Mercurial.
These include:

- Executing the ``hg`` process
- Querying an HTTP server
- Calling out to a command server

Executing ``hg`` processes is very similar to how humans interact with
Mercurial in the shell. It should already be familiar to you.

:hg:`serve` can be used to start a server. By default, this will start
a "hgweb" HTTP server. This HTTP server has support for machine-readable
output, such as JSON. For more, see :hg:`help hgweb`.

:hg:`serve` can also start a "command server." Clients can connect
to this server and issue Mercurial commands over a special protocol.
For more details on the command server, including links to client
libraries, see https://www.mercurial-scm.org/wiki/CommandServer.

:hg:`serve` based interfaces (the hgweb and command servers) have the
advantage over simple ``hg`` process invocations in that they are
likely more efficient. This is because there is significant overhead
to spawning new Python processes.

.. tip::

   If you need to invoke several ``hg`` processes in short order and/or
   performance is important to you, use of a server-based interface
   is highly recommended.

Environment Variables
=====================

As documented in :hg:`help environment`, various environment variables
influence the operation of Mercurial. The following are particularly
relevant for machines consuming Mercurial:

HGPLAIN
    If not set, Mercurial's output could be influenced by configuration
    settings that impact its encoding, verbose mode, localization, etc.

    It is highly recommended for machines to set this variable when
    invoking ``hg`` processes.

HGENCODING
    If not set, the locale used by Mercurial will be detected from the
    environment. If the determined locale does not support display of
    certain characters, Mercurial may render these character sequences
    incorrectly (often by using "?" as a placeholder for invalid
    characters in the current locale).

    Explicitly setting this environment variable is a good practice to
    guarantee consistent results. "utf-8" is a good choice on UNIX-like
    environments.

HGRCPATH
    If not set, Mercurial will inherit config options from config files
    using the process described in :hg:`help config`. This includes
    inheriting user or system-wide config files.

    When utmost control over the Mercurial configuration is desired, the
    value of ``HGRCPATH`` can be set to an explicit file with known good
    configs. In rare cases, the value can be set to an empty file or the
    null device (often ``/dev/null``) to bypass loading of any user or
    system config files. Note that these approaches can have unintended
    consequences, as the user and system config files often define things
    like the username and extensions that may be required to interface
    with a repository.

HGRCSKIPREPO
    When set, the .hg/hgrc files from repositories are not read.

    Note that not reading the repository's configuration can have
    unintended consequences, as the repository config files can define
    things like extensions that are required for access to the
    repository.

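Taken together, the variables above can be combined to run ``hg`` under a tightly controlled configuration. The sketch below is a hypothetical helper, not a prescribed API; whether to set each variable depends on the caveats just described (skipping configs may drop required extensions):

```python
import os

def hermetic_hg_env(hgrcpath=None, skip_repo_hgrc=False, base=None):
    """Build an environment for predictable scripted ``hg`` runs.

    HGPLAIN disables output-affecting config; HGRCPATH (if given)
    restricts config loading to known files; HGRCSKIPREPO additionally
    skips the repository's own .hg/hgrc.
    """
    env = dict(os.environ if base is None else base)
    env['HGPLAIN'] = '1'
    env['HGENCODING'] = 'utf-8'
    if hgrcpath is not None:
        env['HGRCPATH'] = hgrcpath
    if skip_repo_hgrc:
        env['HGRCSKIPREPO'] = '1'
    return env
```

The resulting dict can then be passed as the ``env`` argument of ``subprocess.run(['hg', ...], env=...)`` when an ``hg`` binary is available.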
Command-line Flags
==================

Mercurial's default command-line parser is designed for humans, and is not
robust against malicious input. For instance, you can start a debugger by
passing ``--debugger`` as an option value::

    $ REV=--debugger sh -c 'hg log -r "$REV"'

This happens because several command-line flags need to be scanned without
using a concrete command table, which may be modified while loading repository
settings and extensions.

Since Mercurial 4.4.2, the parsing of such flags may be restricted by setting
``HGPLAIN=+strictflags``. When this feature is enabled, all early options
(e.g. ``-R/--repository``, ``--cwd``, ``--config``) must be specified first
amongst the other global options, and cannot be injected into an arbitrary
location::

    $ HGPLAIN=+strictflags hg -R "$REPO" log -r "$REV"

In earlier Mercurial versions where ``+strictflags`` isn't available, you
can mitigate the issue by concatenating an option value with its flag::

    $ hg log -r"$REV" --keyword="$KEYWORD"

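The flag-concatenation mitigation can also be applied when building argument vectors programmatically. The sketch below uses an illustrative helper name, not part of Mercurial:

```python
def safe_log_argv(rev, keyword=None):
    """Build an ``hg log`` argument vector in which untrusted values
    are concatenated with their flags, so a value like '--debugger'
    cannot be picked up as a separate option.
    """
    argv = ['hg', 'log', '-r%s' % rev]
    if keyword is not None:
        argv.append('--keyword=%s' % keyword)
    return argv
```

Pass the vector to ``subprocess.run(argv)`` without ``shell=True`` so no shell re-parses the values.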
Consuming Command Output
========================

It is common for machines to need to parse the output of Mercurial
commands for relevant data. This section describes the various
techniques for doing so.

Parsing Raw Command Output
--------------------------

Likely the simplest and most effective solution for consuming command
output is to simply invoke ``hg`` commands as you would as a user and
parse their output.

The output of many commands can easily be parsed with tools like
``grep``, ``sed``, and ``awk``.

A potential downside with parsing command output is that the output
of commands can change when Mercurial is upgraded. While Mercurial
does generally strive for strong backwards compatibility, command
output does occasionally change. Having tests for your automated
interactions with ``hg`` commands is generally recommended, but is
even more important when raw command output parsing is involved.

Using Templates to Control Output
---------------------------------

Many ``hg`` commands support templatized output via the
``-T/--template`` argument. For more, see :hg:`help templates`.

Templates are useful for explicitly controlling output so that
you get exactly the data you want formatted how you want it. For
example, ``log -T {node}\n`` can be used to print a newline
delimited list of changeset nodes instead of a human-tailored
output containing authors, dates, descriptions, etc.

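A consumer of the ``log -T {node}\n`` form above might validate the output it parses. The following is a sketch under the assumption that each line is a full 40-character hex node; ``parse_node_list`` is an illustrative name:

```python
def parse_node_list(output):
    """Parse ``hg log -T '{node}\n'`` output into a list of
    40-character hex changeset IDs, skipping blank lines.
    """
    nodes = []
    for line in output.splitlines():
        line = line.strip()
        if not line:
            continue
        if len(line) != 40 or any(c not in '0123456789abcdef' for c in line):
            raise ValueError('unexpected line in log output: %r' % line)
        nodes.append(line)
    return nodes
```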
.. tip::

   If parsing raw command output is too complicated, consider
   using templates to make your life easier.

The ``-T/--template`` argument allows specifying pre-defined styles.
Mercurial ships with the machine-readable styles ``cbor``, ``json``,
and ``xml``, which provide CBOR, JSON, and XML output, respectively.
These are useful for producing output that is machine readable as-is.

(Mercurial 5.0 is required for the CBOR style.)

.. important::

   The ``json`` and ``xml`` styles are considered experimental. While
   they may be attractive to use for easily obtaining machine-readable
   output, their behavior may change in subsequent versions.

   These styles may also exhibit unexpected results when dealing with
   certain encodings. Mercurial treats things like filenames as a
   series of bytes, and normalizing certain byte sequences to JSON
   or XML with certain encoding settings can lead to surprises.

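Consuming the ``json`` style is a matter of decoding the command's stdout. In this sketch the sample payload is hypothetical, and the field names shown (``rev``, ``node``, ``desc``) should be treated as version-dependent rather than a stable contract, per the caveat above:

```python
import json

def parse_log_json(output):
    """Decode ``hg log -T json`` output into Python objects.

    The json style emits a JSON array with one object per changeset.
    """
    entries = json.loads(output)
    if not isinstance(entries, list):
        raise ValueError('expected a JSON array of changesets')
    return entries

# hypothetical sample of what the json style might emit:
SAMPLE = ('[{"rev": 0, '
          '"node": "0123456789abcdef0123456789abcdef01234567", '
          '"desc": "initial"}]')
```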
Command Server Output
---------------------

If using the command server to interact with Mercurial, you are likely
using an existing library/API that abstracts implementation details of
the command server. If so, this interface layer may perform parsing for
you, saving you the work of implementing it yourself.

Output Verbosity
----------------

Commands often have varying output verbosity, even when machine
readable styles are being used (e.g. ``-T json``). Adding
``-v/--verbose`` and ``--debug`` to the command's arguments can
increase the amount of data exposed by Mercurial.

An alternate way to get the data you need is by explicitly specifying
a template.

Other Topics
============

revsets
    Revision sets ("revsets") are a functional query language for
    selecting a set of revisions. Think of it as SQL for Mercurial
    repositories. Revsets are useful for querying repositories for
    specific data.

    See :hg:`help revsets` for more.

share extension
    The ``share`` extension provides functionality for sharing
    repository data across several working copies. It can even
    automatically "pool" storage for logically related repositories when
    cloning.

    Configuring the ``share`` extension can lead to significant resource
    utilization reduction, particularly around disk space and the
    network. This is especially true for continuous integration (CI)
    environments.

    See :hg:`help -e share` for more.
@@ -1,3784 +1,3786 b''
1 # localrepo.py - read/write repository class for mercurial
1 # localrepo.py - read/write repository class for mercurial
2 #
2 #
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
3 # Copyright 2005-2007 Matt Mackall <mpm@selenic.com>
4 #
4 #
5 # This software may be used and distributed according to the terms of the
5 # This software may be used and distributed according to the terms of the
6 # GNU General Public License version 2 or any later version.
6 # GNU General Public License version 2 or any later version.
7
7
8 from __future__ import absolute_import
8 from __future__ import absolute_import
9
9
10 import errno
10 import errno
11 import os
11 import os
12 import random
12 import random
13 import sys
13 import sys
14 import time
14 import time
15 import weakref
15 import weakref
16
16
17 from .i18n import _
17 from .i18n import _
18 from .node import (
18 from .node import (
19 bin,
19 bin,
20 hex,
20 hex,
21 nullid,
21 nullid,
22 nullrev,
22 nullrev,
23 short,
23 short,
24 )
24 )
25 from .pycompat import (
25 from .pycompat import (
26 delattr,
26 delattr,
27 getattr,
27 getattr,
28 )
28 )
29 from . import (
29 from . import (
30 bookmarks,
30 bookmarks,
31 branchmap,
31 branchmap,
32 bundle2,
32 bundle2,
33 changegroup,
33 changegroup,
34 color,
34 color,
35 context,
35 context,
36 dirstate,
36 dirstate,
37 dirstateguard,
37 dirstateguard,
38 discovery,
38 discovery,
39 encoding,
39 encoding,
40 error,
40 error,
41 exchange,
41 exchange,
42 extensions,
42 extensions,
43 filelog,
43 filelog,
44 hook,
44 hook,
45 lock as lockmod,
45 lock as lockmod,
46 match as matchmod,
46 match as matchmod,
47 merge as mergemod,
47 merge as mergemod,
48 mergeutil,
48 mergeutil,
49 namespaces,
49 namespaces,
50 narrowspec,
50 narrowspec,
51 obsolete,
51 obsolete,
52 pathutil,
52 pathutil,
53 phases,
53 phases,
54 pushkey,
54 pushkey,
55 pycompat,
55 pycompat,
56 repoview,
56 repoview,
57 revset,
57 revset,
58 revsetlang,
58 revsetlang,
59 scmutil,
59 scmutil,
60 sparse,
60 sparse,
61 store as storemod,
61 store as storemod,
62 subrepoutil,
62 subrepoutil,
63 tags as tagsmod,
63 tags as tagsmod,
64 transaction,
64 transaction,
65 txnutil,
65 txnutil,
66 util,
66 util,
67 vfs as vfsmod,
67 vfs as vfsmod,
68 )
68 )
69
69
70 from .interfaces import (
70 from .interfaces import (
71 repository,
71 repository,
72 util as interfaceutil,
72 util as interfaceutil,
73 )
73 )
74
74
75 from .utils import (
75 from .utils import (
76 hashutil,
76 hashutil,
77 procutil,
77 procutil,
78 stringutil,
78 stringutil,
79 )
79 )
80
80
81 from .revlogutils import constants as revlogconst
81 from .revlogutils import constants as revlogconst
82
82
83 release = lockmod.release
83 release = lockmod.release
84 urlerr = util.urlerr
84 urlerr = util.urlerr
85 urlreq = util.urlreq
85 urlreq = util.urlreq
86
86
87 # set of (path, vfs-location) tuples. vfs-location is:
87 # set of (path, vfs-location) tuples. vfs-location is:
88 # - 'plain for vfs relative paths
88 # - 'plain for vfs relative paths
89 # - '' for svfs relative paths
89 # - '' for svfs relative paths
90 _cachedfiles = set()
90 _cachedfiles = set()
91
91
92
92
93 class _basefilecache(scmutil.filecache):
93 class _basefilecache(scmutil.filecache):
94 """All filecache usage on repo are done for logic that should be unfiltered
94 """All filecache usage on repo are done for logic that should be unfiltered
95 """
95 """
96
96
97 def __get__(self, repo, type=None):
97 def __get__(self, repo, type=None):
98 if repo is None:
98 if repo is None:
99 return self
99 return self
100 # proxy to unfiltered __dict__ since filtered repo has no entry
100 # proxy to unfiltered __dict__ since filtered repo has no entry
101 unfi = repo.unfiltered()
101 unfi = repo.unfiltered()
102 try:
102 try:
103 return unfi.__dict__[self.sname]
103 return unfi.__dict__[self.sname]
104 except KeyError:
104 except KeyError:
105 pass
105 pass
106 return super(_basefilecache, self).__get__(unfi, type)
106 return super(_basefilecache, self).__get__(unfi, type)
107
107
108 def set(self, repo, value):
108 def set(self, repo, value):
109 return super(_basefilecache, self).set(repo.unfiltered(), value)
109 return super(_basefilecache, self).set(repo.unfiltered(), value)
110
110
111
111
112 class repofilecache(_basefilecache):
112 class repofilecache(_basefilecache):
113 """filecache for files in .hg but outside of .hg/store"""
113 """filecache for files in .hg but outside of .hg/store"""
114
114
115 def __init__(self, *paths):
115 def __init__(self, *paths):
116 super(repofilecache, self).__init__(*paths)
116 super(repofilecache, self).__init__(*paths)
117 for path in paths:
117 for path in paths:
118 _cachedfiles.add((path, b'plain'))
118 _cachedfiles.add((path, b'plain'))
119
119
120 def join(self, obj, fname):
120 def join(self, obj, fname):
121 return obj.vfs.join(fname)
121 return obj.vfs.join(fname)
122
122
123
123
124 class storecache(_basefilecache):
124 class storecache(_basefilecache):
125 """filecache for files in the store"""
125 """filecache for files in the store"""
126
126
127 def __init__(self, *paths):
127 def __init__(self, *paths):
128 super(storecache, self).__init__(*paths)
128 super(storecache, self).__init__(*paths)
129 for path in paths:
129 for path in paths:
130 _cachedfiles.add((path, b''))
130 _cachedfiles.add((path, b''))
131
131
132 def join(self, obj, fname):
132 def join(self, obj, fname):
133 return obj.sjoin(fname)
133 return obj.sjoin(fname)
134
134
135
135
class mixedrepostorecache(_basefilecache):
    """filecache for a mix of files in .hg/store and outside"""

    def __init__(self, *pathsandlocations):
        # scmutil.filecache only uses the path for passing back into our
        # join(), so we can safely pass a list of paths and locations
        super(mixedrepostorecache, self).__init__(*pathsandlocations)
        _cachedfiles.update(pathsandlocations)

    def join(self, obj, fnameandlocation):
        fname, location = fnameandlocation
        if location == b'plain':
            return obj.vfs.join(fname)
        else:
            if location != b'':
                raise error.ProgrammingError(
                    b'unexpected location: %s' % location
                )
            return obj.sjoin(fname)

def isfilecached(repo, name):
    """check if a repo has already cached the "name" filecache-ed property

    This returns a (cachedobj-or-None, iscached) tuple.
    """
    cacheentry = repo.unfiltered()._filecache.get(name, None)
    if not cacheentry:
        return None, False
    return cacheentry.obj, True

class unfilteredpropertycache(util.propertycache):
    """propertycache that applies to the unfiltered repo only"""

    def __get__(self, repo, type=None):
        unfi = repo.unfiltered()
        if unfi is repo:
            return super(unfilteredpropertycache, self).__get__(unfi)
        return getattr(unfi, self.name)


class filteredpropertycache(util.propertycache):
    """propertycache that must take filtering into account"""

    def cachevalue(self, obj, value):
        object.__setattr__(obj, self.name, value)


def hasunfilteredcache(repo, name):
    """check if a repo has an unfilteredpropertycache value for <name>"""
    return name in vars(repo.unfiltered())


def unfilteredmethod(orig):
    """decorate a method that always needs to be run on the unfiltered version"""

    def wrapper(repo, *args, **kwargs):
        return orig(repo.unfiltered(), *args, **kwargs)

    return wrapper

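The `unfilteredmethod` decorator simply re-binds the call to the unfiltered view of the repository before running the original method. A self-contained sketch of the same pattern, with hypothetical `Base`/`FilteredView` stand-ins for the real repo classes:

```python
def unfilteredmethod(orig):
    """Run the decorated method against obj.unfiltered() instead of obj."""
    def wrapper(obj, *args, **kwargs):
        return orig(obj.unfiltered(), *args, **kwargs)
    return wrapper

class Base(object):
    def unfiltered(self):
        # The unfiltered view of the base object is the object itself.
        return self

    @unfilteredmethod
    def whoami(self):
        return type(self).__name__

class FilteredView(Base):
    """A view that delegates back to an underlying Base instance."""
    def __init__(self, base):
        self._base = base

    def unfiltered(self):
        return self._base
```

Calling `whoami()` on a `FilteredView` therefore resolves against the underlying `Base` instance, which is exactly the guarantee the decorator provides for repo methods that must not see filtering.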

moderncaps = {
    b'lookup',
    b'branchmap',
    b'pushkey',
    b'known',
    b'getbundle',
    b'unbundle',
}
legacycaps = moderncaps.union({b'changegroupsubset'})


@interfaceutil.implementer(repository.ipeercommandexecutor)
class localcommandexecutor(object):
    def __init__(self, peer):
        self._peer = peer
        self._sent = False
        self._closed = False

    def __enter__(self):
        return self

    def __exit__(self, exctype, excvalue, exctb):
        self.close()

    def callcommand(self, command, args):
        if self._sent:
            raise error.ProgrammingError(
                b'callcommand() cannot be used after sendcommands()'
            )

        if self._closed:
            raise error.ProgrammingError(
                b'callcommand() cannot be used after close()'
            )

        # We don't need to support anything fancy. Just call the named
        # method on the peer and return a resolved future.
        fn = getattr(self._peer, pycompat.sysstr(command))

        f = pycompat.futures.Future()

        try:
            result = fn(**pycompat.strkwargs(args))
        except Exception:
            pycompat.future_set_exception_info(f, sys.exc_info()[1:])
        else:
            f.set_result(result)

        return f

    def sendcommands(self):
        self._sent = True

    def close(self):
        self._closed = True

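Because the peer is local, the executor above runs each command eagerly and hands back an already-completed future rather than batching anything. A stripped-down sketch of that resolved-future pattern using the standard library directly (the `EchoPeer` target and `resolvedexecutor` name are illustrative, not Mercurial API):

```python
import sys
from concurrent.futures import Future

class resolvedexecutor(object):
    """Call a named method on a target and return an already-resolved Future."""

    def __init__(self, target):
        self._target = target
        self._sent = False

    def callcommand(self, command, args):
        if self._sent:
            raise RuntimeError('callcommand() cannot be used after sendcommands()')
        f = Future()
        try:
            # Execute immediately; there is no wire protocol to batch over.
            result = getattr(self._target, command)(**args)
        except Exception:
            f.set_exception(sys.exc_info()[1])
        else:
            f.set_result(result)
        return f

    def sendcommands(self):
        # Nothing is buffered: commands already ran in callcommand().
        self._sent = True

class EchoPeer(object):
    def lookup(self, key):
        return b'node-for-' + key
```

This mirrors why `sendcommands()` in `localcommandexecutor` only flips a flag: the work already happened synchronously inside `callcommand()`.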

@interfaceutil.implementer(repository.ipeercommands)
class localpeer(repository.peer):
    '''peer for a local repo; reflects only the most recent API'''

    def __init__(self, repo, caps=None):
        super(localpeer, self).__init__()

        if caps is None:
            caps = moderncaps.copy()
        self._repo = repo.filtered(b'served')
        self.ui = repo.ui
        self._caps = repo._restrictcapabilities(caps)

    # Begin of _basepeer interface.

    def url(self):
        return self._repo.url()

    def local(self):
        return self._repo

    def peer(self):
        return self

    def canpush(self):
        return True

    def close(self):
        self._repo.close()

    # End of _basepeer interface.

    # Begin of _basewirecommands interface.

    def branchmap(self):
        return self._repo.branchmap()

    def capabilities(self):
        return self._caps

    def clonebundles(self):
        return self._repo.tryread(b'clonebundles.manifest')

    def debugwireargs(self, one, two, three=None, four=None, five=None):
        """Used to test argument passing over the wire"""
        return b"%s %s %s %s %s" % (
            one,
            two,
            pycompat.bytestr(three),
            pycompat.bytestr(four),
            pycompat.bytestr(five),
        )

    def getbundle(
        self, source, heads=None, common=None, bundlecaps=None, **kwargs
    ):
        chunks = exchange.getbundlechunks(
            self._repo,
            source,
            heads=heads,
            common=common,
            bundlecaps=bundlecaps,
            **kwargs
        )[1]
        cb = util.chunkbuffer(chunks)

        if exchange.bundle2requested(bundlecaps):
            # When requesting a bundle2, getbundle returns a stream to make the
            # wire level function happier. We need to build a proper object
            # from it in local peer.
            return bundle2.getunbundler(self.ui, cb)
        else:
            return changegroup.getunbundler(b'01', cb, None)

    def heads(self):
        return self._repo.heads()

    def known(self, nodes):
        return self._repo.known(nodes)

    def listkeys(self, namespace):
        return self._repo.listkeys(namespace)

    def lookup(self, key):
        return self._repo.lookup(key)

    def pushkey(self, namespace, key, old, new):
        return self._repo.pushkey(namespace, key, old, new)

    def stream_out(self):
        raise error.Abort(_(b'cannot perform stream clone against local peer'))

    def unbundle(self, bundle, heads, url):
        """apply a bundle on a repo

        This function handles the repo locking itself."""
        try:
            try:
                bundle = exchange.readbundle(self.ui, bundle, None)
                ret = exchange.unbundle(self._repo, bundle, heads, b'push', url)
                if util.safehasattr(ret, b'getchunks'):
                    # This is a bundle20 object, turn it into an unbundler.
                    # This little dance should be dropped eventually when the
                    # API is finally improved.
                    stream = util.chunkbuffer(ret.getchunks())
                    ret = bundle2.getunbundler(self.ui, stream)
                return ret
            except Exception as exc:
                # If the exception contains output salvaged from a bundle2
                # reply, we need to make sure it is printed before continuing
                # to fail. So we build a bundle2 with such output and consume
                # it directly.
                #
                # This is not very elegant but allows a "simple" solution for
                # issue4594
                output = getattr(exc, '_bundle2salvagedoutput', ())
                if output:
                    bundler = bundle2.bundle20(self._repo.ui)
                    for out in output:
                        bundler.addpart(out)
                    stream = util.chunkbuffer(bundler.getchunks())
                    b = bundle2.getunbundler(self.ui, stream)
                    bundle2.processbundle(self._repo, b)
                raise
        except error.PushRaced as exc:
            raise error.ResponseError(
                _(b'push failed:'), stringutil.forcebytestr(exc)
            )

    # End of _basewirecommands interface.

    # Begin of peer interface.

    def commandexecutor(self):
        return localcommandexecutor(self)

    # End of peer interface.


@interfaceutil.implementer(repository.ipeerlegacycommands)
class locallegacypeer(localpeer):
    '''peer extension which implements legacy methods too; used for tests with
    restricted capabilities'''

    def __init__(self, repo):
        super(locallegacypeer, self).__init__(repo, caps=legacycaps)

    # Begin of baselegacywirecommands interface.

    def between(self, pairs):
        return self._repo.between(pairs)

    def branches(self, nodes):
        return self._repo.branches(nodes)

    def changegroup(self, nodes, source):
        outgoing = discovery.outgoing(
            self._repo, missingroots=nodes, missingheads=self._repo.heads()
        )
        return changegroup.makechangegroup(self._repo, outgoing, b'01', source)

    def changegroupsubset(self, bases, heads, source):
        outgoing = discovery.outgoing(
            self._repo, missingroots=bases, missingheads=heads
        )
        return changegroup.makechangegroup(self._repo, outgoing, b'01', source)

    # End of baselegacywirecommands interface.


# Increment the sub-version when the revlog v2 format changes to lock out old
# clients.
REVLOGV2_REQUIREMENT = b'exp-revlogv2.1'

# A repository with the sparserevlog feature will have delta chains that
# can spread over a larger span. Sparse reading cuts these large spans into
# pieces, so that each piece isn't too big.
# Without the sparserevlog capability, reading from the repository could use
# huge amounts of memory, because the whole span would be read at once,
# including all the intermediate revisions that aren't pertinent for the chain.
# This is why once a repository has enabled sparse-read, it becomes required.
SPARSEREVLOG_REQUIREMENT = b'sparserevlog'

# A repository with the sidedataflag requirement allows storing extra
# information for revisions without altering their original hashes.
SIDEDATA_REQUIREMENT = b'exp-sidedata-flag'

# A repository with the copies-sidedata-changeset requirement will store
# copies-related information in the changeset's sidedata.
COPIESSDC_REQUIREMENT = b'exp-copies-sidedata-changeset'

# Functions receiving (ui, features) that extensions can register to impact
# the ability to load repositories with custom requirements. Only
# functions defined in loaded extensions are called.
#
# The function receives a set of requirement strings that the repository
# is capable of opening. Functions will typically add elements to the
# set to reflect that the extension knows how to handle those requirements.
featuresetupfuncs = set()

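The `featuresetupfuncs` registry lets a loaded extension grow the set of requirements the opener is willing to accept. A hedged sketch of how such a registry can be consumed, with illustrative names (`featuresetup_registry`, `gather_supported`, `myext_featuresetup` are not Mercurial's actual helpers):

```python
# Registry of (ui, supported) callables, mirroring the featuresetupfuncs idea.
featuresetup_registry = set()

def gather_supported(ui, baseline):
    """Start from a baseline requirement set and let each hook extend it."""
    supported = set(baseline)
    for fn in featuresetup_registry:
        # Each hook mutates `supported` in place, adding requirement strings
        # its extension knows how to open.
        fn(ui, supported)
    return supported

def myext_featuresetup(ui, supported):
    # An extension declares that it can open repos with this requirement.
    supported.add(b'exp-my-extension-storage')

featuresetup_registry.add(myext_featuresetup)
```

This matches why only hooks from loaded extensions are called in the real code: a requirement should only be accepted when the code that understands it is actually present.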

def makelocalrepository(baseui, path, intents=None):
    """Create a local repository object.

    Given arguments needed to construct a local repository, this function
    performs various early repository loading functionality (such as
    reading the ``.hg/requires`` and ``.hg/hgrc`` files), validates that
    the repository can be opened, derives a type suitable for representing
    that repository, and returns an instance of it.

    The returned object conforms to the ``repository.completelocalrepository``
    interface.

    The repository type is derived by calling a series of factory functions
    for each aspect/interface of the final repository. These are defined by
    ``REPO_INTERFACES``.

    Each factory function is called to produce a type implementing a specific
    interface. The cumulative list of returned types will be combined into a
    new type and that type will be instantiated to represent the local
    repository.

    The factory functions each receive various state that may be consulted
    as part of deriving a type.

    Extensions should wrap these factory functions to customize repository type
    creation. Note that an extension's wrapped function may be called even if
    that extension is not loaded for the repo being constructed. Extensions
    should check if their ``__name__`` appears in the
    ``extensionmodulenames`` set passed to the factory function and no-op if
    not.
    """
    ui = baseui.copy()
    # Prevent copying repo configuration.
    ui.copy = baseui.copy

    # Working directory VFS rooted at repository root.
    wdirvfs = vfsmod.vfs(path, expandpath=True, realpath=True)

    # Main VFS for .hg/ directory.
    hgpath = wdirvfs.join(b'.hg')
    hgvfs = vfsmod.vfs(hgpath, cacheaudited=True)

    # The .hg/ path should exist and should be a directory. All other
    # cases are errors.
    if not hgvfs.isdir():
        try:
            hgvfs.stat()
        except OSError as e:
            if e.errno != errno.ENOENT:
                raise

        raise error.RepoError(_(b'repository %s not found') % path)

    # The .hg/requires file contains a newline-delimited list of
    # features/capabilities the opener (us) must have in order to use
    # the repository. This file was introduced in Mercurial 0.9.2,
    # which means very old repositories may not have one. We assume
    # a missing file translates to no requirements.
    try:
        requirements = set(hgvfs.read(b'requires').splitlines())
    except IOError as e:
        if e.errno != errno.ENOENT:
            raise
        requirements = set()

    # The .hg/hgrc file may load extensions or contain config options
    # that influence repository construction. Attempt to load it and
    # process any new extensions that it may have pulled in.
    if loadhgrc(ui, wdirvfs, hgvfs, requirements):
        afterhgrcload(ui, wdirvfs, hgvfs, requirements)
        extensions.loadall(ui)
    extensions.populateui(ui)

    # Set of module names of extensions loaded for this repository.
    extensionmodulenames = {m.__name__ for n, m in extensions.extensions(ui)}

    supportedrequirements = gathersupportedrequirements(ui)

    # We first validate the requirements are known.
    ensurerequirementsrecognized(requirements, supportedrequirements)

    # Then we validate that the known set is reasonable to use together.
    ensurerequirementscompatible(ui, requirements)

    # TODO there are unhandled edge cases related to opening repositories with
    # shared storage. If storage is shared, we should also test for requirements
    # compatibility in the pointed-to repo. This entails loading the .hg/hgrc in
    # that repo, as that repo may load extensions needed to open it. This is a
    # bit complicated because we don't want the other hgrc to overwrite settings
    # in this hgrc.
    #
    # This bug is somewhat mitigated by the fact that we copy the .hg/requires
    # file when sharing repos. But if a requirement is added after the share is
    # performed, thereby introducing a new requirement for the opener, we may
    # not see that and could encounter a run-time error interacting with
    # that shared store since it has an unknown-to-us requirement.

    # At this point, we know we should be capable of opening the repository.
    # Now get on with doing that.

    features = set()

    # The "store" part of the repository holds versioned data. How it is
    # accessed is determined by various requirements. The ``shared`` or
    # ``relshared`` requirements indicate the store lives in the path contained
    # in the ``.hg/sharedpath`` file. This is an absolute path for
    # ``shared`` and relative to ``.hg/`` for ``relshared``.
    if b'shared' in requirements or b'relshared' in requirements:
        sharedpath = hgvfs.read(b'sharedpath').rstrip(b'\n')
        if b'relshared' in requirements:
            sharedpath = hgvfs.join(sharedpath)

        sharedvfs = vfsmod.vfs(sharedpath, realpath=True)

        if not sharedvfs.exists():
            raise error.RepoError(
                _(b'.hg/sharedpath points to nonexistent directory %s')
                % sharedvfs.base
            )

        features.add(repository.REPO_FEATURE_SHARED_STORAGE)

        storebasepath = sharedvfs.base
        cachepath = sharedvfs.join(b'cache')
    else:
        storebasepath = hgvfs.base
        cachepath = hgvfs.join(b'cache')
    wcachepath = hgvfs.join(b'wcache')

    # The store has changed over time and the exact layout is dictated by
    # requirements. The store interface abstracts differences across all
    # of them.
    store = makestore(
        requirements,
        storebasepath,
        lambda base: vfsmod.vfs(base, cacheaudited=True),
    )
    hgvfs.createmode = store.createmode

    storevfs = store.vfs
    storevfs.options = resolvestorevfsoptions(ui, requirements, features)

    # The cache vfs is used to manage cache files.
    cachevfs = vfsmod.vfs(cachepath, cacheaudited=True)
    cachevfs.createmode = store.createmode
    # The wcache vfs is used to manage cache files related to the working copy.
    wcachevfs = vfsmod.vfs(wcachepath, cacheaudited=True)
    wcachevfs.createmode = store.createmode

    # Now resolve the type for the repository object. We do this by repeatedly
    # calling a factory function to produce types for specific aspects of the
    # repo's operation. The aggregate returned types are used as base classes
    # for a dynamically-derived type, which will represent our new repository.

    bases = []
    extrastate = {}

    for iface, fn in REPO_INTERFACES:
        # We pass all potentially useful state to give extensions tons of
        # flexibility.
        typ = fn()(
            ui=ui,
            intents=intents,
            requirements=requirements,
            features=features,
            wdirvfs=wdirvfs,
            hgvfs=hgvfs,
            store=store,
            storevfs=storevfs,
            storeoptions=storevfs.options,
            cachevfs=cachevfs,
            wcachevfs=wcachevfs,
            extensionmodulenames=extensionmodulenames,
            extrastate=extrastate,
            baseclasses=bases,
        )

        if not isinstance(typ, type):
            raise error.ProgrammingError(
                b'unable to construct type for %s' % iface
            )

        bases.append(typ)

    # type() allows you to use characters in type names that wouldn't be
    # recognized as Python symbols in source code. We abuse that to add
    # rich information about our constructed repo.
    name = pycompat.sysstr(
        b'derivedrepo:%s<%s>' % (wdirvfs.base, b','.join(sorted(requirements)))
    )

    cls = type(name, tuple(bases), {})

    return cls(
        baseui=baseui,
651 baseui=baseui,
652 ui=ui,
652 ui=ui,
653 origroot=path,
653 origroot=path,
654 wdirvfs=wdirvfs,
654 wdirvfs=wdirvfs,
655 hgvfs=hgvfs,
655 hgvfs=hgvfs,
656 requirements=requirements,
656 requirements=requirements,
657 supportedrequirements=supportedrequirements,
657 supportedrequirements=supportedrequirements,
658 sharedpath=storebasepath,
658 sharedpath=storebasepath,
659 store=store,
659 store=store,
660 cachevfs=cachevfs,
660 cachevfs=cachevfs,
661 wcachevfs=wcachevfs,
661 wcachevfs=wcachevfs,
662 features=features,
662 features=features,
663 intents=intents,
663 intents=intents,
664 )
664 )
665
665
666
666
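The dynamic type construction above can be sketched in isolation. This is an illustrative standalone example, not Mercurial code: the three-argument form of `type()` accepts class names containing characters that are not valid Python identifiers, which is what lets the derived repository class embed its path and requirements for debugging. All names below are hypothetical.

```python
# Standalone sketch of deriving a class from iteratively collected base
# classes, as makelocalrepository() does above. Names are illustrative.
class MainPart(object):
    def whoami(self):
        return 'main'


class FileStoragePart(object):
    def file(self, path):
        return 'filelog:%s' % path


bases = [MainPart, FileStoragePart]
# type() tolerates "rich" names that would not be legal Python identifiers.
name = 'derivedrepo:/tmp/repo<fncache,store>'
cls = type(name, tuple(bases), {})
repo = cls()
```

Instances of `cls` get methods from every collected base, and `cls.__name__` carries the rich debugging name verbatim.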
def loadhgrc(ui, wdirvfs, hgvfs, requirements):
    """Load hgrc files/content into a ui instance.

    This is called during repository opening to load any additional
    config files or settings relevant to the current repository.

    Returns a bool indicating whether any additional configs were loaded.

    Extensions should monkeypatch this function to modify how per-repo
    configs are loaded. For example, an extension may wish to pull in
    configs from alternate files or sources.
    """
    if b'HGRCSKIPREPO' in encoding.environ:
        return False
    try:
        ui.readconfig(hgvfs.join(b'hgrc'), root=wdirvfs.base)
        return True
    except IOError:
        return False

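As the docstring invites, extensions typically wrap ``loadhgrc`` to pull in extra config sources. A minimal sketch of that wrapping pattern, using a stand-in namespace and a plain dict in place of the real module and ui object (the "alternate-source" name is hypothetical):

```python
import types

# Stand-in for the localrepo module; a real extension would monkeypatch
# mercurial.localrepo.loadhgrc instead.
localrepo_stub = types.SimpleNamespace(
    loadhgrc=lambda ui, wdirvfs, hgvfs, requirements: False
)

_orig = localrepo_stub.loadhgrc


def _myloadhgrc(ui, wdirvfs, hgvfs, requirements):
    # Load configs from an alternate (hypothetical) source first...
    ui.setdefault('loaded', []).append('alternate-source')
    # ...then defer to the original implementation.
    return _orig(ui, wdirvfs, hgvfs, requirements) or True


localrepo_stub.loadhgrc = _myloadhgrc

ui = {}
loaded = localrepo_stub.loadhgrc(ui, None, None, set())
```

Capturing ``_orig`` before replacing the attribute is what lets the wrapper fall back to the stock behavior.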
def afterhgrcload(ui, wdirvfs, hgvfs, requirements):
    """Perform additional actions after .hg/hgrc is loaded.

    This function is called during repository loading immediately after
    the .hg/hgrc file is loaded and before per-repo extensions are loaded.

    The function can be used to validate configs, automatically add
    options (including extensions) based on requirements, etc.
    """

    # Map of requirements to list of extensions to load automatically when
    # requirement is present.
    autoextensions = {
        b'largefiles': [b'largefiles'],
        b'lfs': [b'lfs'],
    }

    for requirement, names in sorted(autoextensions.items()):
        if requirement not in requirements:
            continue

        for name in names:
            if not ui.hasconfig(b'extensions', name):
                ui.setconfig(b'extensions', name, b'', source=b'autoload')

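The requirement-driven autoload loop above reduces to a small, testable rule. A plain-Python sketch, with the extensions config as a dict instead of a ui object (the dict-based API here is illustrative only):

```python
def autoload(requirements, extensions_config):
    """Return the extensions config with requirement-driven entries added,
    without overriding anything the user configured explicitly."""
    autoextensions = {
        'largefiles': ['largefiles'],
        'lfs': ['lfs'],
    }
    enabled = dict(extensions_config)
    for requirement, names in sorted(autoextensions.items()):
        if requirement not in requirements:
            continue
        for name in names:
            # Mirror the ui.hasconfig guard: only set when not configured.
            if name not in enabled:
                enabled[name] = ''
    return enabled
```

Note that an explicit user setting (for example disabling an extension) always wins over the autoload default.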
def gathersupportedrequirements(ui):
    """Determine the complete set of recognized requirements."""
    # Start with all requirements supported by this file.
    supported = set(localrepository._basesupported)

    # Execute ``featuresetupfuncs`` entries if they belong to an extension
    # relevant to this ui instance.
    modules = {m.__name__ for n, m in extensions.extensions(ui)}

    for fn in featuresetupfuncs:
        if fn.__module__ in modules:
            fn(ui, supported)

    # Add derived requirements from registered compression engines.
    for name in util.compengines:
        engine = util.compengines[name]
        if engine.available() and engine.revlogheader():
            supported.add(b'exp-compression-%s' % name)
            if engine.name() == b'zstd':
                supported.add(b'revlog-compression-zstd')

    return supported

735
737
736 def ensurerequirementsrecognized(requirements, supported):
738 def ensurerequirementsrecognized(requirements, supported):
737 """Validate that a set of local requirements is recognized.
739 """Validate that a set of local requirements is recognized.
738
740
739 Receives a set of requirements. Raises an ``error.RepoError`` if there
741 Receives a set of requirements. Raises an ``error.RepoError`` if there
740 exists any requirement in that set that currently loaded code doesn't
742 exists any requirement in that set that currently loaded code doesn't
741 recognize.
743 recognize.
742
744
743 Returns a set of supported requirements.
745 Returns a set of supported requirements.
744 """
746 """
745 missing = set()
747 missing = set()
746
748
747 for requirement in requirements:
749 for requirement in requirements:
748 if requirement in supported:
750 if requirement in supported:
749 continue
751 continue
750
752
751 if not requirement or not requirement[0:1].isalnum():
753 if not requirement or not requirement[0:1].isalnum():
752 raise error.RequirementError(_(b'.hg/requires file is corrupt'))
754 raise error.RequirementError(_(b'.hg/requires file is corrupt'))
753
755
754 missing.add(requirement)
756 missing.add(requirement)
755
757
756 if missing:
758 if missing:
757 raise error.RequirementError(
759 raise error.RequirementError(
758 _(b'repository requires features unknown to this Mercurial: %s')
760 _(b'repository requires features unknown to this Mercurial: %s')
759 % b' '.join(sorted(missing)),
761 % b' '.join(sorted(missing)),
760 hint=_(
762 hint=_(
761 b'see https://mercurial-scm.org/wiki/MissingRequirement '
763 b'see https://mercurial-scm.org/wiki/MissingRequirement '
762 b'for more information'
764 b'for more information'
763 ),
765 ),
764 )
766 )
765
767
766
768
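The validation above is a straightforward set check plus a sanity check on each entry. A standalone sketch, with ``ValueError`` standing in for Mercurial's ``error.RequirementError``:

```python
def check_requirements(requirements, supported):
    """Raise ValueError if any requirement is corrupt or unrecognized."""
    missing = set()
    for requirement in requirements:
        if requirement in supported:
            continue
        # A requirement must start with an alphanumeric character;
        # anything else suggests a corrupt .hg/requires file.
        if not requirement or not requirement[0:1].isalnum():
            raise ValueError('.hg/requires file is corrupt')
        missing.add(requirement)
    if missing:
        raise ValueError(
            'repository requires features unknown to this Mercurial: %s'
            % ' '.join(sorted(missing))
        )
```

Collecting all unknown requirements before raising lets the error report every missing feature at once instead of just the first.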
def ensurerequirementscompatible(ui, requirements):
    """Validates that a set of recognized requirements is mutually compatible.

    Some requirements may not be compatible with others or require
    config options that aren't enabled. This function is called during
    repository opening to ensure that the set of requirements needed
    to open a repository is sane and compatible with config options.

    Extensions can monkeypatch this function to perform additional
    checking.

    ``error.RepoError`` should be raised on failure.
    """
    if b'exp-sparse' in requirements and not sparse.enabled:
        raise error.RepoError(
            _(
                b'repository is using sparse feature but '
                b'sparse is not enabled; enable the '
                b'"sparse" extension to access'
            )
        )

def makestore(requirements, path, vfstype):
    """Construct a storage object for a repository."""
    if b'store' in requirements:
        if b'fncache' in requirements:
            return storemod.fncachestore(
                path, vfstype, b'dotencode' in requirements
            )

        return storemod.encodedstore(path, vfstype)

    return storemod.basicstore(path, vfstype)

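Store selection is a three-way decision keyed on requirements. A minimal sketch that returns the store kind by name rather than constructing real store objects (the string labels are illustrative):

```python
def storekind(requirements):
    """Name the store implementation the given requirements select."""
    if 'store' in requirements:
        if 'fncache' in requirements:
            # 'dotencode' further tweaks fncachestore's filename encoding,
            # but does not change which store class is chosen.
            return 'fncachestore'
        return 'encodedstore'
    return 'basicstore'
```

Usage: repositories created by any modern Mercurial carry both `store` and `fncache`, so `fncachestore` is the common case; the other two branches exist for old on-disk formats.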
def resolvestorevfsoptions(ui, requirements, features):
    """Resolve the options to pass to the store vfs opener.

    The returned dict is used to influence behavior of the storage layer.
    """
    options = {}

    if b'treemanifest' in requirements:
        options[b'treemanifest'] = True

    # experimental config: format.manifestcachesize
    manifestcachesize = ui.configint(b'format', b'manifestcachesize')
    if manifestcachesize is not None:
        options[b'manifestcachesize'] = manifestcachesize

    # In the absence of another requirement superseding a revlog-related
    # requirement, we have to assume the repo is using revlog version 0.
    # This revlog format is super old and we don't bother trying to parse
    # opener options for it because those options wouldn't do anything
    # meaningful on such old repos.
    if b'revlogv1' in requirements or REVLOGV2_REQUIREMENT in requirements:
        options.update(resolverevlogstorevfsoptions(ui, requirements, features))
    else:  # explicitly mark repo as using revlogv0
        options[b'revlogv0'] = True

    if COPIESSDC_REQUIREMENT in requirements:
        options[b'copies-storage'] = b'changeset-sidedata'
    else:
        writecopiesto = ui.config(b'experimental', b'copies.write-to')
        copiesextramode = (b'changeset-only', b'compatibility')
        if writecopiesto in copiesextramode:
            options[b'copies-storage'] = b'extra'

    return options

def resolverevlogstorevfsoptions(ui, requirements, features):
    """Resolve opener options specific to revlogs."""

    options = {}
    options[b'flagprocessors'] = {}

    if b'revlogv1' in requirements:
        options[b'revlogv1'] = True
    if REVLOGV2_REQUIREMENT in requirements:
        options[b'revlogv2'] = True

    if b'generaldelta' in requirements:
        options[b'generaldelta'] = True

    # experimental config: format.chunkcachesize
    chunkcachesize = ui.configint(b'format', b'chunkcachesize')
    if chunkcachesize is not None:
        options[b'chunkcachesize'] = chunkcachesize

    deltabothparents = ui.configbool(
        b'storage', b'revlog.optimize-delta-parent-choice'
    )
    options[b'deltabothparents'] = deltabothparents

    lazydelta = ui.configbool(b'storage', b'revlog.reuse-external-delta')
    lazydeltabase = False
    if lazydelta:
        lazydeltabase = ui.configbool(
            b'storage', b'revlog.reuse-external-delta-parent'
        )
    if lazydeltabase is None:
        lazydeltabase = not scmutil.gddeltaconfig(ui)
    options[b'lazydelta'] = lazydelta
    options[b'lazydeltabase'] = lazydeltabase

    chainspan = ui.configbytes(b'experimental', b'maxdeltachainspan')
    if 0 <= chainspan:
        options[b'maxdeltachainspan'] = chainspan

    mmapindexthreshold = ui.configbytes(b'experimental', b'mmapindexthreshold')
    if mmapindexthreshold is not None:
        options[b'mmapindexthreshold'] = mmapindexthreshold

    withsparseread = ui.configbool(b'experimental', b'sparse-read')
    srdensitythres = float(
        ui.config(b'experimental', b'sparse-read.density-threshold')
    )
    srmingapsize = ui.configbytes(b'experimental', b'sparse-read.min-gap-size')
    options[b'with-sparse-read'] = withsparseread
    options[b'sparse-read-density-threshold'] = srdensitythres
    options[b'sparse-read-min-gap-size'] = srmingapsize

    sparserevlog = SPARSEREVLOG_REQUIREMENT in requirements
    options[b'sparse-revlog'] = sparserevlog
    if sparserevlog:
        options[b'generaldelta'] = True

    sidedata = SIDEDATA_REQUIREMENT in requirements
    options[b'side-data'] = sidedata

    maxchainlen = None
    if sparserevlog:
        maxchainlen = revlogconst.SPARSE_REVLOG_MAX_CHAIN_LENGTH
    # experimental config: format.maxchainlen
    maxchainlen = ui.configint(b'format', b'maxchainlen', maxchainlen)
    if maxchainlen is not None:
        options[b'maxchainlen'] = maxchainlen

    for r in requirements:
        # We allow multiple compression engine requirements to co-exist
        # because, strictly speaking, revlog seems to support mixed
        # compression styles.
        #
        # The compression used for new entries will be "the last one".
        prefix = r.startswith
        if prefix(b'revlog-compression-') or prefix(b'exp-compression-'):
            options[b'compengine'] = r.split(b'-', 2)[2]

    options[b'zlib.level'] = ui.configint(b'storage', b'revlog.zlib.level')
    if options[b'zlib.level'] is not None:
        if not (0 <= options[b'zlib.level'] <= 9):
            msg = _(b'invalid value for `storage.revlog.zlib.level` config: %d')
            raise error.Abort(msg % options[b'zlib.level'])
    options[b'zstd.level'] = ui.configint(b'storage', b'revlog.zstd.level')
    if options[b'zstd.level'] is not None:
        if not (0 <= options[b'zstd.level'] <= 22):
            msg = _(b'invalid value for `storage.revlog.zstd.level` config: %d')
            raise error.Abort(msg % options[b'zstd.level'])

    if repository.NARROW_REQUIREMENT in requirements:
        options[b'enableellipsis'] = True

    if ui.configbool(b'experimental', b'rust.index'):
        options[b'rust.index'] = True

    return options

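The compression-requirement loop above means that when several compression requirements coexist, whichever one the loop visits last wins for new writes. A standalone sketch of the engine-name extraction (``'zlib'`` as the fallback default is an assumption; this sketch also iterates in sorted order purely for determinism, whereas the real loop iterates a set):

```python
def compengine(requirements):
    """Extract the compression engine name from a set of requirements."""
    engine = 'zlib'  # assumed default when no compression requirement exists
    for r in sorted(requirements):  # sorted here only for determinism
        if r.startswith('revlog-compression-') or r.startswith(
            'exp-compression-'
        ):
            # 'revlog-compression-zstd'.split('-', 2)[2] -> 'zstd'
            engine = r.split('-', 2)[2]
    return engine
```

The `split('-', 2)[2]` trick works for both prefixes because each has exactly two dashes before the engine name.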
def makemain(**kwargs):
    """Produce a type conforming to ``ilocalrepositorymain``."""
    return localrepository

@interfaceutil.implementer(repository.ilocalrepositoryfilestorage)
class revlogfilestorage(object):
    """File storage when using revlogs."""

    def file(self, path):
        if path[0] == b'/':
            path = path[1:]

        return filelog.filelog(self.svfs, path)

@interfaceutil.implementer(repository.ilocalrepositoryfilestorage)
class revlognarrowfilestorage(object):
    """File storage when using revlogs and narrow files."""

    def file(self, path):
        if path[0] == b'/':
            path = path[1:]

        return filelog.narrowfilelog(self.svfs, path, self._storenarrowmatch)

def makefilestorage(requirements, features, **kwargs):
    """Produce a type conforming to ``ilocalrepositoryfilestorage``."""
    features.add(repository.REPO_FEATURE_REVLOG_FILE_STORAGE)
    features.add(repository.REPO_FEATURE_STREAM_CLONE)

    if repository.NARROW_REQUIREMENT in requirements:
        return revlognarrowfilestorage
    else:
        return revlogfilestorage

# List of repository interfaces and factory functions for them. Each
# will be called in order during ``makelocalrepository()`` to iteratively
# derive the final type for a local repository instance. We capture the
# function as a lambda so we don't hold a reference and the module-level
# functions can be wrapped.
REPO_INTERFACES = [
    (repository.ilocalrepositorymain, lambda: makemain),
    (repository.ilocalrepositoryfilestorage, lambda: makefilestorage),
]

@interfaceutil.implementer(repository.ilocalrepositorymain)
class localrepository(object):
    """Main class for representing local repositories.

    All local repositories are instances of this class.

    Constructed on its own, instances of this class are not usable as
    repository objects. To obtain a usable repository object, call
    ``hg.repository()``, ``localrepo.instance()``, or
    ``localrepo.makelocalrepository()``. The latter is the lowest-level.
    ``instance()`` adds support for creating new repositories.
    ``hg.repository()`` adds more extension integration, including calling
    ``reposetup()``. Generally speaking, ``hg.repository()`` should be
    used.
    """

    # obsolete experimental requirements:
    # - manifestv2: An experimental new manifest format that allowed
    #   for stem compression of long paths. Experiment ended up not
    #   being successful (repository sizes went up due to worse delta
    #   chains), and the code was deleted in 4.6.
    supportedformats = {
        b'revlogv1',
        b'generaldelta',
        b'treemanifest',
        COPIESSDC_REQUIREMENT,
        REVLOGV2_REQUIREMENT,
        SIDEDATA_REQUIREMENT,
        SPARSEREVLOG_REQUIREMENT,
        bookmarks.BOOKMARKS_IN_STORE_REQUIREMENT,
    }
    _basesupported = supportedformats | {
        b'store',
        b'fncache',
        b'shared',
        b'relshared',
        b'dotencode',
        b'exp-sparse',
        b'internal-phase',
    }

    # List of prefixes for files which can be written without 'wlock'.
    # Extensions should extend this list when needed.
    _wlockfreeprefix = {
        # We might consider requiring 'wlock' for the next
        # two, but pretty much all the existing code assumes
        # wlock is not needed so we keep them excluded for
        # now.
        b'hgrc',
        b'requires',
        # XXX cache is a complicated business; someone
        # should investigate this in depth at some point
        b'cache/',
        # XXX shouldn't dirstate be covered by the wlock?
        b'dirstate',
        # XXX bisect was still a bit too messy at the time
        # this changeset was introduced. Someone should fix
        # the remaining bit and drop this line
        b'bisect.state',
    }

    def __init__(
        self,
        baseui,
        ui,
        origroot,
        wdirvfs,
        hgvfs,
        requirements,
        supportedrequirements,
        sharedpath,
        store,
        cachevfs,
        wcachevfs,
        features,
        intents=None,
    ):
        """Create a new local repository instance.

        Most callers should use ``hg.repository()``, ``localrepo.instance()``,
        or ``localrepo.makelocalrepository()`` for obtaining a new repository
        object.

        Arguments:

        baseui
           ``ui.ui`` instance that ``ui`` argument was based off of.

        ui
           ``ui.ui`` instance for use by the repository.

        origroot
           ``bytes`` path to working directory root of this repository.

        wdirvfs
           ``vfs.vfs`` rooted at the working directory.

        hgvfs
           ``vfs.vfs`` rooted at .hg/

        requirements
           ``set`` of bytestrings representing repository opening
           requirements.

        supportedrequirements
           ``set`` of bytestrings representing repository requirements that we
           know how to open. May be a superset of ``requirements``.

        sharedpath
           ``bytes`` Defining path to storage base directory. Points to a
           ``.hg/`` directory somewhere.

        store
           ``store.basicstore`` (or derived) instance providing access to
           versioned storage.

        cachevfs
           ``vfs.vfs`` used for cache files.

        wcachevfs
           ``vfs.vfs`` used for cache files related to the working copy.

        features
           ``set`` of bytestrings defining features/capabilities of this
           instance.

        intents
           ``set`` of system strings indicating what this repo will be used
           for.
        """
        self.baseui = baseui
        self.ui = ui
        self.origroot = origroot
        # vfs rooted at working directory.
        self.wvfs = wdirvfs
        self.root = wdirvfs.base
        # vfs rooted at .hg/. Used to access most non-store paths.
        self.vfs = hgvfs
        self.path = hgvfs.base
        self.requirements = requirements
        self.supported = supportedrequirements
        self.sharedpath = sharedpath
        self.store = store
        self.cachevfs = cachevfs
        self.wcachevfs = wcachevfs
        self.features = features

        self.filtername = None

        if self.ui.configbool(b'devel', b'all-warnings') or self.ui.configbool(
1134 b'devel', b'check-locks'
1136 b'devel', b'check-locks'
1135 ):
1137 ):
1136 self.vfs.audit = self._getvfsward(self.vfs.audit)
1138 self.vfs.audit = self._getvfsward(self.vfs.audit)
1137 # A list of callback to shape the phase if no data were found.
1139 # A list of callback to shape the phase if no data were found.
1138 # Callback are in the form: func(repo, roots) --> processed root.
1140 # Callback are in the form: func(repo, roots) --> processed root.
1139 # This list it to be filled by extension during repo setup
1141 # This list it to be filled by extension during repo setup
1140 self._phasedefaults = []
1142 self._phasedefaults = []
1141
1143
1142 color.setup(self.ui)
1144 color.setup(self.ui)
1143
1145
1144 self.spath = self.store.path
1146 self.spath = self.store.path
1145 self.svfs = self.store.vfs
1147 self.svfs = self.store.vfs
1146 self.sjoin = self.store.join
1148 self.sjoin = self.store.join
1147 if self.ui.configbool(b'devel', b'all-warnings') or self.ui.configbool(
1149 if self.ui.configbool(b'devel', b'all-warnings') or self.ui.configbool(
1148 b'devel', b'check-locks'
1150 b'devel', b'check-locks'
1149 ):
1151 ):
1150 if util.safehasattr(self.svfs, b'vfs'): # this is filtervfs
1152 if util.safehasattr(self.svfs, b'vfs'): # this is filtervfs
1151 self.svfs.vfs.audit = self._getsvfsward(self.svfs.vfs.audit)
1153 self.svfs.vfs.audit = self._getsvfsward(self.svfs.vfs.audit)
1152 else: # standard vfs
1154 else: # standard vfs
1153 self.svfs.audit = self._getsvfsward(self.svfs.audit)
1155 self.svfs.audit = self._getsvfsward(self.svfs.audit)
1154
1156
1155 self._dirstatevalidatewarned = False
1157 self._dirstatevalidatewarned = False
1156
1158
1157 self._branchcaches = branchmap.BranchMapCache()
1159 self._branchcaches = branchmap.BranchMapCache()
1158 self._revbranchcache = None
1160 self._revbranchcache = None
1159 self._filterpats = {}
1161 self._filterpats = {}
1160 self._datafilters = {}
1162 self._datafilters = {}
1161 self._transref = self._lockref = self._wlockref = None
1163 self._transref = self._lockref = self._wlockref = None
1162
1164
1163 # A cache for various files under .hg/ that tracks file changes,
1165 # A cache for various files under .hg/ that tracks file changes,
1164 # (used by the filecache decorator)
1166 # (used by the filecache decorator)
1165 #
1167 #
1166 # Maps a property name to its util.filecacheentry
1168 # Maps a property name to its util.filecacheentry
1167 self._filecache = {}
1169 self._filecache = {}
1168
1170
1169 # hold sets of revision to be filtered
1171 # hold sets of revision to be filtered
1170 # should be cleared when something might have changed the filter value:
1172 # should be cleared when something might have changed the filter value:
1171 # - new changesets,
1173 # - new changesets,
1172 # - phase change,
1174 # - phase change,
1173 # - new obsolescence marker,
1175 # - new obsolescence marker,
1174 # - working directory parent change,
1176 # - working directory parent change,
1175 # - bookmark changes
1177 # - bookmark changes
1176 self.filteredrevcache = {}
1178 self.filteredrevcache = {}
1177
1179
1178 # post-dirstate-status hooks
1180 # post-dirstate-status hooks
1179 self._postdsstatus = []
1181 self._postdsstatus = []
1180
1182
1181 # generic mapping between names and nodes
1183 # generic mapping between names and nodes
1182 self.names = namespaces.namespaces()
1184 self.names = namespaces.namespaces()
1183
1185
1184 # Key to signature value.
1186 # Key to signature value.
1185 self._sparsesignaturecache = {}
1187 self._sparsesignaturecache = {}
1186 # Signature to cached matcher instance.
1188 # Signature to cached matcher instance.
1187 self._sparsematchercache = {}
1189 self._sparsematchercache = {}
1188
1190
1189 self._extrafilterid = repoview.extrafilter(ui)
1191 self._extrafilterid = repoview.extrafilter(ui)
1190
1192
1191 self.filecopiesmode = None
1193 self.filecopiesmode = None
1192 if COPIESSDC_REQUIREMENT in self.requirements:
1194 if COPIESSDC_REQUIREMENT in self.requirements:
1193 self.filecopiesmode = b'changeset-sidedata'
1195 self.filecopiesmode = b'changeset-sidedata'
1194
1196
    def _getvfsward(self, origfunc):
        """build a ward for self.vfs"""
        rref = weakref.ref(self)

        def checkvfs(path, mode=None):
            ret = origfunc(path, mode=mode)
            repo = rref()
            if (
                repo is None
                or not util.safehasattr(repo, b'_wlockref')
                or not util.safehasattr(repo, b'_lockref')
            ):
                return
            if mode in (None, b'r', b'rb'):
                return
            if path.startswith(repo.path):
                # truncate name relative to the repository (.hg)
                path = path[len(repo.path) + 1 :]
            if path.startswith(b'cache/'):
                msg = b'accessing cache with vfs instead of cachevfs: "%s"'
                repo.ui.develwarn(msg % path, stacklevel=3, config=b"cache-vfs")
            if path.startswith(b'journal.') or path.startswith(b'undo.'):
                # journal is covered by 'lock'
                if repo._currentlock(repo._lockref) is None:
                    repo.ui.develwarn(
                        b'write with no lock: "%s"' % path,
                        stacklevel=3,
                        config=b'check-locks',
                    )
            elif repo._currentlock(repo._wlockref) is None:
                # rest of vfs files are covered by 'wlock'
                #
                # exclude special files
                for prefix in self._wlockfreeprefix:
                    if path.startswith(prefix):
                        return
                repo.ui.develwarn(
                    b'write with no wlock: "%s"' % path,
                    stacklevel=3,
                    config=b'check-locks',
                )
            return ret

        return checkvfs

    def _getsvfsward(self, origfunc):
        """build a ward for self.svfs"""
        rref = weakref.ref(self)

        def checksvfs(path, mode=None):
            ret = origfunc(path, mode=mode)
            repo = rref()
            if repo is None or not util.safehasattr(repo, b'_lockref'):
                return
            if mode in (None, b'r', b'rb'):
                return
            if path.startswith(repo.sharedpath):
                # truncate name relative to the repository (.hg)
                path = path[len(repo.sharedpath) + 1 :]
            if repo._currentlock(repo._lockref) is None:
                repo.ui.develwarn(
                    b'write with no lock: "%s"' % path, stacklevel=4
                )
            return ret

        return checksvfs

    def close(self):
        self._writecaches()

    def _writecaches(self):
        if self._revbranchcache:
            self._revbranchcache.write()

    def _restrictcapabilities(self, caps):
        if self.ui.configbool(b'experimental', b'bundle2-advertise'):
            caps = set(caps)
            capsblob = bundle2.encodecaps(
                bundle2.getrepocaps(self, role=b'client')
            )
            caps.add(b'bundle2=' + urlreq.quote(capsblob))
        return caps

    def _writerequirements(self):
        scmutil.writerequires(self.vfs, self.requirements)

    # Don't cache auditor/nofsauditor, or you'll end up with reference cycle:
    # self -> auditor -> self._checknested -> self

    @property
    def auditor(self):
        # This is only used by context.workingctx.match in order to
        # detect files in subrepos.
        return pathutil.pathauditor(self.root, callback=self._checknested)

    @property
    def nofsauditor(self):
        # This is only used by context.basectx.match in order to detect
        # files in subrepos.
        return pathutil.pathauditor(
            self.root, callback=self._checknested, realfs=False, cached=True
        )

    def _checknested(self, path):
        """Determine if path is a legal nested repository."""
        if not path.startswith(self.root):
            return False
        subpath = path[len(self.root) + 1 :]
        normsubpath = util.pconvert(subpath)

        # XXX: Checking against the current working copy is wrong in
        # the sense that it can reject things like
        #
        #   $ hg cat -r 10 sub/x.txt
        #
        # if sub/ is no longer a subrepository in the working copy
        # parent revision.
        #
        # However, it can of course also allow things that would have
        # been rejected before, such as the above cat command if sub/
        # is a subrepository now, but was a normal directory before.
        # The old path auditor would have rejected by mistake since it
        # panics when it sees sub/.hg/.
        #
        # All in all, checking against the working copy seems sensible
        # since we want to prevent access to nested repositories on
        # the filesystem *now*.
        ctx = self[None]
        parts = util.splitpath(subpath)
        while parts:
            prefix = b'/'.join(parts)
            if prefix in ctx.substate:
                if prefix == normsubpath:
                    return True
                else:
                    sub = ctx.sub(prefix)
                    return sub.checknested(subpath[len(prefix) + 1 :])
            else:
                parts.pop()
        return False

    def peer(self):
        return localpeer(self)  # not cached to avoid reference cycle

    def unfiltered(self):
        """Return unfiltered version of the repository

        Intended to be overwritten by filtered repo."""
        return self

    def filtered(self, name, visibilityexceptions=None):
        """Return a filtered version of a repository

        The `name` parameter is the identifier of the requested view. This
        will return a repoview object set "exactly" to the specified view.

        This function does not apply recursive filtering to a repository. For
        example calling `repo.filtered("served")` will return a repoview using
        the "served" view, regardless of the initial view used by `repo`.

        In other words, there is always only one level of `repoview`
        "filtering".
        """
        if self._extrafilterid is not None and b'%' not in name:
            name = name + b'%' + self._extrafilterid

        cls = repoview.newtype(self.unfiltered().__class__)
        return cls(self, name, visibilityexceptions)

    @mixedrepostorecache(
        (b'bookmarks', b'plain'),
        (b'bookmarks.current', b'plain'),
        (b'bookmarks', b''),
        (b'00changelog.i', b''),
    )
    def _bookmarks(self):
        # Since the multiple files involved in the transaction cannot be
        # written atomically (with current repository format), there is a race
        # condition here.
        #
        # 1) changelog content A is read
        # 2) outside transaction update changelog to content B
        # 3) outside transaction update bookmark file referring to content B
        # 4) bookmarks file content is read and filtered against changelog-A
        #
        # When this happens, bookmarks against nodes missing from A are dropped.
        #
        # Having this happen during read is not great, but it becomes worse
        # when it happens during write, because the bookmarks to the "unknown"
        # nodes will be dropped for good. However, writes happen within locks.
        # This locking makes it possible to have a race free consistent read.
        # For this purpose, data read from disk before locking is
        # "invalidated" right after the locks are taken. These invalidations
        # are "light": the `filecache` mechanism keeps the data in memory and
        # will reuse it if the underlying files did not change. Not parsing
        # the same data multiple times helps performance.
        #
        # Unfortunately, in the case described above, the files tracked by the
        # bookmarks file cache might not have changed, but the in-memory
        # content is still "wrong" because we used an older changelog content
        # to process the on-disk data. So after locking, the changelog would be
        # refreshed but `_bookmarks` would be preserved.
        # Adding `00changelog.i` to the list of tracked files is not
        # enough, because at the time we build the content for `_bookmarks` in
        # (4), the changelog file has already diverged from the content used
        # for loading `changelog` in (1).
        #
        # To prevent the issue, we force the changelog to be explicitly
        # reloaded while computing `_bookmarks`. The data race can still happen
        # without the lock (with a narrower window), but it would no longer go
        # undetected during the lock time refresh.
        #
        # The new schedule is as follows:
        #
        # 1) filecache logic detects that `_bookmarks` needs to be computed
        # 2) cachestat for `bookmarks` and `changelog` are captured (for book)
        # 3) we force the `changelog` filecache to be tested
        # 4) cachestat for `changelog` are captured (for changelog)
        # 5) `_bookmarks` is computed and cached
        #
        # The step in (3) ensures we have a changelog at least as recent as the
        # cache stat computed in (1). As a result, at locking time:
        #  * if the changelog did not change since (1) -> we can reuse the data
        #  * otherwise -> the bookmarks get refreshed.
        self._refreshchangelog()
        return bookmarks.bmstore(self)

    def _refreshchangelog(self):
        """make sure the in-memory changelog matches the on-disk one"""
        if 'changelog' in vars(self) and self.currenttransaction() is None:
            del self.changelog

    @property
    def _activebookmark(self):
        return self._bookmarks.active

    # _phasesets depend on changelog. what we need is to call
    # _phasecache.invalidate() if '00changelog.i' was changed, but it
    # can't be easily expressed in filecache mechanism.
    @storecache(b'phaseroots', b'00changelog.i')
    def _phasecache(self):
        return phases.phasecache(self, self._phasedefaults)

    @storecache(b'obsstore')
    def obsstore(self):
        return obsolete.makestore(self.ui, self)

    @storecache(b'00changelog.i')
    def changelog(self):
        return self.store.changelog(txnutil.mayhavepending(self.root))

    @storecache(b'00manifest.i')
    def manifestlog(self):
        return self.store.manifestlog(self, self._storenarrowmatch)

    @repofilecache(b'dirstate')
    def dirstate(self):
        return self._makedirstate()

    def _makedirstate(self):
        """Extension point for wrapping the dirstate per-repo."""
        sparsematchfn = lambda: sparse.matcher(self)

        return dirstate.dirstate(
            self.vfs, self.ui, self.root, self._dirstatevalidate, sparsematchfn
        )

    def _dirstatevalidate(self, node):
        try:
            self.changelog.rev(node)
            return node
        except error.LookupError:
            if not self._dirstatevalidatewarned:
                self._dirstatevalidatewarned = True
                self.ui.warn(
                    _(b"warning: ignoring unknown working parent %s!\n")
                    % short(node)
                )
            return nullid

1474 @storecache(narrowspec.FILENAME)
1476 @storecache(narrowspec.FILENAME)
1475 def narrowpats(self):
1477 def narrowpats(self):
1476 """matcher patterns for this repository's narrowspec
1478 """matcher patterns for this repository's narrowspec
1477
1479
1478 A tuple of (includes, excludes).
1480 A tuple of (includes, excludes).
1479 """
1481 """
1480 return narrowspec.load(self)
1482 return narrowspec.load(self)
1481
1483
1482 @storecache(narrowspec.FILENAME)
1484 @storecache(narrowspec.FILENAME)
1483 def _storenarrowmatch(self):
1485 def _storenarrowmatch(self):
1484 if repository.NARROW_REQUIREMENT not in self.requirements:
1486 if repository.NARROW_REQUIREMENT not in self.requirements:
1485 return matchmod.always()
1487 return matchmod.always()
1486 include, exclude = self.narrowpats
1488 include, exclude = self.narrowpats
1487 return narrowspec.match(self.root, include=include, exclude=exclude)
1489 return narrowspec.match(self.root, include=include, exclude=exclude)
1488
1490
1489 @storecache(narrowspec.FILENAME)
1491 @storecache(narrowspec.FILENAME)
1490 def _narrowmatch(self):
1492 def _narrowmatch(self):
1491 if repository.NARROW_REQUIREMENT not in self.requirements:
1493 if repository.NARROW_REQUIREMENT not in self.requirements:
1492 return matchmod.always()
1494 return matchmod.always()
1493 narrowspec.checkworkingcopynarrowspec(self)
1495 narrowspec.checkworkingcopynarrowspec(self)
1494 include, exclude = self.narrowpats
1496 include, exclude = self.narrowpats
1495 return narrowspec.match(self.root, include=include, exclude=exclude)
1497 return narrowspec.match(self.root, include=include, exclude=exclude)
1496
1498
1497 def narrowmatch(self, match=None, includeexact=False):
1499 def narrowmatch(self, match=None, includeexact=False):
1498 """matcher corresponding the the repo's narrowspec
1500 """matcher corresponding the the repo's narrowspec
1499
1501
1500 If `match` is given, then that will be intersected with the narrow
1502 If `match` is given, then that will be intersected with the narrow
1501 matcher.
1503 matcher.
1502
1504
1503 If `includeexact` is True, then any exact matches from `match` will
1505 If `includeexact` is True, then any exact matches from `match` will
1504 be included even if they're outside the narrowspec.
1506 be included even if they're outside the narrowspec.
1505 """
1507 """
1506 if match:
1508 if match:
1507 if includeexact and not self._narrowmatch.always():
1509 if includeexact and not self._narrowmatch.always():
1508 # do not exclude explicitly-specified paths so that they can
1510 # do not exclude explicitly-specified paths so that they can
1509 # be warned later on
1511 # be warned later on
1510 em = matchmod.exact(match.files())
1512 em = matchmod.exact(match.files())
1511 nm = matchmod.unionmatcher([self._narrowmatch, em])
1513 nm = matchmod.unionmatcher([self._narrowmatch, em])
1512 return matchmod.intersectmatchers(match, nm)
1514 return matchmod.intersectmatchers(match, nm)
1513 return matchmod.intersectmatchers(match, self._narrowmatch)
1515 return matchmod.intersectmatchers(match, self._narrowmatch)
1514 return self._narrowmatch
1516 return self._narrowmatch
1515
1517
1516 def setnarrowpats(self, newincludes, newexcludes):
1518 def setnarrowpats(self, newincludes, newexcludes):
1517 narrowspec.save(self, newincludes, newexcludes)
1519 narrowspec.save(self, newincludes, newexcludes)
1518 self.invalidate(clearfilecache=True)
1520 self.invalidate(clearfilecache=True)
1519
1521
1520 @unfilteredpropertycache
    @unfilteredpropertycache
    def _quick_access_changeid_null(self):
        return {
            b'null': (nullrev, nullid),
            nullrev: (nullrev, nullid),
            nullid: (nullrev, nullid),
        }

    @unfilteredpropertycache
    def _quick_access_changeid_wc(self):
        # also fast path access to the working copy parents
        # however, only do it for filters that ensure the wc is visible.
        quick = {}
        cl = self.unfiltered().changelog
        for node in self.dirstate.parents():
            if node == nullid:
                continue
            rev = cl.index.get_rev(node)
            if rev is None:
                # unknown working copy parent case:
                #
                # skip the fast path and let higher code deal with it
                continue
            pair = (rev, node)
            quick[rev] = pair
            quick[node] = pair
            # also add the parents of the parents
            for r in cl.parentrevs(rev):
                if r == nullrev:
                    continue
                n = cl.node(r)
                pair = (r, n)
                quick[r] = pair
                quick[n] = pair
        p1node = self.dirstate.p1()
        if p1node != nullid:
            quick[b'.'] = quick[p1node]
        return quick

    @unfilteredmethod
    def _quick_access_changeid_invalidate(self):
        if '_quick_access_changeid_wc' in vars(self):
            del self.__dict__['_quick_access_changeid_wc']

    @property
    def _quick_access_changeid(self):
        """a helper dictionary for __getitem__ calls

        This contains the symbols we can recognize right away without
        further processing.
        """
        mapping = self._quick_access_changeid_null
        if self.filtername in repoview.filter_has_wc:
            mapping = mapping.copy()
            mapping.update(self._quick_access_changeid_wc)
        return mapping
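The quick-access dictionary above maps several aliases of the same changeset (a symbolic name like `b'null'` or `b'.'`, the integer revision, and the binary node id) to one shared `(rev, node)` pair, and copies the cached base mapping before extending it. A minimal standalone sketch of that aliasing pattern, using made-up constants rather than Mercurial's real types:

```python
# Sketch of the quick-access aliasing pattern: several keys (symbolic
# name, integer rev, binary node) all point at one (rev, node) pair.
NULLREV = -1
NULLID = b"\x00" * 20


def base_mapping():
    # the always-valid entries, analogous to _quick_access_changeid_null
    pair = (NULLREV, NULLID)
    return {b'null': pair, NULLREV: pair, NULLID: pair}


def with_wc_parent(mapping, rev, node):
    # copy before updating so the shared base mapping stays untouched,
    # mirroring how _quick_access_changeid copies before .update()
    quick = dict(mapping)
    pair = (rev, node)
    quick[rev] = pair
    quick[node] = pair
    quick[b'.'] = pair
    return quick


m = with_wc_parent(base_mapping(), 5, b"\x01" * 20)
```

Looking the same pair up under any alias then costs a single dict access, which is the point of the fast path.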
    def __getitem__(self, changeid):
        # dealing with special cases
        if changeid is None:
            return context.workingctx(self)
        if isinstance(changeid, context.basectx):
            return changeid

        # dealing with multiple revisions
        if isinstance(changeid, slice):
            # wdirrev isn't contiguous so the slice shouldn't include it
            return [
                self[i]
                for i in pycompat.xrange(*changeid.indices(len(self)))
                if i not in self.changelog.filteredrevs
            ]

        # dealing with some special values
        quick_access = self._quick_access_changeid.get(changeid)
        if quick_access is not None:
            rev, node = quick_access
            return context.changectx(self, rev, node, maybe_filtered=False)
        if changeid == b'tip':
            node = self.changelog.tip()
            rev = self.changelog.rev(node)
            return context.changectx(self, rev, node)

        # dealing with arbitrary values
        try:
            if isinstance(changeid, int):
                node = self.changelog.node(changeid)
                rev = changeid
            elif changeid == b'.':
                # this is a hack to delay/avoid loading obsmarkers
                # when we know that '.' won't be hidden
                node = self.dirstate.p1()
                rev = self.unfiltered().changelog.rev(node)
            elif len(changeid) == 20:
                try:
                    node = changeid
                    rev = self.changelog.rev(changeid)
                except error.FilteredLookupError:
                    changeid = hex(changeid)  # for the error message
                    raise
                except LookupError:
                    # check if it might have come from damaged dirstate
                    #
                    # XXX we could avoid the unfiltered if we had a recognizable
                    # exception for filtered changeset access
                    if (
                        self.local()
                        and changeid in self.unfiltered().dirstate.parents()
                    ):
                        msg = _(b"working directory has unknown parent '%s'!")
                        raise error.Abort(msg % short(changeid))
                    changeid = hex(changeid)  # for the error message
                    raise

            elif len(changeid) == 40:
                node = bin(changeid)
                rev = self.changelog.rev(node)
            else:
                raise error.ProgrammingError(
                    b"unsupported changeid '%s' of type %s"
                    % (changeid, pycompat.bytestr(type(changeid)))
                )

            return context.changectx(self, rev, node)

        except (error.FilteredIndexError, error.FilteredLookupError):
            raise error.FilteredRepoLookupError(
                _(b"filtered revision '%s'") % pycompat.bytestr(changeid)
            )
        except (IndexError, LookupError):
            raise error.RepoLookupError(
                _(b"unknown revision '%s'") % pycompat.bytestr(changeid)
            )
        except error.WdirUnsupported:
            return context.workingctx(self)
    def __contains__(self, changeid):
        """True if the given changeid exists

        error.AmbiguousPrefixLookupError is raised if an ambiguous node
        prefix is specified.
        """
        try:
            self[changeid]
            return True
        except error.RepoLookupError:
            return False

    def __nonzero__(self):
        return True

    __bool__ = __nonzero__

    def __len__(self):
        # no need to pay the cost of repoview.changelog
        unfi = self.unfiltered()
        return len(unfi.changelog)

    def __iter__(self):
        return iter(self.changelog)
    def revs(self, expr, *args):
        '''Find revisions matching a revset.

        The revset is specified as a string ``expr`` that may contain
        %-formatting to escape certain types. See ``revsetlang.formatspec``.

        Revset aliases from the configuration are not expanded. To expand
        user aliases, consider calling ``scmutil.revrange()`` or
        ``repo.anyrevs([expr], user=True)``.

        Returns a smartset.abstractsmartset, which is a list-like interface
        that contains integer revisions.
        '''
        tree = revsetlang.spectree(expr, *args)
        return revset.makematcher(tree)(self)

    def set(self, expr, *args):
        '''Find revisions matching a revset and emit changectx instances.

        This is a convenience wrapper around ``revs()`` that iterates the
        result and is a generator of changectx instances.

        Revset aliases from the configuration are not expanded. To expand
        user aliases, consider calling ``scmutil.revrange()``.
        '''
        for r in self.revs(expr, *args):
            yield self[r]

    def anyrevs(self, specs, user=False, localalias=None):
        '''Find revisions matching one of the given revsets.

        Revset aliases from the configuration are not expanded by default. To
        expand user aliases, specify ``user=True``. To provide some local
        definitions overriding user aliases, set ``localalias`` to
        ``{name: definitionstring}``.
        '''
        if specs == [b'null']:
            return revset.baseset([nullrev])
        if specs == [b'.']:
            quick_data = self._quick_access_changeid.get(b'.')
            if quick_data is not None:
                return revset.baseset([quick_data[0]])
        if user:
            m = revset.matchany(
                self.ui,
                specs,
                lookup=revset.lookupfn(self),
                localalias=localalias,
            )
        else:
            m = revset.matchany(None, specs, localalias=localalias)
        return m(self)
    def url(self):
        return b'file:' + self.root

    def hook(self, name, throw=False, **args):
        """Call a hook, passing this repo instance.

        This is a convenience method to aid invoking hooks. Extensions likely
        won't call this unless they have registered a custom hook or are
        replacing code that is expected to call a hook.
        """
        return hook.hook(self.ui, self, name, throw, **args)
    @filteredpropertycache
    def _tagscache(self):
        '''Returns a tagscache object that contains various tags related
        caches.'''

        # This simplifies its cache management by having one decorated
        # function (this one) and the rest simply fetch things from it.
        class tagscache(object):
            def __init__(self):
                # These two define the set of tags for this repository. tags
                # maps tag name to node; tagtypes maps tag name to 'global' or
                # 'local'. (Global tags are defined by .hgtags across all
                # heads, and local tags are defined in .hg/localtags.)
                # They constitute the in-memory cache of tags.
                self.tags = self.tagtypes = None

                self.nodetagscache = self.tagslist = None

        cache = tagscache()
        cache.tags, cache.tagtypes = self._findtags()

        return cache

    def tags(self):
        '''return a mapping of tag to node'''
        t = {}
        if self.changelog.filteredrevs:
            tags, tt = self._findtags()
        else:
            tags = self._tagscache.tags
        rev = self.changelog.rev
        for k, v in pycompat.iteritems(tags):
            try:
                # ignore tags to unknown nodes
                rev(v)
                t[k] = v
            except (error.LookupError, ValueError):
                pass
        return t

    def _findtags(self):
        '''Do the hard work of finding tags. Return a pair of dicts
        (tags, tagtypes) where tags maps tag name to node, and tagtypes
        maps tag name to a string like \'global\' or \'local\'.
        Subclasses or extensions are free to add their own tags, but
        should be aware that the returned dicts will be retained for the
        duration of the localrepo object.'''

        # XXX what tagtype should subclasses/extensions use? Currently
        # mq and bookmarks add tags, but do not set the tagtype at all.
        # Should each extension invent its own tag type? Should there
        # be one tagtype for all such "virtual" tags? Or is the status
        # quo fine?

        # map tag name to (node, hist)
        alltags = tagsmod.findglobaltags(self.ui, self)
        # map tag name to tag type
        tagtypes = dict((tag, b'global') for tag in alltags)

        tagsmod.readlocaltags(self.ui, self, alltags, tagtypes)

        # Build the return dicts. Have to re-encode tag names because
        # the tags module always uses UTF-8 (in order not to lose info
        # writing to the cache), but the rest of Mercurial wants them in
        # local encoding.
        tags = {}
        for (name, (node, hist)) in pycompat.iteritems(alltags):
            if node != nullid:
                tags[encoding.tolocal(name)] = node
        tags[b'tip'] = self.changelog.tip()
        tagtypes = dict(
            [
                (encoding.tolocal(name), value)
                for (name, value) in pycompat.iteritems(tagtypes)
            ]
        )
        return (tags, tagtypes)

    def tagtype(self, tagname):
        '''
        return the type of the given tag. result can be:

        'local'  : a local tag
        'global' : a global tag
        None     : tag does not exist
        '''

        return self._tagscache.tagtypes.get(tagname)

    def tagslist(self):
        '''return a list of tags ordered by revision'''
        if not self._tagscache.tagslist:
            l = []
            for t, n in pycompat.iteritems(self.tags()):
                l.append((self.changelog.rev(n), t, n))
            self._tagscache.tagslist = [(t, n) for r, t, n in sorted(l)]

        return self._tagscache.tagslist

    def nodetags(self, node):
        '''return the tags associated with a node'''
        if not self._tagscache.nodetagscache:
            nodetagscache = {}
            for t, n in pycompat.iteritems(self._tagscache.tags):
                nodetagscache.setdefault(n, []).append(t)
            for tags in pycompat.itervalues(nodetagscache):
                tags.sort()
            self._tagscache.nodetagscache = nodetagscache
        return self._tagscache.nodetagscache.get(node, [])
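`nodetags` builds its reverse index lazily: the cached tag-to-node mapping is inverted once into node-to-sorted-tag-list with `setdefault`, then reused on every later call. The inversion step in isolation, with made-up tag names and node ids:

```python
def invert_tags(tags):
    # invert {tag: node} into {node: sorted list of tags}, the same
    # setdefault-then-sort shape nodetags uses to fill nodetagscache
    nodetags = {}
    for t, n in tags.items():
        nodetags.setdefault(n, []).append(t)
    for ts in nodetags.values():
        ts.sort()
    return nodetags


idx = invert_tags({b'tip': b'n1', b'v1.0': b'n1', b'v0.9': b'n2'})
```

Paying the inversion cost once keeps each subsequent `nodetags(node)` call a plain dict lookup.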
    def nodebookmarks(self, node):
        """return the list of bookmarks pointing to the specified node"""
        return self._bookmarks.names(node)

    def branchmap(self):
        '''returns a dictionary {branch: [branchheads]} with branchheads
        ordered by increasing revision number'''
        return self._branchcaches[self]

    @unfilteredmethod
    def revbranchcache(self):
        if not self._revbranchcache:
            self._revbranchcache = branchmap.revbranchcache(self.unfiltered())
        return self._revbranchcache

    def branchtip(self, branch, ignoremissing=False):
        '''return the tip node for a given branch

        If ignoremissing is True, then this method will not raise an error.
        This is helpful for callers that only expect None for a missing branch
        (e.g. namespace).

        '''
        try:
            return self.branchmap().branchtip(branch)
        except KeyError:
            if not ignoremissing:
                raise error.RepoLookupError(_(b"unknown branch '%s'") % branch)
            else:
                pass

    def lookup(self, key):
        node = scmutil.revsymbol(self, key).node()
        if node is None:
            raise error.RepoLookupError(_(b"unknown revision '%s'") % key)
        return node

    def lookupbranch(self, key):
        if self.branchmap().hasbranch(key):
            return key

        return scmutil.revsymbol(self, key).branch()

    def known(self, nodes):
        cl = self.changelog
        get_rev = cl.index.get_rev
        filtered = cl.filteredrevs
        result = []
        for n in nodes:
            r = get_rev(n)
            resp = not (r is None or r in filtered)
            result.append(resp)
        return result

    def local(self):
        return self

    def publishing(self):
        # it's safe (and desirable) to trust the publish flag unconditionally
        # so that we don't finalize changes shared between users via ssh or nfs
        return self.ui.configbool(b'phases', b'publish', untrusted=True)

    def cancopy(self):
        # so statichttprepo's override of local() works
        if not self.local():
            return False
        if not self.publishing():
            return True
        # if publishing we can't copy if there is filtered content
        return not self.filtered(b'visible').changelog.filteredrevs

    def shared(self):
        '''the type of shared repository (None if not shared)'''
        if self.sharedpath != self.path:
            return b'store'
        return None

    def wjoin(self, f, *insidef):
        return self.vfs.reljoin(self.root, f, *insidef)

    def setparents(self, p1, p2=nullid):
        self[None].setparents(p1, p2)
        self._quick_access_changeid_invalidate()

    def filectx(self, path, changeid=None, fileid=None, changectx=None):
        """changeid must be a changeset revision, if specified.
        fileid can be a file revision or node."""
        return context.filectx(
            self, path, changeid, fileid, changectx=changectx
        )

    def getcwd(self):
        return self.dirstate.getcwd()

    def pathto(self, f, cwd=None):
        return self.dirstate.pathto(f, cwd)

    def _loadfilter(self, filter):
        if filter not in self._filterpats:
            l = []
            for pat, cmd in self.ui.configitems(filter):
                if cmd == b'!':
                    continue
                mf = matchmod.match(self.root, b'', [pat])
                fn = None
                params = cmd
                for name, filterfn in pycompat.iteritems(self._datafilters):
                    if cmd.startswith(name):
                        fn = filterfn
                        params = cmd[len(name) :].lstrip()
                        break
                if not fn:
                    fn = lambda s, c, **kwargs: procutil.filter(s, c)
                    fn.__name__ = 'commandfilter'
                # Wrap old filters not supporting keyword arguments
                if not pycompat.getargspec(fn)[2]:
                    oldfn = fn
                    fn = lambda s, c, oldfn=oldfn, **kwargs: oldfn(s, c)
                    fn.__name__ = 'compat-' + oldfn.__name__
                l.append((mf, fn, params))
            self._filterpats[filter] = l
        return self._filterpats[filter]

    def _filter(self, filterpats, filename, data):
        for mf, fn, cmd in filterpats:
            if mf(filename):
                self.ui.debug(
                    b"filtering %s through %s\n"
                    % (filename, cmd or pycompat.sysbytes(fn.__name__))
                )
                data = fn(data, cmd, ui=self.ui, repo=self, filename=filename)
                break

        return data
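`_loadfilter` compiles each configured pattern into an ordered list of `(matcher, fn, params)` entries, and `_filter` applies only the first entry whose matcher accepts the filename, then stops. The same first-match dispatch shape, reduced to plain callables (the filters here are hypothetical, not Mercurial's encode/decode filters):

```python
def run_filters(filterpats, filename, data):
    # apply the first filter whose matcher accepts the filename; later
    # entries are skipped, mirroring _filter's break after one hit
    for matches, fn, params in filterpats:
        if matches(filename):
            return fn(data, params)
    return data


pats = [
    # entry 1: normalize line endings in .txt files
    (lambda f: f.endswith(b'.txt'), lambda d, p: d.replace(b'\r\n', b'\n'), None),
    # entry 2: catch-all that upper-cases everything else
    (lambda f: True, lambda d, p: d.upper(), None),
]

out = run_filters(pats, b'a.txt', b'x\r\ny')
```

Order matters in this scheme: the catch-all entry only runs for files the more specific pattern rejected.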
    @unfilteredpropertycache
    def _encodefilterpats(self):
        return self._loadfilter(b'encode')

    @unfilteredpropertycache
    def _decodefilterpats(self):
        return self._loadfilter(b'decode')

    def adddatafilter(self, name, filter):
        self._datafilters[name] = filter

    def wread(self, filename):
        if self.wvfs.islink(filename):
            data = self.wvfs.readlink(filename)
        else:
            data = self.wvfs.read(filename)
        return self._filter(self._encodefilterpats, filename, data)

    def wwrite(self, filename, data, flags, backgroundclose=False, **kwargs):
        """write ``data`` into ``filename`` in the working directory

        This returns the length of the written (possibly decoded) data.
        """
        data = self._filter(self._decodefilterpats, filename, data)
        if b'l' in flags:
            self.wvfs.symlink(data, filename)
        else:
            self.wvfs.write(
                filename, data, backgroundclose=backgroundclose, **kwargs
            )
            if b'x' in flags:
                self.wvfs.setflags(filename, False, True)
            else:
                self.wvfs.setflags(filename, False, False)
        return len(data)

    def wwritedata(self, filename, data):
        return self._filter(self._decodefilterpats, filename, data)

    def currenttransaction(self):
        """return the current transaction or None if none exists"""
        if self._transref:
            tr = self._transref()
        else:
            tr = None

        if tr and tr.running():
            return tr
        return None
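`currenttransaction` keeps only a weak reference to the transaction (to avoid a repo/transaction reference cycle), so it must both dereference the weakref and confirm `tr.running()` before handing the transaction back. A standalone sketch of that double check, with a hypothetical `Txn` stand-in:

```python
import weakref


class Txn(object):
    # hypothetical stand-in for a transaction object with a running() probe
    def __init__(self):
        self._running = True

    def running(self):
        return self._running


def current(transref):
    # mirror currenttransaction(): dereference the weakref (it yields None
    # once the transaction is gone), then require running() to be true
    tr = transref() if transref else None
    if tr and tr.running():
        return tr
    return None


t = Txn()
ref = weakref.ref(t)
```

A finished transaction and a garbage-collected one both read as `None`, which is exactly the behavior callers of `currenttransaction` rely on.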
    def transaction(self, desc, report=None):
        if self.ui.configbool(b'devel', b'all-warnings') or self.ui.configbool(
            b'devel', b'check-locks'
        ):
            if self._currentlock(self._lockref) is None:
                raise error.ProgrammingError(b'transaction requires locking')
        tr = self.currenttransaction()
        if tr is not None:
            return tr.nest(name=desc)

        # abort here if the journal already exists
        if self.svfs.exists(b"journal"):
            raise error.RepoError(
                _(b"abandoned transaction found"),
                hint=_(b"run 'hg recover' to clean up transaction"),
            )

        idbase = b"%.40f#%f" % (random.random(), time.time())
        ha = hex(hashutil.sha1(idbase).digest())
        txnid = b'TXN:' + ha
        self.hook(b'pretxnopen', throw=True, txnname=desc, txnid=txnid)

        self._writejournal(desc)
        renames = [(vfs, x, undoname(x)) for vfs, x in self._journalfiles()]
        if report:
            rp = report
        else:
            rp = self.ui.warn
        vfsmap = {b'plain': self.vfs, b'store': self.svfs}  # root of .hg/
        # we must avoid cyclic reference between repo and transaction.
        reporef = weakref.ref(self)
        # Code to track tag movement
        #
        # Since tags are all handled as file content, it is actually quite hard
        # to track these movements from a code perspective. So we fall back to
        # tracking at the repository level. One could envision tracking changes
        # to the '.hgtags' file through changegroup apply, but that fails to
        # cope with cases where a transaction exposes new heads without a
        # changegroup being involved (eg: phase movement).
        #
        # For now, we gate the feature behind a flag since it likely comes
        # with performance impacts. The current code runs more often than
        # needed and does not use caches as much as it could. The current
        # focus is on the behavior of the feature, so we disable it by
        # default. The flag will be removed when we are happy with the
        # performance impact.
        #
        # Once this feature is no longer experimental, move the following
        # documentation to the appropriate help section:
        #
        # The ``HG_TAG_MOVED`` variable will be set if the transaction touched
        # tags (new or changed or deleted tags). In addition the details of
        # these changes are made available in a file at:
        #     ``REPOROOT/.hg/changes/tags.changes``.
        # Make sure you check for HG_TAG_MOVED before reading that file as it
        # might exist from a previous transaction even if no tags were touched
        # in this one. Changes are recorded in a line-based format::
        #
        #   <action> <hex-node> <tag-name>\n
        #
        # Actions are defined as follows:
        #   "-R": tag is removed,
        #   "+A": tag is added,
        #   "-M": tag is moved (old value),
        #   "+M": tag is moved (new value),
        tracktags = lambda x: None
        # experimental config: experimental.hook-track-tags
        shouldtracktags = self.ui.configbool(
            b'experimental', b'hook-track-tags'
        )
        if desc != b'strip' and shouldtracktags:
            oldheads = self.changelog.headrevs()

            def tracktags(tr2):
                repo = reporef()
                oldfnodes = tagsmod.fnoderevs(repo.ui, repo, oldheads)
                newheads = repo.changelog.headrevs()
                newfnodes = tagsmod.fnoderevs(repo.ui, repo, newheads)
                # note: we compare lists here. As we do it only once,
                # building a set would not be cheaper.
                changes = tagsmod.difftags(repo.ui, repo, oldfnodes, newfnodes)
                if changes:
                    tr2.hookargs[b'tag_moved'] = b'1'
                    with repo.vfs(
                        b'changes/tags.changes', b'w', atomictemp=True
                    ) as changesfile:
                        # note: we do not register the file to the transaction
                        # because we need it to still exist when the
                        # transaction is closed (for txnclose hooks)
                        tagsmod.writediff(changesfile, changes)

        def validate(tr2):
            """will run pre-closing hooks"""
            # XXX the transaction API is a bit lacking here so we take a hacky
            # path for now
            #
            # We cannot add this as a "pending" hook since the 'tr.hookargs'
            # dict is copied before these run. In addition we need the data
            # available to in-memory hooks too.
            #
            # Moreover, we also need to make sure this runs before txnclose
            # hooks and there is no "pending" mechanism that would execute
            # logic only if hooks are about to run.
            #
            # Fixing this limitation of the transaction is also needed to track
            # other families of changes (bookmarks, phases, obsolescence).
            #
            # This will have to be fixed before we remove the experimental
            # gating.
            tracktags(tr2)
            repo = reporef()

            singleheadopt = (b'experimental', b'single-head-per-branch')
            singlehead = repo.ui.configbool(*singleheadopt)
            if singlehead:
                singleheadsub = repo.ui.configsuboptions(*singleheadopt)[1]
                accountclosed = singleheadsub.get(
                    b"account-closed-heads", False
                )
                scmutil.enforcesinglehead(repo, tr2, desc, accountclosed)
            if hook.hashook(repo.ui, b'pretxnclose-bookmark'):
                for name, (old, new) in sorted(
                    tr.changes[b'bookmarks'].items()
                ):
                    args = tr.hookargs.copy()
                    args.update(bookmarks.preparehookargs(name, old, new))
                    repo.hook(
                        b'pretxnclose-bookmark',
                        throw=True,
                        **pycompat.strkwargs(args)
                    )
            if hook.hashook(repo.ui, b'pretxnclose-phase'):
                cl = repo.unfiltered().changelog
                for rev, (old, new) in tr.changes[b'phases'].items():
                    args = tr.hookargs.copy()
                    node = hex(cl.node(rev))
                    args.update(phases.preparehookargs(node, old, new))
                    repo.hook(
                        b'pretxnclose-phase',
                        throw=True,
                        **pycompat.strkwargs(args)
                    )

            repo.hook(
                b'pretxnclose', throw=True, **pycompat.strkwargs(tr.hookargs)
            )

        def releasefn(tr, success):
            repo = reporef()
            if repo is None:
                # If the repo has been GC'd (and this release function is being
                # called from transaction.__del__), there's not much we can do,
                # so just leave the unfinished transaction there and let the
                # user run `hg recover`.
                return
            if success:
                # this should be explicitly invoked here, because
                # in-memory changes aren't written out at closing
                # transaction, if tr.addfilegenerator (via
                # dirstate.write or so) isn't invoked while
                # transaction running
                repo.dirstate.write(None)
            else:
                # discard all changes (including ones already written
                # out) in this transaction
                narrowspec.restorebackup(self, b'journal.narrowspec')
                narrowspec.restorewcbackup(self, b'journal.narrowspec.dirstate')
                repo.dirstate.restorebackup(None, b'journal.dirstate')

            repo.invalidate(clearfilecache=True)

        tr = transaction.transaction(
            rp,
            self.svfs,
            vfsmap,
            b"journal",
            b"undo",
            aftertrans(renames),
            self.store.createmode,
            validator=validate,
            releasefn=releasefn,
            checkambigfiles=_cachedfiles,
            name=desc,
        )
        tr.changes[b'origrepolen'] = len(self)
        tr.changes[b'obsmarkers'] = set()
        tr.changes[b'phases'] = {}
        tr.changes[b'bookmarks'] = {}

        tr.hookargs[b'txnid'] = txnid
        tr.hookargs[b'txnname'] = desc
        # note: writing the fncache only during finalize means that the file
        # is outdated when running hooks. As fncache is used for streaming
        # clones, this is not expected to break anything that happens during
        # the hooks.
        tr.addfinalize(b'flush-fncache', self.store.write)

        def txnclosehook(tr2):
            """To be run if transaction is successful, will schedule a hook run
            """
            # Don't reference tr2 in hook() so we don't hold a reference.
            # This reduces memory consumption when there are multiple
            # transactions per lock. This can likely go away if issue5045
            # fixes the function accumulation.
            hookargs = tr2.hookargs

            def hookfunc(unused_success):
                repo = reporef()
                if hook.hashook(repo.ui, b'txnclose-bookmark'):
                    bmchanges = sorted(tr.changes[b'bookmarks'].items())
                    for name, (old, new) in bmchanges:
                        args = tr.hookargs.copy()
                        args.update(bookmarks.preparehookargs(name, old, new))
                        repo.hook(
                            b'txnclose-bookmark',
                            throw=False,
                            **pycompat.strkwargs(args)
                        )

                if hook.hashook(repo.ui, b'txnclose-phase'):
                    cl = repo.unfiltered().changelog
                    phasemv = sorted(tr.changes[b'phases'].items())
                    for rev, (old, new) in phasemv:
                        args = tr.hookargs.copy()
                        node = hex(cl.node(rev))
                        args.update(phases.preparehookargs(node, old, new))
                        repo.hook(
                            b'txnclose-phase',
                            throw=False,
                            **pycompat.strkwargs(args)
                        )

                repo.hook(
                    b'txnclose', throw=False, **pycompat.strkwargs(hookargs)
                )

            reporef()._afterlock(hookfunc)

        tr.addfinalize(b'txnclose-hook', txnclosehook)
        # Include a leading "-" to make it happen before the transaction summary
        # reports registered via scmutil.registersummarycallback() whose names
        # are 00-txnreport etc. That way, the caches will be warm when the
        # callbacks run.
        tr.addpostclose(b'-warm-cache', self._buildcacheupdater(tr))

        def txnaborthook(tr2):
            """To be run if transaction is aborted
            """
            reporef().hook(
                b'txnabort', throw=False, **pycompat.strkwargs(tr2.hookargs)
            )

        tr.addabort(b'txnabort-hook', txnaborthook)
        # avoid eager cache invalidation. in-memory data should be identical
        # to stored data if transaction has no error.
        tr.addpostclose(b'refresh-filecachestats', self._refreshfilecachestats)
        self._transref = weakref.ref(tr)
        scmutil.registersummarycallback(self, tr, desc)
        return tr

    def _journalfiles(self):
        return (
            (self.svfs, b'journal'),
            (self.svfs, b'journal.narrowspec'),
            (self.vfs, b'journal.narrowspec.dirstate'),
            (self.vfs, b'journal.dirstate'),
            (self.vfs, b'journal.branch'),
            (self.vfs, b'journal.desc'),
            (bookmarks.bookmarksvfs(self), b'journal.bookmarks'),
            (self.svfs, b'journal.phaseroots'),
        )

    def undofiles(self):
        return [(vfs, undoname(x)) for vfs, x in self._journalfiles()]

    @unfilteredmethod
    def _writejournal(self, desc):
        self.dirstate.savebackup(None, b'journal.dirstate')
        narrowspec.savewcbackup(self, b'journal.narrowspec.dirstate')
        narrowspec.savebackup(self, b'journal.narrowspec')
        self.vfs.write(
            b"journal.branch", encoding.fromlocal(self.dirstate.branch())
        )
        self.vfs.write(b"journal.desc", b"%d\n%s\n" % (len(self), desc))
        bookmarksvfs = bookmarks.bookmarksvfs(self)
        bookmarksvfs.write(
            b"journal.bookmarks", bookmarksvfs.tryread(b"bookmarks")
        )
        self.svfs.write(b"journal.phaseroots", self.svfs.tryread(b"phaseroots"))
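The ``journal.desc`` payload written above is ``b"%d\n%s\n" % (len(self), desc)``: the pre-transaction repository length, then the transaction description. A small sketch of the inverse (the function name is illustrative; `_rollback` below does this inline when reading ``undo.desc``):

```python
def parse_journal_desc(data):
    """Split b"<old repo length>\n<desc>\n" into (int, bytes)."""
    lines = data.split(b'\n')
    return int(lines[0]), lines[1]
```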

    def recover(self):
        with self.lock():
            if self.svfs.exists(b"journal"):
                self.ui.status(_(b"rolling back interrupted transaction\n"))
                vfsmap = {
                    b'': self.svfs,
                    b'plain': self.vfs,
                }
                transaction.rollback(
                    self.svfs,
                    vfsmap,
                    b"journal",
                    self.ui.warn,
                    checkambigfiles=_cachedfiles,
                )
                self.invalidate()
                return True
            else:
                self.ui.warn(_(b"no interrupted transaction available\n"))
                return False

    def rollback(self, dryrun=False, force=False):
        wlock = lock = dsguard = None
        try:
            wlock = self.wlock()
            lock = self.lock()
            if self.svfs.exists(b"undo"):
                dsguard = dirstateguard.dirstateguard(self, b'rollback')

                return self._rollback(dryrun, force, dsguard)
            else:
                self.ui.warn(_(b"no rollback information available\n"))
                return 1
        finally:
            release(dsguard, lock, wlock)

    @unfilteredmethod  # Until we get smarter cache management
    def _rollback(self, dryrun, force, dsguard):
        ui = self.ui
        try:
            args = self.vfs.read(b'undo.desc').splitlines()
            (oldlen, desc, detail) = (int(args[0]), args[1], None)
            if len(args) >= 3:
                detail = args[2]
            oldtip = oldlen - 1

            if detail and ui.verbose:
                msg = _(
                    b'repository tip rolled back to revision %d'
                    b' (undo %s: %s)\n'
                ) % (oldtip, desc, detail)
            else:
                msg = _(
                    b'repository tip rolled back to revision %d (undo %s)\n'
                ) % (oldtip, desc)
        except IOError:
            msg = _(b'rolling back unknown transaction\n')
            desc = None

        if not force and self[b'.'] != self[b'tip'] and desc == b'commit':
            raise error.Abort(
                _(
                    b'rollback of last commit while not checked out '
                    b'may lose data'
                ),
                hint=_(b'use -f to force'),
            )

        ui.status(msg)
        if dryrun:
            return 0

        parents = self.dirstate.parents()
        self.destroying()
        vfsmap = {b'plain': self.vfs, b'': self.svfs}
        transaction.rollback(
            self.svfs, vfsmap, b'undo', ui.warn, checkambigfiles=_cachedfiles
        )
        bookmarksvfs = bookmarks.bookmarksvfs(self)
        if bookmarksvfs.exists(b'undo.bookmarks'):
            bookmarksvfs.rename(
                b'undo.bookmarks', b'bookmarks', checkambig=True
            )
        if self.svfs.exists(b'undo.phaseroots'):
            self.svfs.rename(b'undo.phaseroots', b'phaseroots', checkambig=True)
        self.invalidate()

        has_node = self.changelog.index.has_node
        parentgone = any(not has_node(p) for p in parents)
        if parentgone:
            # prevent dirstateguard from overwriting already restored one
            dsguard.close()

            narrowspec.restorebackup(self, b'undo.narrowspec')
            narrowspec.restorewcbackup(self, b'undo.narrowspec.dirstate')
            self.dirstate.restorebackup(None, b'undo.dirstate')
            try:
                branch = self.vfs.read(b'undo.branch')
                self.dirstate.setbranch(encoding.tolocal(branch))
            except IOError:
                ui.warn(
                    _(
                        b'named branch could not be reset: '
                        b'current branch is still \'%s\'\n'
                    )
                    % self.dirstate.branch()
                )

            parents = tuple([p.rev() for p in self[None].parents()])
            if len(parents) > 1:
                ui.status(
                    _(
                        b'working directory now based on '
                        b'revisions %d and %d\n'
                    )
                    % parents
                )
            else:
                ui.status(
                    _(b'working directory now based on revision %d\n') % parents
                )
            mergemod.mergestate.clean(self, self[b'.'].node())

        # TODO: if we know which new heads may result from this rollback, pass
        # them to destroy(), which will prevent the branchhead cache from being
        # invalidated.
        self.destroyed()
        return 0

    def _buildcacheupdater(self, newtransaction):
        """called during transaction to build the callback updating cache

        Lives on the repository to help extensions that might want to augment
        this logic. For this purpose, the created transaction is passed to the
        method.
        """
        # we must avoid cyclic reference between repo and transaction.
        reporef = weakref.ref(self)

        def updater(tr):
            repo = reporef()
            repo.updatecaches(tr)

        return updater

    @unfilteredmethod
    def updatecaches(self, tr=None, full=False):
        """warm appropriate caches

        If this function is called after a transaction closed, the transaction
        will be available in the 'tr' argument. This can be used to selectively
        update caches relevant to the changes in that transaction.

        If 'full' is set, make sure all caches the function knows about have
        up-to-date data. Even the ones usually loaded more lazily.
        """
        if tr is not None and tr.hookargs.get(b'source') == b'strip':
            # During strip, many caches are invalid but
            # later call to `destroyed` will refresh them.
            return

        if tr is None or tr.changes[b'origrepolen'] < len(self):
            # accessing the 'served' branchmap should refresh all the others,
            self.ui.debug(b'updating the branch cache\n')
            self.filtered(b'served').branchmap()
            self.filtered(b'served.hidden').branchmap()

        if full:
            unfi = self.unfiltered()
            rbc = unfi.revbranchcache()
            for r in unfi.changelog:
                rbc.branchinfo(r)
            rbc.write()

            # ensure the working copy parents are in the manifestfulltextcache
            for ctx in self[b'.'].parents():
                ctx.manifest()  # accessing the manifest is enough

            # accessing fnode cache warms the cache
            tagsmod.fnoderevs(self.ui, unfi, unfi.changelog.revs())
            # accessing tags warms the cache
            self.tags()
            self.filtered(b'served').tags()

            # The `full` arg is documented as updating even the lazily-loaded
            # caches immediately, so we're forcing a write to cause these caches
            # to be warmed up even if they haven't explicitly been requested
            # yet (if they've never been used by hg, they won't ever have been
            # written, even if they're a subset of another kind of cache that
            # *has* been used).
            for filt in repoview.filtertable.keys():
                filtered = self.filtered(filt)
                filtered.branchmap().write(filtered)

2523 def invalidatecaches(self):
2525 def invalidatecaches(self):
2524
2526
2525 if '_tagscache' in vars(self):
2527 if '_tagscache' in vars(self):
2526 # can't use delattr on proxy
2528 # can't use delattr on proxy
2527 del self.__dict__['_tagscache']
2529 del self.__dict__['_tagscache']
2528
2530
2529 self._branchcaches.clear()
2531 self._branchcaches.clear()
2530 self.invalidatevolatilesets()
2532 self.invalidatevolatilesets()
2531 self._sparsesignaturecache.clear()
2533 self._sparsesignaturecache.clear()
2532
2534
2533 def invalidatevolatilesets(self):
2535 def invalidatevolatilesets(self):
2534 self.filteredrevcache.clear()
2536 self.filteredrevcache.clear()
2535 obsolete.clearobscaches(self)
2537 obsolete.clearobscaches(self)
2536 self._quick_access_changeid_invalidate()
2538 self._quick_access_changeid_invalidate()
2537
2539
2538 def invalidatedirstate(self):
2540 def invalidatedirstate(self):
2539 '''Invalidates the dirstate, causing the next call to dirstate
2541 '''Invalidates the dirstate, causing the next call to dirstate
2540 to check if it was modified since the last time it was read,
2542 to check if it was modified since the last time it was read,
2541 rereading it if it has.
2543 rereading it if it has.
2542
2544
2543 This is different to dirstate.invalidate() that it doesn't always
2545 This is different to dirstate.invalidate() that it doesn't always
2544 rereads the dirstate. Use dirstate.invalidate() if you want to
2546 rereads the dirstate. Use dirstate.invalidate() if you want to
2545 explicitly read the dirstate again (i.e. restoring it to a previous
2547 explicitly read the dirstate again (i.e. restoring it to a previous
2546 known good state).'''
2548 known good state).'''
2547 if hasunfilteredcache(self, 'dirstate'):
2549 if hasunfilteredcache(self, 'dirstate'):
2548 for k in self.dirstate._filecache:
2550 for k in self.dirstate._filecache:
2549 try:
2551 try:
2550 delattr(self.dirstate, k)
2552 delattr(self.dirstate, k)
2551 except AttributeError:
2553 except AttributeError:
2552 pass
2554 pass
2553 delattr(self.unfiltered(), 'dirstate')
2555 delattr(self.unfiltered(), 'dirstate')
2554
2556
2555 def invalidate(self, clearfilecache=False):
2557 def invalidate(self, clearfilecache=False):
2556 '''Invalidates both store and non-store parts other than dirstate
2558 '''Invalidates both store and non-store parts other than dirstate
2557
2559
2558 If a transaction is running, invalidation of store is omitted,
2560 If a transaction is running, invalidation of store is omitted,
2559 because discarding in-memory changes might cause inconsistency
2561 because discarding in-memory changes might cause inconsistency
2560 (e.g. incomplete fncache causes unintentional failure, but
2562 (e.g. incomplete fncache causes unintentional failure, but
2561 redundant one doesn't).
2563 redundant one doesn't).
2562 '''
2564 '''
2563 unfiltered = self.unfiltered() # all file caches are stored unfiltered
2565 unfiltered = self.unfiltered() # all file caches are stored unfiltered
2564 for k in list(self._filecache.keys()):
2566 for k in list(self._filecache.keys()):
2565 # dirstate is invalidated separately in invalidatedirstate()
2567 # dirstate is invalidated separately in invalidatedirstate()
2566 if k == b'dirstate':
2568 if k == b'dirstate':
2567 continue
2569 continue
2568 if (
2570 if (
2569 k == b'changelog'
2571 k == b'changelog'
2570 and self.currenttransaction()
2572 and self.currenttransaction()
2571 and self.changelog._delayed
2573 and self.changelog._delayed
2572 ):
2574 ):
2573 # The changelog object may store unwritten revisions. We don't
2575 # The changelog object may store unwritten revisions. We don't
2574 # want to lose them.
2576 # want to lose them.
2575 # TODO: Solve the problem instead of working around it.
2577 # TODO: Solve the problem instead of working around it.
2576 continue
2578 continue
2577
2579
2578 if clearfilecache:
2580 if clearfilecache:
2579 del self._filecache[k]
2581 del self._filecache[k]
2580 try:
2582 try:
2581 delattr(unfiltered, k)
2583 delattr(unfiltered, k)
2582 except AttributeError:
2584 except AttributeError:
2583 pass
2585 pass
2584 self.invalidatecaches()
2586 self.invalidatecaches()
2585 if not self.currenttransaction():
2587 if not self.currenttransaction():
2586 # TODO: Changing contents of store outside transaction
2588 # TODO: Changing contents of store outside transaction
2587 # causes inconsistency. We should make in-memory store
2589 # causes inconsistency. We should make in-memory store
2588 # changes detectable, and abort if changed.
2590 # changes detectable, and abort if changed.
2589 self.store.invalidatecaches()
2591 self.store.invalidatecaches()
2590
2592
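The try/delattr/except AttributeError dance above clears attributes that a `propertycache`-style descriptor stored on the instance `__dict__`. A minimal standalone sketch of that invalidation pattern (the `propertycache` and `Repo` classes here are illustrative stand-ins, not Mercurial's actual implementations):

```python
class propertycache(object):
    """Compute a value once, then serve it from the instance __dict__."""

    def __init__(self, func):
        self.func = func
        self.name = func.__name__

    def __get__(self, obj, type=None):
        value = self.func(obj)
        obj.__dict__[self.name] = value  # cache on the instance
        return value


class Repo(object):
    computed = 0  # counts how many times the cache was (re)filled

    @propertycache
    def changelog(self):
        Repo.computed += 1
        return ['rev0', 'rev1']

    def invalidate(self):
        # drop any cached value; ignore attributes never computed
        try:
            delattr(self, 'changelog')
        except AttributeError:
            pass


repo = Repo()
repo.changelog      # first access computes and caches
repo.changelog      # served from the instance __dict__, no recompute
repo.invalidate()   # cache dropped; safe even if nothing was cached
repo.changelog      # recomputed on next access
```

Because the descriptor defines only `__get__`, the cached instance attribute shadows it until `invalidate()` deletes it, which is exactly why `delattr` (rather than resetting a flag) is the natural invalidation primitive here.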
    def invalidateall(self):
        '''Fully invalidates both store and non-store parts, causing the
        subsequent operation to reread any outside changes.'''
        # extensions should hook this to invalidate their caches
        self.invalidate()
        self.invalidatedirstate()

    @unfilteredmethod
    def _refreshfilecachestats(self, tr):
        """Reload stats of cached files so that they are flagged as valid"""
        for k, ce in self._filecache.items():
            k = pycompat.sysstr(k)
            if k == 'dirstate' or k not in self.__dict__:
                continue
            ce.refresh()

    def _lock(
        self,
        vfs,
        lockname,
        wait,
        releasefn,
        acquirefn,
        desc,
        inheritchecker=None,
        parentenvvar=None,
    ):
        parentlock = None
        # the contents of parentenvvar are used by the underlying lock to
        # determine whether it can be inherited
        if parentenvvar is not None:
            parentlock = encoding.environ.get(parentenvvar)

        timeout = 0
        warntimeout = 0
        if wait:
            timeout = self.ui.configint(b"ui", b"timeout")
            warntimeout = self.ui.configint(b"ui", b"timeout.warn")
        # internal config: ui.signal-safe-lock
        signalsafe = self.ui.configbool(b'ui', b'signal-safe-lock')

        l = lockmod.trylock(
            self.ui,
            vfs,
            lockname,
            timeout,
            warntimeout,
            releasefn=releasefn,
            acquirefn=acquirefn,
            desc=desc,
            inheritchecker=inheritchecker,
            parentlock=parentlock,
            signalsafe=signalsafe,
        )
        return l

    def _afterlock(self, callback):
        """add a callback to be run when the repository is fully unlocked

        The callback will be executed when the outermost lock is released
        (with wlock being higher level than 'lock')."""
        for ref in (self._wlockref, self._lockref):
            l = ref and ref()
            if l and l.held:
                l.postrelease.append(callback)
                break
        else:  # no lock has been found
            callback(True)

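`_afterlock` relies on Python's `for ... else`: the `else` branch runs only when the loop finished without `break`, i.e. when no held lock was found, in which case the callback fires immediately instead of being deferred. A standalone sketch of that dispatch (the `Lock` class and names are invented for illustration):

```python
import weakref


class Lock(object):
    def __init__(self):
        self.held = True
        self.postrelease = []  # callbacks deferred until release

    def release(self):
        self.held = False
        for cb in self.postrelease:
            cb(True)


def afterlock(lockrefs, callback):
    """Attach callback to the outermost held lock, else run it now."""
    for ref in lockrefs:
        l = ref and ref()  # resolve the weakref if there is one
        if l and l.held:
            l.postrelease.append(callback)
            break
    else:  # loop completed without break: no held lock found
        callback(True)


ran = []
lock = Lock()
afterlock([weakref.ref(lock)], lambda ok: ran.append('deferred'))
assert ran == []  # deferred until the lock is released
lock.release()
afterlock([None], lambda ok: ran.append('immediate'))
```

The `ref and ref()` idiom tolerates both a `None` slot (lock never taken) and a dead weakref (lock already garbage-collected), treating both as "no lock".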
    def lock(self, wait=True):
        '''Lock the repository store (.hg/store) and return a weak reference
        to the lock. Use this before modifying the store (e.g. committing or
        stripping). If you are opening a transaction, get a lock as well.

        If both 'lock' and 'wlock' must be acquired, ensure you always acquire
        'wlock' first to avoid a dead-lock hazard.'''
        l = self._currentlock(self._lockref)
        if l is not None:
            l.lock()
            return l

        l = self._lock(
            vfs=self.svfs,
            lockname=b"lock",
            wait=wait,
            releasefn=None,
            acquirefn=self.invalidate,
            desc=_(b'repository %s') % self.origroot,
        )
        self._lockref = weakref.ref(l)
        return l

    def _wlockchecktransaction(self):
        if self.currenttransaction() is not None:
            raise error.LockInheritanceContractViolation(
                b'wlock cannot be inherited in the middle of a transaction'
            )

    def wlock(self, wait=True):
        '''Lock the non-store parts of the repository (everything under
        .hg except .hg/store) and return a weak reference to the lock.

        Use this before modifying files in .hg.

        If both 'lock' and 'wlock' must be acquired, ensure you always acquire
        'wlock' first to avoid a dead-lock hazard.'''
        l = self._wlockref and self._wlockref()
        if l is not None and l.held:
            l.lock()
            return l

        # We do not need to check for non-waiting lock acquisition. Such an
        # acquisition would not cause a dead-lock as it would just fail.
        if wait and (
            self.ui.configbool(b'devel', b'all-warnings')
            or self.ui.configbool(b'devel', b'check-locks')
        ):
            if self._currentlock(self._lockref) is not None:
                self.ui.develwarn(b'"wlock" acquired after "lock"')

        def unlock():
            if self.dirstate.pendingparentchange():
                self.dirstate.invalidate()
            else:
                self.dirstate.write(None)

            self._filecache[b'dirstate'].refresh()

        l = self._lock(
            self.vfs,
            b"wlock",
            wait,
            unlock,
            self.invalidatedirstate,
            _(b'working directory of %s') % self.origroot,
            inheritchecker=self._wlockchecktransaction,
            parentenvvar=b'HG_WLOCK_LOCKER',
        )
        self._wlockref = weakref.ref(l)
        return l

    def _currentlock(self, lockref):
        """Returns the lock if it's held, or None if it's not."""
        if lockref is None:
            return None
        l = lockref()
        if l is None or not l.held:
            return None
        return l

    def currentwlock(self):
        """Returns the wlock if it's held, or None if it's not."""
        return self._currentlock(self._wlockref)

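`lock()` and `wlock()` keep only a weak reference to the lock object, so a lock that every caller has released can be garbage-collected rather than pinned by the repository; `_currentlock` resolves the weakref and checks `held` to decide whether an existing lock can be reacquired re-entrantly. A minimal sketch of that reuse pattern (the `Lock` class and `getlock` function are illustrative, not Mercurial's API):

```python
import weakref


class Lock(object):
    def __init__(self):
        self.held = 1  # acquisition depth

    def lock(self):
        self.held += 1  # re-entrant acquisition


def currentlock(lockref):
    """Return the lock if the weakref is live and held, else None."""
    if lockref is None:
        return None
    l = lockref()
    if l is None or not l.held:
        return None
    return l


lockref = None


def getlock():
    global lockref
    l = currentlock(lockref)
    if l is not None:
        l.lock()  # reuse the existing lock re-entrantly
        return l
    l = Lock()  # otherwise acquire a fresh lock
    lockref = weakref.ref(l)
    return l


a = getlock()  # fresh lock, depth 1
b = getlock()  # same object, depth bumped to 2
```

Returning the strong reference to the caller while storing only the weakref is the key trick: ownership (and therefore lifetime) stays with the callers, not with the repository object.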
    def _filecommit(
        self,
        fctx,
        manifest1,
        manifest2,
        linkrev,
        tr,
        changelist,
        includecopymeta,
    ):
        """
        commit an individual file as part of a larger transaction
        """

        fname = fctx.path()
        fparent1 = manifest1.get(fname, nullid)
        fparent2 = manifest2.get(fname, nullid)
        if isinstance(fctx, context.filectx):
            node = fctx.filenode()
            if node in [fparent1, fparent2]:
                self.ui.debug(b'reusing %s filelog entry\n' % fname)
                if (
                    fparent1 != nullid
                    and manifest1.flags(fname) != fctx.flags()
                ) or (
                    fparent2 != nullid
                    and manifest2.flags(fname) != fctx.flags()
                ):
                    changelist.append(fname)
                return node

        flog = self.file(fname)
        meta = {}
        cfname = fctx.copysource()
        if cfname and cfname != fname:
            # Mark the new revision of this file as a copy of another
            # file. This copy data will effectively act as a parent
            # of this new revision. If this is a merge, the first
            # parent will be the nullid (meaning "look up the copy data")
            # and the second one will be the other parent. For example:
            #
            # 0 --- 1 --- 3   rev1 changes file foo
            #   \       /     rev2 renames foo to bar and changes it
            #    \- 2 -/      rev3 should have bar with all changes and
            #                 should record that bar descends from
            #                 bar in rev2 and foo in rev1
            #
            # this allows this merge to succeed:
            #
            # 0 --- 1 --- 3   rev4 reverts the content change from rev2
            #   \       /     merging rev3 and rev4 should use bar@rev2
            #    \- 2 --- 4   as the merge base
            #

            cnode = manifest1.get(cfname)
            newfparent = fparent2

            if manifest2:  # branch merge
                if fparent2 == nullid or cnode is None:  # copied on remote side
                    if cfname in manifest2:
                        cnode = manifest2[cfname]
                        newfparent = fparent1

            # Here, we used to search backwards through history to try to find
            # where the file copy came from if the source of a copy was not in
            # the parent directory. However, this doesn't actually make sense to
            # do (what does a copy from something not in your working copy even
            # mean?) and it causes bugs (eg, issue4476). Instead, we will warn
            # the user that copy information was dropped, so if they didn't
            # expect this outcome it can be fixed, but this is the correct
            # behavior in this circumstance.

            if cnode:
                self.ui.debug(
                    b" %s: copy %s:%s\n" % (fname, cfname, hex(cnode))
                )
                if includecopymeta:
                    meta[b"copy"] = cfname
                    meta[b"copyrev"] = hex(cnode)
                fparent1, fparent2 = nullid, newfparent
            else:
                self.ui.warn(
                    _(
                        b"warning: can't find ancestor for '%s' "
                        b"copied from '%s'!\n"
                    )
                    % (fname, cfname)
                )

        elif fparent1 == nullid:
            fparent1, fparent2 = fparent2, nullid
        elif fparent2 != nullid:
            # is one parent an ancestor of the other?
            fparentancestors = flog.commonancestorsheads(fparent1, fparent2)
            if fparent1 in fparentancestors:
                fparent1, fparent2 = fparent2, nullid
            elif fparent2 in fparentancestors:
                fparent2 = nullid

        # is the file changed?
        text = fctx.data()
        if fparent2 != nullid or flog.cmp(fparent1, text) or meta:
            changelist.append(fname)
            return flog.add(text, meta, tr, linkrev, fparent1, fparent2)
        # are just the flags changed during merge?
        elif fname in manifest1 and manifest1.flags(fname) != fctx.flags():
            changelist.append(fname)

        return fparent1

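The final `elif` chain in `_filecommit` normalizes the filelog parents: a null first parent is swapped with the second, and a parent that is an ancestor of the other is dropped so the file is not recorded as a spurious merge. A toy version of that normalization, where `nullid` and the `ancestors` callable are simplified stand-ins for Mercurial's `nullid` and `commonancestorsheads` (here a crude lexicographic rule standing in for a real linear history):

```python
nullid = b'\0' * 20


def normalizeparents(fparent1, fparent2, ancestors):
    """ancestors(a, b) -> set of common-ancestor heads of a and b."""
    if fparent1 == nullid:
        # a null first parent is replaced by the second parent
        fparent1, fparent2 = fparent2, nullid
    elif fparent2 != nullid:
        heads = ancestors(fparent1, fparent2)
        if fparent1 in heads:      # p1 is an ancestor of p2: keep p2 only
            fparent1, fparent2 = fparent2, nullid
        elif fparent2 in heads:    # p2 is an ancestor of p1: keep p1 only
            fparent2 = nullid
    return fparent1, fparent2


# toy linear history: the earlier (lexicographically smaller) node is
# the sole common-ancestor head of any pair
linear = lambda x, y: {min(x, y)}

assert normalizeparents(nullid, b'a', linear) == (b'a', nullid)
assert normalizeparents(b'a', b'b', linear) == (b'b', nullid)
assert normalizeparents(b'b', b'a', linear) == (b'b', nullid)
```

In every case the result has at most one real parent, which is what lets the subsequent `fparent2 != nullid` test stand in for "this file revision is a true merge".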
    def checkcommitpatterns(self, wctx, match, status, fail):
        """check for commit arguments that aren't committable"""
        if match.isexact() or match.prefix():
            matched = set(status.modified + status.added + status.removed)

            for f in match.files():
                f = self.dirstate.normalize(f)
                if f == b'.' or f in matched or f in wctx.substate:
                    continue
                if f in status.deleted:
                    fail(f, _(b'file not found!'))
                # Is it a directory that exists or used to exist?
                if self.wvfs.isdir(f) or wctx.p1().hasdir(f):
                    d = f + b'/'
                    for mf in matched:
                        if mf.startswith(d):
                            break
                    else:
                        fail(f, _(b"no match under directory!"))
                elif f not in self.dirstate:
                    fail(f, _(b"file not tracked!"))

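The directory test above accepts a pattern naming a directory only if at least one changed file sorts under it, using a `startswith` prefix scan over the matched set. As a standalone sketch (the file names and `checkdirpattern` helper are hypothetical):

```python
def checkdirpattern(f, matched, fail):
    """Fail unless some matched file lives under directory f."""
    d = f + b'/'  # the trailing slash avoids matching b'src2' for b'src'
    for mf in matched:
        if mf.startswith(d):
            break  # something was committed under f
    else:
        fail(f, b"no match under directory!")


errors = []
matched = {b'src/main.py', b'README'}
checkdirpattern(b'src', matched, lambda f, msg: errors.append(f))
checkdirpattern(b'docs', matched, lambda f, msg: errors.append(f))
```

Appending the trailing `b'/'` before the prefix test is the detail that keeps sibling directories with a shared name prefix from matching each other.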
2877 @unfilteredmethod
2879 @unfilteredmethod
2878 def commit(
2880 def commit(
2879 self,
2881 self,
2880 text=b"",
2882 text=b"",
2881 user=None,
2883 user=None,
2882 date=None,
2884 date=None,
2883 match=None,
2885 match=None,
2884 force=False,
2886 force=False,
2885 editor=None,
2887 editor=None,
2886 extra=None,
2888 extra=None,
2887 ):
2889 ):
2888 """Add a new revision to current repository.
2890 """Add a new revision to current repository.
2889
2891
2890 Revision information is gathered from the working directory,
2892 Revision information is gathered from the working directory,
2891 match can be used to filter the committed files. If editor is
2893 match can be used to filter the committed files. If editor is
2892 supplied, it is called to get a commit message.
2894 supplied, it is called to get a commit message.
2893 """
2895 """
2894 if extra is None:
2896 if extra is None:
2895 extra = {}
2897 extra = {}
2896
2898
2897 def fail(f, msg):
2899 def fail(f, msg):
2898 raise error.Abort(b'%s: %s' % (f, msg))
2900 raise error.Abort(b'%s: %s' % (f, msg))
2899
2901
2900 if not match:
2902 if not match:
2901 match = matchmod.always()
2903 match = matchmod.always()
2902
2904
2903 if not force:
2905 if not force:
2904 match.bad = fail
2906 match.bad = fail
2905
2907
2906 # lock() for recent changelog (see issue4368)
2908 # lock() for recent changelog (see issue4368)
2907 with self.wlock(), self.lock():
2909 with self.wlock(), self.lock():
2908 wctx = self[None]
2910 wctx = self[None]
2909 merge = len(wctx.parents()) > 1
2911 merge = len(wctx.parents()) > 1
2910
2912
2911 if not force and merge and not match.always():
2913 if not force and merge and not match.always():
2912 raise error.Abort(
2914 raise error.Abort(
2913 _(
2915 _(
2914 b'cannot partially commit a merge '
2916 b'cannot partially commit a merge '
2915 b'(do not specify files or patterns)'
2917 b'(do not specify files or patterns)'
2916 )
2918 )
2917 )
2919 )
2918
2920
2919 status = self.status(match=match, clean=force)
2921 status = self.status(match=match, clean=force)
2920 if force:
2922 if force:
2921 status.modified.extend(
2923 status.modified.extend(
2922 status.clean
2924 status.clean
2923 ) # mq may commit clean files
2925 ) # mq may commit clean files
2924
2926
2925 # check subrepos
2927 # check subrepos
2926 subs, commitsubs, newstate = subrepoutil.precommit(
2928 subs, commitsubs, newstate = subrepoutil.precommit(
2927 self.ui, wctx, status, match, force=force
2929 self.ui, wctx, status, match, force=force
2928 )
2930 )
2929
2931
2930 # make sure all explicit patterns are matched
2932 # make sure all explicit patterns are matched
2931 if not force:
2933 if not force:
2932 self.checkcommitpatterns(wctx, match, status, fail)
2934 self.checkcommitpatterns(wctx, match, status, fail)
2933
2935
2934 cctx = context.workingcommitctx(
2936 cctx = context.workingcommitctx(
2935 self, status, text, user, date, extra
2937 self, status, text, user, date, extra
2936 )
2938 )
2937
2939
2938 # internal config: ui.allowemptycommit
2940 # internal config: ui.allowemptycommit
2939 allowemptycommit = (
2941 allowemptycommit = (
2940 wctx.branch() != wctx.p1().branch()
2942 wctx.branch() != wctx.p1().branch()
2941 or extra.get(b'close')
2943 or extra.get(b'close')
2942 or merge
2944 or merge
2943 or cctx.files()
2945 or cctx.files()
2944 or self.ui.configbool(b'ui', b'allowemptycommit')
2946 or self.ui.configbool(b'ui', b'allowemptycommit')
2945 )
2947 )
2946 if not allowemptycommit:
2948 if not allowemptycommit:
2947 return None
2949 return None
2948
2950
2949 if merge and cctx.deleted():
2951 if merge and cctx.deleted():
2950 raise error.Abort(_(b"cannot commit merge with missing files"))
2952 raise error.Abort(_(b"cannot commit merge with missing files"))
2951
2953
2952 ms = mergemod.mergestate.read(self)
2954 ms = mergemod.mergestate.read(self)
2953 mergeutil.checkunresolved(ms)
2955 mergeutil.checkunresolved(ms)
2954
2956
2955 if editor:
2957 if editor:
2956 cctx._text = editor(self, cctx, subs)
2958 cctx._text = editor(self, cctx, subs)
2957 edited = text != cctx._text
2959 edited = text != cctx._text
2958
2960
2959 # Save commit message in case this transaction gets rolled back
2961 # Save commit message in case this transaction gets rolled back
2960 # (e.g. by a pretxncommit hook). Leave the content alone on
2962 # (e.g. by a pretxncommit hook). Leave the content alone on
2961 # the assumption that the user will use the same editor again.
2963 # the assumption that the user will use the same editor again.
2962 msgfn = self.savecommitmessage(cctx._text)
2964 msgfn = self.savecommitmessage(cctx._text)
2963
2965
2964 # commit subs and write new state
2966 # commit subs and write new state
2965 if subs:
2967 if subs:
2966 uipathfn = scmutil.getuipathfn(self)
2968 uipathfn = scmutil.getuipathfn(self)
2967 for s in sorted(commitsubs):
2969 for s in sorted(commitsubs):
2968 sub = wctx.sub(s)
2970 sub = wctx.sub(s)
2969 self.ui.status(
2971 self.ui.status(
2970 _(b'committing subrepository %s\n')
2972 _(b'committing subrepository %s\n')
2971 % uipathfn(subrepoutil.subrelpath(sub))
2973 % uipathfn(subrepoutil.subrelpath(sub))
2972 )
2974 )
2973 sr = sub.commit(cctx._text, user, date)
2975 sr = sub.commit(cctx._text, user, date)
2974 newstate[s] = (newstate[s][0], sr)
2976 newstate[s] = (newstate[s][0], sr)
2975 subrepoutil.writestate(self, newstate)
2977 subrepoutil.writestate(self, newstate)
2976
2978
2977 p1, p2 = self.dirstate.parents()
2979 p1, p2 = self.dirstate.parents()
2978 hookp1, hookp2 = hex(p1), (p2 != nullid and hex(p2) or b'')
2980 hookp1, hookp2 = hex(p1), (p2 != nullid and hex(p2) or b'')
2979 try:
2981 try:
2980 self.hook(
2982 self.hook(
2981 b"precommit", throw=True, parent1=hookp1, parent2=hookp2
2983 b"precommit", throw=True, parent1=hookp1, parent2=hookp2
2982 )
2984 )
2983 with self.transaction(b'commit'):
2985 with self.transaction(b'commit'):
2984 ret = self.commitctx(cctx, True)
2986 ret = self.commitctx(cctx, True)
2985 # update bookmarks, dirstate and mergestate
2987 # update bookmarks, dirstate and mergestate
2986 bookmarks.update(self, [p1, p2], ret)
2988 bookmarks.update(self, [p1, p2], ret)
2987 cctx.markcommitted(ret)
2989 cctx.markcommitted(ret)
2988 ms.reset()
2990 ms.reset()
2989 except: # re-raises
2991 except: # re-raises
2990 if edited:
2992 if edited:
2991 self.ui.write(
2993 self.ui.write(
2992 _(b'note: commit message saved in %s\n') % msgfn
2994 _(b'note: commit message saved in %s\n') % msgfn
2993 )
2995 )
2994 raise
2996 raise
2995
2997
2996 def commithook(unused_success):
2998 def commithook(unused_success):
2997 # hack for command that use a temporary commit (eg: histedit)
2999 # hack for command that use a temporary commit (eg: histedit)
2998 # temporary commit got stripped before hook release
3000 # temporary commit got stripped before hook release
2999 if self.changelog.hasnode(ret):
3001 if self.changelog.hasnode(ret):
3000 self.hook(
3002 self.hook(
3001 b"commit", node=hex(ret), parent1=hookp1, parent2=hookp2
3003 b"commit", node=hex(ret), parent1=hookp1, parent2=hookp2
3002 )
3004 )
3003
3005
3004 self._afterlock(commithook)
3006 self._afterlock(commithook)
3005 return ret
3007 return ret

    @unfilteredmethod
    def commitctx(self, ctx, error=False, origctx=None):
        """Add a new revision to the current repository.
        Revision information is passed via the context argument.

        ctx.files() should list all files involved in this commit, i.e.
        modified/added/removed files. On merge, it may be wider than the
        ctx.files() to be committed, since any file nodes derived directly
        from p1 or p2 are excluded from the committed ctx.files().

        origctx is for convert to work around the problem that bug
        fixes to the files list in changesets change hashes. For
        convert to be the identity, it can pass an origctx and this
        function will use the same files list when it makes sense to
        do so.
        """

        p1, p2 = ctx.p1(), ctx.p2()
        user = ctx.user()

        if self.filecopiesmode == b'changeset-sidedata':
            writechangesetcopy = True
            writefilecopymeta = True
            writecopiesto = None
        else:
            writecopiesto = self.ui.config(b'experimental', b'copies.write-to')
            writefilecopymeta = writecopiesto != b'changeset-only'
            writechangesetcopy = writecopiesto in (
                b'changeset-only',
                b'compatibility',
            )
        p1copies, p2copies = None, None
        if writechangesetcopy:
            p1copies = ctx.p1copies()
            p2copies = ctx.p2copies()
        filesadded, filesremoved = None, None
        with self.lock(), self.transaction(b"commit") as tr:
            trp = weakref.proxy(tr)

            if ctx.manifestnode():
                # reuse an existing manifest revision
                self.ui.debug(b'reusing known manifest\n')
                mn = ctx.manifestnode()
                files = ctx.files()
                if writechangesetcopy:
                    filesadded = ctx.filesadded()
                    filesremoved = ctx.filesremoved()
            elif ctx.files():
                m1ctx = p1.manifestctx()
                m2ctx = p2.manifestctx()
                mctx = m1ctx.copy()

                m = mctx.read()
                m1 = m1ctx.read()
                m2 = m2ctx.read()

                # check in files
                added = []
                changed = []
                removed = list(ctx.removed())
                linkrev = len(self)
                self.ui.note(_(b"committing files:\n"))
                uipathfn = scmutil.getuipathfn(self)
                for f in sorted(ctx.modified() + ctx.added()):
                    self.ui.note(uipathfn(f) + b"\n")
                    try:
                        fctx = ctx[f]
                        if fctx is None:
                            removed.append(f)
                        else:
                            added.append(f)
                            m[f] = self._filecommit(
                                fctx,
                                m1,
                                m2,
                                linkrev,
                                trp,
                                changed,
                                writefilecopymeta,
                            )
                            m.setflag(f, fctx.flags())
                    except OSError:
                        self.ui.warn(
                            _(b"trouble committing %s!\n") % uipathfn(f)
                        )
                        raise
                    except IOError as inst:
                        errcode = getattr(inst, 'errno', errno.ENOENT)
                        if error or errcode and errcode != errno.ENOENT:
                            self.ui.warn(
                                _(b"trouble committing %s!\n") % uipathfn(f)
                            )
                            raise

                # update manifest
                removed = [f for f in removed if f in m1 or f in m2]
                drop = sorted([f for f in removed if f in m])
                for f in drop:
                    del m[f]
                if p2.rev() != nullrev:

                    @util.cachefunc
                    def mas():
                        p1n = p1.node()
                        p2n = p2.node()
                        cahs = self.changelog.commonancestorsheads(p1n, p2n)
                        if not cahs:
                            cahs = [nullrev]
                        return [self[r].manifest() for r in cahs]

                    def deletionfromparent(f):
                        # When a file is removed relative to p1 in a merge, this
                        # function determines whether the absence is due to a
                        # deletion from a parent, or whether the merge commit
                        # itself deletes the file. We decide this by doing a
                        # simplified three way merge of the manifest entry for
                        # the file. There are two ways we decide the merge
                        # itself didn't delete a file:
                        # - neither parent (nor the merge) contains the file
                        # - exactly one parent contains the file, and that
                        #   parent has the same filelog entry as the merge
                        #   ancestor (or all of them if there are two). In
                        #   other words, that parent left the file unchanged
                        #   while the other one deleted it.
                        # One way to think about this is that deleting a file is
                        # similar to emptying it, so the list of changed files
                        # should be similar either way. The computation
                        # described above is not done directly in _filecommit
                        # when creating the list of changed files; however,
                        # it does something very similar by comparing filelog
                        # nodes.
                        if f in m1:
                            return f not in m2 and all(
                                f in ma and ma.find(f) == m1.find(f)
                                for ma in mas()
                            )
                        elif f in m2:
                            return all(
                                f in ma and ma.find(f) == m2.find(f)
                                for ma in mas()
                            )
                        else:
                            return True

                    removed = [f for f in removed if not deletionfromparent(f)]
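The three-way decision made by deletionfromparent can be illustrated outside the repository with plain dicts standing in for manifests. This is a hypothetical simplified sketch, not the Mercurial manifest API: `ma.find(f)` is approximated by a dict lookup of filelog nodes.

```python
def deletion_from_parent(f, m1, m2, ancestors):
    """Return True if the absence of f is inherited from a parent,
    False if the merge commit itself deletes the file.

    m1, m2 and each entry of ancestors are plain dicts mapping
    filename -> filelog node (a simplification of manifest objects).
    """
    if f in m1:
        # p1 still has the file: this is a parent deletion only if p2
        # dropped it while p1 left it untouched relative to every
        # merge ancestor
        return f not in m2 and all(
            f in ma and ma[f] == m1[f] for ma in ancestors
        )
    elif f in m2:
        # symmetric case: p2 kept the file unchanged, p1 deleted it
        return all(f in ma and ma[f] == m2[f] for ma in ancestors)
    else:
        # neither parent has the file, so the merge didn't delete it
        return True
```

For example, if p2 deleted a file that p1 left untouched, the deletion comes from a parent and the merge's file list need not record it; if p1 had also modified the file, the merge itself is considered to delete it.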

                files = changed + removed
                md = None
                if not files:
                    # if no "files" actually changed in terms of the changelog,
                    # try hard to detect unmodified manifest entry so that the
                    # exact same commit can be reproduced later on convert.
                    md = m1.diff(m, scmutil.matchfiles(self, ctx.files()))
                    if not files and md:
                        self.ui.debug(
                            b'not reusing manifest (no file change in '
                            b'changelog, but manifest differs)\n'
                        )
                if files or md:
                    self.ui.note(_(b"committing manifest\n"))
                    # we're using narrowmatch here since it's already applied at
                    # other stages (such as dirstate.walk), so we're already
                    # ignoring things outside of narrowspec in most cases. The
                    # one case where we might have files outside the narrowspec
                    # at this point is merges, and we already error out in the
                    # case where the merge has files outside of the narrowspec,
                    # so this is safe.
                    mn = mctx.write(
                        trp,
                        linkrev,
                        p1.manifestnode(),
                        p2.manifestnode(),
                        added,
                        drop,
                        match=self.narrowmatch(),
                    )

                    if writechangesetcopy:
                        filesadded = [
                            f for f in changed if not (f in m1 or f in m2)
                        ]
                        filesremoved = removed
                else:
                    self.ui.debug(
                        b'reusing manifest from p1 (listed files '
                        b'actually unchanged)\n'
                    )
                    mn = p1.manifestnode()
            else:
                self.ui.debug(b'reusing manifest from p1 (no file change)\n')
                mn = p1.manifestnode()
                files = []

            if writecopiesto == b'changeset-only':
                # If writing only to changeset extras, use None to indicate that
                # no entry should be written. If writing to both, write an empty
                # entry to prevent the reader from falling back to reading
                # filelogs.
                p1copies = p1copies or None
                p2copies = p2copies or None
                filesadded = filesadded or None
                filesremoved = filesremoved or None

            if origctx and origctx.manifestnode() == mn:
                files = origctx.files()

            # update changelog
            self.ui.note(_(b"committing changelog\n"))
            self.changelog.delayupdate(tr)
            n = self.changelog.add(
                mn,
                files,
                ctx.description(),
                trp,
                p1.node(),
                p2.node(),
                user,
                ctx.date(),
                ctx.extra().copy(),
                p1copies,
                p2copies,
                filesadded,
                filesremoved,
            )
            xp1, xp2 = p1.hex(), p2 and p2.hex() or b''
            self.hook(
                b'pretxncommit',
                throw=True,
                node=hex(n),
                parent1=xp1,
                parent2=xp2,
            )
            # move the new commit into the proper phase
            targetphase = subrepoutil.newcommitphase(self.ui, ctx)
            if targetphase:
                # retracting the phase boundary does not alter the parent
                # changesets: if a parent has a higher phase, the resulting
                # phase will be compliant anyway.
                #
                # if the minimal phase was 0 we don't need to retract anything
                phases.registernew(self, tr, targetphase, [n])
            return n

    @unfilteredmethod
    def destroying(self):
        '''Inform the repository that nodes are about to be destroyed.
        Intended for use by strip and rollback, so there's a common
        place for anything that has to be done before destroying history.

        This is mostly useful for saving state that is in memory and waiting
        to be flushed when the current lock is released. Because a call to
        destroyed is imminent, the repo will be invalidated causing those
        changes to stay in memory (waiting for the next unlock), or vanish
        completely.
        '''
        # When using the same lock to commit and strip, the phasecache is left
        # dirty after committing. Then when we strip, the repo is invalidated,
        # causing those changes to disappear.
        if '_phasecache' in vars(self):
            self._phasecache.write()

    @unfilteredmethod
    def destroyed(self):
        '''Inform the repository that nodes have been destroyed.
        Intended for use by strip and rollback, so there's a common
        place for anything that has to be done after destroying history.
        '''
        # When one tries to:
        # 1) destroy nodes thus calling this method (e.g. strip)
        # 2) use phasecache somewhere (e.g. commit)
        #
        # then 2) will fail because the phasecache contains nodes that were
        # removed. We can either remove phasecache from the filecache,
        # causing it to reload next time it is accessed, or simply filter
        # the removed nodes now and write the updated cache.
        self._phasecache.filterunknown(self)
        self._phasecache.write()

        # refresh all repository caches
        self.updatecaches()

        # Ensure the persistent tag cache is updated. Doing it now
        # means that the tag cache only has to worry about destroyed
        # heads immediately after a strip/rollback. That in turn
        # guarantees that "cachetip == currenttip" (comparing both rev
        # and node) always means no nodes have been added or destroyed.

        # XXX this is suboptimal when qrefresh'ing: we strip the current
        # head, refresh the tag cache, then immediately add a new head.
        # But I think doing it this way is necessary for the "instant
        # tag cache retrieval" case to work.
        self.invalidate()

    def status(
        self,
        node1=b'.',
        node2=None,
        match=None,
        ignored=False,
        clean=False,
        unknown=False,
        listsubrepos=False,
    ):
        '''a convenience method that calls node1.status(node2)'''
        return self[node1].status(
            node2, match, ignored, clean, unknown, listsubrepos
        )

    def addpostdsstatus(self, ps):
        """Add a callback to run within the wlock, at the point at which status
        fixups happen.

        On status completion, callback(wctx, status) will be called with the
        wlock held, unless the dirstate has changed from underneath or the wlock
        couldn't be grabbed.

        Callbacks should not capture and use a cached copy of the dirstate --
        it might change in the meanwhile. Instead, they should access the
        dirstate via wctx.repo().dirstate.

        This list is emptied out after each status run -- extensions should
        make sure they add to this list each time dirstate.status is called.
        Extensions should also make sure they don't call this for statuses
        that don't involve the dirstate.
        """

        # The list is located here for uniqueness reasons -- it is actually
        # managed by the workingctx, but that isn't unique per-repo.
        self._postdsstatus.append(ps)

    def postdsstatus(self):
        """Used by workingctx to get the list of post-dirstate-status hooks."""
        return self._postdsstatus

    def clearpostdsstatus(self):
        """Used by workingctx to clear post-dirstate-status hooks."""
        del self._postdsstatus[:]

    def heads(self, start=None):
        if start is None:
            cl = self.changelog
            headrevs = reversed(cl.headrevs())
            return [cl.node(rev) for rev in headrevs]

        heads = self.changelog.heads(start)
        # sort the output in rev descending order
        return sorted(heads, key=self.changelog.rev, reverse=True)

    def branchheads(self, branch=None, start=None, closed=False):
        '''return a (possibly filtered) list of heads for the given branch

        Heads are returned in topological order, from newest to oldest.
        If branch is None, use the dirstate branch.
        If start is not None, return only heads reachable from start.
        If closed is True, return heads that are marked as closed as well.
        '''
        if branch is None:
            branch = self[None].branch()
        branches = self.branchmap()
        if not branches.hasbranch(branch):
            return []
        # the cache returns heads ordered lowest to highest
        bheads = list(reversed(branches.branchheads(branch, closed=closed)))
        if start is not None:
            # filter out the heads that cannot be reached from startrev
            fbheads = set(self.changelog.nodesbetween([start], bheads)[2])
            bheads = [h for h in bheads if h in fbheads]
        return bheads

    def branches(self, nodes):
        if not nodes:
            nodes = [self.changelog.tip()]
        b = []
        for n in nodes:
            t = n
            while True:
                p = self.changelog.parents(n)
                if p[1] != nullid or p[0] == nullid:
                    b.append((t, n, p[0], p[1]))
                    break
                n = p[0]
        return b

    def between(self, pairs):
        r = []

        for top, bottom in pairs:
            n, l, i = top, [], 0
            f = 1

            while n != bottom and n != nullid:
                p = self.changelog.parents(n)[0]
                if i == f:
                    l.append(n)
                    f = f * 2
                n = p
                i += 1

            r.append(l)

        return r

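between walks first parents from top toward bottom and records nodes at exponentially growing distances (1, 2, 4, 8, ...). A standalone sketch of that sampling over a linear history, using integers as hypothetical stand-ins for changelog nodes and a caller-supplied parent function (the null-revision guard is dropped here for brevity):

```python
def sample_between(top, bottom, parent):
    # collect nodes at distances 1, 2, 4, 8, ... from top, walking
    # first parents until bottom is reached
    n, l, i, f = top, [], 0, 1
    while n != bottom:
        p = parent(n)
        if i == f:
            l.append(n)
            f = f * 2
        n = p
        i += 1
    return l
```

On a linear chain 10 -> 0, `sample_between(10, 0, lambda n: n - 1)` yields the nodes at distances 1, 2, 4 and 8 from the top, i.e. `[9, 8, 6, 2]`; the exponential spacing keeps the sample logarithmic in the length of the walk.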
    def checkpush(self, pushop):
        """Extensions can override this function if additional checks have
        to be performed before pushing, or call it if they override the push
        command.
        """

    @unfilteredpropertycache
    def prepushoutgoinghooks(self):
        """Return a util.hooks instance whose hooks are called with a pushop
        (carrying repo, remote and outgoing attributes) before changesets
        are pushed.
        """
        return util.hooks()

    def pushkey(self, namespace, key, old, new):
        try:
            tr = self.currenttransaction()
            hookargs = {}
            if tr is not None:
                hookargs.update(tr.hookargs)
            hookargs = pycompat.strkwargs(hookargs)
            hookargs['namespace'] = namespace
            hookargs['key'] = key
            hookargs['old'] = old
            hookargs['new'] = new
            self.hook(b'prepushkey', throw=True, **hookargs)
        except error.HookAbort as exc:
            self.ui.write_err(_(b"pushkey-abort: %s\n") % exc)
            if exc.hint:
                self.ui.write_err(_(b"(%s)\n") % exc.hint)
            return False
        self.ui.debug(b'pushing key for "%s:%s"\n' % (namespace, key))
        ret = pushkey.push(self, namespace, key, old, new)

        def runhook(unused_success):
            self.hook(
                b'pushkey',
                namespace=namespace,
                key=key,
                old=old,
                new=new,
                ret=ret,
            )

        self._afterlock(runhook)
        return ret

    def listkeys(self, namespace):
        self.hook(b'prelistkeys', throw=True, namespace=namespace)
        self.ui.debug(b'listing keys for "%s"\n' % namespace)
        values = pushkey.list(self, namespace)
        self.hook(b'listkeys', namespace=namespace, values=values)
        return values

    def debugwireargs(self, one, two, three=None, four=None, five=None):
        '''used to test argument passing over the wire'''
        return b"%s %s %s %s %s" % (
            one,
            two,
            pycompat.bytestr(three),
            pycompat.bytestr(four),
            pycompat.bytestr(five),
        )

    def savecommitmessage(self, text):
        fp = self.vfs(b'last-message.txt', b'wb')
        try:
            fp.write(text)
        finally:
            fp.close()
        return self.pathto(fp.name[len(self.root) + 1 :])


# used to avoid circular references so destructors work
def aftertrans(files):
    renamefiles = [tuple(t) for t in files]

    def a():
        for vfs, src, dest in renamefiles:
            # if src and dest refer to the same file, vfs.rename is a no-op,
            # leaving both src and dest on disk. delete dest to make sure
            # the rename couldn't be such a no-op.
            vfs.tryunlink(dest)
            try:
                vfs.rename(src, dest)
            except OSError:  # journal file does not yet exist
                pass

    return a


def undoname(fn):
    base, name = os.path.split(fn)
    assert name.startswith(b'journal')
    return os.path.join(base, name.replace(b'journal', b'undo', 1))
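aftertrans builds a closure over (vfs, src, dest) triples and returns a callable that renames journal files to undo files. A minimal sketch of that unlink-then-rename behavior, using a hypothetical in-memory FakeVFS in place of a real Mercurial vfs (the sketch function is renamed aftertrans_sketch to avoid shadowing the real one):

```python
class FakeVFS:
    """Hypothetical in-memory stand-in for a Mercurial vfs."""

    def __init__(self, files):
        self.files = dict(files)  # name -> contents

    def tryunlink(self, name):
        self.files.pop(name, None)  # silently ignore missing files

    def rename(self, src, dest):
        if src not in self.files:
            raise OSError('no such file: %s' % src)
        self.files[dest] = self.files.pop(src)


def aftertrans_sketch(files):
    renamefiles = [tuple(t) for t in files]

    def a():
        for vfs, src, dest in renamefiles:
            vfs.tryunlink(dest)  # make sure the rename is not a no-op
            try:
                vfs.rename(src, dest)
            except OSError:  # journal file does not yet exist
                pass

    return a
```

Running the returned callable replaces a stale undo file with the journal's contents, and a missing journal is tolerated, mirroring the comment in aftertrans about the rename being skipped when the journal does not yet exist.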
3503
3505
3504
3506
def instance(ui, path, create, intents=None, createopts=None):
    localpath = util.urllocalpath(path)
    if create:
        createrepository(ui, localpath, createopts=createopts)

    return makelocalrepository(ui, localpath, intents=intents)


def islocal(path):
    return True


def defaultcreateopts(ui, createopts=None):
    """Populate the default creation options for a repository.

    A dictionary of explicitly requested creation options can be passed
    in. Missing keys will be populated.
    """
    createopts = dict(createopts or {})

    if b'backend' not in createopts:
        # experimental config: storage.new-repo-backend
        createopts[b'backend'] = ui.config(b'storage', b'new-repo-backend')

    return createopts

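The option-defaulting pattern in `defaultcreateopts` can be shown without a `ui` object. `defaultcreateopts_sketch` and the `'revlogv1'` fallback are stand-ins for the real config lookup; the point is that the caller's dict is copied, never mutated.

```python
def defaultcreateopts_sketch(createopts=None, default_backend='revlogv1'):
    # Copy so the caller's dict is never mutated, then fill in missing
    # keys, mirroring how defaultcreateopts() consults the
    # storage.new-repo-backend config value.
    createopts = dict(createopts or {})
    createopts.setdefault('backend', default_backend)
    return createopts


opts = {'lfs': True}
print(defaultcreateopts_sketch(opts))  # → {'lfs': True, 'backend': 'revlogv1'}
print(opts)                            # caller's dict unchanged: {'lfs': True}
```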
def newreporequirements(ui, createopts):
    """Determine the set of requirements for a new local repository.

    Extensions can wrap this function to specify custom requirements for
    new repositories.
    """
    # If the repo is being created from a shared repository, we copy
    # its requirements.
    if b'sharedrepo' in createopts:
        requirements = set(createopts[b'sharedrepo'].requirements)
        if createopts.get(b'sharedrelative'):
            requirements.add(b'relshared')
        else:
            requirements.add(b'shared')

        return requirements

    if b'backend' not in createopts:
        raise error.ProgrammingError(
            b'backend key not present in createopts; '
            b'was defaultcreateopts() called?'
        )

    if createopts[b'backend'] != b'revlogv1':
        raise error.Abort(
            _(
                b'unable to determine repository requirements for '
                b'storage backend: %s'
            )
            % createopts[b'backend']
        )

    requirements = {b'revlogv1'}
    if ui.configbool(b'format', b'usestore'):
        requirements.add(b'store')
        if ui.configbool(b'format', b'usefncache'):
            requirements.add(b'fncache')
            if ui.configbool(b'format', b'dotencode'):
                requirements.add(b'dotencode')

    compengine = ui.config(b'format', b'revlog-compression')
    if compengine not in util.compengines:
        raise error.Abort(
            _(
                b'compression engine %s defined by '
                b'format.revlog-compression not available'
            )
            % compengine,
            hint=_(
                b'run "hg debuginstall" to list available '
                b'compression engines'
            ),
        )

    # zlib is the historical default and doesn't need an explicit requirement.
    elif compengine == b'zstd':
        requirements.add(b'revlog-compression-zstd')
    elif compengine != b'zlib':
        requirements.add(b'exp-compression-%s' % compengine)

    if scmutil.gdinitconfig(ui):
        requirements.add(b'generaldelta')
    if ui.configbool(b'format', b'sparse-revlog'):
        requirements.add(SPARSEREVLOG_REQUIREMENT)

    # experimental config: format.exp-use-side-data
    if ui.configbool(b'format', b'exp-use-side-data'):
        requirements.add(SIDEDATA_REQUIREMENT)
    # experimental config: format.exp-use-copies-side-data-changeset
    if ui.configbool(b'format', b'exp-use-copies-side-data-changeset'):
        requirements.add(SIDEDATA_REQUIREMENT)
        requirements.add(COPIESSDC_REQUIREMENT)
    if ui.configbool(b'experimental', b'treemanifest'):
        requirements.add(b'treemanifest')

    revlogv2 = ui.config(b'experimental', b'revlogv2')
    if revlogv2 == b'enable-unstable-format-and-corrupt-my-data':
        requirements.remove(b'revlogv1')
        # generaldelta is implied by revlogv2.
        requirements.discard(b'generaldelta')
        requirements.add(REVLOGV2_REQUIREMENT)
    # experimental config: format.internal-phase
    if ui.configbool(b'format', b'internal-phase'):
        requirements.add(b'internal-phase')

    if createopts.get(b'narrowfiles'):
        requirements.add(repository.NARROW_REQUIREMENT)

    if createopts.get(b'lfs'):
        requirements.add(b'lfs')

    if ui.configbool(b'format', b'bookmarks-in-store'):
        requirements.add(bookmarks.BOOKMARKS_IN_STORE_REQUIREMENT)

    return requirements

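The requirement-accumulation pattern above can be modelled without a `ui` object. `requirements_sketch` and the plain-dict config are hypothetical stand-ins for `ui.configbool`; the sketch keeps the real code's nesting, where `fncache` depends on `store` and `dotencode` on `fncache`.

```python
def requirements_sketch(config):
    # Toy model of newreporequirements(): start from the revlogv1
    # baseline and add one requirement per enabled boolean knob.
    requirements = {'revlogv1'}
    if config.get('usestore', True):
        requirements.add('store')
        if config.get('usefncache', True):
            requirements.add('fncache')
            if config.get('dotencode', True):
                requirements.add('dotencode')
    return requirements


print(sorted(requirements_sketch({'dotencode': False})))
# → ['fncache', 'revlogv1', 'store']
```

Disabling `usestore` also drops `fncache` and `dotencode`, which is why the real function nests these checks instead of testing each flag independently.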
def filterknowncreateopts(ui, createopts):
    """Filters a dict of repo creation options against options that are known.

    Receives a dict of repo creation options and returns a dict of those
    options that we don't know how to handle.

    This function is called as part of repository creation. If the
    returned dict contains any items, repository creation will not
    be allowed, as it means there was a request to create a repository
    with options not recognized by loaded code.

    Extensions can wrap this function to filter out creation options
    they know how to handle.
    """
    known = {
        b'backend',
        b'lfs',
        b'narrowfiles',
        b'sharedrepo',
        b'sharedrelative',
        b'shareditems',
        b'shallowfilestore',
    }

    return {k: v for k, v in createopts.items() if k not in known}

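The filter inverts the usual allow-list check: it returns what is *not* known, so an empty result means "safe to proceed". A minimal sketch with str keys (`KNOWN_CREATEOPTS`, `filter_unknown`, and the `frobnicate` option are hypothetical demo names):

```python
KNOWN_CREATEOPTS = {
    'backend',
    'lfs',
    'narrowfiles',
    'sharedrepo',
    'sharedrelative',
    'shareditems',
    'shallowfilestore',
}


def filter_unknown(createopts):
    # Keep only the options nothing in the loaded code claims to handle;
    # repository creation aborts when this result is non-empty.
    return {k: v for k, v in createopts.items() if k not in KNOWN_CREATEOPTS}


print(sorted(filter_unknown({'lfs': True, 'frobnicate': 1})))  # → ['frobnicate']
```

An extension that understands an extra option wraps the function and removes its own keys from the result before returning it, which is exactly the wrapping contract the docstring describes.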
def createrepository(ui, path, createopts=None):
    """Create a new repository in a vfs.

    ``path`` path to the new repo's working directory.
    ``createopts`` options for the new repository.

    The following keys for ``createopts`` are recognized:

    backend
       The storage backend to use.
    lfs
       Repository will be created with ``lfs`` requirement. The lfs extension
       will automatically be loaded when the repository is accessed.
    narrowfiles
       Set up repository to support narrow file storage.
    sharedrepo
       Repository object from which storage should be shared.
    sharedrelative
       Boolean indicating if the path to the shared repo should be
       stored as relative. By default, the pointer to the "parent" repo
       is stored as an absolute path.
    shareditems
       Set of items to share to the new repository (in addition to storage).
    shallowfilestore
       Indicates that storage for files should be shallow (not all ancestor
       revisions are known).
    """
    createopts = defaultcreateopts(ui, createopts=createopts)

    unknownopts = filterknowncreateopts(ui, createopts)

    if not isinstance(unknownopts, dict):
        raise error.ProgrammingError(
            b'filterknowncreateopts() did not return a dict'
        )

    if unknownopts:
        raise error.Abort(
            _(
                b'unable to create repository because of unknown '
                b'creation option: %s'
            )
            % b', '.join(sorted(unknownopts)),
            hint=_(b'is a required extension not loaded?'),
        )

    requirements = newreporequirements(ui, createopts=createopts)

    wdirvfs = vfsmod.vfs(path, expandpath=True, realpath=True)

    hgvfs = vfsmod.vfs(wdirvfs.join(b'.hg'))
    if hgvfs.exists():
        raise error.RepoError(_(b'repository %s already exists') % path)

    if b'sharedrepo' in createopts:
        sharedpath = createopts[b'sharedrepo'].sharedpath

        if createopts.get(b'sharedrelative'):
            try:
                sharedpath = os.path.relpath(sharedpath, hgvfs.base)
            except (IOError, ValueError) as e:
                # ValueError is raised on Windows if the drive letters differ
                # on each path.
                raise error.Abort(
                    _(b'cannot calculate relative path'),
                    hint=stringutil.forcebytestr(e),
                )

    if not wdirvfs.exists():
        wdirvfs.makedirs()

    hgvfs.makedir(notindexed=True)
    if b'sharedrepo' not in createopts:
        hgvfs.mkdir(b'cache')
        hgvfs.mkdir(b'wcache')

    if b'store' in requirements and b'sharedrepo' not in createopts:
        hgvfs.mkdir(b'store')

        # We create an invalid changelog outside the store so very old
        # Mercurial versions (which didn't know about the requirements
        # file) encounter an error on reading the changelog. This
        # effectively locks out old clients and prevents them from
        # mucking with a repo in an unknown format.
        #
        # The revlog header has version 2, which won't be recognized by
        # such old clients.
        hgvfs.append(
            b'00changelog.i',
            b'\0\0\0\2 dummy changelog to prevent using the old repo '
            b'layout',
        )

    scmutil.writerequires(hgvfs, requirements)

    # Write out file telling readers where to find the shared store.
    if b'sharedrepo' in createopts:
        hgvfs.write(b'sharedpath', sharedpath)

    if createopts.get(b'shareditems'):
        shared = b'\n'.join(sorted(createopts[b'shareditems'])) + b'\n'
        hgvfs.write(b'shared', shared)


def poisonrepository(repo):
    """Poison a repository instance so it can no longer be used."""
    # Perform any cleanup on the instance.
    repo.close()

    # Our strategy is to replace the type of the object with one that
    # has all attribute lookups result in error.
    #
    # But we have to allow the close() method because some constructors
    # of repos call close() on repo references.
    class poisonedrepository(object):
        def __getattribute__(self, item):
            if item == 'close':
                return object.__getattribute__(self, item)

            raise error.ProgrammingError(
                b'repo instances should not be used after unshare'
            )

        def close(self):
            pass

    # We may have a repoview, which intercepts __setattr__. So be sure
    # we operate at the lowest level possible.
    object.__setattr__(repo, '__class__', poisonedrepository)
@@ -1,260 +1,273 @@
Use hgrc within $TESTTMP

  $ HGRCPATH=`pwd`/hgrc
  $ export HGRCPATH

hide outer repo
  $ hg init

Use an alternate var for scribbling on hgrc to keep check-code from
complaining about the important settings we may be overwriting:

  $ HGRC=`pwd`/hgrc
  $ export HGRC

Basic syntax error

  $ echo "invalid" > $HGRC
  $ hg version
  hg: parse error at $TESTTMP/hgrc:1: invalid
  [255]
  $ echo "" > $HGRC

Issue1199: Can't use '%' in hgrc (eg url encoded username)

  $ hg init "foo%bar"
  $ hg clone "foo%bar" foobar
  updating to branch default
  0 files updated, 0 files merged, 0 files removed, 0 files unresolved
  $ cd foobar
  $ cat .hg/hgrc
  # example repository config (see 'hg help config' for more info)
  [paths]
  default = $TESTTMP/foo%bar

  # path aliases to other clones of this repo in URLs or filesystem paths
  # (see 'hg help config.paths' for more info)
  #
  # default:pushurl = ssh://jdoe@example.net/hg/jdoes-fork
  # my-fork = ssh://jdoe@example.net/hg/jdoes-fork
  # my-clone = /home/jdoe/jdoes-clone

  [ui]
  # name and email (local to this repository, optional), e.g.
  # username = Jane Doe <jdoe@example.com>
  $ hg paths
  default = $TESTTMP/foo%bar
  $ hg showconfig
  bundle.mainreporoot=$TESTTMP/foobar
  paths.default=$TESTTMP/foo%bar
  $ cd ..

Check %include

  $ echo '[section]' > $TESTTMP/included
  $ echo 'option = value' >> $TESTTMP/included
  $ echo '%include $TESTTMP/included' >> $HGRC
  $ hg showconfig section
  section.option=value
#if no-windows
  $ chmod u-r $TESTTMP/included
  $ hg showconfig section
  hg: parse error at $TESTTMP/hgrc:2: cannot include $TESTTMP/included (Permission denied)
  [255]
#endif

issue1829: wrong indentation

  $ echo '[foo]' > $HGRC
  $ echo ' x = y' >> $HGRC
  $ hg version
  hg: parse error at $TESTTMP/hgrc:2: x = y
  unexpected leading whitespace
  [255]

  $ "$PYTHON" -c "from __future__ import print_function; print('[foo]\nbar = a\n b\n c \n de\n fg \nbaz = bif cb \n')" \
  > > $HGRC
  $ hg showconfig foo
  foo.bar=a\nb\nc\nde\nfg
  foo.baz=bif cb

  $ FAKEPATH=/path/to/nowhere
  $ export FAKEPATH
  $ echo '%include $FAKEPATH/no-such-file' > $HGRC
  $ hg version
  Mercurial Distributed SCM (version *) (glob)
  (see https://mercurial-scm.org for more information)

  Copyright (C) 2005-* Matt Mackall and others (glob)
  This is free software; see the source for copying conditions. There is NO
  warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
  $ unset FAKEPATH

make sure global options given on the cmdline take precedence

  $ hg showconfig --config ui.verbose=True --quiet
  bundle.mainreporoot=$TESTTMP
  ui.verbose=False
  ui.debug=False
  ui.quiet=True

  $ touch foobar/untracked
  $ cat >> foobar/.hg/hgrc <<EOF
  > [ui]
  > verbose=True
  > EOF
  $ hg -R foobar st -q

username expansion

  $ olduser=$HGUSER
  $ unset HGUSER

  $ FAKEUSER='John Doe'
  $ export FAKEUSER
  $ echo '[ui]' > $HGRC
  $ echo 'username = $FAKEUSER' >> $HGRC

  $ hg init usertest
  $ cd usertest
  $ touch bar
  $ hg commit --addremove --quiet -m "added bar"
  $ hg log --template "{author}\n"
  John Doe
  $ cd ..

  $ hg showconfig
  bundle.mainreporoot=$TESTTMP
  ui.username=$FAKEUSER

  $ unset FAKEUSER
  $ HGUSER=$olduser
  $ export HGUSER

showconfig with multiple arguments

  $ echo "[alias]" > $HGRC
  $ echo "log = log -g" >> $HGRC
  $ echo "[defaults]" >> $HGRC
  $ echo "identify = -n" >> $HGRC
  $ hg showconfig alias defaults
  alias.log=log -g
  defaults.identify=-n
  $ hg showconfig alias alias
  alias.log=log -g
  $ hg showconfig alias.log alias.log
  alias.log=log -g
  $ hg showconfig alias defaults.identify
  alias.log=log -g
  defaults.identify=-n
  $ hg showconfig alias.log defaults.identify
  alias.log=log -g
  defaults.identify=-n

HGPLAIN

  $ echo "[ui]" > $HGRC
  $ echo "debug=true" >> $HGRC
  $ echo "fallbackencoding=ASCII" >> $HGRC
  $ echo "quiet=true" >> $HGRC
  $ echo "slash=true" >> $HGRC
  $ echo "traceback=true" >> $HGRC
  $ echo "verbose=true" >> $HGRC
  $ echo "style=~/.hgstyle" >> $HGRC
  $ echo "logtemplate={node}" >> $HGRC
  $ echo "[defaults]" >> $HGRC
  $ echo "identify=-n" >> $HGRC
  $ echo "[alias]" >> $HGRC
  $ echo "log=log -g" >> $HGRC

customized hgrc

  $ hg showconfig
  read config from: $TESTTMP/hgrc
  $TESTTMP/hgrc:13: alias.log=log -g
  repo: bundle.mainreporoot=$TESTTMP
  $TESTTMP/hgrc:11: defaults.identify=-n
  $TESTTMP/hgrc:2: ui.debug=true
  $TESTTMP/hgrc:3: ui.fallbackencoding=ASCII
  $TESTTMP/hgrc:4: ui.quiet=true
  $TESTTMP/hgrc:5: ui.slash=true
  $TESTTMP/hgrc:6: ui.traceback=true
  $TESTTMP/hgrc:7: ui.verbose=true
  $TESTTMP/hgrc:8: ui.style=~/.hgstyle
  $TESTTMP/hgrc:9: ui.logtemplate={node}

plain hgrc

  $ HGPLAIN=; export HGPLAIN
  $ hg showconfig --config ui.traceback=True --debug
  read config from: $TESTTMP/hgrc
  repo: bundle.mainreporoot=$TESTTMP
  --config: ui.traceback=True
  --verbose: ui.verbose=False
  --debug: ui.debug=True
  --quiet: ui.quiet=False

with environment variables

  $ PAGER=p1 EDITOR=e1 VISUAL=e2 hg showconfig --debug
  read config from: $TESTTMP/hgrc
  repo: bundle.mainreporoot=$TESTTMP
  $PAGER: pager.pager=p1
  $VISUAL: ui.editor=e2
  --verbose: ui.verbose=False
  --debug: ui.debug=True
  --quiet: ui.quiet=False

plain mode with exceptions

  $ cat > plain.py <<EOF
  > from mercurial import commands, extensions
  > def _config(orig, ui, repo, *values, **opts):
  >     ui.write(b'plain: %r\n' % ui.plain())
  >     return orig(ui, repo, *values, **opts)
  > def uisetup(ui):
  >     extensions.wrapcommand(commands.table, b'config', _config)
  > EOF
  $ echo "[extensions]" >> $HGRC
  $ echo "plain=./plain.py" >> $HGRC
  $ HGPLAINEXCEPT=; export HGPLAINEXCEPT
  $ hg showconfig --config ui.traceback=True --debug
  plain: True
  read config from: $TESTTMP/hgrc
  repo: bundle.mainreporoot=$TESTTMP
  $TESTTMP/hgrc:15: extensions.plain=./plain.py
  --config: ui.traceback=True
  --verbose: ui.verbose=False
  --debug: ui.debug=True
  --quiet: ui.quiet=False
  $ unset HGPLAIN
  $ hg showconfig --config ui.traceback=True --debug
  plain: True
  read config from: $TESTTMP/hgrc
  repo: bundle.mainreporoot=$TESTTMP
  $TESTTMP/hgrc:15: extensions.plain=./plain.py
  --config: ui.traceback=True
  --verbose: ui.verbose=False
  --debug: ui.debug=True
  --quiet: ui.quiet=False
  $ HGPLAINEXCEPT=i18n; export HGPLAINEXCEPT
  $ hg showconfig --config ui.traceback=True --debug
  plain: True
  read config from: $TESTTMP/hgrc
  repo: bundle.mainreporoot=$TESTTMP
  $TESTTMP/hgrc:15: extensions.plain=./plain.py
  --config: ui.traceback=True
  --verbose: ui.verbose=False
  --debug: ui.debug=True
  --quiet: ui.quiet=False

source of paths is not mangled

  $ cat >> $HGRCPATH <<EOF
  > [paths]
  > foo = bar
  > EOF
  $ hg showconfig --debug paths
  plain: True
  read config from: $TESTTMP/hgrc
  $TESTTMP/hgrc:17: paths.foo=$TESTTMP/bar

Test we can skip the user configuration

  $ cat >> .hg/hgrc <<EOF
  > [paths]
  > elephant = babar
  > EOF
  $ hg path
  elephant = $TESTTMP/babar
  foo = $TESTTMP/bar
  $ HGRCSKIPREPO=1 hg path
  foo = $TESTTMP/bar