======
hgrc
======

---------------------------------
configuration files for Mercurial
---------------------------------

:Author: Bryan O'Sullivan <bos@serpentine.com>
:Organization: Mercurial
:Manual section: 5
:Manual group: Mercurial Manual

.. contents::
   :backlinks: top
   :class: htmlonly


Synopsis
--------

The Mercurial system uses a set of configuration files to control
aspects of its behavior.

Files
-----

Mercurial reads configuration data from several files, if they exist.
The names of these files depend on the system on which Mercurial is
installed. ``*.rc`` files from a single directory are read in
alphabetical order, later ones overriding earlier ones. Where multiple
paths are given below, settings from earlier paths override later
ones.

| (Unix, Windows) ``<repo>/.hg/hgrc``

    Per-repository configuration options that only apply in a
    particular repository. This file is not version-controlled, and
    will not get transferred during a "clone" operation. Options in
    this file override options in all other configuration files. On
    Unix, most of this file will be ignored if it doesn't belong to a
    trusted user or to a trusted group. See the documentation for the
    trusted_ section below for more details.

| (Unix) ``$HOME/.hgrc``
| (Windows) ``%USERPROFILE%\.hgrc``
| (Windows) ``%USERPROFILE%\Mercurial.ini``
| (Windows) ``%HOME%\.hgrc``
| (Windows) ``%HOME%\Mercurial.ini``

    Per-user configuration file(s), for the user running Mercurial. On
    Windows 9x, ``%HOME%`` is replaced by ``%APPDATA%``. Options in these
    files apply to all Mercurial commands executed by this user in any
    directory. Options in these files override per-system and
    per-installation options.

| (Unix) ``/etc/mercurial/hgrc``
| (Unix) ``/etc/mercurial/hgrc.d/*.rc``

    Per-system configuration files, for the system on which Mercurial
    is running. Options in these files apply to all Mercurial commands
    executed by any user in any directory. Options in these files
    override per-installation options.

| (Unix) ``<install-root>/etc/mercurial/hgrc``
| (Unix) ``<install-root>/etc/mercurial/hgrc.d/*.rc``

    Per-installation configuration files, searched for in the
    directory where Mercurial is installed. ``<install-root>`` is the
    parent directory of the **hg** executable (or symlink) being run. For
    example, if installed in ``/shared/tools/bin/hg``, Mercurial will look
    in ``/shared/tools/etc/mercurial/hgrc``. Options in these files apply
    to all Mercurial commands executed by any user in any directory.

| (Windows) ``<install-dir>\Mercurial.ini``
| (Windows) ``<install-dir>\hgrc.d\*.rc``
| (Windows) ``HKEY_LOCAL_MACHINE\SOFTWARE\Mercurial``

    Per-installation/system configuration files, for the system on
    which Mercurial is running. Options in these files apply to all
    Mercurial commands executed by any user in any directory. Registry
    keys contain PATH-like strings, every part of which must reference
    a ``Mercurial.ini`` file or be a directory where ``*.rc`` files will
    be read. Mercurial checks each of these locations in the specified
    order until one or more configuration files are detected. If the
    pywin32 extensions are not installed, Mercurial will only look for
    site-wide configuration in ``C:\Mercurial\Mercurial.ini``.

Syntax
------

A configuration file consists of sections, led by a ``[section]`` header
and followed by ``name = value`` entries (sometimes called
``configuration keys``)::

  [spam]
  eggs=ham
  green=
     eggs

Each line contains one entry. If the lines that follow are indented,
they are treated as continuations of that entry. Leading whitespace is
removed from values. Empty lines are skipped. Lines beginning with
``#`` or ``;`` are ignored and may be used to provide comments.

Configuration keys can be set multiple times, in which case Mercurial
will use the value that was configured last. As an example::

  [spam]
  eggs=large
  ham=serrano
  eggs=small

This would set the configuration key named ``eggs`` to ``small``.

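
The last-value-wins behavior can be observed with Python's standard
library INI parser, which approximates (but is not) Mercurial's own
parser; with ``strict=False`` it accepts duplicate keys and keeps the
last one:

```python
import configparser

# Not Mercurial's parser: Python's stdlib INI reader, used here only
# to illustrate the duplicate-key rule described above. strict=False
# permits repeated option names; the last assignment wins.
cp = configparser.ConfigParser(strict=False)
cp.read_string("""
[spam]
eggs=large
ham=serrano
eggs=small
""")
print(cp.get("spam", "eggs"))  # small
```
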
It is also possible to define a section multiple times. A section can
be redefined in the same and/or in different hgrc files. For example::

  [foo]
  eggs=large
  ham=serrano
  eggs=small

  [bar]
  eggs=ham
  green=
     eggs

  [foo]
  ham=prosciutto
  eggs=medium
  bread=toasted

This would set the ``eggs``, ``ham``, and ``bread`` configuration keys
of the ``foo`` section to ``medium``, ``prosciutto``, and ``toasted``,
respectively. As you can see, the only thing that matters is the last
value that was set for each of the configuration keys.

If a configuration key is set multiple times in different
configuration files, the final value will depend on the order in which
the different configuration files are read, with settings from earlier
paths overriding later ones as described in the ``Files`` section
above.

A line of the form ``%include file`` will include ``file`` into the
current configuration file. The inclusion is recursive, which means
that included files can include other files. Filenames are relative to
the configuration file in which the ``%include`` directive is found.
Environment variables and ``~user`` constructs are expanded in
``file``. This lets you do something like::

  %include ~/.hgrc.d/$HOST.rc

to include a different configuration file on each computer you use.
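
The expansion applied to ``%include`` targets can be sketched with
Python's stdlib path helpers (an illustration, not Mercurial's
implementation; the ``HOST`` value is hypothetical):

```python
import os

# Expand ~user constructs first, then environment variables, as the
# text above describes for %include targets. These are Python's
# generic helpers, standing in for Mercurial's own expansion code.
os.environ["HOST"] = "dev1"  # hypothetical host name
path = os.path.expandvars(os.path.expanduser("~/.hgrc.d/$HOST.rc"))
print(path)  # e.g. /home/alice/.hgrc.d/dev1.rc
```
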

A line with ``%unset name`` will remove ``name`` from the current
section, if it has been set previously.

The values are either free-form text strings, lists of text strings,
or Boolean values. Boolean values can be set to true using any of "1",
"yes", "true", or "on" and to false using "0", "no", "false", or "off"
(all case insensitive).
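
Python's stdlib INI parser happens to accept the same Boolean
spellings, which makes for a quick illustration (this is not
Mercurial's code):

```python
import configparser

# configparser.getboolean accepts 1/yes/true/on and 0/no/false/off,
# case-insensitively -- the same spellings listed above.
cp = configparser.ConfigParser()
cp.read_string("[ui]\nverbose = On\ninteractive = 0\n")
print(cp.getboolean("ui", "verbose"))      # True
print(cp.getboolean("ui", "interactive"))  # False
```
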

List values are separated by whitespace or comma, except when values are
placed in double quotation marks::

  allow_read = "John Doe, PhD", brian, betty

Quotation marks can be escaped by prefixing them with a backslash. Only
a quotation mark at the beginning of a word is counted as a quotation
(e.g., ``foo"bar baz`` is the list of ``foo"bar`` and ``baz``).
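
The quoting rule can be sketched as a small tokenizer (a minimal
sketch of the rule as just described; it ignores backslash escapes and
is not Mercurial's actual parser):

```python
def parse_list(value):
    # Sketch: split on whitespace/comma, except that a double quote
    # *at the start of a word* opens a quoted item in which commas and
    # whitespace are literal. A quote mid-word is plain text.
    items, i, n = [], 0, len(value)
    while i < n:
        while i < n and value[i] in " \t,":
            i += 1  # skip separators
        if i >= n:
            break
        if value[i] == '"':
            j = i + 1
            while j < n and value[j] != '"':
                j += 1
            items.append(value[i + 1:j])
            i = j + 1
        else:
            j = i
            while j < n and value[j] not in " \t,":
                j += 1
            items.append(value[i:j])
            i = j
    return items

print(parse_list('"John Doe, PhD", brian, betty'))
# ['John Doe, PhD', 'brian', 'betty']
```
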

Sections
--------

This section describes the different sections that may appear in a
Mercurial "hgrc" file, the purpose of each section, its possible keys,
and their possible values.

``alias``
"""""""""
Defines command aliases.
Aliases allow you to define your own commands in terms of other
commands (or aliases), optionally including arguments.

Alias definitions consist of lines of the form::

  <alias> = <command> [<argument>]...

For example, this definition::

  latest = log --limit 5

creates a new command ``latest`` that shows only the five most recent
changesets. You can define subsequent aliases using earlier ones::

  stable5 = latest -b stable

.. note:: It is possible to create aliases with the same names as
   existing commands, which will then override the original
   definitions. This is almost always a bad idea!


``auth``
""""""""

Authentication credentials for HTTP authentication. This section
allows you to store usernames and passwords for use when logging
*into* HTTP servers. See the web_ configuration section if you want to
configure *who* can log in to your HTTP server.

Each line has the following format::

  <name>.<argument> = <value>

where ``<name>`` is used to group arguments into authentication
entries. Example::

  foo.prefix = hg.intevation.org/mercurial
  foo.username = foo
  foo.password = bar
  foo.schemes = http https

  bar.prefix = secure.example.org
  bar.key = path/to/file.key
  bar.cert = path/to/file.cert
  bar.schemes = https

Supported arguments:

``prefix``
  Either ``*`` or a URI prefix with or without the scheme part.
  The authentication entry with the longest matching prefix is used
  (where ``*`` matches everything and counts as a match of length
  1). If the prefix doesn't include a scheme, the match is performed
  against the URI with its scheme stripped as well, and the schemes
  argument, q.v., is then subsequently consulted.
``username``
  Optional. Username to authenticate with. If not given, and the
  remote site requires basic or digest authentication, the user will
  be prompted for it. Environment variables are expanded in the
  username, letting you do ``foo.username = $USER``.
``password``
  Optional. Password to authenticate with. If not given, and the
  remote site requires basic or digest authentication, the user
  will be prompted for it.
``key``
  Optional. PEM encoded client certificate key file. Environment
  variables are expanded in the filename.
``cert``
  Optional. PEM encoded client certificate chain file. Environment
  variables are expanded in the filename.
``schemes``
  Optional. Space separated list of URI schemes to use this
  authentication entry with. Only used if the prefix doesn't include
  a scheme. Supported schemes are http and https. They will match
  static-http and static-https respectively, as well.
  Default: https.

If no suitable authentication entry is found, the user is prompted
for credentials as usual if required by the remote.
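
The longest-matching-prefix selection described under ``prefix`` can
be sketched as follows (a hypothetical helper operating on
scheme-stripped locations; not Mercurial's implementation):

```python
def longest_prefix_entry(entries, location):
    # entries: mapping of entry name -> prefix string.
    # "*" matches everything and counts as a match of length 1;
    # otherwise the entry whose prefix is the longest match wins.
    best, best_len = None, 0
    for name, prefix in entries.items():
        if prefix == "*":
            plen = 1
        elif location.startswith(prefix):
            plen = len(prefix)
        else:
            continue
        if plen > best_len:
            best, best_len = name, plen
    return best

entries = {"foo": "hg.intevation.org/mercurial", "any": "*"}
print(longest_prefix_entry(entries, "hg.intevation.org/mercurial/repo"))
# foo  (longer match beats the length-1 wildcard)
```
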

``decode/encode``
"""""""""""""""""
Filters for transforming files on checkout/checkin. This would
typically be used for newline processing or other
localization/canonicalization of files.

Filters consist of a filter pattern followed by a filter command.
Filter patterns are globs by default, rooted at the repository root.
For example, to match any file ending in ``.txt`` in the root
directory only, use the pattern ``*.txt``. To match any file ending
in ``.c`` anywhere in the repository, use the pattern ``**.c``.
For each file only the first matching filter applies.

The filter command can start with a specifier, either ``pipe:`` or
``tempfile:``. If no specifier is given, ``pipe:`` is used by default.

A ``pipe:`` command must accept data on stdin and return the transformed
data on stdout.

Pipe example::

  [encode]
  # uncompress gzip files on checkin to improve delta compression
  # note: not necessarily a good idea, just an example
  *.gz = pipe: gunzip

  [decode]
  # recompress gzip files when writing them to the working dir (we
  # can safely omit "pipe:", because it's the default)
  *.gz = gzip

A ``tempfile:`` command is a template. The string ``INFILE`` is replaced
with the name of a temporary file that contains the data to be
filtered by the command. The string ``OUTFILE`` is replaced with the name
of an empty temporary file, where the filtered data must be written by
the command.

.. note:: The tempfile mechanism is recommended for Windows systems,
   where the standard shell I/O redirection operators often have
   strange effects and may corrupt the contents of your files.

This filter mechanism is used internally by the ``eol`` extension to
translate line ending characters between Windows (CRLF) and Unix (LF)
format. We suggest you use the ``eol`` extension for convenience.


``defaults``
""""""""""""

(defaults are deprecated. Don't use them. Use aliases instead)

Use the ``[defaults]`` section to define command defaults, i.e. the
default options/arguments to pass to the specified commands.

The following example makes :hg:`log` run in verbose mode, and
:hg:`status` show only the modified files, by default::

  [defaults]
  log = -v
  status = -m

The actual commands, instead of their aliases, must be used when
defining command defaults. The command defaults will also be applied
to the aliases of the commands defined.


``diff``
""""""""

Settings used when displaying diffs. Everything except for ``unified`` is a
Boolean and defaults to False.

``git``
  Use git extended diff format.
``nodates``
  Don't include dates in diff headers.
``showfunc``
  Show which function each change is in.
``ignorews``
  Ignore white space when comparing lines.
``ignorewsamount``
  Ignore changes in the amount of white space.
``ignoreblanklines``
  Ignore changes whose lines are all blank.
``unified``
  Number of lines of context to show.
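
A hypothetical example of these settings in use (the values are
illustrative, not recommendations)::

  [diff]
  git = True
  showfunc = True
  unified = 8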

``email``
"""""""""
Settings for extensions that send email messages.

``from``
  Optional. Email address to use in "From" header and SMTP envelope
  of outgoing messages.
``to``
  Optional. Comma-separated list of recipients' email addresses.
``cc``
  Optional. Comma-separated list of carbon copy recipients'
  email addresses.
``bcc``
  Optional. Comma-separated list of blind carbon copy recipients'
  email addresses.
``method``
  Optional. Method to use to send email messages. If value is ``smtp``
  (default), use SMTP (see the SMTP_ section for configuration).
  Otherwise, use as name of program to run that acts like sendmail
  (takes ``-f`` option for sender, list of recipients on command line,
  message on stdin). Normally, setting this to ``sendmail`` or
  ``/usr/sbin/sendmail`` is enough to use sendmail to send messages.
``charsets``
  Optional. Comma-separated list of character sets considered
  convenient for recipients. Addresses, headers, and parts not
  containing patches of outgoing messages will be encoded in the
  first character set to which conversion from local encoding
  (``$HGENCODING``, ``ui.fallbackencoding``) succeeds. If correct
  conversion fails, the text in question is sent as is. Defaults to
  empty (explicit) list.

  Order of outgoing email character sets:

  1. ``us-ascii``: always first, regardless of settings
  2. ``email.charsets``: in order given by user
  3. ``ui.fallbackencoding``: if not in email.charsets
  4. ``$HGENCODING``: if not in email.charsets
  5. ``utf-8``: always last, regardless of settings
389 |
|
389 | |||
390 | Email example:: |
|
390 | Email example:: | |
391 |
|
391 | |||
392 | [email] |
|
392 | [email] | |
393 | from = Joseph User <joe.user@example.com> |
|
393 | from = Joseph User <joe.user@example.com> | |
394 | method = /usr/sbin/sendmail |
|
394 | method = /usr/sbin/sendmail | |
395 | # charsets for western Europeans |
|
395 | # charsets for western Europeans | |
396 | # us-ascii, utf-8 omitted, as they are tried first and last |
|
396 | # us-ascii, utf-8 omitted, as they are tried first and last | |
397 | charsets = iso-8859-1, iso-8859-15, windows-1252 |
|
397 | charsets = iso-8859-1, iso-8859-15, windows-1252 | |
398 |
|
398 | |||
399 |
|
399 | |||
``extensions``
""""""""""""""

Mercurial has an extension mechanism for adding new features. To
enable an extension, create an entry for it in this section.

If you know that the extension is already in Python's search path,
you can give the name of the module, followed by ``=``, with nothing
after the ``=``.

Otherwise, give a name that you choose, followed by ``=``, followed by
the path to the ``.py`` file (including the file name extension) that
defines the extension.

To explicitly disable an extension that is enabled in an hgrc of
broader scope, prepend its path with ``!``, as in
``hgext.foo = !/ext/path`` or ``hgext.foo = !`` when no path is
supplied.

Example for ``~/.hgrc``::

  [extensions]
  # (the mq extension will get loaded from Mercurial's path)
  hgext.mq =
  # (this extension will get loaded from the file specified)
  myfeature = ~/.hgext/myfeature.py


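The ``!`` syntax can also turn off an extension that a configuration file
of broader scope has enabled. For instance, assuming a system-wide hgrc
enables the bundled inotify extension, a user could disable it for their
own account::

  [extensions]
  # the system-wide hgrc enables hgext.inotify; turn it off for this user
  hgext.inotify = !
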
``hostfingerprints``
""""""""""""""""""""

Fingerprints of the certificates of known HTTPS servers.
An HTTPS connection to a server with a fingerprint configured here will
only succeed if the server's certificate matches the fingerprint.
This is very similar to how SSH known hosts work.
The fingerprint is the SHA-1 hash value of the DER encoded certificate.
The CA chain and ``web.cacerts`` are not used for servers with a
fingerprint.

For example::

  [hostfingerprints]
  hg.intevation.org = 38:76:52:7c:87:26:9a:8f:4a:f8:d3:de:08:45:3b:ea:d6:4b:ee:cc

This feature is only supported when using Python 2.6 or later.


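One way to obtain the SHA-1 fingerprint of a server's certificate is the
``openssl`` command-line tool; the host below is the one from the example
above, and the exact output format may vary between OpenSSL versions::

  $ openssl s_client -connect hg.intevation.org:443 < /dev/null 2>/dev/null \
        | openssl x509 -noout -fingerprint -sha1
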
``format``
""""""""""

``usestore``
  Enable or disable the "store" repository format which improves
  compatibility with systems that fold case or otherwise mangle
  filenames. Enabled by default. Disabling this option will allow
  you to store longer filenames in some situations at the expense of
  compatibility, and ensures that the on-disk format of newly created
  repositories will be compatible with Mercurial before version 0.9.4.

``usefncache``
  Enable or disable the "fncache" repository format which enhances
  the "store" repository format (which has to be enabled to use
  fncache) to allow longer filenames and avoids using Windows
  reserved names, e.g. "nul". Enabled by default. Disabling this
  option ensures that the on-disk format of newly created
  repositories will be compatible with Mercurial before version 1.1.

``dotencode``
  Enable or disable the "dotencode" repository format which enhances
  the "fncache" repository format (which has to be enabled to use
  dotencode) to avoid issues with filenames starting with ``._`` on
  Mac OS X and spaces on Windows. Enabled by default. Disabling this
  option ensures that the on-disk format of newly created
  repositories will be compatible with Mercurial before version 1.7.

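These options affect repositories at creation time. For example, to make
``hg init`` produce repositories whose on-disk format is still readable by
Mercurial releases older than 1.7, the newest of the formats above can be
disabled::

  [format]
  dotencode = False
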
``merge-patterns``
""""""""""""""""""

This section specifies merge tools to associate with particular file
patterns. Tools matched here will take precedence over the default
merge tool. Patterns are globs by default, rooted at the repository
root.

Example::

  [merge-patterns]
  **.c = kdiff3
  **.jpg = myimgmerge

``merge-tools``
"""""""""""""""

This section configures external merge tools to use for file-level
merges.

Example ``~/.hgrc``::

  [merge-tools]
  # Override stock tool location
  kdiff3.executable = ~/bin/kdiff3
  # Specify command line
  kdiff3.args = $base $local $other -o $output
  # Give higher priority
  kdiff3.priority = 1

  # Define new tool
  myHtmlTool.args = -m $local $other $base $output
  myHtmlTool.regkey = Software\FooSoftware\HtmlMerge
  myHtmlTool.priority = 1

Supported arguments:

``priority``
  The priority in which to evaluate this tool.
  Default: 0.
``executable``
  Either just the name of the executable or its pathname. On Windows,
  the path can use environment variables with ${ProgramFiles} syntax.
  Default: the tool name.
``args``
  The arguments to pass to the tool executable. You can refer to the
  files being merged as well as the output file through these
  variables: ``$base``, ``$local``, ``$other``, ``$output``.
  Default: ``$local $base $other``
``premerge``
  Attempt to run internal non-interactive 3-way merge tool before
  launching external tool. Options are ``true``, ``false``, or ``keep``
  to leave markers in the file if the premerge fails.
  Default: True
``binary``
  This tool can merge binary files. Defaults to False, unless tool
  was selected by file pattern match.
``symlink``
  This tool can merge symlinks. Defaults to False, even if tool was
  selected by file pattern match.
``check``
  A list of merge success-checking options:

  ``changed``
    Ask whether merge was successful when the merged file shows no changes.
  ``conflicts``
    Check whether there are conflicts even though the tool reported success.
  ``prompt``
    Always prompt for merge success, regardless of success reported by tool.

``checkchanged``
  True is equivalent to ``check = changed``.
  Default: False
``checkconflicts``
  True is equivalent to ``check = conflicts``.
  Default: False
``fixeol``
  Attempt to fix up EOL changes caused by the merge tool.
  Default: False
``gui``
  This tool requires a graphical interface to run. Default: False
``regkey``
  Windows registry key which describes install location of this
  tool. Mercurial will search for this key first under
  ``HKEY_CURRENT_USER`` and then under ``HKEY_LOCAL_MACHINE``.
  Default: None
``regname``
  Name of value to read from specified registry key. Defaults to the
  unnamed (default) value.
``regappend``
  String to append to the value read from the registry, typically
  the executable name of the tool.
  Default: None


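The options above can be combined to make a tool entry stricter about what
counts as a successful merge; the tool name below is the ``kdiff3`` entry
from the earlier example::

  [merge-tools]
  # keep conflict markers in the file if the internal premerge fails
  kdiff3.premerge = keep
  # re-check the result for conflict markers even if the tool reports success
  kdiff3.check = conflicts
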
``hooks``
"""""""""
Commands or Python functions that get automatically executed by
various actions such as starting or finishing a commit. Multiple
hooks can be run for the same action by appending a suffix to the
action. Overriding a site-wide hook can be done by changing its
value or setting it to an empty string.

Example ``.hg/hgrc``::

  [hooks]
  # update working directory after adding changesets
  changegroup.update = hg update
  # do not use the site-wide hook
  incoming =
  incoming.email = /my/email/hook
  incoming.autobuild = /my/build/hook

Most hooks are run with environment variables set that give useful
additional information. For each hook below, the environment
variables it is passed are listed with names of the form ``$HG_foo``.

``changegroup``
  Run after a changegroup has been added via push, pull or unbundle.
  ID of the first new changeset is in ``$HG_NODE``. URL from which
  changes came is in ``$HG_URL``.
``commit``
  Run after a changeset has been created in the local repository. ID
  of the newly created changeset is in ``$HG_NODE``. Parent changeset
  IDs are in ``$HG_PARENT1`` and ``$HG_PARENT2``.
``incoming``
  Run after a changeset has been pulled, pushed, or unbundled into
  the local repository. The ID of the newly arrived changeset is in
  ``$HG_NODE``. URL that was source of changes is in ``$HG_URL``.
``outgoing``
  Run after sending changes from local repository to another. ID of
  first changeset sent is in ``$HG_NODE``. Source of operation is in
  ``$HG_SOURCE``; see "preoutgoing" hook for description.
``post-<command>``
  Run after successful invocations of the associated command. The
  contents of the command line are passed as ``$HG_ARGS`` and the result
  code in ``$HG_RESULT``. Parsed command line arguments are passed as
  ``$HG_PATS`` and ``$HG_OPTS``. These contain string representations of
  the Python data internally passed to <command>. ``$HG_OPTS`` is a
  dictionary of options (with unspecified options set to their defaults).
  ``$HG_PATS`` is a list of arguments. Hook failure is ignored.
``pre-<command>``
  Run before executing the associated command. The contents of the
  command line are passed as ``$HG_ARGS``. Parsed command line arguments
  are passed as ``$HG_PATS`` and ``$HG_OPTS``. These contain string
  representations of the data internally passed to <command>. ``$HG_OPTS``
  is a dictionary of options (with unspecified options set to their
  defaults). ``$HG_PATS`` is a list of arguments. If the hook returns
  failure, the command doesn't execute and Mercurial returns the failure
  code.
``prechangegroup``
  Run before a changegroup is added via push, pull or unbundle. Exit
  status 0 allows the changegroup to proceed. Non-zero status will
  cause the push, pull or unbundle to fail. URL from which changes
  will come is in ``$HG_URL``.
``precommit``
  Run before starting a local commit. Exit status 0 allows the
  commit to proceed. Non-zero status will cause the commit to fail.
  Parent changeset IDs are in ``$HG_PARENT1`` and ``$HG_PARENT2``.
``preoutgoing``
  Run before collecting changes to send from the local repository to
  another. Non-zero status will cause failure. This lets you prevent
  pull over HTTP or SSH. It also runs before local pull, push
  (outbound) or bundle commands, but is not effective at preventing
  them, since you can just copy the files instead. Source of operation
  is in ``$HG_SOURCE``. If "serve", operation is happening on behalf of
  remote SSH or HTTP repository. If "push", "pull" or "bundle", operation
  is happening on behalf of repository on same system.
``pretag``
  Run before creating a tag. Exit status 0 allows the tag to be
  created. Non-zero status will cause the tag to fail. ID of
  changeset to tag is in ``$HG_NODE``. Name of tag is in ``$HG_TAG``. Tag is
  local if ``$HG_LOCAL=1``, in repository if ``$HG_LOCAL=0``.
``pretxnchangegroup``
  Run after a changegroup has been added via push, pull or unbundle,
  but before the transaction has been committed. Changegroup is
  visible to hook program. This lets you validate incoming changes
  before accepting them. Passed the ID of the first new changeset in
  ``$HG_NODE``. Exit status 0 allows the transaction to commit. Non-zero
  status will cause the transaction to be rolled back and the push,
  pull or unbundle will fail. URL that was source of changes is in
  ``$HG_URL``.
``pretxncommit``
  Run after a changeset has been created but the transaction not yet
  committed. Changeset is visible to hook program. This lets you
  validate commit message and changes. Exit status 0 allows the
  commit to proceed. Non-zero status will cause the transaction to
  be rolled back. ID of changeset is in ``$HG_NODE``. Parent changeset
  IDs are in ``$HG_PARENT1`` and ``$HG_PARENT2``.
``preupdate``
  Run before updating the working directory. Exit status 0 allows
  the update to proceed. Non-zero status will prevent the update.
  Changeset ID of first new parent is in ``$HG_PARENT1``. If merge, ID
  of second new parent is in ``$HG_PARENT2``.
``tag``
  Run after a tag is created. ID of tagged changeset is in ``$HG_NODE``.
  Name of tag is in ``$HG_TAG``. Tag is local if ``$HG_LOCAL=1``, in
  repository if ``$HG_LOCAL=0``.
``update``
  Run after updating the working directory. Changeset ID of first
  new parent is in ``$HG_PARENT1``. If merge, ID of second new parent is
  in ``$HG_PARENT2``. If the update succeeded, ``$HG_ERROR=0``. If the
  update failed (e.g. because conflicts were not resolved), ``$HG_ERROR=1``.
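
As a minimal use of the generic ``pre-<command>`` hooks described above, a
clone can refuse a command outright by pointing the hook at a program that
always exits non-zero (the path assumes a Unix-like system)::

  [hooks]
  # forbid 'hg push' from this clone; the hook's failure code is returned
  pre-push = /bin/false
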
676 |
|
676 | |||
677 | .. note:: It is generally better to use standard hooks rather than the |
|
677 | .. note:: It is generally better to use standard hooks rather than the | |
678 | generic pre- and post- command hooks as they are guaranteed to be |
|
678 | generic pre- and post- command hooks as they are guaranteed to be | |
679 | called in the appropriate contexts for influencing transactions. |
|
679 | called in the appropriate contexts for influencing transactions. | |
680 | Also, hooks like "commit" will be called in all contexts that |
|
680 | Also, hooks like "commit" will be called in all contexts that | |
generate a commit (e.g. tag) and not just the commit command.

.. note:: Environment variables with empty values may not be passed to
   hooks on platforms such as Windows. As an example, ``$HG_PARENT2``
   will have an empty value under Unix-like platforms for non-merge
   changesets, while it will not be available at all under Windows.

The syntax for Python hooks is as follows::

  hookname = python:modulename.submodule.callable
  hookname = python:/path/to/python/module.py:callable

Python hooks are run within the Mercurial process. Each hook is
called with at least three keyword arguments: a ui object (keyword
``ui``), a repository object (keyword ``repo``), and a ``hooktype``
keyword that tells what kind of hook is used. Arguments listed as
environment variables above are passed as keyword arguments, with no
``HG_`` prefix, and names in lower case.

If a Python hook returns a "true" value or raises an exception, this
is treated as a failure.

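As a sketch combining both forms (the module path, the hook entry
names, and the ``check_commit`` function are hypothetical, used only
to illustrate the syntax above)::

  [hooks]
  # external hook: run a shell command after changesets are added
  changegroup = hg update >&2
  # in-process hook: Mercurial calls check_commit(ui=..., repo=...,
  # hooktype='pretxncommit', node=..., ...)
  pretxncommit.checks = python:/etc/mercurial/checks.py:check_commit
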
``http_proxy``
""""""""""""""
Used to access web-based Mercurial repositories through an HTTP
proxy.

``host``
    Host name and (optional) port of the proxy server, for example
    "myproxy:8000".
``no``
    Optional. Comma-separated list of host names that should bypass
    the proxy.
``passwd``
    Optional. Password to authenticate with at the proxy server.
``user``
    Optional. User name to authenticate with at the proxy server.
``always``
    Optional. Always use the proxy, even for localhost and any entries
    in ``http_proxy.no``. True or False. Default: False.

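For example (the host name and credentials are placeholders)::

  [http_proxy]
  host = myproxy:8000
  no = localhost,internal.example.com
  user = alice
  passwd = secret
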
``smtp``
""""""""
Configuration for extensions that need to send email messages.

``host``
    Host name of mail server, e.g. "mail.example.com".
``port``
    Optional. Port to connect to on mail server. Default: 25.
``tls``
    Optional. Method to enable TLS when connecting to mail server: starttls,
    smtps or none. Default: none.
``username``
    Optional. User name for authenticating with the SMTP server.
    Default: none.
``password``
    Optional. Password for authenticating with the SMTP server. If not
    specified, interactive sessions will prompt the user for a
    password; non-interactive sessions will fail. Default: none.
``local_hostname``
    Optional. The hostname that the sender can use to identify
    itself to the MTA.

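A minimal sketch of an ``[smtp]`` section using STARTTLS (the host,
the credentials, and the choice of submission port 587 are
placeholder assumptions)::

  [smtp]
  host = mail.example.com
  port = 587
  tls = starttls
  username = alice
  password = secret
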
``patch``
"""""""""
Settings used when applying patches, for instance through the 'import'
command or with the Mercurial Queues extension.

``eol``
    When set to ``strict``, the end-of-lines of both the patch content
    and the patched files are preserved. When set to ``lf`` or
    ``crlf``, end-of-lines are ignored when patching and the result
    line endings are normalized to either LF (Unix) or CRLF (Windows).
    When set to ``auto``, end-of-lines are again ignored while
    patching, but line endings in patched files are normalized to
    their original setting on a per-file basis. If the target file
    does not exist or has no end-of-line, patch line endings are
    preserved.
    Default: strict.

``paths``
"""""""""
Assigns symbolic names to repositories. The left side is the
symbolic name, and the right gives the directory or URL that is the
location of the repository. Default paths can be declared by setting
the following entries.

``default``
    Directory or URL to use when pulling if no source is specified.
    The default is the repository from which the current repository
    was cloned.
``default-push``
    Optional. Directory or URL to use when pushing if no destination
    is specified.

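For example (the URLs below are placeholders)::

  [paths]
  default = http://hg.example.com/main
  default-push = ssh://hg@hg.example.com/main
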
``profiling``
"""""""""""""
Specifies profiling format and file output. In this section
description, 'profiling data' stands for the raw data collected
during profiling, while 'profiling report' stands for a statistical
text report generated from the profiling data. The profiling is done
using lsprof.

``format``
    Profiling format.
    Default: text.

    ``text``
      Generate a profiling report. When saving to a file, it should be
      noted that only the report is saved, and the profiling data is
      not kept.
    ``kcachegrind``
      Format profiling data for kcachegrind use: when saving to a
      file, the generated file can directly be loaded into
      kcachegrind.
``output``
    File path where profiling data or report should be saved. If the
    file exists, it is replaced. Default: None, data is printed on
    stderr.

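For example, to write a kcachegrind-compatible profile to a file (the
output path is a placeholder), combined with the global ``--profile``
option to enable profiling for a run::

  [profiling]
  format = kcachegrind
  output = /tmp/hg.prof
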
``server``
""""""""""
Controls generic server settings.

``uncompressed``
    Whether to allow clients to clone a repository using the
    uncompressed streaming protocol. This transfers about 40% more
    data than a regular clone, but uses less memory and CPU on both
    server and client. Over a LAN (100 Mbps or better) or a very fast
    WAN, an uncompressed streaming clone is a lot faster (~10x) than a
    regular clone. Over most WAN connections (anything slower than
    about 6 Mbps), uncompressed streaming is slower, because of the
    extra data transfer overhead. This mode will also temporarily hold
    the write lock while determining what data to transfer.
    Default is True.

``validate``
    Whether to validate the completeness of pushed changesets by
    checking that all new file revisions specified in manifests are
    present. Default is False.

``subpaths``
""""""""""""
Defines subrepository source location rewriting rules of the form::

  <pattern> = <replacement>

Where ``pattern`` is a regular expression matching the source and
``replacement`` is the replacement string used to rewrite it. Groups
can be matched in ``pattern`` and referenced in ``replacement``. For
instance::

  http://server/(.*)-hg/ = http://hg.server/\1/

rewrites ``http://server/foo-hg/`` into ``http://hg.server/foo/``.

All patterns are applied in definition order.

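Since rules are applied in definition order, more specific patterns
should be listed first (the hosts below are placeholders)::

  [subpaths]
  http://server/legacy/(.*)-hg/ = http://archive.server/\1/
  http://server/(.*)-hg/ = http://hg.server/\1/
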
``trusted``
"""""""""""

Mercurial will not use the settings in the
``.hg/hgrc`` file from a repository if it doesn't belong to a trusted
user or to a trusted group, as various hgrc features allow arbitrary
commands to be run. This issue is often encountered when configuring
hooks or extensions for shared repositories or servers. However,
the web interface will use some safe settings from the ``[web]``
section.

This section specifies what users and groups are trusted. The
current user is always trusted. To trust everybody, list a user or a
group with name ``*``. These settings must be placed in an
*already-trusted file* to take effect, such as ``$HOME/.hgrc`` of the
user or service running Mercurial.

``users``
    Comma-separated list of trusted users.
``groups``
    Comma-separated list of trusted groups.

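For example, in the ``$HOME/.hgrc`` of the user running the server
(the user and group names are placeholders)::

  [trusted]
  users = john, mary
  groups = hgusers
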
``ui``
""""""

User interface controls.

``archivemeta``
    Whether to include the ``.hg_archival.txt`` file containing metadata
    (hashes for the repository base and for tip) in archives created
    by the :hg:`archive` command or downloaded via hgweb.
    Default is True.
``askusername``
    Whether to prompt for a username when committing. If True, and
    neither ``$HGUSER`` nor ``$EMAIL`` has been specified, then the user will
    be prompted to enter a username. If no username is entered, the
    default ``USER@HOST`` is used instead.
    Default is False.
``commitsubrepos``
    Whether to commit modified subrepositories when committing the
    parent repository. If False and one subrepository has uncommitted
    changes, abort the commit.
    Default is True.
``debug``
    Print debugging information. True or False. Default is False.
``editor``
    The editor to use during a commit. Default is ``$EDITOR`` or ``vi``.
``fallbackencoding``
    Encoding to try if it's not possible to decode the changelog using
    UTF-8. Default is ISO-8859-1.
``ignore``
    A file to read per-user ignore patterns from. This file should be
    in the same format as a repository-wide .hgignore file. This
    option supports hook syntax, so if you want to specify multiple
    ignore files, you can do so by setting something like
    ``ignore.other = ~/.hgignore2``. For details of the ignore file
    format, see the |hgignore(5)|_ man page.
``interactive``
    Allow prompting the user. True or False. Default is True.
``logtemplate``
    Template string for commands that print changesets.
``merge``
    The conflict resolution program to use during a manual merge.
    For more information on merge tools see :hg:`help merge-tools`.
    For configuring merge tools see the merge-tools_ section.
``patch``
    Command to use to apply patches. Looks for ``gpatch`` or ``patch`` in
    PATH if unset.
``quiet``
    Reduce the amount of output printed. True or False. Default is False.
``remotecmd``
    Remote command to use for clone/push/pull operations. Default is ``hg``.
``report_untrusted``
    Warn if a ``.hg/hgrc`` file is ignored due to not being owned by a
    trusted user or group. True or False. Default is True.
``slash``
    Display paths using a slash (``/``) as the path separator. This
    only makes a difference on systems where the default path
    separator is not the slash character (e.g. Windows uses the
    backslash character (``\``)).
    Default is False.
``ssh``
    Command to use for SSH connections. Default is ``ssh``.
``strict``
    Require exact command names, instead of allowing unambiguous
    abbreviations. True or False. Default is False.
``style``
    Name of style to use for command output.
``timeout``
    The timeout used when a lock is held (in seconds); a negative value
    means no timeout. Default is 600.
``traceback``
    Mercurial always prints a traceback when an unknown exception
    occurs. Setting this to True will make Mercurial print a traceback
    on all exceptions, even those recognized by Mercurial (such as
    IOError or MemoryError). Default is False.
``username``
    The committer of a changeset created when running "commit".
    Typically a person's name and email address, e.g. ``Fred Widget
    <fred@example.com>``. Default is ``$EMAIL`` or ``username@hostname``. If
    the username in hgrc is empty, it has to be specified manually or
    in a different hgrc file (e.g. ``$HOME/.hgrc``, if the admin set
    ``username =`` in the system hgrc). Environment variables in the
    username are expanded.
``verbose``
    Increase the amount of output printed. True or False. Default is False.

``web``
"""""""

Web interface configuration. The settings in this section apply to
both the builtin webserver (started by :hg:`serve`) and the script you
run through a webserver (``hgweb.cgi`` and the derivatives for FastCGI
and WSGI).

The Mercurial webserver does no authentication (it does not prompt for
usernames and passwords to validate *who* users are), but it does do
authorization (it grants or denies access for *authenticated users*
based on settings in this section). You must either configure your
webserver to do authentication for you, or disable the authorization
checks.

For a quick setup in a trusted environment, e.g., a private LAN, where
you want it to accept pushes from anybody, you can use the following
command line::

  $ hg --config web.allow_push=* --config web.push_ssl=False serve

Note that this will allow anybody to push anything to the server and
that this should not be used for public servers.

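The same settings can be placed in the server's configuration file
instead of being passed on the command line::

  [web]
  allow_push = *
  push_ssl = False
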
The full set of options is:

``accesslog``
    Where to output the access log. Default is stdout.
``address``
    Interface address to bind to. Default is all.
``allow_archive``
    List of archive formats (bz2, gz, zip) allowed for downloading.
    Default is empty.
``allowbz2``
    (DEPRECATED) Whether to allow .tar.bz2 downloading of repository
    revisions.
    Default is False.
``allowgz``
    (DEPRECATED) Whether to allow .tar.gz downloading of repository
    revisions.
    Default is False.
``allowpull``
    Whether to allow pulling from the repository. Default is True.
``allow_push``
    Whether to allow pushing to the repository. If empty or not set,
    push is not allowed. If the special value ``*``, any remote user can
    push, including unauthenticated users. Otherwise, the remote user
    must have been authenticated, and the authenticated user name must
    be present in this list. The contents of the allow_push list are
    examined after the deny_push list.
``allow_read``
    If the user has not already been denied repository access due to
    the contents of deny_read, this list determines whether to grant
    repository access to the user. If this list is not empty, and the
    user is unauthenticated or not present in the list, then access is
    denied for the user. If the list is empty or not set, then access
    is permitted to all users by default. Setting allow_read to the
    special value ``*`` is equivalent to it not being set (i.e. access
    is permitted to all users). The contents of the allow_read list are
    examined after the deny_read list.
``allowzip``
    (DEPRECATED) Whether to allow .zip downloading of repository
    revisions. Default is False. This feature creates temporary files.
``baseurl``
    Base URL to use when publishing URLs in other locations, so
    third-party tools like email notification hooks can construct
    URLs. Example: ``http://hgserver/repos/``.
``cacerts``
    Path to file containing a list of PEM encoded certificate
    authority certificates. Environment variables and ``~user``
    constructs are expanded in the filename. If specified on the
    client, Mercurial will verify the identity of remote HTTPS servers
    with these certificates. The form must be as follows::

        -----BEGIN CERTIFICATE-----
        ... (certificate in base64 PEM encoding) ...
        -----END CERTIFICATE-----
        -----BEGIN CERTIFICATE-----
        ... (certificate in base64 PEM encoding) ...
        -----END CERTIFICATE-----

    This feature is only supported when using Python 2.6 or later. If you wish
    to use it with earlier versions of Python, install the backported
    version of the ssl library that is available from
    ``http://pypi.python.org``.

    You can use OpenSSL's CA certificate file if your platform has one.
    On most Linux systems this will be ``/etc/ssl/certs/ca-certificates.crt``.
    Otherwise you will have to generate this file manually.

    To disable SSL verification temporarily, specify ``--insecure`` on
    the command line.
``contact``
    Name or email address of the person in charge of the repository.
    Defaults to ui.username or ``$EMAIL`` or "unknown" if unset or empty.
``deny_push``
    Whether to deny pushing to the repository. If empty or not set,
    push is not denied. If the special value ``*``, all remote users are
    denied push. Otherwise, unauthenticated users are all denied, and
    any authenticated user name present in this list is also denied. The
    contents of the deny_push list are examined before the allow_push list.
``deny_read``
    Whether to deny reading/viewing of the repository. If this list is
    not empty, unauthenticated users are all denied, and any
    authenticated user name present in this list is also denied access to
    the repository. If set to the special value ``*``, all remote users
    are denied access (rarely needed ;). If deny_read is empty or not set,
    the determination of repository access depends on the presence and
    content of the allow_read list (see description). If both
|
1059 | content of the allow_read list (see description). If both | |
1055 | deny_read and allow_read are empty or not set, then access is |
|
1060 | deny_read and allow_read are empty or not set, then access is | |
1056 | permitted to all users by default. If the repository is being |
|
1061 | permitted to all users by default. If the repository is being | |
1057 | served via hgwebdir, denied users will not be able to see it in |
|
1062 | served via hgwebdir, denied users will not be able to see it in | |
1058 | the list of repositories. The contents of the deny_read list have |
|
1063 | the list of repositories. The contents of the deny_read list have | |
1059 | priority over (are examined before) the contents of the allow_read |
|
1064 | priority over (are examined before) the contents of the allow_read | |
1060 | list. |
|
1065 | list. | |
1061 | ``descend`` |
|
1066 | ``descend`` | |
1062 | hgwebdir indexes will not descend into subdirectories. Only repositories |
|
1067 | hgwebdir indexes will not descend into subdirectories. Only repositories | |
1063 | directly in the current path will be shown (other repositories are still |
|
1068 | directly in the current path will be shown (other repositories are still | |
1064 | available from the index corresponding to their containing path). |
|
1069 | available from the index corresponding to their containing path). | |
1065 | ``description`` |
|
1070 | ``description`` | |
1066 | Textual description of the repository's purpose or contents. |
|
1071 | Textual description of the repository's purpose or contents. | |
1067 | Default is "unknown". |
|
1072 | Default is "unknown". | |
1068 | ``encoding`` |
|
1073 | ``encoding`` | |
1069 | Character encoding name. Default is the current locale charset. |
|
1074 | Character encoding name. Default is the current locale charset. | |
1070 | Example: "UTF-8" |
|
1075 | Example: "UTF-8" | |
1071 | ``errorlog`` |
|
1076 | ``errorlog`` | |
1072 | Where to output the error log. Default is stderr. |
|
1077 | Where to output the error log. Default is stderr. | |
1073 | ``hidden`` |
|
1078 | ``hidden`` | |
1074 | Whether to hide the repository in the hgwebdir index. |
|
1079 | Whether to hide the repository in the hgwebdir index. | |
1075 | Default is False. |
|
1080 | Default is False. | |
1076 | ``ipv6`` |
|
1081 | ``ipv6`` | |
1077 | Whether to use IPv6. Default is False. |
|
1082 | Whether to use IPv6. Default is False. | |
1078 | ``name`` |
|
1083 | ``name`` | |
1079 | Repository name to use in the web interface. Default is the current |
 |
1084 | Repository name to use in the web interface. Default is the current | |
1080 | working directory. |
|
1085 | working directory. | |
1081 | ``maxchanges`` |
|
1086 | ``maxchanges`` | |
1082 | Maximum number of changes to list on the changelog. Default is 10. |
|
1087 | Maximum number of changes to list on the changelog. Default is 10. | |
1083 | ``maxfiles`` |
|
1088 | ``maxfiles`` | |
1084 | Maximum number of files to list per changeset. Default is 10. |
|
1089 | Maximum number of files to list per changeset. Default is 10. | |
1085 | ``port`` |
|
1090 | ``port`` | |
1086 | Port to listen on. Default is 8000. |
|
1091 | Port to listen on. Default is 8000. | |
1087 | ``prefix`` |
|
1092 | ``prefix`` | |
1088 | Prefix path to serve from. Default is '' (server root). |
|
1093 | Prefix path to serve from. Default is '' (server root). | |
1089 | ``push_ssl`` |
|
1094 | ``push_ssl`` | |
1090 | Whether to require that inbound pushes be transported over SSL to |
|
1095 | Whether to require that inbound pushes be transported over SSL to | |
1091 | prevent password sniffing. Default is True. |
|
1096 | prevent password sniffing. Default is True. | |
1092 | ``staticurl`` |
|
1097 | ``staticurl`` | |
1093 | Base URL to use for static files. If unset, static files (e.g. the |
|
1098 | Base URL to use for static files. If unset, static files (e.g. the | |
1094 | hgicon.png favicon) will be served by the CGI script itself. Use |
|
1099 | hgicon.png favicon) will be served by the CGI script itself. Use | |
1095 | this setting to serve them directly with the HTTP server. |
|
1100 | this setting to serve them directly with the HTTP server. | |
1096 | Example: ``http://hgserver/static/``. |
|
1101 | Example: ``http://hgserver/static/``. | |
1097 | ``stripes`` |
|
1102 | ``stripes`` | |
1098 | How many lines a "zebra stripe" should span in multiline output. |
|
1103 | How many lines a "zebra stripe" should span in multiline output. | |
1099 | Default is 1; set to 0 to disable. |
|
1104 | Default is 1; set to 0 to disable. | |
1100 | ``style`` |
|
1105 | ``style`` | |
1101 | Which template map style to use. |
|
1106 | Which template map style to use. | |
1102 | ``templates`` |
|
1107 | ``templates`` | |
1103 | Where to find the HTML templates. Default is install path. |
|
1108 | Where to find the HTML templates. Default is install path. | |
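As a sketch, several of the ``[web]`` options above combine in an ``hgrc`` like the following (the user names, contact address, and server URL are hypothetical illustrations, not values from this manual):

```ini
[web]
# deny_push is examined before allow_push
allow_push = alice, bob
deny_push = mallory
push_ssl = True          ; require SSL for inbound pushes (the default)
contact = Ops Team <ops@example.com>
description = Example project repository
encoding = UTF-8
port = 8000
staticurl = http://hgserver/static/
maxchanges = 10
maxfiles = 10
```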
1104 |
|
1109 | |||
1105 |
|
1110 | |||
1106 | Author |
|
1111 | Author | |
1107 | ------ |
|
1112 | ------ | |
1108 | Bryan O'Sullivan <bos@serpentine.com>. |
|
1113 | Bryan O'Sullivan <bos@serpentine.com>. | |
1109 |
|
1114 | |||
1110 | Mercurial was written by Matt Mackall <mpm@selenic.com>. |
|
1115 | Mercurial was written by Matt Mackall <mpm@selenic.com>. | |
1111 |
|
1116 | |||
1112 | See Also |
|
1117 | See Also | |
1113 | -------- |
|
1118 | -------- | |
1114 | |hg(1)|_, |hgignore(5)|_ |
|
1119 | |hg(1)|_, |hgignore(5)|_ | |
1115 |
|
1120 | |||
1116 | Copying |
|
1121 | Copying | |
1117 | ------- |
|
1122 | ------- | |
1118 | This manual page is copyright 2005 Bryan O'Sullivan. |
|
1123 | This manual page is copyright 2005 Bryan O'Sullivan. | |
1119 | Mercurial is copyright 2005-2010 Matt Mackall. |
|
1124 | Mercurial is copyright 2005-2010 Matt Mackall. | |
1120 | Free use of this software is granted under the terms of the GNU General |
|
1125 | Free use of this software is granted under the terms of the GNU General | |
1121 | Public License version 2 or any later version. |
|
1126 | Public License version 2 or any later version. | |
1122 |
|
1127 | |||
1123 | .. include:: common.txt |
|
1128 | .. include:: common.txt |
@@ -1,127 +1,130 b'' | |||||
1 | Subrepositories let you nest external repositories or projects into a |
|
1 | Subrepositories let you nest external repositories or projects into a | |
2 | parent Mercurial repository, and make commands operate on them as a |
|
2 | parent Mercurial repository, and make commands operate on them as a | |
3 | group. External Mercurial and Subversion projects are currently |
|
3 | group. External Mercurial and Subversion projects are currently | |
4 | supported. |
|
4 | supported. | |
5 |
|
5 | |||
6 | Subrepositories are made of three components: |
|
6 | Subrepositories are made of three components: | |
7 |
|
7 | |||
8 | 1. Nested repository checkouts. They can appear anywhere in the |
|
8 | 1. Nested repository checkouts. They can appear anywhere in the | |
9 | parent working directory, and are Mercurial clones or Subversion |
|
9 | parent working directory, and are Mercurial clones or Subversion | |
10 | checkouts. |
|
10 | checkouts. | |
11 |
|
11 | |||
12 | 2. Nested repository references. They are defined in ``.hgsub`` and |
|
12 | 2. Nested repository references. They are defined in ``.hgsub`` and | |
13 | tell where the subrepository checkouts come from. Mercurial |
|
13 | tell where the subrepository checkouts come from. Mercurial | |
14 | subrepositories are referenced like: |
|
14 | subrepositories are referenced like: | |
15 |
|
15 | |||
16 | path/to/nested = https://example.com/nested/repo/path |
|
16 | path/to/nested = https://example.com/nested/repo/path | |
17 |
|
17 | |||
18 | where ``path/to/nested`` is the checkout location relative to the |
 |
18 | where ``path/to/nested`` is the checkout location relative to the | |
19 | parent Mercurial root, and ``https://example.com/nested/repo/path`` |
|
19 | parent Mercurial root, and ``https://example.com/nested/repo/path`` | |
20 | is the source repository path. The source can also reference a |
|
20 | is the source repository path. The source can also reference a | |
21 | filesystem path. Subversion repositories are defined with: |
|
21 | filesystem path. Subversion repositories are defined with: | |
22 |
|
22 | |||
23 | path/to/nested = [svn]https://example.com/nested/trunk/path |
|
23 | path/to/nested = [svn]https://example.com/nested/trunk/path | |
24 |
|
24 | |||
25 | Note that ``.hgsub`` does not exist by default in Mercurial |
|
25 | Note that ``.hgsub`` does not exist by default in Mercurial | |
26 | repositories; you have to create and add it to the parent |
 |
26 | repositories; you have to create and add it to the parent | |
27 | repository before using subrepositories. |
|
27 | repository before using subrepositories. | |
28 |
|
28 | |||
29 | 3. Nested repository states. They are defined in ``.hgsubstate`` and |
|
29 | 3. Nested repository states. They are defined in ``.hgsubstate`` and | |
30 | capture whatever information is required to restore the |
|
30 | capture whatever information is required to restore the | |
31 | subrepositories to the state they were committed in a parent |
|
31 | subrepositories to the state they were committed in a parent | |
32 | repository changeset. Mercurial automatically records the nested |
 |
32 | repository changeset. Mercurial automatically records the nested | |
33 | repository states when committing in the parent repository. |
 |
33 | repository states when committing in the parent repository. | |
34 |
|
34 | |||
35 | .. note:: |
|
35 | .. note:: | |
36 | The ``.hgsubstate`` file should not be edited manually. |
|
36 | The ``.hgsubstate`` file should not be edited manually. | |
37 |
|
37 | |||
38 |
|
38 | |||
39 | Adding a Subrepository |
|
39 | Adding a Subrepository | |
40 | ---------------------- |
|
40 | ---------------------- | |
41 |
|
41 | |||
42 | If ``.hgsub`` does not exist, create it and add it to the parent |
|
42 | If ``.hgsub`` does not exist, create it and add it to the parent | |
43 | repository. Clone or check out the external project where you want it |
 |
43 | repository. Clone or check out the external project where you want it | |
44 | to live in the parent repository. Edit ``.hgsub`` and add the |
|
44 | to live in the parent repository. Edit ``.hgsub`` and add the | |
45 | subrepository entry as described above. At this point, the |
|
45 | subrepository entry as described above. At this point, the | |
46 | subrepository is tracked and the next commit will record its state in |
|
46 | subrepository is tracked and the next commit will record its state in | |
47 | ``.hgsubstate`` and bind it to the committed changeset. |
|
47 | ``.hgsubstate`` and bind it to the committed changeset. | |
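For example, a minimal ``.hgsub`` tracking one Mercurial and one Subversion subrepository might look like the following sketch (both checkout paths and source URLs are hypothetical):

```ini
; <checkout path relative to the parent root> = <source repository>
libs/fancylib = https://example.com/nested/repo/path
vendor/legacy = [svn]https://example.com/nested/trunk/path
```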
48 |
|
48 | |||
49 | Synchronizing a Subrepository |
|
49 | Synchronizing a Subrepository | |
50 | ----------------------------- |
|
50 | ----------------------------- | |
51 |
|
51 | |||
52 | Subrepos do not automatically track the latest changeset of their |
|
52 | Subrepos do not automatically track the latest changeset of their | |
53 | sources. Instead, they are updated to the changeset that corresponds |
|
53 | sources. Instead, they are updated to the changeset that corresponds | |
54 | with the changeset recorded in the top-level changeset. This is so |
 |
54 | with the changeset recorded in the top-level changeset. This is so | |
55 | developers always get a consistent set of compatible code and |
|
55 | developers always get a consistent set of compatible code and | |
56 | libraries when they update. |
|
56 | libraries when they update. | |
57 |
|
57 | |||
58 | Thus, updating subrepos is a manual process. Simply check out the target |
 |
58 | Thus, updating subrepos is a manual process. Simply check out the target | |
59 | subrepo at the desired revision, test in the top-level repo, then |
|
59 | subrepo at the desired revision, test in the top-level repo, then | |
60 | commit in the parent repository to record the new combination. |
|
60 | commit in the parent repository to record the new combination. | |
61 |
|
61 | |||
62 | Deleting a Subrepository |
|
62 | Deleting a Subrepository | |
63 | ------------------------ |
|
63 | ------------------------ | |
64 |
|
64 | |||
65 | To remove a subrepository from the parent repository, delete its |
|
65 | To remove a subrepository from the parent repository, delete its | |
66 | reference from ``.hgsub``, then remove its files. |
|
66 | reference from ``.hgsub``, then remove its files. | |
67 |
|
67 | |||
68 | Interaction with Mercurial Commands |
|
68 | Interaction with Mercurial Commands | |
69 | ----------------------------------- |
|
69 | ----------------------------------- | |
70 |
|
70 | |||
71 | :add: add does not recurse in subrepos unless -S/--subrepos is |
|
71 | :add: add does not recurse in subrepos unless -S/--subrepos is | |
72 | specified. Subversion subrepositories are currently silently |
|
72 | specified. Subversion subrepositories are currently silently | |
73 | ignored. |
|
73 | ignored. | |
74 |
|
74 | |||
75 | :archive: archive does not recurse in subrepositories unless |
|
75 | :archive: archive does not recurse in subrepositories unless | |
76 | -S/--subrepos is specified. |
|
76 | -S/--subrepos is specified. | |
77 |
|
77 | |||
78 | :commit: commit creates a consistent snapshot of the state of the |
|
78 | :commit: commit creates a consistent snapshot of the state of the | |
79 | entire project and its subrepositories. It does this by first |
|
79 | entire project and its subrepositories. It does this by first | |
80 | attempting to commit all modified subrepositories, then recording |
|
80 | attempting to commit all modified subrepositories, then recording | |
81 |
their state and finally committing it in the parent |
|
81 | their state and finally committing it in the parent | |
|
82 | repository. Mercurial can be made to abort if any subrepository | |||
|
83 | content is modified by setting "ui.commitsubrepos=no" in a | |||
|
84 | configuration file (see :hg:`help config`). | |||
82 |
|
85 | |||
83 | :diff: diff does not recurse in subrepos unless -S/--subrepos is |
|
86 | :diff: diff does not recurse in subrepos unless -S/--subrepos is | |
84 | specified. Changes are displayed as usual, on the subrepositories |
|
87 | specified. Changes are displayed as usual, on the subrepositories | |
85 | elements. Subversion subrepositories are currently silently |
|
88 | elements. Subversion subrepositories are currently silently | |
86 | ignored. |
|
89 | ignored. | |
87 |
|
90 | |||
88 | :incoming: incoming does not recurse in subrepos unless -S/--subrepos |
|
91 | :incoming: incoming does not recurse in subrepos unless -S/--subrepos | |
89 | is specified. Subversion subrepositories are currently silently |
|
92 | is specified. Subversion subrepositories are currently silently | |
90 | ignored. |
|
93 | ignored. | |
91 |
|
94 | |||
92 | :outgoing: outgoing does not recurse in subrepos unless -S/--subrepos |
|
95 | :outgoing: outgoing does not recurse in subrepos unless -S/--subrepos | |
93 | is specified. Subversion subrepositories are currently silently |
|
96 | is specified. Subversion subrepositories are currently silently | |
94 | ignored. |
|
97 | ignored. | |
95 |
|
98 | |||
96 | :pull: pull is not recursive since it is not clear what to pull prior |
|
99 | :pull: pull is not recursive since it is not clear what to pull prior | |
97 | to running :hg:`update`. Listing and retrieving all |
|
100 | to running :hg:`update`. Listing and retrieving all | |
98 | subrepository changes referenced by the parent repository's pulled |
 |
101 | subrepository changes referenced by the parent repository's pulled | |
99 | changesets is expensive at best, and impossible in the Subversion |
 |
102 | changesets is expensive at best, and impossible in the Subversion | |
100 | case. |
|
103 | case. | |
101 |
|
104 | |||
102 | :push: Mercurial will automatically push all subrepositories first |
|
105 | :push: Mercurial will automatically push all subrepositories first | |
103 | when the parent repository is being pushed. This ensures new |
|
106 | when the parent repository is being pushed. This ensures new | |
104 | subrepository changes are available when referenced by top-level |
|
107 | subrepository changes are available when referenced by top-level | |
105 | repositories. |
|
108 | repositories. | |
106 |
|
109 | |||
107 | :status: status does not recurse into subrepositories unless |
|
110 | :status: status does not recurse into subrepositories unless | |
108 | -S/--subrepos is specified. Subrepository changes are displayed as |
|
111 | -S/--subrepos is specified. Subrepository changes are displayed as | |
109 | regular Mercurial changes on the subrepository |
|
112 | regular Mercurial changes on the subrepository | |
110 | elements. Subversion subrepositories are currently silently |
|
113 | elements. Subversion subrepositories are currently silently | |
111 | ignored. |
|
114 | ignored. | |
112 |
|
115 | |||
113 | :update: update restores the subrepos to the state they were |
 |
116 | :update: update restores the subrepos to the state they were | |
114 | originally committed in the target changeset. If the recorded |
 |
117 | originally committed in the target changeset. If the recorded | |
115 | changeset is not available in the current subrepository, Mercurial |
|
118 | changeset is not available in the current subrepository, Mercurial | |
116 | will pull it in before updating. This means that updating |
 |
119 | will pull it in before updating. This means that updating | |
117 | can require network access when using subrepositories. |
|
120 | can require network access when using subrepositories. | |
118 |
|
121 | |||
119 | Remapping Subrepository Sources |
 |
122 | Remapping Subrepository Sources | |
120 | --------------------------------- |
|
123 | --------------------------------- | |
121 |
|
124 | |||
122 | A subrepository source location may change during a project's life, |
 |
125 | A subrepository source location may change during a project's life, | |
123 | invalidating references stored in the parent repository history. To |
|
126 | invalidating references stored in the parent repository history. To | |
124 | fix this, rewriting rules can be defined in the parent repository's ``hgrc`` |
 |
127 | fix this, rewriting rules can be defined in the parent repository's ``hgrc`` | |
125 | file or in Mercurial configuration. See the ``[subpaths]`` section in |
|
128 | file or in Mercurial configuration. See the ``[subpaths]`` section in | |
126 | hgrc(5) for more details. |
|
129 | hgrc(5) for more details. | |
127 |
|
130 |
@@ -1,2018 +1,2024 b'' | |||||
1 | # localrepo.py - read/write repository class for mercurial |
|
1 | # localrepo.py - read/write repository class for mercurial | |
2 | # |
|
2 | # | |
3 | # Copyright 2005-2007 Matt Mackall <mpm@selenic.com> |
|
3 | # Copyright 2005-2007 Matt Mackall <mpm@selenic.com> | |
4 | # |
|
4 | # | |
5 | # This software may be used and distributed according to the terms of the |
|
5 | # This software may be used and distributed according to the terms of the | |
6 | # GNU General Public License version 2 or any later version. |
|
6 | # GNU General Public License version 2 or any later version. | |
7 |
|
7 | |||
8 | from node import bin, hex, nullid, nullrev, short |
|
8 | from node import bin, hex, nullid, nullrev, short | |
9 | from i18n import _ |
|
9 | from i18n import _ | |
10 | import repo, changegroup, subrepo, discovery, pushkey |
|
10 | import repo, changegroup, subrepo, discovery, pushkey | |
11 | import changelog, dirstate, filelog, manifest, context, bookmarks |
|
11 | import changelog, dirstate, filelog, manifest, context, bookmarks | |
12 | import lock, transaction, store, encoding |
|
12 | import lock, transaction, store, encoding | |
13 | import util, extensions, hook, error |
|
13 | import util, extensions, hook, error | |
14 | import match as matchmod |
|
14 | import match as matchmod | |
15 | import merge as mergemod |
|
15 | import merge as mergemod | |
16 | import tags as tagsmod |
|
16 | import tags as tagsmod | |
17 | import url as urlmod |
|
17 | import url as urlmod | |
18 | from lock import release |
|
18 | from lock import release | |
19 | import weakref, errno, os, time, inspect |
|
19 | import weakref, errno, os, time, inspect | |
20 | propertycache = util.propertycache |
|
20 | propertycache = util.propertycache | |
21 |
|
21 | |||
22 | class localrepository(repo.repository): |
|
22 | class localrepository(repo.repository): | |
23 | capabilities = set(('lookup', 'changegroupsubset', 'branchmap', 'pushkey')) |
|
23 | capabilities = set(('lookup', 'changegroupsubset', 'branchmap', 'pushkey')) | |
24 | supportedformats = set(('revlogv1', 'parentdelta')) |
|
24 | supportedformats = set(('revlogv1', 'parentdelta')) | |
25 | supported = supportedformats | set(('store', 'fncache', 'shared', |
|
25 | supported = supportedformats | set(('store', 'fncache', 'shared', | |
26 | 'dotencode')) |
|
26 | 'dotencode')) | |
27 |
|
27 | |||
28 | def __init__(self, baseui, path=None, create=0): |
|
28 | def __init__(self, baseui, path=None, create=0): | |
29 | repo.repository.__init__(self) |
|
29 | repo.repository.__init__(self) | |
30 | self.root = os.path.realpath(util.expandpath(path)) |
|
30 | self.root = os.path.realpath(util.expandpath(path)) | |
31 | self.path = os.path.join(self.root, ".hg") |
|
31 | self.path = os.path.join(self.root, ".hg") | |
32 | self.origroot = path |
|
32 | self.origroot = path | |
33 | self.auditor = util.path_auditor(self.root, self._checknested) |
|
33 | self.auditor = util.path_auditor(self.root, self._checknested) | |
34 | self.opener = util.opener(self.path) |
|
34 | self.opener = util.opener(self.path) | |
35 | self.wopener = util.opener(self.root) |
|
35 | self.wopener = util.opener(self.root) | |
36 | self.baseui = baseui |
|
36 | self.baseui = baseui | |
37 | self.ui = baseui.copy() |
|
37 | self.ui = baseui.copy() | |
38 |
|
38 | |||
39 | try: |
|
39 | try: | |
40 | self.ui.readconfig(self.join("hgrc"), self.root) |
|
40 | self.ui.readconfig(self.join("hgrc"), self.root) | |
41 | extensions.loadall(self.ui) |
|
41 | extensions.loadall(self.ui) | |
42 | except IOError: |
|
42 | except IOError: | |
43 | pass |
|
43 | pass | |
44 |
|
44 | |||
45 | if not os.path.isdir(self.path): |
|
45 | if not os.path.isdir(self.path): | |
46 | if create: |
|
46 | if create: | |
47 | if not os.path.exists(path): |
|
47 | if not os.path.exists(path): | |
48 | util.makedirs(path) |
|
48 | util.makedirs(path) | |
49 | os.mkdir(self.path) |
|
49 | os.mkdir(self.path) | |
50 | requirements = ["revlogv1"] |
|
50 | requirements = ["revlogv1"] | |
51 | if self.ui.configbool('format', 'usestore', True): |
|
51 | if self.ui.configbool('format', 'usestore', True): | |
52 | os.mkdir(os.path.join(self.path, "store")) |
|
52 | os.mkdir(os.path.join(self.path, "store")) | |
53 | requirements.append("store") |
|
53 | requirements.append("store") | |
54 | if self.ui.configbool('format', 'usefncache', True): |
|
54 | if self.ui.configbool('format', 'usefncache', True): | |
55 | requirements.append("fncache") |
|
55 | requirements.append("fncache") | |
56 | if self.ui.configbool('format', 'dotencode', True): |
|
56 | if self.ui.configbool('format', 'dotencode', True): | |
57 | requirements.append('dotencode') |
|
57 | requirements.append('dotencode') | |
58 | # create an invalid changelog |
|
58 | # create an invalid changelog | |
59 | self.opener("00changelog.i", "a").write( |
|
59 | self.opener("00changelog.i", "a").write( | |
60 | '\0\0\0\2' # represents revlogv2 |
|
60 | '\0\0\0\2' # represents revlogv2 | |
61 | ' dummy changelog to prevent using the old repo layout' |
|
61 | ' dummy changelog to prevent using the old repo layout' | |
62 | ) |
|
62 | ) | |
63 | if self.ui.configbool('format', 'parentdelta', False): |
|
63 | if self.ui.configbool('format', 'parentdelta', False): | |
64 | requirements.append("parentdelta") |
|
64 | requirements.append("parentdelta") | |
65 | else: |
|
65 | else: | |
66 | raise error.RepoError(_("repository %s not found") % path) |
|
66 | raise error.RepoError(_("repository %s not found") % path) | |
67 | elif create: |
|
67 | elif create: | |
68 | raise error.RepoError(_("repository %s already exists") % path) |
|
68 | raise error.RepoError(_("repository %s already exists") % path) | |
69 | else: |
|
69 | else: | |
70 | # find requirements |
|
70 | # find requirements | |
71 | requirements = set() |
|
71 | requirements = set() | |
72 | try: |
|
72 | try: | |
73 | requirements = set(self.opener("requires").read().splitlines()) |
|
73 | requirements = set(self.opener("requires").read().splitlines()) | |
74 | except IOError, inst: |
|
74 | except IOError, inst: | |
75 | if inst.errno != errno.ENOENT: |
|
75 | if inst.errno != errno.ENOENT: | |
76 | raise |
|
76 | raise | |
77 | for r in requirements - self.supported: |
|
77 | for r in requirements - self.supported: | |
78 | raise error.RepoError(_("requirement '%s' not supported") % r) |
|
78 | raise error.RepoError(_("requirement '%s' not supported") % r) | |
79 |
|
79 | |||
80 | self.sharedpath = self.path |
|
80 | self.sharedpath = self.path | |
81 | try: |
|
81 | try: | |
82 | s = os.path.realpath(self.opener("sharedpath").read()) |
|
82 | s = os.path.realpath(self.opener("sharedpath").read()) | |
83 | if not os.path.exists(s): |
|
83 | if not os.path.exists(s): | |
84 | raise error.RepoError( |
|
84 | raise error.RepoError( | |
85 | _('.hg/sharedpath points to nonexistent directory %s') % s) |
|
85 | _('.hg/sharedpath points to nonexistent directory %s') % s) | |
86 | self.sharedpath = s |
|
86 | self.sharedpath = s | |
87 | except IOError, inst: |
|
87 | except IOError, inst: | |
88 | if inst.errno != errno.ENOENT: |
|
88 | if inst.errno != errno.ENOENT: | |
89 | raise |
|
89 | raise | |
90 |
|
90 | |||
91 | self.store = store.store(requirements, self.sharedpath, util.opener) |
|
91 | self.store = store.store(requirements, self.sharedpath, util.opener) | |
92 | self.spath = self.store.path |
|
92 | self.spath = self.store.path | |
93 | self.sopener = self.store.opener |
|
93 | self.sopener = self.store.opener | |
94 | self.sjoin = self.store.join |
|
94 | self.sjoin = self.store.join | |
95 | self.opener.createmode = self.store.createmode |
|
95 | self.opener.createmode = self.store.createmode | |
96 | self._applyrequirements(requirements) |
|
96 | self._applyrequirements(requirements) | |
97 | if create: |
|
97 | if create: | |
98 | self._writerequirements() |
|
98 | self._writerequirements() | |
99 |
|
99 | |||
100 | # These two define the set of tags for this repository. _tags |
|
100 | # These two define the set of tags for this repository. _tags | |
101 | # maps tag name to node; _tagtypes maps tag name to 'global' or |
|
101 | # maps tag name to node; _tagtypes maps tag name to 'global' or | |
102 | # 'local'. (Global tags are defined by .hgtags across all |
|
102 | # 'local'. (Global tags are defined by .hgtags across all | |
103 | # heads, and local tags are defined in .hg/localtags.) They |
|
103 | # heads, and local tags are defined in .hg/localtags.) They | |
104 | # constitute the in-memory cache of tags. |
|
104 | # constitute the in-memory cache of tags. | |
105 | self._tags = None |
|
105 | self._tags = None | |
106 | self._tagtypes = None |
|
106 | self._tagtypes = None | |
107 |
|
107 | |||
108 | self._branchcache = None |
|
108 | self._branchcache = None | |
109 | self._branchcachetip = None |
|
109 | self._branchcachetip = None | |
110 | self.nodetagscache = None |
|
110 | self.nodetagscache = None | |
111 | self.filterpats = {} |
|
111 | self.filterpats = {} | |
112 | self._datafilters = {} |
|
112 | self._datafilters = {} | |
113 | self._transref = self._lockref = self._wlockref = None |
|
113 | self._transref = self._lockref = self._wlockref = None | |

    def _applyrequirements(self, requirements):
        self.requirements = requirements
        self.sopener.options = {}
        if 'parentdelta' in requirements:
            self.sopener.options['parentdelta'] = 1

    def _writerequirements(self):
        reqfile = self.opener("requires", "w")
        for r in self.requirements:
            reqfile.write("%s\n" % r)
        reqfile.close()

    def _checknested(self, path):
        """Determine if path is a legal nested repository."""
        if not path.startswith(self.root):
            return False
        subpath = path[len(self.root) + 1:]

        # XXX: Checking against the current working copy is wrong in
        # the sense that it can reject things like
        #
        #   $ hg cat -r 10 sub/x.txt
        #
        # if sub/ is no longer a subrepository in the working copy
        # parent revision.
        #
        # However, it can of course also allow things that would have
        # been rejected before, such as the above cat command if sub/
        # is a subrepository now, but was a normal directory before.
        # The old path auditor would have rejected by mistake since it
        # panics when it sees sub/.hg/.
        #
        # All in all, checking against the working copy seems sensible
        # since we want to prevent access to nested repositories on
        # the filesystem *now*.
        ctx = self[None]
        parts = util.splitpath(subpath)
        while parts:
            prefix = os.sep.join(parts)
            if prefix in ctx.substate:
                if prefix == subpath:
                    return True
                else:
                    sub = ctx.sub(prefix)
                    return sub.checknested(subpath[len(prefix) + 1:])
            else:
                parts.pop()
        return False

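The prefix walk in `_checknested` above tries the longest candidate prefix first and pops trailing components until it finds one registered as a subrepository in the working context's `substate`. A simplified standalone sketch of that walk (the `checknested` helper name, the `substate` set, and the paths are hypothetical; `'/'` stands in for `os.sep`, and the real method additionally recurses into the matched subrepo):

```python
# Longest-prefix walk over path components, as in _checknested:
# try the whole relative path first, then drop trailing components.
def checknested(subpath, substate):
    """Return the registered prefix governing subpath, or None."""
    parts = subpath.split('/')          # stand-in for util.splitpath
    while parts:
        prefix = '/'.join(parts)
        if prefix in substate:
            return prefix               # found the governing subrepo
        parts.pop()                     # retry with a shorter prefix
    return None

# Hypothetical substate: two registered subrepositories.
substate = {'sub', 'sub/nested'}
```

The real code returns True/False and delegates deeper paths to `sub.checknested`; this sketch only reports which registered prefix governs the path.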
    @util.propertycache
    def _bookmarks(self):
        return bookmarks.read(self)

    @util.propertycache
    def _bookmarkcurrent(self):
        return bookmarks.readcurrent(self)

    @propertycache
    def changelog(self):
        c = changelog.changelog(self.sopener)
        if 'HG_PENDING' in os.environ:
            p = os.environ['HG_PENDING']
            if p.startswith(self.root):
                c.readpending('00changelog.i.a')
        self.sopener.options['defversion'] = c.version
        return c

    @propertycache
    def manifest(self):
        return manifest.manifest(self.sopener)

    @propertycache
    def dirstate(self):
        warned = [0]
        def validate(node):
            try:
                r = self.changelog.rev(node)
                return node
            except error.LookupError:
                if not warned[0]:
                    warned[0] = True
                    self.ui.warn(_("warning: ignoring unknown"
                                   " working parent %s!\n") % short(node))
                return nullid

        return dirstate.dirstate(self.opener, self.ui, self.root, validate)

    def __getitem__(self, changeid):
        if changeid is None:
            return context.workingctx(self)
        return context.changectx(self, changeid)

    def __contains__(self, changeid):
        try:
            return bool(self.lookup(changeid))
        except error.RepoLookupError:
            return False

    def __nonzero__(self):
        return True

    def __len__(self):
        return len(self.changelog)

    def __iter__(self):
        for i in xrange(len(self)):
            yield i

    def url(self):
        return 'file:' + self.root

    def hook(self, name, throw=False, **args):
        return hook.hook(self.ui, self, name, throw, **args)

    tag_disallowed = ':\r\n'

    def _tag(self, names, node, message, local, user, date, extra={}):
        if isinstance(names, str):
            allchars = names
            names = (names,)
        else:
            allchars = ''.join(names)
        for c in self.tag_disallowed:
            if c in allchars:
                raise util.Abort(_('%r cannot be used in a tag name') % c)

        branches = self.branchmap()
        for name in names:
            self.hook('pretag', throw=True, node=hex(node), tag=name,
                      local=local)
            if name in branches:
                self.ui.warn(_("warning: tag %s conflicts with existing"
                               " branch name\n") % name)

        def writetags(fp, names, munge, prevtags):
            fp.seek(0, 2)
            if prevtags and prevtags[-1] != '\n':
                fp.write('\n')
            for name in names:
                m = munge and munge(name) or name
                if self._tagtypes and name in self._tagtypes:
                    old = self._tags.get(name, nullid)
                    fp.write('%s %s\n' % (hex(old), m))
                fp.write('%s %s\n' % (hex(node), m))
            fp.close()

        prevtags = ''
        if local:
            try:
                fp = self.opener('localtags', 'r+')
            except IOError:
                fp = self.opener('localtags', 'a')
            else:
                prevtags = fp.read()

            # local tags are stored in the current charset
            writetags(fp, names, None, prevtags)
            for name in names:
                self.hook('tag', node=hex(node), tag=name, local=local)
            return

        try:
            fp = self.wfile('.hgtags', 'rb+')
        except IOError:
            fp = self.wfile('.hgtags', 'ab')
        else:
            prevtags = fp.read()

        # committed tags are stored in UTF-8
        writetags(fp, names, encoding.fromlocal, prevtags)

        fp.close()

        if '.hgtags' not in self.dirstate:
            self[None].add(['.hgtags'])

        m = matchmod.exact(self.root, '', ['.hgtags'])
        tagnode = self.commit(message, user, date, extra=extra, match=m)

        for name in names:
            self.hook('tag', node=hex(node), tag=name, local=local)

        return tagnode

    def tag(self, names, node, message, local, user, date):
        '''tag a revision with one or more symbolic names.

        names is a list of strings or, when adding a single tag, names may be a
        string.

        if local is True, the tags are stored in a per-repository file.
        otherwise, they are stored in the .hgtags file, and a new
        changeset is committed with the change.

        keyword arguments:

        local: whether to store tags in non-version-controlled file
        (default False)

        message: commit message to use if committing

        user: name of user to use if committing

        date: date tuple to use if committing'''

        if not local:
            for x in self.status()[:5]:
                if '.hgtags' in x:
                    raise util.Abort(_('working copy of .hgtags is changed '
                                       '(please commit .hgtags manually)'))

        self.tags() # instantiate the cache
        self._tag(names, node, message, local, user, date)

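`writetags` inside `_tag` above appends one `<hex-node> <tag-name>` line per tag, and when the tag already exists it first records the tag's previous node so tag history is preserved in `.hgtags`. A hedged sketch of just that line format (the `writetags_lines` helper and its arguments are hypothetical stand-ins for the repository state the real closure reads from `self`):

```python
from binascii import hexlify

NULLID = b'\0' * 20                     # Mercurial's null node

def writetags_lines(names, node, tagtypes, tags):
    """Render the lines writetags would append for the given tag names."""
    lines = []
    for name in names:
        if name in tagtypes:
            # existing tag: record its old node first so history is kept
            old = tags.get(name, NULLID)
            lines.append('%s %s' % (hexlify(old).decode('ascii'), name))
        lines.append('%s %s' % (hexlify(node).decode('ascii'), name))
    return lines
```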
    def tags(self):
        '''return a mapping of tag to node'''
        if self._tags is None:
            (self._tags, self._tagtypes) = self._findtags()

        return self._tags

    def _findtags(self):
        '''Do the hard work of finding tags. Return a pair of dicts
        (tags, tagtypes) where tags maps tag name to node, and tagtypes
        maps tag name to a string like \'global\' or \'local\'.
        Subclasses or extensions are free to add their own tags, but
        should be aware that the returned dicts will be retained for the
        duration of the localrepo object.'''

        # XXX what tagtype should subclasses/extensions use? Currently
        # mq and bookmarks add tags, but do not set the tagtype at all.
        # Should each extension invent its own tag type? Should there
        # be one tagtype for all such "virtual" tags? Or is the status
        # quo fine?

        alltags = {} # map tag name to (node, hist)
        tagtypes = {}

        tagsmod.findglobaltags(self.ui, self, alltags, tagtypes)
        tagsmod.readlocaltags(self.ui, self, alltags, tagtypes)

        # Build the return dicts. Have to re-encode tag names because
        # the tags module always uses UTF-8 (in order not to lose info
        # writing to the cache), but the rest of Mercurial wants them in
        # local encoding.
        tags = {}
        for (name, (node, hist)) in alltags.iteritems():
            if node != nullid:
                tags[encoding.tolocal(name)] = node
        tags['tip'] = self.changelog.tip()
        tagtypes = dict([(encoding.tolocal(name), value)
                         for (name, value) in tagtypes.iteritems()])
        return (tags, tagtypes)

    def tagtype(self, tagname):
        '''
        return the type of the given tag. result can be:

        'local'  : a local tag
        'global' : a global tag
        None     : tag does not exist
        '''

        self.tags()

        return self._tagtypes.get(tagname)

    def tagslist(self):
        '''return a list of tags ordered by revision'''
        l = []
        for t, n in self.tags().iteritems():
            try:
                r = self.changelog.rev(n)
            except:
                r = -2 # sort to the beginning of the list if unknown
            l.append((r, t, n))
        return [(t, n) for r, t, n in sorted(l)]

    def nodetags(self, node):
        '''return the tags associated with a node'''
        if not self.nodetagscache:
            self.nodetagscache = {}
            for t, n in self.tags().iteritems():
                self.nodetagscache.setdefault(n, []).append(t)
            for tags in self.nodetagscache.itervalues():
                tags.sort()
        return self.nodetagscache.get(node, [])

    def nodebookmarks(self, node):
        marks = []
        for bookmark, n in self._bookmarks.iteritems():
            if n == node:
                marks.append(bookmark)
        return sorted(marks)

    def _branchtags(self, partial, lrev):
        # TODO: rename this function?
        tiprev = len(self) - 1
        if lrev != tiprev:
            ctxgen = (self[r] for r in xrange(lrev + 1, tiprev + 1))
            self._updatebranchcache(partial, ctxgen)
            self._writebranchcache(partial, self.changelog.tip(), tiprev)

        return partial

    def updatebranchcache(self):
        tip = self.changelog.tip()
        if self._branchcache is not None and self._branchcachetip == tip:
            return self._branchcache

        oldtip = self._branchcachetip
        self._branchcachetip = tip
        if oldtip is None or oldtip not in self.changelog.nodemap:
            partial, last, lrev = self._readbranchcache()
        else:
            lrev = self.changelog.rev(oldtip)
            partial = self._branchcache

        self._branchtags(partial, lrev)
        # this private cache holds all heads (not just tips)
        self._branchcache = partial

    def branchmap(self):
        '''returns a dictionary {branch: [branchheads]}'''
        self.updatebranchcache()
        return self._branchcache

    def branchtags(self):
        '''return a dict where branch names map to the tipmost head of
        the branch, open heads come before closed'''
        bt = {}
        for bn, heads in self.branchmap().iteritems():
            tip = heads[-1]
            for h in reversed(heads):
                if 'close' not in self.changelog.read(h)[5]:
                    tip = h
                    break
            bt[bn] = tip
        return bt

    def _readbranchcache(self):
        partial = {}
        try:
            f = self.opener("cache/branchheads")
            lines = f.read().split('\n')
            f.close()
        except (IOError, OSError):
            return {}, nullid, nullrev

        try:
            last, lrev = lines.pop(0).split(" ", 1)
            last, lrev = bin(last), int(lrev)
            if lrev >= len(self) or self[lrev].node() != last:
                # invalidate the cache
                raise ValueError('invalidating branch cache (tip differs)')
            for l in lines:
                if not l:
                    continue
                node, label = l.split(" ", 1)
                label = encoding.tolocal(label.strip())
                partial.setdefault(label, []).append(bin(node))
        except KeyboardInterrupt:
            raise
        except Exception, inst:
            if self.ui.debugflag:
                self.ui.warn(str(inst), '\n')
            partial, last, lrev = {}, nullid, nullrev
        return partial, last, lrev

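`_readbranchcache` above parses the `cache/branchheads` file, whose first line is `<tip-hex> <tip-rev>` and whose remaining lines each hold `<node-hex> <branch-name>`. A standalone sketch of that parse, without the tip validation and binary conversion the real method does (`parsebranchheads` is a hypothetical name; the real code converts with `bin()` and re-encodes labels via `encoding.tolocal`):

```python
def parsebranchheads(text):
    """Parse the cache/branchheads format: first line '<tiphex> <tiprev>',
    then one '<nodehex> <branchname>' line per recorded head."""
    lines = text.split('\n')
    tiphex, tiprev = lines.pop(0).split(' ', 1)
    heads = {}
    for line in lines:
        if not line:
            continue                     # tolerate the trailing newline
        nodehex, label = line.split(' ', 1)
        heads.setdefault(label.strip(), []).append(nodehex)
    return tiphex, int(tiprev), heads
```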
    def _writebranchcache(self, branches, tip, tiprev):
        try:
            f = self.opener("cache/branchheads", "w", atomictemp=True)
            f.write("%s %s\n" % (hex(tip), tiprev))
            for label, nodes in branches.iteritems():
                for node in nodes:
                    f.write("%s %s\n" % (hex(node), encoding.fromlocal(label)))
            f.rename()
        except (IOError, OSError):
            pass

    def _updatebranchcache(self, partial, ctxgen):
        # collect new branch entries
        newbranches = {}
        for c in ctxgen:
            newbranches.setdefault(c.branch(), []).append(c.node())
        # if older branchheads are reachable from new ones, they aren't
        # really branchheads. Note checking parents is insufficient:
        # 1 (branch a) -> 2 (branch b) -> 3 (branch a)
        for branch, newnodes in newbranches.iteritems():
            bheads = partial.setdefault(branch, [])
            bheads.extend(newnodes)
            if len(bheads) <= 1:
                continue
            # starting from tip means fewer passes over reachable
            while newnodes:
                latest = newnodes.pop()
                if latest not in bheads:
                    continue
                minbhrev = self[min([self[bh].rev() for bh in bheads])].node()
                reachable = self.changelog.reachable(latest, minbhrev)
                reachable.remove(latest)
                bheads = [b for b in bheads if b not in reachable]
            partial[branch] = bheads

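The pruning loop in `_updatebranchcache` above drops candidate heads that are ancestors of (reachable from) a newer candidate on the same branch, since those are no longer heads. A simplified standalone sketch over a plain child-to-parents map (the `prunedheads` helper and the DAG encoding are hypothetical; the real code bounds the walk with `changelog.reachable` and the minimum head revision instead of walking to the roots):

```python
def prunedheads(parents, bheads, newnodes):
    """Drop candidate heads that are ancestors of a newer candidate,
    mirroring the reachability pruning in _updatebranchcache."""
    def ancestors(node):
        # walk the parent links to collect every strict ancestor
        seen, stack = set(), [node]
        while stack:
            n = stack.pop()
            for p in parents.get(n, ()):
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen
    bheads = list(bheads)
    for latest in reversed(newnodes):    # newest candidates first
        if latest not in bheads:
            continue
        reach = ancestors(latest)
        bheads = [b for b in bheads if b not in reach]
    return bheads
```

With a linear history 1 -> 2 -> 3 and new node 3, the old head 1 is pruned; two sibling children of a common parent both survive as heads.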
    def lookup(self, key):
        if isinstance(key, int):
            return self.changelog.node(key)
        elif key == '.':
            return self.dirstate.parents()[0]
        elif key == 'null':
            return nullid
        elif key == 'tip':
            return self.changelog.tip()
        n = self.changelog._match(key)
        if n:
            return n
        if key in self._bookmarks:
            return self._bookmarks[key]
        if key in self.tags():
            return self.tags()[key]
        if key in self.branchtags():
            return self.branchtags()[key]
        n = self.changelog._partialmatch(key)
        if n:
            return n

        # can't find key, check if it might have come from damaged dirstate
        if key in self.dirstate.parents():
            raise error.Abort(_("working directory has unknown parent '%s'!")
                              % short(key))
        try:
            if len(key) == 20:
                key = hex(key)
        except:
            pass
        raise error.RepoLookupError(_("unknown revision '%s'") % key)

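`lookup` above resolves a key by trying interpretations in a fixed priority order: integer revision, `.`, `null`, `tip`, exact node, bookmarks, tags, branches, then a partial node match; the first hit wins, so for example a bookmark shadows a tag of the same name. A minimal sketch of that first-match chain (the resolver callables and the plain `LookupError` are stand-ins for the real resolvers and `error.RepoLookupError`):

```python
def lookup(key, resolvers):
    """Try each resolver in priority order; the first non-None answer
    wins, mirroring lookup()'s ordering (bookmarks before tags before
    branches before partial matches)."""
    for resolve in resolvers:
        n = resolve(key)
        if n is not None:
            return n
    raise LookupError("unknown revision '%s'" % key)
```

Because `dict.get` returns None on a miss, plain dictionaries make convenient resolvers for a demonstration: putting the bookmark map ahead of the tag map reproduces the shadowing behavior.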
    def lookupbranch(self, key, remote=None):
        repo = remote or self
        if key in repo.branchmap():
            return key

        repo = (remote and remote.local()) and remote or self
        return repo[key].branch()

    def local(self):
        return True

    def join(self, f):
        return os.path.join(self.path, f)

    def wjoin(self, f):
        return os.path.join(self.root, f)

    def file(self, f):
        if f[0] == '/':
            f = f[1:]
        return filelog.filelog(self.sopener, f)

    def changectx(self, changeid):
        return self[changeid]

    def parents(self, changeid=None):
        '''get list of changectxs for parents of changeid'''
        return self[changeid].parents()

    def filectx(self, path, changeid=None, fileid=None):
        """changeid can be a changeset revision, node, or tag.
        fileid can be a file revision or node."""
        return context.filectx(self, path, changeid, fileid)

    def getcwd(self):
        return self.dirstate.getcwd()

    def pathto(self, f, cwd=None):
        return self.dirstate.pathto(f, cwd)

    def wfile(self, f, mode='r'):
        return self.wopener(f, mode)

    def _link(self, f):
        return os.path.islink(self.wjoin(f))

    def _loadfilter(self, filter):
        if filter not in self.filterpats:
            l = []
            for pat, cmd in self.ui.configitems(filter):
                if cmd == '!':
                    continue
                mf = matchmod.match(self.root, '', [pat])
                fn = None
                params = cmd
                for name, filterfn in self._datafilters.iteritems():
                    if cmd.startswith(name):
                        fn = filterfn
                        params = cmd[len(name):].lstrip()
                        break
                if not fn:
                    fn = lambda s, c, **kwargs: util.filter(s, c)
                # Wrap old filters not supporting keyword arguments
                if not inspect.getargspec(fn)[2]:
                    oldfn = fn
                    fn = lambda s, c, **kwargs: oldfn(s, c)
                l.append((mf, fn, params))
            self.filterpats[filter] = l
        return self.filterpats[filter]

    def _filter(self, filterpats, filename, data):
622 | def _filter(self, filterpats, filename, data): | |
623 | for mf, fn, cmd in filterpats: |
|
623 | for mf, fn, cmd in filterpats: | |
624 | if mf(filename): |
|
624 | if mf(filename): | |
625 | self.ui.debug("filtering %s through %s\n" % (filename, cmd)) |
|
625 | self.ui.debug("filtering %s through %s\n" % (filename, cmd)) | |
626 | data = fn(data, cmd, ui=self.ui, repo=self, filename=filename) |
|
626 | data = fn(data, cmd, ui=self.ui, repo=self, filename=filename) | |
627 | break |
|
627 | break | |
628 |
|
628 | |||
629 | return data |
|
629 | return data | |
630 |
|
630 | |||
631 | @propertycache |
|
631 | @propertycache | |
632 | def _encodefilterpats(self): |
|
632 | def _encodefilterpats(self): | |
633 | return self._loadfilter('encode') |
|
633 | return self._loadfilter('encode') | |
634 |
|
634 | |||
635 | @propertycache |
|
635 | @propertycache | |
636 | def _decodefilterpats(self): |
|
636 | def _decodefilterpats(self): | |
637 | return self._loadfilter('decode') |
|
637 | return self._loadfilter('decode') | |
638 |
|
638 | |||
639 | def adddatafilter(self, name, filter): |
|
639 | def adddatafilter(self, name, filter): | |
640 | self._datafilters[name] = filter |
|
640 | self._datafilters[name] = filter | |
641 |
|
641 | |||
642 | def wread(self, filename): |
|
642 | def wread(self, filename): | |
643 | if self._link(filename): |
|
643 | if self._link(filename): | |
644 | data = os.readlink(self.wjoin(filename)) |
|
644 | data = os.readlink(self.wjoin(filename)) | |
645 | else: |
|
645 | else: | |
646 | data = self.wopener(filename, 'r').read() |
|
646 | data = self.wopener(filename, 'r').read() | |
647 | return self._filter(self._encodefilterpats, filename, data) |
|
647 | return self._filter(self._encodefilterpats, filename, data) | |
648 |
|
648 | |||
649 | def wwrite(self, filename, data, flags): |
|
649 | def wwrite(self, filename, data, flags): | |
650 | data = self._filter(self._decodefilterpats, filename, data) |
|
650 | data = self._filter(self._decodefilterpats, filename, data) | |
651 | if 'l' in flags: |
|
651 | if 'l' in flags: | |
652 | self.wopener.symlink(data, filename) |
|
652 | self.wopener.symlink(data, filename) | |
653 | else: |
|
653 | else: | |
654 | self.wopener(filename, 'w').write(data) |
|
654 | self.wopener(filename, 'w').write(data) | |
655 | if 'x' in flags: |
|
655 | if 'x' in flags: | |
656 | util.set_flags(self.wjoin(filename), False, True) |
|
656 | util.set_flags(self.wjoin(filename), False, True) | |
657 |
|
657 | |||
658 | def wwritedata(self, filename, data): |
|
658 | def wwritedata(self, filename, data): | |
659 | return self._filter(self._decodefilterpats, filename, data) |
|
659 | return self._filter(self._decodefilterpats, filename, data) | |
660 |
|
660 | |||
    def transaction(self, desc):
        tr = self._transref and self._transref() or None
        if tr and tr.running():
            return tr.nest()

        # abort here if the journal already exists
        if os.path.exists(self.sjoin("journal")):
            raise error.RepoError(
                _("abandoned transaction found - run hg recover"))

        # save dirstate for rollback
        try:
            ds = self.opener("dirstate").read()
        except IOError:
            ds = ""
        self.opener("journal.dirstate", "w").write(ds)
        self.opener("journal.branch", "w").write(
            encoding.fromlocal(self.dirstate.branch()))
        self.opener("journal.desc", "w").write("%d\n%s\n" % (len(self), desc))

        renames = [(self.sjoin("journal"), self.sjoin("undo")),
                   (self.join("journal.dirstate"), self.join("undo.dirstate")),
                   (self.join("journal.branch"), self.join("undo.branch")),
                   (self.join("journal.desc"), self.join("undo.desc"))]
        tr = transaction.transaction(self.ui.warn, self.sopener,
                                     self.sjoin("journal"),
                                     aftertrans(renames),
                                     self.store.createmode)
        self._transref = weakref.ref(tr)
        return tr

    def recover(self):
        lock = self.lock()
        try:
            if os.path.exists(self.sjoin("journal")):
                self.ui.status(_("rolling back interrupted transaction\n"))
                transaction.rollback(self.sopener, self.sjoin("journal"),
                                     self.ui.warn)
                self.invalidate()
                return True
            else:
                self.ui.warn(_("no interrupted transaction available\n"))
                return False
        finally:
            lock.release()

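The `aftertrans(renames)` callback handed to the transaction above turns each `journal.*` file into its `undo.*` counterpart once the transaction closes, which is what `rollback` later restores from. A hypothetical, simplified sketch of that rename step (`after_trans` is an illustrative name, not the real helper):

```python
import os
import tempfile

# Hypothetical sketch of the journal -> undo rename pass performed when a
# transaction closes: each journal file that exists is moved aside so a
# later rollback can find and restore it.
def after_trans(renames):
    for src, dst in renames:
        if os.path.exists(src):
            os.rename(src, dst)

# demonstrate on a throwaway directory
d = tempfile.mkdtemp()
journal = os.path.join(d, 'journal.dirstate')
with open(journal, 'w') as fp:
    fp.write('saved dirstate')
after_trans([(journal, os.path.join(d, 'undo.dirstate'))])
```

Because `os.rename` is atomic on a single filesystem, a crash mid-way leaves either the journal or the undo file intact, never a half-written copy.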
    def rollback(self, dryrun=False):
        wlock = lock = None
        try:
            wlock = self.wlock()
            lock = self.lock()
            if os.path.exists(self.sjoin("undo")):
                try:
                    args = self.opener("undo.desc", "r").read().splitlines()
                    if len(args) >= 3 and self.ui.verbose:
                        desc = _("rolling back to revision %s"
                                 " (undo %s: %s)\n") % (
                                 int(args[0]) - 1, args[1], args[2])
                    elif len(args) >= 2:
                        desc = _("rolling back to revision %s (undo %s)\n") % (
                               int(args[0]) - 1, args[1])
                except IOError:
                    desc = _("rolling back unknown transaction\n")
                self.ui.status(desc)
                if dryrun:
                    return
                transaction.rollback(self.sopener, self.sjoin("undo"),
                                     self.ui.warn)
                util.rename(self.join("undo.dirstate"), self.join("dirstate"))
                if os.path.exists(self.join('undo.bookmarks')):
                    util.rename(self.join('undo.bookmarks'),
                                self.join('bookmarks'))
                try:
                    branch = self.opener("undo.branch").read()
                    self.dirstate.setbranch(branch)
                except IOError:
                    self.ui.warn(_("Named branch could not be reset, "
                                   "current branch still is: %s\n")
                                 % self.dirstate.branch())
                self.invalidate()
                self.dirstate.invalidate()
                self.destroyed()
            else:
                self.ui.warn(_("no rollback information available\n"))
                return 1
        finally:
            release(lock, wlock)

    def invalidatecaches(self):
        self._tags = None
        self._tagtypes = None
        self.nodetagscache = None
        self._branchcache = None # in UTF-8
        self._branchcachetip = None

    def invalidate(self):
        for a in ("changelog", "manifest", "_bookmarks", "_bookmarkscurrent"):
            if a in self.__dict__:
                delattr(self, a)
        self.invalidatecaches()

    def _lock(self, lockname, wait, releasefn, acquirefn, desc):
        try:
            l = lock.lock(lockname, 0, releasefn, desc=desc)
        except error.LockHeld, inst:
            if not wait:
                raise
            self.ui.warn(_("waiting for lock on %s held by %r\n") %
                         (desc, inst.locker))
            # default to 600 seconds timeout
            l = lock.lock(lockname, int(self.ui.config("ui", "timeout", "600")),
                          releasefn, desc=desc)
        if acquirefn:
            acquirefn()
        return l

    def lock(self, wait=True):
        '''Lock the repository store (.hg/store) and return a weak reference
        to the lock. Use this before modifying the store (e.g. committing or
        stripping). If you are opening a transaction, get a lock as well.)'''
        l = self._lockref and self._lockref()
        if l is not None and l.held:
            l.lock()
            return l

        l = self._lock(self.sjoin("lock"), wait, self.store.write,
                       self.invalidate, _('repository %s') % self.origroot)
        self._lockref = weakref.ref(l)
        return l

    def wlock(self, wait=True):
        '''Lock the non-store parts of the repository (everything under
        .hg except .hg/store) and return a weak reference to the lock.
        Use this before modifying files in .hg.'''
        l = self._wlockref and self._wlockref()
        if l is not None and l.held:
            l.lock()
            return l

        l = self._lock(self.join("wlock"), wait, self.dirstate.write,
                       self.dirstate.invalidate, _('working directory of %s') %
                       self.origroot)
        self._wlockref = weakref.ref(l)
        return l

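Both `lock()` and `wlock()` cache the lock through a `weakref.ref` rather than a strong reference: while any caller still holds the lock object, re-entrant calls bump its count and reuse the same instance; once the last holder drops it, the weakref dies and the next call acquires fresh. A hypothetical, stripped-down sketch of that pattern (class names are illustrative, not the real `lock.lock`):

```python
import weakref

# Hypothetical sketch of the weakref-cached, re-entrant lock handout used
# by lock()/wlock() above.
class Lock(object):
    def __init__(self):
        self.held = 1        # acquisition count

    def lock(self):
        self.held += 1       # re-entrant acquire on the same instance

class Repo(object):
    _lockref = None

    def lock(self):
        # hand out the existing lock if some caller still holds it alive
        l = self._lockref and self._lockref()
        if l is not None and l.held:
            l.lock()
            return l
        # otherwise acquire anew, caching only a weak reference
        l = Lock()
        self._lockref = weakref.ref(l)
        return l

repo = Repo()
l1 = repo.lock()
l2 = repo.lock()
```

The weak reference is the point: the repository never keeps the lock alive on its own, so forgetting to release it in one code path cannot pin the lock forever.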
    def _filecommit(self, fctx, manifest1, manifest2, linkrev, tr, changelist):
        """
        commit an individual file as part of a larger transaction
        """

        fname = fctx.path()
        text = fctx.data()
        flog = self.file(fname)
        fparent1 = manifest1.get(fname, nullid)
        fparent2 = fparent2o = manifest2.get(fname, nullid)

        meta = {}
        copy = fctx.renamed()
        if copy and copy[0] != fname:
            # Mark the new revision of this file as a copy of another
            # file.  This copy data will effectively act as a parent
            # of this new revision.  If this is a merge, the first
            # parent will be the nullid (meaning "look up the copy data")
            # and the second one will be the other parent.  For example:
            #
            # 0 --- 1 --- 3   rev1 changes file foo
            #   \       /     rev2 renames foo to bar and changes it
            #    \- 2 -/      rev3 should have bar with all changes and
            #                      should record that bar descends from
            #                      bar in rev2 and foo in rev1
            #
            # this allows this merge to succeed:
            #
            # 0 --- 1 --- 3   rev4 reverts the content change from rev2
            #   \       /     merging rev3 and rev4 should use bar@rev2
            #    \- 2 --- 4        as the merge base
            #

            cfname = copy[0]
            crev = manifest1.get(cfname)
            newfparent = fparent2

            if manifest2: # branch merge
                if fparent2 == nullid or crev is None: # copied on remote side
                    if cfname in manifest2:
                        crev = manifest2[cfname]
                        newfparent = fparent1

            # find source in nearest ancestor if we've lost track
            if not crev:
                self.ui.debug(" %s: searching for copy revision for %s\n" %
                              (fname, cfname))
                for ancestor in self[None].ancestors():
                    if cfname in ancestor:
                        crev = ancestor[cfname].filenode()
                        break

            if crev:
                self.ui.debug(" %s: copy %s:%s\n" % (fname, cfname, hex(crev)))
                meta["copy"] = cfname
                meta["copyrev"] = hex(crev)
                fparent1, fparent2 = nullid, newfparent
            else:
                self.ui.warn(_("warning: can't find ancestor for '%s' "
                               "copied from '%s'!\n") % (fname, cfname))

        elif fparent2 != nullid:
            # is one parent an ancestor of the other?
            fparentancestor = flog.ancestor(fparent1, fparent2)
            if fparentancestor == fparent1:
                fparent1, fparent2 = fparent2, nullid
            elif fparentancestor == fparent2:
                fparent2 = nullid

        # is the file changed?
        if fparent2 != nullid or flog.cmp(fparent1, text) or meta:
            changelist.append(fname)
            return flog.add(text, meta, tr, linkrev, fparent1, fparent2)

        # are just the flags changed during merge?
        if fparent1 != fparent2o and manifest1.flags(fname) != fctx.flags():
            changelist.append(fname)

        return fparent1

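The rename bookkeeping in `_filecommit` boils down to attaching the source filename and its revision hash to the new file revision's metadata. A hypothetical, much-simplified sketch of just that step (`copy_meta` is an illustrative helper, not part of this module):

```python
# Hypothetical sketch of the meta["copy"]/meta["copyrev"] bookkeeping in
# _filecommit above: only a genuine rename (a different source name with a
# known source revision) produces copy metadata.
def copy_meta(fname, copy_source, source_rev_hex):
    meta = {}
    if copy_source and copy_source != fname and source_rev_hex:
        meta["copy"] = copy_source
        meta["copyrev"] = source_rev_hex
    return meta

m = copy_meta('bar', 'foo', 'a' * 40)
```

The real method additionally walks ancestors to recover a lost source revision and rewires the filelog parents (`fparent1` becomes `nullid`) so readers know to consult the copy metadata.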
    def commit(self, text="", user=None, date=None, match=None, force=False,
               editor=False, extra={}):
        """Add a new revision to current repository.

        Revision information is gathered from the working directory,
        match can be used to filter the committed files. If editor is
        supplied, it is called to get a commit message.
        """

        def fail(f, msg):
            raise util.Abort('%s: %s' % (f, msg))

        if not match:
            match = matchmod.always(self.root, '')

        if not force:
            vdirs = []
            match.dir = vdirs.append
            match.bad = fail

        wlock = self.wlock()
        try:
            wctx = self[None]
            merge = len(wctx.parents()) > 1

            if (not force and merge and match and
                (match.files() or match.anypats())):
                raise util.Abort(_('cannot partially commit a merge '
                                   '(do not specify files or patterns)'))

            changes = self.status(match=match, clean=force)
            if force:
                changes[0].extend(changes[6]) # mq may commit unchanged files

            # check subrepos
            subs = []
            removedsubs = set()
            for p in wctx.parents():
                removedsubs.update(s for s in p.substate if match(s))
            for s in wctx.substate:
                removedsubs.discard(s)
                if match(s) and wctx.sub(s).dirty():
                    subs.append(s)
            if (subs or removedsubs):
                if (not match('.hgsub') and
                    '.hgsub' in (wctx.modified() + wctx.added())):
                    raise util.Abort(_("can't commit subrepos without .hgsub"))
                if '.hgsubstate' not in changes[0]:
                    changes[0].insert(0, '.hgsubstate')

            if subs and not self.ui.configbool('ui', 'commitsubrepos', True):
                changedsubs = [s for s in subs if wctx.sub(s).dirty(True)]
                if changedsubs:
                    raise util.Abort(_("uncommitted changes in subrepo %s")
                                     % changedsubs[0])

            # make sure all explicit patterns are matched
            if not force and match.files():
                matched = set(changes[0] + changes[1] + changes[2])

                for f in match.files():
                    if f == '.' or f in matched or f in wctx.substate:
                        continue
                    if f in changes[3]: # missing
                        fail(f, _('file not found!'))
                    if f in vdirs: # visited directory
                        d = f + '/'
                        for mf in matched:
                            if mf.startswith(d):
                                break
                        else:
                            fail(f, _("no match under directory!"))
                    elif f not in self.dirstate:
                        fail(f, _("file not tracked!"))

            if (not force and not extra.get("close") and not merge
                and not (changes[0] or changes[1] or changes[2])
                and wctx.branch() == wctx.p1().branch()):
                return None

            ms = mergemod.mergestate(self)
            for f in changes[0]:
                if f in ms and ms[f] == 'u':
                    raise util.Abort(_("unresolved merge conflicts "
                                       "(see hg resolve)"))

            cctx = context.workingctx(self, text, user, date, extra, changes)
            if editor:
                cctx._text = editor(self, cctx, subs)
            edited = (text != cctx._text)

            # commit subs
            if subs or removedsubs:
                state = wctx.substate.copy()
                for s in sorted(subs):
                    sub = wctx.sub(s)
                    self.ui.status(_('committing subrepository %s\n') %
                                   subrepo.subrelpath(sub))
                    sr = sub.commit(cctx._text, user, date)
                    state[s] = (state[s][0], sr)
                subrepo.writestate(self, state)

            # Save commit message in case this transaction gets rolled back
            # (e.g. by a pretxncommit hook).  Leave the content alone on
            # the assumption that the user will use the same editor again.
            msgfile = self.opener('last-message.txt', 'wb')
            msgfile.write(cctx._text)
            msgfile.close()

            p1, p2 = self.dirstate.parents()
            hookp1, hookp2 = hex(p1), (p2 != nullid and hex(p2) or '')
            try:
                self.hook("precommit", throw=True, parent1=hookp1,
                          parent2=hookp2)
                ret = self.commitctx(cctx, True)
            except:
                if edited:
                    msgfn = self.pathto(msgfile.name[len(self.root)+1:])
                    self.ui.write(
                        _('note: commit message saved in %s\n') % msgfn)
                raise

            # update bookmarks, dirstate and mergestate
            parents = (p1, p2)
            if p2 == nullid:
                parents = (p1,)
            bookmarks.update(self, parents, ret)
            for f in changes[0] + changes[1]:
                self.dirstate.normal(f)
            for f in changes[2]:
                self.dirstate.forget(f)
            self.dirstate.setparents(ret)
            ms.reset()
        finally:
            wlock.release()

        self.hook("commit", node=hex(ret), parent1=hookp1, parent2=hookp2)
        return ret

1018 | def commitctx(self, ctx, error=False): |
|
1024 | def commitctx(self, ctx, error=False): | |
1019 | """Add a new revision to current repository. |
|
1025 | """Add a new revision to current repository. | |
1020 | Revision information is passed via the context argument. |
|
1026 | Revision information is passed via the context argument. | |
1021 | """ |
|
1027 | """ | |
1022 |
|
1028 | |||
1023 | tr = lock = None |
|
1029 | tr = lock = None | |
1024 | removed = list(ctx.removed()) |
|
1030 | removed = list(ctx.removed()) | |
1025 | p1, p2 = ctx.p1(), ctx.p2() |
|
1031 | p1, p2 = ctx.p1(), ctx.p2() | |
1026 | m1 = p1.manifest().copy() |
|
1032 | m1 = p1.manifest().copy() | |
1027 | m2 = p2.manifest() |
|
1033 | m2 = p2.manifest() | |
1028 | user = ctx.user() |
|
1034 | user = ctx.user() | |
1029 |
|
1035 | |||
1030 | lock = self.lock() |
|
1036 | lock = self.lock() | |
1031 | try: |
|
1037 | try: | |
1032 | tr = self.transaction("commit") |
|
1038 | tr = self.transaction("commit") | |
1033 | trp = weakref.proxy(tr) |
|
1039 | trp = weakref.proxy(tr) | |
1034 |
|
1040 | |||
1035 | # check in files |
|
1041 | # check in files | |
1036 | new = {} |
|
1042 | new = {} | |
1037 | changed = [] |
|
1043 | changed = [] | |
1038 | linkrev = len(self) |
|
1044 | linkrev = len(self) | |
1039 | for f in sorted(ctx.modified() + ctx.added()): |
|
1045 | for f in sorted(ctx.modified() + ctx.added()): | |
1040 | self.ui.note(f + "\n") |
|
1046 | self.ui.note(f + "\n") | |
1041 | try: |
|
1047 | try: | |
1042 | fctx = ctx[f] |
|
1048 | fctx = ctx[f] | |
1043 | new[f] = self._filecommit(fctx, m1, m2, linkrev, trp, |
|
1049 | new[f] = self._filecommit(fctx, m1, m2, linkrev, trp, | |
1044 | changed) |
|
1050 | changed) | |
1045 | m1.set(f, fctx.flags()) |
|
1051 | m1.set(f, fctx.flags()) | |
1046 | except OSError, inst: |
|
1052 | except OSError, inst: | |
1047 | self.ui.warn(_("trouble committing %s!\n") % f) |
|
1053 | self.ui.warn(_("trouble committing %s!\n") % f) | |
1048 | raise |
|
1054 | raise | |
1049 | except IOError, inst: |
|
1055 | except IOError, inst: | |
1050 | errcode = getattr(inst, 'errno', errno.ENOENT) |
|
1056 | errcode = getattr(inst, 'errno', errno.ENOENT) | |
1051 | if error or errcode and errcode != errno.ENOENT: |
|
1057 | if error or errcode and errcode != errno.ENOENT: | |
1052 | self.ui.warn(_("trouble committing %s!\n") % f) |
|
1058 | self.ui.warn(_("trouble committing %s!\n") % f) | |
1053 | raise |
|
1059 | raise | |
1054 | else: |
|
1060 | else: | |
1055 | removed.append(f) |
|
1061 | removed.append(f) | |
1056 |
|
1062 | |||
1057 | # update manifest |
|
1063 | # update manifest | |
1058 | m1.update(new) |
|
1064 | m1.update(new) | |
1059 | removed = [f for f in sorted(removed) if f in m1 or f in m2] |
|
1065 | removed = [f for f in sorted(removed) if f in m1 or f in m2] | |
1060 | drop = [f for f in removed if f in m1] |
|
1066 | drop = [f for f in removed if f in m1] | |
1061 | for f in drop: |
|
1067 | for f in drop: | |
1062 | del m1[f] |
|
1068 | del m1[f] | |
1063 | mn = self.manifest.add(m1, trp, linkrev, p1.manifestnode(), |
|
1069 | mn = self.manifest.add(m1, trp, linkrev, p1.manifestnode(), | |
1064 | p2.manifestnode(), (new, drop)) |
|
1070 | p2.manifestnode(), (new, drop)) | |
1065 |
|
1071 | |||
1066 | # update changelog |
|
1072 | # update changelog | |
1067 | self.changelog.delayupdate() |
|
1073 | self.changelog.delayupdate() | |
1068 | n = self.changelog.add(mn, changed + removed, ctx.description(), |
|
1074 | n = self.changelog.add(mn, changed + removed, ctx.description(), | |
1069 | trp, p1.node(), p2.node(), |
|
1075 | trp, p1.node(), p2.node(), | |
1070 | user, ctx.date(), ctx.extra().copy()) |
|
1076 | user, ctx.date(), ctx.extra().copy()) | |
1071 | p = lambda: self.changelog.writepending() and self.root or "" |
|
1077 | p = lambda: self.changelog.writepending() and self.root or "" | |
1072 | xp1, xp2 = p1.hex(), p2 and p2.hex() or '' |
|
1078 | xp1, xp2 = p1.hex(), p2 and p2.hex() or '' | |
1073 | self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1, |
|
1079 | self.hook('pretxncommit', throw=True, node=hex(n), parent1=xp1, | |
1074 | parent2=xp2, pending=p) |
|
1080 | parent2=xp2, pending=p) | |
1075 | self.changelog.finalize(trp) |
|
1081 | self.changelog.finalize(trp) | |
1076 | tr.close() |
|
1082 | tr.close() | |
1077 |
|
1083 | |||
1078 | if self._branchcache: |
|
1084 | if self._branchcache: | |
1079 | self.updatebranchcache() |
|
1085 | self.updatebranchcache() | |
1080 | return n |
|
1086 | return n | |
1081 | finally: |
|
1087 | finally: | |
1082 | if tr: |
|
1088 | if tr: | |
1083 | tr.release() |
|
1089 | tr.release() | |
1084 | lock.release() |
|
1090 | lock.release() | |
1085 |
|
1091 | |||
1086 | def destroyed(self): |
|
1092 | def destroyed(self): | |
1087 | '''Inform the repository that nodes have been destroyed. |
|
1093 | '''Inform the repository that nodes have been destroyed. | |
1088 | Intended for use by strip and rollback, so there's a common |
|
1094 | Intended for use by strip and rollback, so there's a common | |
1089 | place for anything that has to be done after destroying history.''' |
|
1095 | place for anything that has to be done after destroying history.''' | |
1090 | # XXX it might be nice if we could take the list of destroyed |
|
1096 | # XXX it might be nice if we could take the list of destroyed | |
1091 | # nodes, but I don't see an easy way for rollback() to do that |
|
1097 | # nodes, but I don't see an easy way for rollback() to do that | |
1092 |
|
1098 | |||
1093 | # Ensure the persistent tag cache is updated. Doing it now |
|
1099 | # Ensure the persistent tag cache is updated. Doing it now | |
1094 | # means that the tag cache only has to worry about destroyed |
|
1100 | # means that the tag cache only has to worry about destroyed | |
1095 | # heads immediately after a strip/rollback. That in turn |
|
1101 | # heads immediately after a strip/rollback. That in turn | |
1096 | # guarantees that "cachetip == currenttip" (comparing both rev |
|
1102 | # guarantees that "cachetip == currenttip" (comparing both rev | |
1097 | # and node) always means no nodes have been added or destroyed. |
|
1103 | # and node) always means no nodes have been added or destroyed. | |
1098 |
|
1104 | |||
1099 | # XXX this is suboptimal when qrefresh'ing: we strip the current |
|
1105 | # XXX this is suboptimal when qrefresh'ing: we strip the current | |
1100 | # head, refresh the tag cache, then immediately add a new head. |
|
1106 | # head, refresh the tag cache, then immediately add a new head. | |
1101 | # But I think doing it this way is necessary for the "instant |
|
1107 | # But I think doing it this way is necessary for the "instant | |
1102 | # tag cache retrieval" case to work. |
|
1108 | # tag cache retrieval" case to work. | |
1103 | self.invalidatecaches() |
|
1109 | self.invalidatecaches() | |
1104 |
|
1110 | |||
1105 | def walk(self, match, node=None): |
|
1111 | def walk(self, match, node=None): | |
1106 | ''' |
|
1112 | ''' | |
1107 | walk recursively through the directory tree or a given |
|
1113 | walk recursively through the directory tree or a given | |
1108 | changeset, finding all files matched by the match |
|
1114 | changeset, finding all files matched by the match | |
1109 | function |
|
1115 | function | |
1110 | ''' |
|
1116 | ''' | |
1111 | return self[node].walk(match) |
|
1117 | return self[node].walk(match) | |
1112 |
|
1118 | |||
1113 | def status(self, node1='.', node2=None, match=None, |
|
1119 | def status(self, node1='.', node2=None, match=None, | |
1114 | ignored=False, clean=False, unknown=False, |
|
1120 | ignored=False, clean=False, unknown=False, | |
1115 | listsubrepos=False): |
|
1121 | listsubrepos=False): | |
1116 | """return status of files between two nodes or node and working directory |
|
1122 | """return status of files between two nodes or node and working directory | |
1117 |
|
1123 | |||
1118 | If node1 is None, use the first dirstate parent instead. |
|
1124 | If node1 is None, use the first dirstate parent instead. | |
1119 | If node2 is None, compare node1 with working directory. |
|
1125 | If node2 is None, compare node1 with working directory. | |
1120 | """ |
|
1126 | """ | |
1121 |
|
1127 | |||
1122 | def mfmatches(ctx): |
|
1128 | def mfmatches(ctx): | |
1123 | mf = ctx.manifest().copy() |
|
1129 | mf = ctx.manifest().copy() | |
1124 | for fn in mf.keys(): |
|
1130 | for fn in mf.keys(): | |
1125 | if not match(fn): |
|
1131 | if not match(fn): | |
1126 | del mf[fn] |
|
1132 | del mf[fn] | |
1127 | return mf |
|
1133 | return mf | |
1128 |
|
1134 | |||
1129 | if isinstance(node1, context.changectx): |
|
1135 | if isinstance(node1, context.changectx): | |
1130 | ctx1 = node1 |
|
1136 | ctx1 = node1 | |
1131 | else: |
|
1137 | else: | |
1132 | ctx1 = self[node1] |
|
1138 | ctx1 = self[node1] | |
1133 | if isinstance(node2, context.changectx): |
|
1139 | if isinstance(node2, context.changectx): | |
1134 | ctx2 = node2 |
|
1140 | ctx2 = node2 | |
1135 | else: |
|
1141 | else: | |
1136 | ctx2 = self[node2] |
|
1142 | ctx2 = self[node2] | |
1137 |
|
1143 | |||
1138 | working = ctx2.rev() is None |
|
1144 | working = ctx2.rev() is None | |
1139 | parentworking = working and ctx1 == self['.'] |
|
1145 | parentworking = working and ctx1 == self['.'] | |
1140 | match = match or matchmod.always(self.root, self.getcwd()) |
|
1146 | match = match or matchmod.always(self.root, self.getcwd()) | |
1141 | listignored, listclean, listunknown = ignored, clean, unknown |
|
1147 | listignored, listclean, listunknown = ignored, clean, unknown | |
1142 |
|
1148 | |||
1143 | # load earliest manifest first for caching reasons |
|
1149 | # load earliest manifest first for caching reasons | |
1144 | if not working and ctx2.rev() < ctx1.rev(): |
|
1150 | if not working and ctx2.rev() < ctx1.rev(): | |
1145 | ctx2.manifest() |
|
1151 | ctx2.manifest() | |
1146 |
|
1152 | |||
1147 | if not parentworking: |
|
1153 | if not parentworking: | |
1148 | def bad(f, msg): |
|
1154 | def bad(f, msg): | |
1149 | if f not in ctx1: |
|
1155 | if f not in ctx1: | |
1150 | self.ui.warn('%s: %s\n' % (self.dirstate.pathto(f), msg)) |
|
1156 | self.ui.warn('%s: %s\n' % (self.dirstate.pathto(f), msg)) | |
1151 | match.bad = bad |
|
1157 | match.bad = bad | |
1152 |
|
1158 | |||
1153 | if working: # we need to scan the working dir |
|
1159 | if working: # we need to scan the working dir | |
1154 | subrepos = [] |
|
1160 | subrepos = [] | |
1155 | if '.hgsub' in self.dirstate: |
|
1161 | if '.hgsub' in self.dirstate: | |
1156 | subrepos = ctx1.substate.keys() |
|
1162 | subrepos = ctx1.substate.keys() | |
1157 | s = self.dirstate.status(match, subrepos, listignored, |
|
1163 | s = self.dirstate.status(match, subrepos, listignored, | |
1158 | listclean, listunknown) |
|
1164 | listclean, listunknown) | |
1159 | cmp, modified, added, removed, deleted, unknown, ignored, clean = s |
|
1165 | cmp, modified, added, removed, deleted, unknown, ignored, clean = s | |
1160 |
|
1166 | |||
1161 | # check for any possibly clean files |
|
1167 | # check for any possibly clean files | |
1162 | if parentworking and cmp: |
|
1168 | if parentworking and cmp: | |
1163 | fixup = [] |
|
1169 | fixup = [] | |
1164 | # do a full compare of any files that might have changed |
|
1170 | # do a full compare of any files that might have changed | |
1165 | for f in sorted(cmp): |
|
1171 | for f in sorted(cmp): | |
1166 | if (f not in ctx1 or ctx2.flags(f) != ctx1.flags(f) |
|
1172 | if (f not in ctx1 or ctx2.flags(f) != ctx1.flags(f) | |
1167 | or ctx1[f].cmp(ctx2[f])): |
|
1173 | or ctx1[f].cmp(ctx2[f])): | |
1168 | modified.append(f) |
|
1174 | modified.append(f) | |
1169 | else: |
|
1175 | else: | |
1170 | fixup.append(f) |
|
1176 | fixup.append(f) | |
1171 |
|
1177 | |||
1172 | # update dirstate for files that are actually clean |
|
1178 | # update dirstate for files that are actually clean | |
1173 | if fixup: |
|
1179 | if fixup: | |
1174 | if listclean: |
|
1180 | if listclean: | |
1175 | clean += fixup |
|
1181 | clean += fixup | |
1176 |
|
1182 | |||
1177 | try: |
|
1183 | try: | |
1178 | # updating the dirstate is optional |
|
1184 | # updating the dirstate is optional | |
1179 | # so we don't wait on the lock |
|
1185 | # so we don't wait on the lock | |
1180 | wlock = self.wlock(False) |
|
1186 | wlock = self.wlock(False) | |
1181 | try: |
|
1187 | try: | |
1182 | for f in fixup: |
|
1188 | for f in fixup: | |
1183 | self.dirstate.normal(f) |
|
1189 | self.dirstate.normal(f) | |
1184 | finally: |
|
1190 | finally: | |
1185 | wlock.release() |
|
1191 | wlock.release() | |
1186 | except error.LockError: |
|
1192 | except error.LockError: | |
1187 | pass |
|
1193 | pass | |
1188 |
|
1194 | |||
1189 | if not parentworking: |
|
1195 | if not parentworking: | |
1190 | mf1 = mfmatches(ctx1) |
|
1196 | mf1 = mfmatches(ctx1) | |
1191 | if working: |
|
1197 | if working: | |
1192 | # we are comparing working dir against non-parent |
|
1198 | # we are comparing working dir against non-parent | |
1193 | # generate a pseudo-manifest for the working dir |
|
1199 | # generate a pseudo-manifest for the working dir | |
1194 | mf2 = mfmatches(self['.']) |
|
1200 | mf2 = mfmatches(self['.']) | |
1195 | for f in cmp + modified + added: |
|
1201 | for f in cmp + modified + added: | |
1196 | mf2[f] = None |
|
1202 | mf2[f] = None | |
1197 | mf2.set(f, ctx2.flags(f)) |
|
1203 | mf2.set(f, ctx2.flags(f)) | |
1198 | for f in removed: |
|
1204 | for f in removed: | |
1199 | if f in mf2: |
|
1205 | if f in mf2: | |
1200 | del mf2[f] |
|
1206 | del mf2[f] | |
1201 | else: |
|
1207 | else: | |
1202 | # we are comparing two revisions |
|
1208 | # we are comparing two revisions | |
1203 | deleted, unknown, ignored = [], [], [] |
|
1209 | deleted, unknown, ignored = [], [], [] | |
1204 | mf2 = mfmatches(ctx2) |
|
1210 | mf2 = mfmatches(ctx2) | |
1205 |
|
1211 | |||
1206 | modified, added, clean = [], [], [] |
|
1212 | modified, added, clean = [], [], [] | |
1207 | for fn in mf2: |
|
1213 | for fn in mf2: | |
1208 | if fn in mf1: |
|
1214 | if fn in mf1: | |
1209 | if (mf1.flags(fn) != mf2.flags(fn) or |
|
1215 | if (mf1.flags(fn) != mf2.flags(fn) or | |
1210 | (mf1[fn] != mf2[fn] and |
|
1216 | (mf1[fn] != mf2[fn] and | |
1211 | (mf2[fn] or ctx1[fn].cmp(ctx2[fn])))): |
|
1217 | (mf2[fn] or ctx1[fn].cmp(ctx2[fn])))): | |
1212 | modified.append(fn) |
|
1218 | modified.append(fn) | |
1213 | elif listclean: |
|
1219 | elif listclean: | |
1214 | clean.append(fn) |
|
1220 | clean.append(fn) | |
1215 | del mf1[fn] |
|
1221 | del mf1[fn] | |
1216 | else: |
|
1222 | else: | |
1217 | added.append(fn) |
|
1223 | added.append(fn) | |
1218 | removed = mf1.keys() |
|
1224 | removed = mf1.keys() | |
1219 |
|
1225 | |||
1220 | r = modified, added, removed, deleted, unknown, ignored, clean |
|
1226 | r = modified, added, removed, deleted, unknown, ignored, clean | |
1221 |
|
1227 | |||
1222 | if listsubrepos: |
|
1228 | if listsubrepos: | |
1223 | for subpath, sub in subrepo.itersubrepos(ctx1, ctx2): |
|
1229 | for subpath, sub in subrepo.itersubrepos(ctx1, ctx2): | |
1224 | if working: |
|
1230 | if working: | |
1225 | rev2 = None |
|
1231 | rev2 = None | |
1226 | else: |
|
1232 | else: | |
1227 | rev2 = ctx2.substate[subpath][1] |
|
1233 | rev2 = ctx2.substate[subpath][1] | |
1228 | try: |
|
1234 | try: | |
1229 | submatch = matchmod.narrowmatcher(subpath, match) |
|
1235 | submatch = matchmod.narrowmatcher(subpath, match) | |
1230 | s = sub.status(rev2, match=submatch, ignored=listignored, |
|
1236 | s = sub.status(rev2, match=submatch, ignored=listignored, | |
1231 | clean=listclean, unknown=listunknown, |
|
1237 | clean=listclean, unknown=listunknown, | |
1232 | listsubrepos=True) |
|
1238 | listsubrepos=True) | |
1233 | for rfiles, sfiles in zip(r, s): |
|
1239 | for rfiles, sfiles in zip(r, s): | |
1234 | rfiles.extend("%s/%s" % (subpath, f) for f in sfiles) |
|
1240 | rfiles.extend("%s/%s" % (subpath, f) for f in sfiles) | |
1235 | except error.LookupError: |
|
1241 | except error.LookupError: | |
1236 | self.ui.status(_("skipping missing subrepository: %s\n") |
|
1242 | self.ui.status(_("skipping missing subrepository: %s\n") | |
1237 | % subpath) |
|
1243 | % subpath) | |
1238 |
|
1244 | |||
1239 | [l.sort() for l in r] |
|
1245 | [l.sort() for l in r] | |
1240 | return r |
|
1246 | return r | |
1241 |
|
1247 | |||
1242 | def heads(self, start=None): |
|
1248 | def heads(self, start=None): | |
1243 | heads = self.changelog.heads(start) |
|
1249 | heads = self.changelog.heads(start) | |
1244 | # sort the output in rev descending order |
|
1250 | # sort the output in rev descending order | |
1245 | return sorted(heads, key=self.changelog.rev, reverse=True) |
|
1251 | return sorted(heads, key=self.changelog.rev, reverse=True) | |
1246 |
|
1252 | |||
1247 | def branchheads(self, branch=None, start=None, closed=False): |
|
1253 | def branchheads(self, branch=None, start=None, closed=False): | |
1248 | '''return a (possibly filtered) list of heads for the given branch |
|
1254 | '''return a (possibly filtered) list of heads for the given branch | |
1249 |
|
1255 | |||
1250 | Heads are returned in topological order, from newest to oldest. |
|
1256 | Heads are returned in topological order, from newest to oldest. | |
1251 | If branch is None, use the dirstate branch. |
|
1257 | If branch is None, use the dirstate branch. | |
1252 | If start is not None, return only heads reachable from start. |
|
1258 | If start is not None, return only heads reachable from start. | |
1253 | If closed is True, return heads that are marked as closed as well. |
|
1259 | If closed is True, return heads that are marked as closed as well. | |
1254 | ''' |
|
1260 | ''' | |
1255 | if branch is None: |
|
1261 | if branch is None: | |
1256 | branch = self[None].branch() |
|
1262 | branch = self[None].branch() | |
1257 | branches = self.branchmap() |
|
1263 | branches = self.branchmap() | |
1258 | if branch not in branches: |
|
1264 | if branch not in branches: | |
1259 | return [] |
|
1265 | return [] | |
1260 | # the cache returns heads ordered lowest to highest |
|
1266 | # the cache returns heads ordered lowest to highest | |
1261 | bheads = list(reversed(branches[branch])) |
|
1267 | bheads = list(reversed(branches[branch])) | |
1262 | if start is not None: |
|
1268 | if start is not None: | |
1263 | # filter out the heads that cannot be reached from startrev |
|
1269 | # filter out the heads that cannot be reached from startrev | |
1264 | fbheads = set(self.changelog.nodesbetween([start], bheads)[2]) |
|
1270 | fbheads = set(self.changelog.nodesbetween([start], bheads)[2]) | |
1265 | bheads = [h for h in bheads if h in fbheads] |
|
1271 | bheads = [h for h in bheads if h in fbheads] | |
1266 | if not closed: |
|
1272 | if not closed: | |
1267 | bheads = [h for h in bheads if |
|
1273 | bheads = [h for h in bheads if | |
1268 | ('close' not in self.changelog.read(h)[5])] |
|
1274 | ('close' not in self.changelog.read(h)[5])] | |
1269 | return bheads |
|
1275 | return bheads | |
1270 |
|
1276 | |||
1271 | def branches(self, nodes): |
|
1277 | def branches(self, nodes): | |
1272 | if not nodes: |
|
1278 | if not nodes: | |
1273 | nodes = [self.changelog.tip()] |
|
1279 | nodes = [self.changelog.tip()] | |
1274 | b = [] |
|
1280 | b = [] | |
1275 | for n in nodes: |
|
1281 | for n in nodes: | |
1276 | t = n |
|
1282 | t = n | |
1277 | while 1: |
|
1283 | while 1: | |
1278 | p = self.changelog.parents(n) |
|
1284 | p = self.changelog.parents(n) | |
1279 | if p[1] != nullid or p[0] == nullid: |
|
1285 | if p[1] != nullid or p[0] == nullid: | |
1280 | b.append((t, n, p[0], p[1])) |
|
1286 | b.append((t, n, p[0], p[1])) | |
1281 | break |
|
1287 | break | |
1282 | n = p[0] |
|
1288 | n = p[0] | |
1283 | return b |
|
1289 | return b | |
1284 |
|
1290 | |||
1285 | def between(self, pairs): |
|
1291 | def between(self, pairs): | |
1286 | r = [] |
|
1292 | r = [] | |
1287 |
|
1293 | |||
1288 | for top, bottom in pairs: |
|
1294 | for top, bottom in pairs: | |
1289 | n, l, i = top, [], 0 |
|
1295 | n, l, i = top, [], 0 | |
1290 | f = 1 |
|
1296 | f = 1 | |
1291 |
|
1297 | |||
1292 | while n != bottom and n != nullid: |
|
1298 | while n != bottom and n != nullid: | |
1293 | p = self.changelog.parents(n)[0] |
|
1299 | p = self.changelog.parents(n)[0] | |
1294 | if i == f: |
|
1300 | if i == f: | |
1295 | l.append(n) |
|
1301 | l.append(n) | |
1296 | f = f * 2 |
|
1302 | f = f * 2 | |
1297 | n = p |
|
1303 | n = p | |
1298 | i += 1 |
|
1304 | i += 1 | |
1299 |
|
1305 | |||
1300 | r.append(l) |
|
1306 | r.append(l) | |
1301 |
|
1307 | |||
1302 | return r |
|
1308 | return r | |
1303 |
|
1309 | |||
1304 | def pull(self, remote, heads=None, force=False): |
|
1310 | def pull(self, remote, heads=None, force=False): | |
1305 | lock = self.lock() |
|
1311 | lock = self.lock() | |
1306 | try: |
|
1312 | try: | |
1307 | tmp = discovery.findcommonincoming(self, remote, heads=heads, |
|
1313 | tmp = discovery.findcommonincoming(self, remote, heads=heads, | |
1308 | force=force) |
|
1314 | force=force) | |
1309 | common, fetch, rheads = tmp |
|
1315 | common, fetch, rheads = tmp | |
1310 | if not fetch: |
|
1316 | if not fetch: | |
1311 | self.ui.status(_("no changes found\n")) |
|
1317 | self.ui.status(_("no changes found\n")) | |
1312 | result = 0 |
|
1318 | result = 0 | |
1313 | else: |
|
1319 | else: | |
1314 | if heads is None and fetch == [nullid]: |
|
1320 | if heads is None and fetch == [nullid]: | |
1315 | self.ui.status(_("requesting all changes\n")) |
|
1321 | self.ui.status(_("requesting all changes\n")) | |
1316 | elif heads is None and remote.capable('changegroupsubset'): |
|
1322 | elif heads is None and remote.capable('changegroupsubset'): | |
1317 | # issue1320, avoid a race if remote changed after discovery |
|
1323 | # issue1320, avoid a race if remote changed after discovery | |
1318 | heads = rheads |
|
1324 | heads = rheads | |
1319 |
|
1325 | |||
1320 | if heads is None: |
|
1326 | if heads is None: | |
1321 | cg = remote.changegroup(fetch, 'pull') |
|
1327 | cg = remote.changegroup(fetch, 'pull') | |
1322 | elif not remote.capable('changegroupsubset'): |
|
1328 | elif not remote.capable('changegroupsubset'): | |
1323 | raise util.Abort(_("partial pull cannot be done because " |
|
1329 | raise util.Abort(_("partial pull cannot be done because " | |
1324 | "other repository doesn't support " |
|
1330 | "other repository doesn't support " | |
1325 | "changegroupsubset.")) |
|
1331 | "changegroupsubset.")) | |
1326 | else: |
|
1332 | else: | |
1327 | cg = remote.changegroupsubset(fetch, heads, 'pull') |
|
1333 | cg = remote.changegroupsubset(fetch, heads, 'pull') | |
1328 | result = self.addchangegroup(cg, 'pull', remote.url(), |
|
1334 | result = self.addchangegroup(cg, 'pull', remote.url(), | |
1329 | lock=lock) |
|
1335 | lock=lock) | |
1330 | finally: |
|
1336 | finally: | |
1331 | lock.release() |
|
1337 | lock.release() | |
1332 |
|
1338 | |||
1333 | self.ui.debug("checking for updated bookmarks\n") |
|
1339 | self.ui.debug("checking for updated bookmarks\n") | |
1334 | rb = remote.listkeys('bookmarks') |
|
1340 | rb = remote.listkeys('bookmarks') | |
1335 | changed = False |
|
1341 | changed = False | |
1336 | for k in rb.keys(): |
|
1342 | for k in rb.keys(): | |
1337 | if k in self._bookmarks: |
|
1343 | if k in self._bookmarks: | |
1338 | nr, nl = rb[k], self._bookmarks[k] |
|
1344 | nr, nl = rb[k], self._bookmarks[k] | |
1339 | if nr in self: |
|
1345 | if nr in self: | |
1340 | cr = self[nr] |
|
1346 | cr = self[nr] | |
1341 | cl = self[nl] |
|
1347 | cl = self[nl] | |
1342 | if cl.rev() >= cr.rev(): |
|
1348 | if cl.rev() >= cr.rev(): | |
1343 | continue |
|
1349 | continue | |
1344 | if cr in cl.descendants(): |
|
1350 | if cr in cl.descendants(): | |
1345 | self._bookmarks[k] = cr.node() |
|
1351 | self._bookmarks[k] = cr.node() | |
1346 | changed = True |
|
1352 | changed = True | |
1347 | self.ui.status(_("updating bookmark %s\n") % k) |
|
1353 | self.ui.status(_("updating bookmark %s\n") % k) | |
1348 | else: |
|
1354 | else: | |
1349 | self.ui.warn(_("not updating divergent" |
|
1355 | self.ui.warn(_("not updating divergent" | |
1350 | " bookmark %s\n") % k) |
|
1356 | " bookmark %s\n") % k) | |
1351 | if changed: |
|
1357 | if changed: | |
1352 | bookmarks.write(self) |
|
1358 | bookmarks.write(self) | |
1353 |
|
1359 | |||
1354 | return result |
|
1360 | return result | |
1355 |
|
1361 | |||
1356 | def checkpush(self, force, revs): |
|
1362 | def checkpush(self, force, revs): | |
1357 | """Extensions can override this function if additional checks have |
|
1363 | """Extensions can override this function if additional checks have | |
1358 | to be performed before pushing, or call it if they override push |
|
1364 | to be performed before pushing, or call it if they override push | |
1359 | command. |
|
1365 | command. | |
1360 | """ |
|
1366 | """ | |
1361 | pass |
|
1367 | pass | |
1362 |
|
1368 | |||
1363 | def push(self, remote, force=False, revs=None, newbranch=False): |
|
1369 | def push(self, remote, force=False, revs=None, newbranch=False): | |
1364 | '''Push outgoing changesets (limited by revs) from the current |
|
1370 | '''Push outgoing changesets (limited by revs) from the current | |
1365 | repository to remote. Return an integer: |
|
1371 | repository to remote. Return an integer: | |
1366 | - 0 means HTTP error *or* nothing to push |
|
1372 | - 0 means HTTP error *or* nothing to push | |
1367 | - 1 means we pushed and remote head count is unchanged *or* |
|
1373 | - 1 means we pushed and remote head count is unchanged *or* | |
1368 | we have outgoing changesets but refused to push |
|
1374 | we have outgoing changesets but refused to push | |
1369 | - other values as described by addchangegroup() |
|
1375 | - other values as described by addchangegroup() | |
1370 | ''' |
|
1376 | ''' | |
1371 | # there are two ways to push to remote repo: |
|
1377 | # there are two ways to push to remote repo: | |
1372 | # |
|
1378 | # | |
1373 | # addchangegroup assumes local user can lock remote |
|
1379 | # addchangegroup assumes local user can lock remote | |
1374 | # repo (local filesystem, old ssh servers). |
|
1380 | # repo (local filesystem, old ssh servers). | |
1375 | # |
|
1381 | # | |
1376 | # unbundle assumes local user cannot lock remote repo (new ssh |
|
1382 | # unbundle assumes local user cannot lock remote repo (new ssh | |
1377 | # servers, http servers). |
|
1383 | # servers, http servers). | |
1378 |
|
1384 | |||
1379 | self.checkpush(force, revs) |
|
1385 | self.checkpush(force, revs) | |
1380 | lock = None |
|
1386 | lock = None | |
1381 | unbundle = remote.capable('unbundle') |
|
1387 | unbundle = remote.capable('unbundle') | |
1382 | if not unbundle: |
|
1388 | if not unbundle: | |
1383 | lock = remote.lock() |
|
1389 | lock = remote.lock() | |
1384 | try: |
|
1390 | try: | |
1385 | cg, remote_heads = discovery.prepush(self, remote, force, revs, |
|
1391 | cg, remote_heads = discovery.prepush(self, remote, force, revs, | |
1386 | newbranch) |
|
1392 | newbranch) | |
1387 | ret = remote_heads |
|
1393 | ret = remote_heads | |
1388 | if cg is not None: |
|
1394 | if cg is not None: | |
1389 | if unbundle: |
|
1395 | if unbundle: | |
1390 | # local repo finds heads on server, finds out what |
|
1396 | # local repo finds heads on server, finds out what | |
1391 | # revs it must push. once revs transferred, if server |
|
1397 | # revs it must push. once revs transferred, if server | |
1392 | # finds it has different heads (someone else won |
|
1398 | # finds it has different heads (someone else won | |
1393 | # commit/push race), server aborts. |
|
1399 | # commit/push race), server aborts. | |
1394 | if force: |
|
1400 | if force: | |
1395 | remote_heads = ['force'] |
|
1401 | remote_heads = ['force'] | |
1396 | # ssh: return remote's addchangegroup() |
|
1402 | # ssh: return remote's addchangegroup() | |
1397 | # http: return remote's addchangegroup() or 0 for error |
|
1403 | # http: return remote's addchangegroup() or 0 for error | |
1398 | ret = remote.unbundle(cg, remote_heads, 'push') |
|
1404 | ret = remote.unbundle(cg, remote_heads, 'push') | |
1399 | else: |
|
1405 | else: | |
1400 | # we return an integer indicating remote head count change |
|
1406 | # we return an integer indicating remote head count change | |
1401 | ret = remote.addchangegroup(cg, 'push', self.url(), |
|
1407 | ret = remote.addchangegroup(cg, 'push', self.url(), | |
1402 | lock=lock) |
|
1408 | lock=lock) | |
1403 | finally: |
|
1409 | finally: | |
1404 | if lock is not None: |
|
1410 | if lock is not None: | |
1405 | lock.release() |
|
1411 | lock.release() | |
1406 |
|
1412 | |||
1407 | self.ui.debug("checking for updated bookmarks\n") |
|
1413 | self.ui.debug("checking for updated bookmarks\n") | |
1408 | rb = remote.listkeys('bookmarks') |
|
1414 | rb = remote.listkeys('bookmarks') | |
1409 | for k in rb.keys(): |
|
1415 | for k in rb.keys(): | |
1410 | if k in self._bookmarks: |
|
1416 | if k in self._bookmarks: | |
1411 | nr, nl = rb[k], hex(self._bookmarks[k]) |
|
1417 | nr, nl = rb[k], hex(self._bookmarks[k]) | |
1412 | if nr in self: |
|
1418 | if nr in self: | |
1413 | cr = self[nr] |
|
1419 | cr = self[nr] | |
1414 | cl = self[nl] |
|
1420 | cl = self[nl] | |
1415 | if cl in cr.descendants(): |
|
1421 | if cl in cr.descendants(): | |
1416 | r = remote.pushkey('bookmarks', k, nr, nl) |
|
1422 | r = remote.pushkey('bookmarks', k, nr, nl) | |
1417 | if r: |
|
1423 | if r: | |
1418 | self.ui.status(_("updating bookmark %s\n") % k) |
|
1424 | self.ui.status(_("updating bookmark %s\n") % k) | |
1419 | else: |
|
1425 | else: | |
1420 | self.ui.warn(_('updating bookmark %s' |
|
1426 | self.ui.warn(_('updating bookmark %s' | |
1421 | ' failed!\n') % k) |
|
1427 | ' failed!\n') % k) | |
1422 |
|
1428 | |||
1423 | return ret |
|
1429 | return ret | |
1424 |
|
1430 | |||
1425 | def changegroupinfo(self, nodes, source): |
|
1431 | def changegroupinfo(self, nodes, source): | |
1426 | if self.ui.verbose or source == 'bundle': |
|
1432 | if self.ui.verbose or source == 'bundle': | |
1427 | self.ui.status(_("%d changesets found\n") % len(nodes)) |
|
1433 | self.ui.status(_("%d changesets found\n") % len(nodes)) | |
1428 | if self.ui.debugflag: |
|
1434 | if self.ui.debugflag: | |
1429 | self.ui.debug("list of changesets:\n") |
|
1435 | self.ui.debug("list of changesets:\n") | |
1430 | for node in nodes: |
|
1436 | for node in nodes: | |
1431 | self.ui.debug("%s\n" % hex(node)) |
|
1437 | self.ui.debug("%s\n" % hex(node)) | |
1432 |
|
1438 | |||
    def changegroupsubset(self, bases, heads, source, extranodes=None):
        """Compute a changegroup consisting of all the nodes that are
        descendants of any of the bases and ancestors of any of the heads.
        Return a chunkbuffer object whose read() method will return
        successive changegroup chunks.

        It is fairly complex, as determining which filenodes and which
        manifest nodes need to be included for the changeset to be complete
        is non-trivial.

        Another wrinkle is doing the reverse, figuring out which changeset in
        the changegroup a particular filenode or manifestnode belongs to.

        The caller can specify some nodes that must be included in the
        changegroup using the extranodes argument.  It should be a dict
        where the keys are the filenames (or 1 for the manifest), and the
        values are lists of (node, linknode) tuples, where node is a wanted
        node and linknode is the changelog node that should be transmitted as
        the linkrev.
        """

        # Set up some initial variables
        # Make it easy to refer to self.changelog
        cl = self.changelog
        # Compute the list of changesets in this changegroup.
        # Some bases may turn out to be superfluous, and some heads may be
        # too.  nodesbetween will return the minimal set of bases and heads
        # necessary to re-create the changegroup.
        if not bases:
            bases = [nullid]
        msng_cl_lst, bases, heads = cl.nodesbetween(bases, heads)

        if extranodes is None:
            # can we go through the fast path?
            heads.sort()
            allheads = self.heads()
            allheads.sort()
            if heads == allheads:
                return self._changegroup(msng_cl_lst, source)

        # slow path
        self.hook('preoutgoing', throw=True, source=source)

        self.changegroupinfo(msng_cl_lst, source)

        # We assume that all ancestors of bases are known
        commonrevs = set(cl.ancestors(*[cl.rev(n) for n in bases]))

        # Make it easy to refer to self.manifest
        mnfst = self.manifest
        # We don't know which manifests are missing yet
        msng_mnfst_set = {}
        # Nor do we know which filenodes are missing.
        msng_filenode_set = {}

        # A changeset always belongs to itself, so the changenode lookup
        # function for a changenode is identity.
        def identity(x):
            return x

        # A function generating function that sets up the initial environment
        # for the inner function.
        def filenode_collector(changedfiles):
            # This gathers information from each manifestnode included in the
            # changegroup about which filenodes the manifest node references
            # so we can include those in the changegroup too.
            #
            # It also remembers which changenode each filenode belongs to.  It
            # does this by assuming that a filenode belongs to the changenode
            # the first manifest that references it belongs to.
            def collect_msng_filenodes(mnfstnode):
                r = mnfst.rev(mnfstnode)
                if mnfst.deltaparent(r) in mnfst.parentrevs(r):
                    # If the previous rev is one of the parents,
                    # we only need to see a diff.
                    deltamf = mnfst.readdelta(mnfstnode)
                    # For each line in the delta
                    for f, fnode in deltamf.iteritems():
                        # And if the file is in the list of files we care
                        # about.
                        if f in changedfiles:
                            # Get the changenode this manifest belongs to
                            clnode = msng_mnfst_set[mnfstnode]
                            # Create the set of filenodes for the file if
                            # there isn't one already.
                            ndset = msng_filenode_set.setdefault(f, {})
                            # And set the filenode's changelog node to the
                            # manifest's if it hasn't been set already.
                            ndset.setdefault(fnode, clnode)
                else:
                    # Otherwise we need a full manifest.
                    m = mnfst.read(mnfstnode)
                    # For every file we care about.
                    for f in changedfiles:
                        fnode = m.get(f, None)
                        # If it's in the manifest
                        if fnode is not None:
                            # See comments above.
                            clnode = msng_mnfst_set[mnfstnode]
                            ndset = msng_filenode_set.setdefault(f, {})
                            ndset.setdefault(fnode, clnode)
            return collect_msng_filenodes

        # If we determine that a particular file or manifest node must be a
        # node that the recipient of the changegroup will already have, we can
        # also assume the recipient will have all the parents.  This function
        # prunes them from the set of missing nodes.
        def prune(revlog, missingnodes):
            hasset = set()
            # If a 'missing' filenode thinks it belongs to a changenode we
            # assume the recipient must have, then the recipient must have
            # that filenode.
            for n in missingnodes:
                clrev = revlog.linkrev(revlog.rev(n))
                if clrev in commonrevs:
                    hasset.add(n)
            for n in hasset:
                missingnodes.pop(n, None)
            for r in revlog.ancestors(*[revlog.rev(n) for n in hasset]):
                missingnodes.pop(revlog.node(r), None)

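The `prune` helper above removes every node the recipient must already have: any node whose linkrev is a common rev, plus all ancestors of such nodes. A minimal self-contained sketch of the same idea over a toy DAG (the function name, the `parents` map, and the string node names are illustrative only, not Mercurial's revlog API):

```python
def prune_known(missing, commonrevs, linkrev, parents):
    """Drop nodes whose linkrev is common, and all their ancestors.

    missing:    dict node -> linknode (mutated in place)
    commonrevs: set of revs the recipient is known to have
    linkrev:    callable mapping a node to its changelog rev
    parents:    callable/dict-get mapping a node to its parent nodes
    """
    hasset = set(n for n in missing if linkrev(n) in commonrevs)
    # walk ancestors of every node the recipient has
    stack = list(hasset)
    while stack:
        n = stack.pop()
        missing.pop(n, None)
        stack.extend(p for p in parents.get(n, ()) if p in missing)

# toy DAG: a <- b <- c, where 'b' links to a common rev
parents = {'c': ('b',), 'b': ('a',), 'a': ()}
linkrevs = {'a': 0, 'b': 1, 'c': 2}
missing = dict(linkrevs)
prune_known(missing, commonrevs={1}, linkrev=linkrevs.get, parents=parents)
print(sorted(missing))  # only 'c' is still missing
```

Pruning 'b' also prunes its ancestor 'a', matching the ancestor walk in the real `prune`.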
        # Add the nodes that were explicitly requested.
        def add_extra_nodes(name, nodes):
            if not extranodes or name not in extranodes:
                return

            for node, linknode in extranodes[name]:
                if node not in nodes:
                    nodes[node] = linknode

        # Now that we have all these utility functions to help out and
        # logically divide up the task, generate the group.
        def gengroup():
            # The set of changed files starts empty.
            changedfiles = set()
            collect = changegroup.collector(cl, msng_mnfst_set, changedfiles)

            # Create a changenode group generator that will call our functions
            # back to lookup the owning changenode and collect information.
            group = cl.group(msng_cl_lst, identity, collect)
            for cnt, chnk in enumerate(group):
                yield chnk
                # revlog.group yields three entries per node, so
                # dividing by 3 gives an approximation of how many
                # nodes have been processed.
                self.ui.progress(_('bundling'), cnt / 3,
                                 unit=_('changesets'))
            changecount = cnt / 3
            self.ui.progress(_('bundling'), None)

            prune(mnfst, msng_mnfst_set)
            add_extra_nodes(1, msng_mnfst_set)
            msng_mnfst_lst = msng_mnfst_set.keys()
            # Sort the manifestnodes by revision number.
            msng_mnfst_lst.sort(key=mnfst.rev)
            # Create a generator for the manifestnodes that calls our lookup
            # and data collection functions back.
            group = mnfst.group(msng_mnfst_lst,
                                lambda mnode: msng_mnfst_set[mnode],
                                filenode_collector(changedfiles))
            efiles = {}
            for cnt, chnk in enumerate(group):
                if cnt % 3 == 1:
                    mnode = chnk[:20]
                    efiles.update(mnfst.readdelta(mnode))
                yield chnk
                # see above comment for why we divide by 3
                self.ui.progress(_('bundling'), cnt / 3,
                                 unit=_('manifests'), total=changecount)
            self.ui.progress(_('bundling'), None)
            efiles = len(efiles)

            # These are no longer needed, dereference and toss the memory for
            # them.
            msng_mnfst_lst = None
            msng_mnfst_set.clear()

            if extranodes:
                for fname in extranodes:
                    if isinstance(fname, int):
                        continue
                    msng_filenode_set.setdefault(fname, {})
                    changedfiles.add(fname)
            # Go through all our files in order sorted by name.
            for idx, fname in enumerate(sorted(changedfiles)):
                filerevlog = self.file(fname)
                if not len(filerevlog):
                    raise util.Abort(_("empty or missing revlog for %s") % fname)
                # Toss out the filenodes that the recipient isn't really
                # missing.
                missingfnodes = msng_filenode_set.pop(fname, {})
                prune(filerevlog, missingfnodes)
                add_extra_nodes(fname, missingfnodes)
                # If any filenodes are left, generate the group for them,
                # otherwise don't bother.
                if missingfnodes:
                    yield changegroup.chunkheader(len(fname))
                    yield fname
                    # Sort the filenodes by their revision # (topological order)
                    nodeiter = list(missingfnodes)
                    nodeiter.sort(key=filerevlog.rev)
                    # Create a group generator and only pass in a changenode
                    # lookup function as we need to collect no information
                    # from filenodes.
                    group = filerevlog.group(nodeiter,
                                             lambda fnode: missingfnodes[fnode])
                    for chnk in group:
                        # even though we print the same progress on
                        # most loop iterations, put the progress call
                        # here so that time estimates (if any) can be updated
                        self.ui.progress(
                            _('bundling'), idx, item=fname,
                            unit=_('files'), total=efiles)
                        yield chnk
            # Signal that no more groups are left.
            yield changegroup.closechunk()
            self.ui.progress(_('bundling'), None)

            if msng_cl_lst:
                self.hook('outgoing', node=hex(msng_cl_lst[0]), source=source)

        return changegroup.unbundle10(util.chunkbuffer(gengroup()), 'UN')

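The `changegroupsubset` docstring pins down the shape of `extranodes`: keys are filenames (or `1` for the manifest), and values are lists of `(node, linknode)` pairs. A small standalone sketch of merging such entries into a missing-node map without overwriting, mirroring the nested `add_extra_nodes` helper (here rewritten as a free function with the string node names invented for illustration):

```python
def add_extra_nodes(name, nodes, extranodes):
    """Merge the (node, linknode) pairs requested for `name` into
    `nodes`, keeping any linknode that is already recorded."""
    if not extranodes or name not in extranodes:
        return
    for node, linknode in extranodes[name]:
        if node not in nodes:
            nodes[node] = linknode

# keys are filenames (or 1 for the manifest); values are lists of
# (node, linknode) pairs, spelled as short strings for the example
extranodes = {'foo.txt': [('n1', 'l1'), ('n2', 'l2')], 1: [('m1', 'l9')]}
fnodes = {'n1': 'existing'}
add_extra_nodes('foo.txt', fnodes, extranodes)
print(fnodes)  # 'n1' keeps its existing linknode; 'n2' is added
```

Existing entries win, which matches the "first manifest that references it" rule used when collecting filenodes.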
    def changegroup(self, basenodes, source):
        # to avoid a race we use changegroupsubset() (issue1320)
        return self.changegroupsubset(basenodes, self.heads(), source)

    def _changegroup(self, nodes, source):
        """Compute the changegroup of all nodes that we have that a recipient
        doesn't.  Return a chunkbuffer object whose read() method will return
        successive changegroup chunks.

        This is much easier than the previous function as we can assume that
        the recipient has any changenode we aren't sending them.

        nodes is the set of nodes to send"""

        self.hook('preoutgoing', throw=True, source=source)

        cl = self.changelog
        revset = set([cl.rev(n) for n in nodes])
        self.changegroupinfo(nodes, source)

        def identity(x):
            return x

        def gennodelst(log):
            for r in log:
                if log.linkrev(r) in revset:
                    yield log.node(r)

        def lookuplinkrev_func(revlog):
            def lookuplinkrev(n):
                return cl.node(revlog.linkrev(revlog.rev(n)))
            return lookuplinkrev

        def gengroup():
            '''yield a sequence of changegroup chunks (strings)'''
            # construct a list of all changed files
            changedfiles = set()
            mmfs = {}
            collect = changegroup.collector(cl, mmfs, changedfiles)

            for cnt, chnk in enumerate(cl.group(nodes, identity, collect)):
                # revlog.group yields three entries per node, so
                # dividing by 3 gives an approximation of how many
                # nodes have been processed.
                self.ui.progress(_('bundling'), cnt / 3, unit=_('changesets'))
                yield chnk
            changecount = cnt / 3
            self.ui.progress(_('bundling'), None)

            mnfst = self.manifest
            nodeiter = gennodelst(mnfst)
            efiles = {}
            for cnt, chnk in enumerate(mnfst.group(nodeiter,
                                                   lookuplinkrev_func(mnfst))):
                if cnt % 3 == 1:
                    mnode = chnk[:20]
                    efiles.update(mnfst.readdelta(mnode))
                # see above comment for why we divide by 3
                self.ui.progress(_('bundling'), cnt / 3,
                                 unit=_('manifests'), total=changecount)
                yield chnk
            efiles = len(efiles)
            self.ui.progress(_('bundling'), None)

            for idx, fname in enumerate(sorted(changedfiles)):
                filerevlog = self.file(fname)
                if not len(filerevlog):
                    raise util.Abort(_("empty or missing revlog for %s") % fname)
                nodeiter = gennodelst(filerevlog)
                nodeiter = list(nodeiter)
                if nodeiter:
                    yield changegroup.chunkheader(len(fname))
                    yield fname
                    lookup = lookuplinkrev_func(filerevlog)
                    for chnk in filerevlog.group(nodeiter, lookup):
                        self.ui.progress(
                            _('bundling'), idx, item=fname,
                            total=efiles, unit=_('files'))
                        yield chnk
            self.ui.progress(_('bundling'), None)

            yield changegroup.closechunk()

            if nodes:
                self.hook('outgoing', node=hex(nodes[0]), source=source)

        return changegroup.unbundle10(util.chunkbuffer(gengroup()), 'UN')

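The `addchangegroup` method that follows documents how its integer return value encodes the change in head count. A minimal sketch of just that encoding (the helper name and arguments are illustrative; the real method derives these counts from the changelog):

```python
def summarize_heads(changed, oldheads, newheads):
    """Encode a head-count change the way addchangegroup documents:
    0 = nothing changed, 1 = same number of heads,
    2..n = 1 + heads added, -2..-n = -1 - heads removed."""
    if not changed:
        return 0
    delta = newheads - oldheads
    if delta > 0:
        return 1 + delta
    if delta < 0:
        return -1 + delta
    return 1

print(summarize_heads(True, 1, 3))   # two heads added -> 3
print(summarize_heads(True, 3, 2))   # one head removed -> -2
print(summarize_heads(True, 2, 2))   # same head count -> 1
print(summarize_heads(False, 2, 2))  # nothing changed -> 0
```

Callers can thus distinguish "new heads were created" (return > 1) from a clean fast-forward (return == 1) with a single comparison.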
    def addchangegroup(self, source, srctype, url, emptyok=False, lock=None):
        """Add the changegroup returned by source.read() to this repo.
        srctype is a string like 'push', 'pull', or 'unbundle'.  url is
        the URL of the repo where this changegroup is coming from.
        If lock is not None, the function takes ownership of the lock
        and releases it after the changegroup is added.

        Return an integer summarizing the change to this repo:
        - nothing changed or no source: 0
        - more heads than before: 1+added heads (2..n)
        - fewer heads than before: -1-removed heads (-2..-n)
        - number of heads stays the same: 1
        """
        def csmap(x):
            self.ui.debug("add changeset %s\n" % short(x))
            return len(cl)

        def revmap(x):
            return cl.rev(x)

        if not source:
            return 0

        self.hook('prechangegroup', throw=True, source=srctype, url=url)

        changesets = files = revisions = 0
        efiles = set()

        # write changelog data to temp files so concurrent readers will not
        # see an inconsistent view
        cl = self.changelog
        cl.delayupdate()
        oldheads = len(cl.heads())

        tr = self.transaction("\n".join([srctype, urlmod.hidepassword(url)]))
        try:
            trp = weakref.proxy(tr)
            # pull off the changeset group
            self.ui.status(_("adding changesets\n"))
            clstart = len(cl)
            class prog(object):
                step = _('changesets')
                count = 1
                ui = self.ui
                total = None
                def __call__(self):
                    self.ui.progress(self.step, self.count, unit=_('chunks'),
                                     total=self.total)
                    self.count += 1
            pr = prog()
            source.callback = pr

            if (cl.addgroup(source, csmap, trp) is None
                and not emptyok):
                raise util.Abort(_("received changelog group is empty"))
            clend = len(cl)
            changesets = clend - clstart
            for c in xrange(clstart, clend):
                efiles.update(self[c].files())
            efiles = len(efiles)
            self.ui.progress(_('changesets'), None)

            # pull off the manifest group
            self.ui.status(_("adding manifests\n"))
            pr.step = _('manifests')
            pr.count = 1
            pr.total = changesets # manifests <= changesets
            # no need to check for empty manifest group here:
            # if the result of the merge of 1 and 2 is the same in 3 and 4,
            # no new manifest will be created and the manifest group will
            # be empty during the pull
            self.manifest.addgroup(source, revmap, trp)
            self.ui.progress(_('manifests'), None)

            needfiles = {}
            if self.ui.configbool('server', 'validate', default=False):
                # validate incoming csets have their manifests
                for cset in xrange(clstart, clend):
                    mfest = self.changelog.read(self.changelog.node(cset))[0]
                    mfest = self.manifest.readdelta(mfest)
                    # store file nodes we must see
                    for f, n in mfest.iteritems():
                        needfiles.setdefault(f, set()).add(n)

            # process the files
            self.ui.status(_("adding file changes\n"))
            pr.step = 'files'
            pr.count = 1
            pr.total = efiles
            source.callback = None

            while 1:
                f = source.chunk()
                if not f:
                    break
                self.ui.debug("adding %s revisions\n" % f)
                pr()
                fl = self.file(f)
                o = len(fl)
                if fl.addgroup(source, revmap, trp) is None:
                    raise util.Abort(_("received file revlog group is empty"))
                revisions += len(fl) - o
                files += 1
                if f in needfiles:
                    needs = needfiles[f]
1849 | for new in xrange(o, len(fl)): |
|
1855 | for new in xrange(o, len(fl)): | |
1850 | n = fl.node(new) |
|
1856 | n = fl.node(new) | |
1851 | if n in needs: |
|
1857 | if n in needs: | |
1852 | needs.remove(n) |
|
1858 | needs.remove(n) | |
1853 | if not needs: |
|
1859 | if not needs: | |
1854 | del needfiles[f] |
|
1860 | del needfiles[f] | |
1855 | self.ui.progress(_('files'), None) |
|
1861 | self.ui.progress(_('files'), None) | |
1856 |
|
1862 | |||
1857 | for f, needs in needfiles.iteritems(): |
|
1863 | for f, needs in needfiles.iteritems(): | |
1858 | fl = self.file(f) |
|
1864 | fl = self.file(f) | |
1859 | for n in needs: |
|
1865 | for n in needs: | |
1860 | try: |
|
1866 | try: | |
1861 | fl.rev(n) |
|
1867 | fl.rev(n) | |
1862 | except error.LookupError: |
|
1868 | except error.LookupError: | |
1863 | raise util.Abort( |
|
1869 | raise util.Abort( | |
1864 | _('missing file data for %s:%s - run hg verify') % |
|
1870 | _('missing file data for %s:%s - run hg verify') % | |
1865 | (f, hex(n))) |
|
1871 | (f, hex(n))) | |
1866 |
|
1872 | |||
1867 | newheads = len(cl.heads()) |
|
1873 | newheads = len(cl.heads()) | |
1868 | heads = "" |
|
1874 | heads = "" | |
1869 | if oldheads and newheads != oldheads: |
|
1875 | if oldheads and newheads != oldheads: | |
1870 | heads = _(" (%+d heads)") % (newheads - oldheads) |
|
1876 | heads = _(" (%+d heads)") % (newheads - oldheads) | |
1871 |
|
1877 | |||
1872 | self.ui.status(_("added %d changesets" |
|
1878 | self.ui.status(_("added %d changesets" | |
1873 | " with %d changes to %d files%s\n") |
|
1879 | " with %d changes to %d files%s\n") | |
1874 | % (changesets, revisions, files, heads)) |
|
1880 | % (changesets, revisions, files, heads)) | |
1875 |
|
1881 | |||
1876 | if changesets > 0: |
|
1882 | if changesets > 0: | |
1877 | p = lambda: cl.writepending() and self.root or "" |
|
1883 | p = lambda: cl.writepending() and self.root or "" | |
1878 | self.hook('pretxnchangegroup', throw=True, |
|
1884 | self.hook('pretxnchangegroup', throw=True, | |
1879 | node=hex(cl.node(clstart)), source=srctype, |
|
1885 | node=hex(cl.node(clstart)), source=srctype, | |
1880 | url=url, pending=p) |
|
1886 | url=url, pending=p) | |
1881 |
|
1887 | |||
1882 | # make changelog see real files again |
|
1888 | # make changelog see real files again | |
1883 | cl.finalize(trp) |
|
1889 | cl.finalize(trp) | |
1884 |
|
1890 | |||
1885 | tr.close() |
|
1891 | tr.close() | |
1886 | finally: |
|
1892 | finally: | |
1887 | tr.release() |
|
1893 | tr.release() | |
1888 | if lock: |
|
1894 | if lock: | |
1889 | lock.release() |
|
1895 | lock.release() | |
1890 |
|
1896 | |||
1891 | if changesets > 0: |
|
1897 | if changesets > 0: | |
1892 | # forcefully update the on-disk branch cache |
|
1898 | # forcefully update the on-disk branch cache | |
1893 | self.ui.debug("updating the branch cache\n") |
|
1899 | self.ui.debug("updating the branch cache\n") | |
1894 | self.updatebranchcache() |
|
1900 | self.updatebranchcache() | |
1895 | self.hook("changegroup", node=hex(cl.node(clstart)), |
|
1901 | self.hook("changegroup", node=hex(cl.node(clstart)), | |
1896 | source=srctype, url=url) |
|
1902 | source=srctype, url=url) | |
1897 |
|
1903 | |||
1898 | for i in xrange(clstart, clend): |
|
1904 | for i in xrange(clstart, clend): | |
1899 | self.hook("incoming", node=hex(cl.node(i)), |
|
1905 | self.hook("incoming", node=hex(cl.node(i)), | |
1900 | source=srctype, url=url) |
|
1906 | source=srctype, url=url) | |
1901 |
|
1907 | |||
1902 | # FIXME - why does this care about tip? |
|
1908 | # FIXME - why does this care about tip? | |
1903 | if newheads == oldheads: |
|
1909 | if newheads == oldheads: | |
1904 | bookmarks.update(self, self.dirstate.parents(), self['tip'].node()) |
|
1910 | bookmarks.update(self, self.dirstate.parents(), self['tip'].node()) | |
1905 |
|
1911 | |||
1906 | # never return 0 here: |
|
1912 | # never return 0 here: | |
1907 | if newheads < oldheads: |
|
1913 | if newheads < oldheads: | |
1908 | return newheads - oldheads - 1 |
|
1914 | return newheads - oldheads - 1 | |
1909 | else: |
|
1915 | else: | |
1910 | return newheads - oldheads + 1 |
|
1916 | return newheads - oldheads + 1 | |
1911 |
|
1917 | |||
1912 |
|
1918 | |||
1913 | def stream_in(self, remote, requirements): |
|
1919 | def stream_in(self, remote, requirements): | |
1914 | lock = self.lock() |
|
1920 | lock = self.lock() | |
1915 | try: |
|
1921 | try: | |
1916 | fp = remote.stream_out() |
|
1922 | fp = remote.stream_out() | |
1917 | l = fp.readline() |
|
1923 | l = fp.readline() | |
1918 | try: |
|
1924 | try: | |
1919 | resp = int(l) |
|
1925 | resp = int(l) | |
1920 | except ValueError: |
|
1926 | except ValueError: | |
1921 | raise error.ResponseError( |
|
1927 | raise error.ResponseError( | |
1922 | _('Unexpected response from remote server:'), l) |
|
1928 | _('Unexpected response from remote server:'), l) | |
1923 | if resp == 1: |
|
1929 | if resp == 1: | |
1924 | raise util.Abort(_('operation forbidden by server')) |
|
1930 | raise util.Abort(_('operation forbidden by server')) | |
1925 | elif resp == 2: |
|
1931 | elif resp == 2: | |
1926 | raise util.Abort(_('locking the remote repository failed')) |
|
1932 | raise util.Abort(_('locking the remote repository failed')) | |
1927 | elif resp != 0: |
|
1933 | elif resp != 0: | |
1928 | raise util.Abort(_('the server sent an unknown error code')) |
|
1934 | raise util.Abort(_('the server sent an unknown error code')) | |
1929 | self.ui.status(_('streaming all changes\n')) |
|
1935 | self.ui.status(_('streaming all changes\n')) | |
1930 | l = fp.readline() |
|
1936 | l = fp.readline() | |
1931 | try: |
|
1937 | try: | |
1932 | total_files, total_bytes = map(int, l.split(' ', 1)) |
|
1938 | total_files, total_bytes = map(int, l.split(' ', 1)) | |
1933 | except (ValueError, TypeError): |
|
1939 | except (ValueError, TypeError): | |
1934 | raise error.ResponseError( |
|
1940 | raise error.ResponseError( | |
1935 | _('Unexpected response from remote server:'), l) |
|
1941 | _('Unexpected response from remote server:'), l) | |
1936 | self.ui.status(_('%d files to transfer, %s of data\n') % |
|
1942 | self.ui.status(_('%d files to transfer, %s of data\n') % | |
1937 | (total_files, util.bytecount(total_bytes))) |
|
1943 | (total_files, util.bytecount(total_bytes))) | |
1938 | start = time.time() |
|
1944 | start = time.time() | |
1939 | for i in xrange(total_files): |
|
1945 | for i in xrange(total_files): | |
1940 | # XXX doesn't support '\n' or '\r' in filenames |
|
1946 | # XXX doesn't support '\n' or '\r' in filenames | |
1941 | l = fp.readline() |
|
1947 | l = fp.readline() | |
1942 | try: |
|
1948 | try: | |
1943 | name, size = l.split('\0', 1) |
|
1949 | name, size = l.split('\0', 1) | |
1944 | size = int(size) |
|
1950 | size = int(size) | |
1945 | except (ValueError, TypeError): |
|
1951 | except (ValueError, TypeError): | |
1946 | raise error.ResponseError( |
|
1952 | raise error.ResponseError( | |
1947 | _('Unexpected response from remote server:'), l) |
|
1953 | _('Unexpected response from remote server:'), l) | |
1948 | self.ui.debug('adding %s (%s)\n' % (name, util.bytecount(size))) |
|
1954 | self.ui.debug('adding %s (%s)\n' % (name, util.bytecount(size))) | |
1949 | # for backwards compat, name was partially encoded |
|
1955 | # for backwards compat, name was partially encoded | |
1950 | ofp = self.sopener(store.decodedir(name), 'w') |
|
1956 | ofp = self.sopener(store.decodedir(name), 'w') | |
1951 | for chunk in util.filechunkiter(fp, limit=size): |
|
1957 | for chunk in util.filechunkiter(fp, limit=size): | |
1952 | ofp.write(chunk) |
|
1958 | ofp.write(chunk) | |
1953 | ofp.close() |
|
1959 | ofp.close() | |
1954 | elapsed = time.time() - start |
|
1960 | elapsed = time.time() - start | |
1955 | if elapsed <= 0: |
|
1961 | if elapsed <= 0: | |
1956 | elapsed = 0.001 |
|
1962 | elapsed = 0.001 | |
1957 | self.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') % |
|
1963 | self.ui.status(_('transferred %s in %.1f seconds (%s/sec)\n') % | |
1958 | (util.bytecount(total_bytes), elapsed, |
|
1964 | (util.bytecount(total_bytes), elapsed, | |
1959 | util.bytecount(total_bytes / elapsed))) |
|
1965 | util.bytecount(total_bytes / elapsed))) | |
1960 |
|
1966 | |||
1961 | # new requirements = old non-format requirements + new format-related |
|
1967 | # new requirements = old non-format requirements + new format-related | |
1962 | # requirements from the streamed-in repository |
|
1968 | # requirements from the streamed-in repository | |
1963 | requirements.update(set(self.requirements) - self.supportedformats) |
|
1969 | requirements.update(set(self.requirements) - self.supportedformats) | |
1964 | self._applyrequirements(requirements) |
|
1970 | self._applyrequirements(requirements) | |
1965 | self._writerequirements() |
|
1971 | self._writerequirements() | |
1966 |
|
1972 | |||
1967 | self.invalidate() |
|
1973 | self.invalidate() | |
1968 | return len(self.heads()) + 1 |
|
1974 | return len(self.heads()) + 1 | |
1969 | finally: |
|
1975 | finally: | |
1970 | lock.release() |
|
1976 | lock.release() | |
1971 |
|
1977 | |||
1972 | def clone(self, remote, heads=[], stream=False): |
|
1978 | def clone(self, remote, heads=[], stream=False): | |
1973 | '''clone remote repository. |
|
1979 | '''clone remote repository. | |
1974 |
|
1980 | |||
1975 | keyword arguments: |
|
1981 | keyword arguments: | |
1976 | heads: list of revs to clone (forces use of pull) |
|
1982 | heads: list of revs to clone (forces use of pull) | |
1977 | stream: use streaming clone if possible''' |
|
1983 | stream: use streaming clone if possible''' | |
1978 |
|
1984 | |||
1979 | # now, all clients that can request uncompressed clones can |
|
1985 | # now, all clients that can request uncompressed clones can | |
1980 | # read repo formats supported by all servers that can serve |
|
1986 | # read repo formats supported by all servers that can serve | |
1981 | # them. |
|
1987 | # them. | |
1982 |
|
1988 | |||
1983 | # if revlog format changes, client will have to check version |
|
1989 | # if revlog format changes, client will have to check version | |
1984 | # and format flags on "stream" capability, and use |
|
1990 | # and format flags on "stream" capability, and use | |
1985 | # uncompressed only if compatible. |
|
1991 | # uncompressed only if compatible. | |
1986 |
|
1992 | |||
1987 | if stream and not heads: |
|
1993 | if stream and not heads: | |
1988 | # 'stream' means remote revlog format is revlogv1 only |
|
1994 | # 'stream' means remote revlog format is revlogv1 only | |
1989 | if remote.capable('stream'): |
|
1995 | if remote.capable('stream'): | |
1990 | return self.stream_in(remote, set(('revlogv1',))) |
|
1996 | return self.stream_in(remote, set(('revlogv1',))) | |
1991 | # otherwise, 'streamreqs' contains the remote revlog format |
|
1997 | # otherwise, 'streamreqs' contains the remote revlog format | |
1992 | streamreqs = remote.capable('streamreqs') |
|
1998 | streamreqs = remote.capable('streamreqs') | |
1993 | if streamreqs: |
|
1999 | if streamreqs: | |
1994 | streamreqs = set(streamreqs.split(',')) |
|
2000 | streamreqs = set(streamreqs.split(',')) | |
1995 | # if we support it, stream in and adjust our requirements |
|
2001 | # if we support it, stream in and adjust our requirements | |
1996 | if not streamreqs - self.supportedformats: |
|
2002 | if not streamreqs - self.supportedformats: | |
1997 | return self.stream_in(remote, streamreqs) |
|
2003 | return self.stream_in(remote, streamreqs) | |
1998 | return self.pull(remote, heads) |
|
2004 | return self.pull(remote, heads) | |
1999 |
|
2005 | |||
2000 | def pushkey(self, namespace, key, old, new): |
|
2006 | def pushkey(self, namespace, key, old, new): | |
2001 | return pushkey.push(self, namespace, key, old, new) |
|
2007 | return pushkey.push(self, namespace, key, old, new) | |
2002 |
|
2008 | |||
2003 | def listkeys(self, namespace): |
|
2009 | def listkeys(self, namespace): | |
2004 | return pushkey.list(self, namespace) |
|
2010 | return pushkey.list(self, namespace) | |
2005 |
|
2011 | |||
2006 | # used to avoid circular references so destructors work |
|
2012 | # used to avoid circular references so destructors work | |
2007 | def aftertrans(files): |
|
2013 | def aftertrans(files): | |
2008 | renamefiles = [tuple(t) for t in files] |
|
2014 | renamefiles = [tuple(t) for t in files] | |
2009 | def a(): |
|
2015 | def a(): | |
2010 | for src, dest in renamefiles: |
|
2016 | for src, dest in renamefiles: | |
2011 | util.rename(src, dest) |
|
2017 | util.rename(src, dest) | |
2012 | return a |
|
2018 | return a | |
2013 |
|
2019 | |||
2014 | def instance(ui, path, create): |
|
2020 | def instance(ui, path, create): | |
2015 | return localrepository(ui, util.drop_scheme('file', path), create) |
|
2021 | return localrepository(ui, util.drop_scheme('file', path), create) | |
2016 |
|
2022 | |||
2017 | def islocal(path): |
|
2023 | def islocal(path): | |
2018 | return True |
|
2024 | return True |
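The `stream_in` path above reads a simple wire framing from the server: a numeric status line, then a `<files> <bytes>` header line, then for each file a `name\0size` line followed by `size` raw bytes. As a rough stand-alone illustration of that framing (simplified: it skips the status line and `util.filechunkiter` buffering, and the payload below is synthetic, not real Mercurial output):

```python
import io

def parse_stream(fp):
    """Parse a simplified stream-clone payload: a '<files> <bytes>'
    header line, then per file a 'name\\0size' line and size raw bytes."""
    total_files, total_bytes = map(int, fp.readline().split(b' ', 1))
    entries = []
    for _ in range(total_files):
        header = fp.readline().rstrip(b'\n')
        name, size = header.split(b'\x00', 1)
        # read exactly `size` bytes of file data before the next header
        entries.append((name.decode(), fp.read(int(size))))
    return total_bytes, entries

# synthetic two-file payload: header '2 9', then two name/size/data records
payload = (b'2 9\n'
           b'data/a.i\x004\nABCD'
           b'data/b.i\x005\nEFGHI')
total, entries = parse_stream(io.BytesIO(payload))
```

Because the per-file byte count is carried in each header line, the parser never needs a delimiter inside the file data, which is why the real code only warns about `\n` or `\r` in filenames, not in file contents.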
@@ -1,705 +1,708 @@
   $ rm -rf sub
   $ mkdir sub
   $ cd sub
   $ hg init t
   $ cd t

 first revision, no sub

   $ echo a > a
   $ hg ci -Am0
   adding a

 add first sub

   $ echo s = s > .hgsub
   $ hg add .hgsub
   $ hg init s
   $ echo a > s/a

 Issue2232: committing a subrepo without .hgsub

   $ hg ci -mbad s
   abort: can't commit subrepos without .hgsub
   [255]

   $ hg -R s ci -Ams0
   adding a
   $ hg sum
   parent: 0:f7b1eb17ad24 tip
    0
   branch: default
   commit: 1 added, 1 subrepos
   update: (current)
   $ hg ci -m1
   committing subrepository s

 Issue2022: update -C

   $ echo b > s/a
   $ hg sum
   parent: 1:7cf8cfea66e4 tip
    1
   branch: default
   commit: 1 subrepos
   update: (current)
   $ hg co -C 1
   1 files updated, 0 files merged, 0 files removed, 0 files unresolved
   $ hg sum
   parent: 1:7cf8cfea66e4 tip
    1
   branch: default
   commit: (clean)
   update: (current)

 add sub sub

   $ echo ss = ss > s/.hgsub
   $ hg init s/ss
   $ echo a > s/ss/a
   $ hg -R s add s/.hgsub
   $ hg -R s/ss add s/ss/a
   $ hg sum
   parent: 1:7cf8cfea66e4 tip
    1
   branch: default
   commit: 1 subrepos
   update: (current)
   $ hg ci -m2
   committing subrepository s
   committing subrepository s/ss
   $ hg sum
   parent: 2:df30734270ae tip
    2
   branch: default
   commit: (clean)
   update: (current)

-bump sub rev
+bump sub rev (and check it is ignored by ui.commitsubrepos)

   $ echo b > s/a
   $ hg -R s ci -ms1
-  $ hg ci -m3
+  $ hg --config ui.commitsubrepos=no ci -m3
   committing subrepository s

-leave sub dirty
+leave sub dirty (and check ui.commitsubrepos=no aborts the commit)

   $ echo c > s/a
+  $ hg --config ui.commitsubrepos=no ci -m4
+  abort: uncommitted changes in subrepo s
+  [255]
   $ hg ci -m4
   committing subrepository s
   $ hg tip -R s
   changeset: 3:1c833a7a9e3a
   tag: tip
   user: test
   date: Thu Jan 01 00:00:00 1970 +0000
   summary: 4


 check caching

   $ hg co 0
   0 files updated, 0 files merged, 2 files removed, 0 files unresolved
   $ hg debugsub

 restore

   $ hg co
   2 files updated, 0 files merged, 0 files removed, 0 files unresolved
   $ hg debugsub
   path s
   source s
   revision 1c833a7a9e3a4445c711aaf0f012379cd0d4034e

 new branch for merge tests

   $ hg co 1
   1 files updated, 0 files merged, 0 files removed, 0 files unresolved
   $ echo t = t >> .hgsub
   $ hg init t
   $ echo t > t/t
   $ hg -R t add t
   adding t/t

 5

   $ hg ci -m5 # add sub
   committing subrepository t
   created new head
   $ echo t2 > t/t

 6

   $ hg st -R s
   $ hg ci -m6 # change sub
   committing subrepository t
   $ hg debugsub
   path s
   source s
   revision e4ece1bf43360ddc8f6a96432201a37b7cd27ae4
   path t
   source t
   revision 6747d179aa9a688023c4b0cad32e4c92bb7f34ad
   $ echo t3 > t/t

 7

   $ hg ci -m7 # change sub again for conflict test
   committing subrepository t
   $ hg rm .hgsub

 8

   $ hg ci -m8 # remove sub

 merge tests

   $ hg co -C 3
   2 files updated, 0 files merged, 0 files removed, 0 files unresolved
   $ hg merge 5 # test adding
   1 files updated, 0 files merged, 0 files removed, 0 files unresolved
   (branch merge, don't forget to commit)
   $ hg debugsub
   path s
   source s
   revision fc627a69481fcbe5f1135069e8a3881c023e4cf5
   path t
   source t
   revision 60ca1237c19474e7a3978b0dc1ca4e6f36d51382
   $ hg ci -m9
   created new head
   $ hg merge 6 --debug # test change
   searching for copies back to rev 2
   resolving manifests
   overwrite None partial False
   ancestor 1f14a2e2d3ec local f0d2028bf86d+ remote 1831e14459c4
   .hgsubstate: versions differ -> m
   updating: .hgsubstate 1/1 files (100.00%)
   subrepo merge f0d2028bf86d+ 1831e14459c4 1f14a2e2d3ec
   subrepo t: other changed, get t:6747d179aa9a688023c4b0cad32e4c92bb7f34ad:hg
   getting subrepo t
   resolving manifests
   overwrite True partial False
   ancestor 60ca1237c194+ local 60ca1237c194+ remote 6747d179aa9a
   t: remote is newer -> g
   updating: t 1/1 files (100.00%)
   getting t
   0 files updated, 0 files merged, 0 files removed, 0 files unresolved
   (branch merge, don't forget to commit)
   $ hg debugsub
   path s
   source s
   revision fc627a69481fcbe5f1135069e8a3881c023e4cf5
   path t
   source t
   revision 6747d179aa9a688023c4b0cad32e4c92bb7f34ad
   $ echo conflict > t/t
   $ hg ci -m10
   committing subrepository t
   $ HGMERGE=internal:merge hg merge --debug 7 # test conflict
   searching for copies back to rev 2
   resolving manifests
   overwrite None partial False
   ancestor 1831e14459c4 local e45c8b14af55+ remote f94576341bcf
   .hgsubstate: versions differ -> m
   updating: .hgsubstate 1/1 files (100.00%)
   subrepo merge e45c8b14af55+ f94576341bcf 1831e14459c4
   subrepo t: both sides changed, merge with t:7af322bc1198a32402fe903e0b7ebcfc5c9bf8f4:hg
   merging subrepo t
   searching for copies back to rev 2
   resolving manifests
   overwrite None partial False
   ancestor 6747d179aa9a local 20a0db6fbf6c+ remote 7af322bc1198
   t: versions differ -> m
   preserving t for resolve of t
   updating: t 1/1 files (100.00%)
   picked tool 'internal:merge' for t (binary False symlink False)
   merging t
   my t@20a0db6fbf6c+ other t@7af322bc1198 ancestor t@6747d179aa9a
   warning: conflicts during merge.
   merging t failed!
   0 files updated, 0 files merged, 0 files removed, 1 files unresolved
   use 'hg resolve' to retry unresolved file merges or 'hg update -C .' to abandon
   0 files updated, 0 files merged, 0 files removed, 0 files unresolved
   (branch merge, don't forget to commit)

 should conflict

   $ cat t/t
   <<<<<<< local
   conflict
   =======
   t3
   >>>>>>> other

 clone

   $ cd ..
   $ hg clone t tc
   updating to branch default
   pulling subrepo s from $TESTTMP/sub/t/s
   requesting all changes
   adding changesets
   adding manifests
   adding file changes
   added 4 changesets with 5 changes to 3 files
   pulling subrepo s/ss from $TESTTMP/sub/t/s/ss
   requesting all changes
   adding changesets
   adding manifests
   adding file changes
   added 1 changesets with 1 changes to 1 files
   pulling subrepo t from $TESTTMP/sub/t/t
   requesting all changes
   adding changesets
   adding manifests
   adding file changes
   added 4 changesets with 4 changes to 1 files (+1 heads)
   3 files updated, 0 files merged, 0 files removed, 0 files unresolved
   $ cd tc
   $ hg debugsub
   path s
   source s
   revision fc627a69481fcbe5f1135069e8a3881c023e4cf5
   path t
   source t
   revision 20a0db6fbf6c3d2836e6519a642ae929bfc67c0e

 push

   $ echo bah > t/t
   $ hg ci -m11
   committing subrepository t
   $ hg push
   pushing to $TESTTMP/sub/t
   pushing subrepo s/ss to $TESTTMP/sub/t/s/ss
   searching for changes
   no changes found
   pushing subrepo s to $TESTTMP/sub/t/s
   searching for changes
   no changes found
   pushing subrepo t to $TESTTMP/sub/t/t
   searching for changes
   adding changesets
   adding manifests
   adding file changes
   added 1 changesets with 1 changes to 1 files
   searching for changes
   adding changesets
   adding manifests
   adding file changes
   added 1 changesets with 1 changes to 1 files

 push -f

   $ echo bah > s/a
   $ hg ci -m12
   committing subrepository s
   $ hg push
   pushing to $TESTTMP/sub/t
   pushing subrepo s/ss to $TESTTMP/sub/t/s/ss
   searching for changes
   no changes found
   pushing subrepo s to $TESTTMP/sub/t/s
   searching for changes
   abort: push creates new remote heads on branch 'default'!
   (did you forget to merge? use push -f to force)
|
308 | (did you forget to merge? use push -f to force) | |
306 | [255] |
|
309 | [255] | |
307 | $ hg push -f |
|
310 | $ hg push -f | |
308 | pushing to $TESTTMP/sub/t |
|
311 | pushing to $TESTTMP/sub/t | |
309 | pushing subrepo s/ss to $TESTTMP/sub/t/s/ss |
|
312 | pushing subrepo s/ss to $TESTTMP/sub/t/s/ss | |
310 | searching for changes |
|
313 | searching for changes | |
311 | no changes found |
|
314 | no changes found | |
312 | pushing subrepo s to $TESTTMP/sub/t/s |
|
315 | pushing subrepo s to $TESTTMP/sub/t/s | |
313 | searching for changes |
|
316 | searching for changes | |
314 | adding changesets |
|
317 | adding changesets | |
315 | adding manifests |
|
318 | adding manifests | |
316 | adding file changes |
|
319 | adding file changes | |
317 | added 1 changesets with 1 changes to 1 files (+1 heads) |
|
320 | added 1 changesets with 1 changes to 1 files (+1 heads) | |
318 | pushing subrepo t to $TESTTMP/sub/t/t |
|
321 | pushing subrepo t to $TESTTMP/sub/t/t | |
319 | searching for changes |
|
322 | searching for changes | |
320 | no changes found |
|
323 | no changes found | |
321 | searching for changes |
|
324 | searching for changes | |
322 | adding changesets |
|
325 | adding changesets | |
323 | adding manifests |
|
326 | adding manifests | |
324 | adding file changes |
|
327 | adding file changes | |
325 | added 1 changesets with 1 changes to 1 files |
|
328 | added 1 changesets with 1 changes to 1 files | |
326 |
|
329 | |||
327 | update |
|
330 | update | |
328 |
|
331 | |||
329 | $ cd ../t |
|
332 | $ cd ../t | |
330 | $ hg up -C # discard our earlier merge |
|
333 | $ hg up -C # discard our earlier merge | |
331 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
334 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
332 | $ echo blah > t/t |
|
335 | $ echo blah > t/t | |
333 | $ hg ci -m13 |
|
336 | $ hg ci -m13 | |
334 | committing subrepository t |
|
337 | committing subrepository t | |
335 |
|
338 | |||
336 | pull |
|
339 | pull | |
337 |
|
340 | |||
338 | $ cd ../tc |
|
341 | $ cd ../tc | |
339 | $ hg pull |
|
342 | $ hg pull | |
340 | pulling from $TESTTMP/sub/t |
|
343 | pulling from $TESTTMP/sub/t | |
341 | searching for changes |
|
344 | searching for changes | |
342 | adding changesets |
|
345 | adding changesets | |
343 | adding manifests |
|
346 | adding manifests | |
344 | adding file changes |
|
347 | adding file changes | |
345 | added 1 changesets with 1 changes to 1 files |
|
348 | added 1 changesets with 1 changes to 1 files | |
346 | (run 'hg update' to get a working copy) |
|
349 | (run 'hg update' to get a working copy) | |
347 |
|
350 | |||
348 | should pull t |
|
351 | should pull t | |
349 |
|
352 | |||
350 | $ hg up |
|
353 | $ hg up | |
351 | pulling subrepo t from $TESTTMP/sub/t/t |
|
354 | pulling subrepo t from $TESTTMP/sub/t/t | |
352 | searching for changes |
|
355 | searching for changes | |
353 | adding changesets |
|
356 | adding changesets | |
354 | adding manifests |
|
357 | adding manifests | |
355 | adding file changes |
|
358 | adding file changes | |
356 | added 1 changesets with 1 changes to 1 files |
|
359 | added 1 changesets with 1 changes to 1 files | |
357 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
360 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
358 | $ cat t/t |
|
361 | $ cat t/t | |
359 | blah |
|
362 | blah | |
360 |
|
363 | |||
361 | bogus subrepo path aborts |
|
364 | bogus subrepo path aborts | |
362 |
|
365 | |||
363 | $ echo 'bogus=[boguspath' >> .hgsub |
|
366 | $ echo 'bogus=[boguspath' >> .hgsub | |
364 | $ hg ci -m 'bogus subrepo path' |
|
367 | $ hg ci -m 'bogus subrepo path' | |
365 | abort: missing ] in subrepo source |
|
368 | abort: missing ] in subrepo source | |
366 | [255] |
|
369 | [255] | |
367 |
|
370 | |||
368 | Issue1986: merge aborts when trying to merge a subrepo that |
|
371 | Issue1986: merge aborts when trying to merge a subrepo that | |
369 | shouldn't need merging |
|
372 | shouldn't need merging | |
370 |
|
373 | |||
371 | # subrepo layout |
|
374 | # subrepo layout | |
372 | # |
|
375 | # | |
373 | # o 5 br |
|
376 | # o 5 br | |
374 | # /| |
|
377 | # /| | |
375 | # o | 4 default |
|
378 | # o | 4 default | |
376 | # | | |
|
379 | # | | | |
377 | # | o 3 br |
|
380 | # | o 3 br | |
378 | # |/| |
|
381 | # |/| | |
379 | # o | 2 default |
|
382 | # o | 2 default | |
380 | # | | |
|
383 | # | | | |
381 | # | o 1 br |
|
384 | # | o 1 br | |
382 | # |/ |
|
385 | # |/ | |
383 | # o 0 default |
|
386 | # o 0 default | |
384 |
|
387 | |||
385 | $ cd .. |
|
388 | $ cd .. | |
386 | $ rm -rf sub |
|
389 | $ rm -rf sub | |
387 | $ hg init main |
|
390 | $ hg init main | |
388 | $ cd main |
|
391 | $ cd main | |
389 | $ hg init s |
|
392 | $ hg init s | |
390 | $ cd s |
|
393 | $ cd s | |
391 | $ echo a > a |
|
394 | $ echo a > a | |
392 | $ hg ci -Am1 |
|
395 | $ hg ci -Am1 | |
393 | adding a |
|
396 | adding a | |
394 | $ hg branch br |
|
397 | $ hg branch br | |
395 | marked working directory as branch br |
|
398 | marked working directory as branch br | |
396 | $ echo a >> a |
|
399 | $ echo a >> a | |
397 | $ hg ci -m1 |
|
400 | $ hg ci -m1 | |
398 | $ hg up default |
|
401 | $ hg up default | |
399 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
402 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
400 | $ echo b > b |
|
403 | $ echo b > b | |
401 | $ hg ci -Am1 |
|
404 | $ hg ci -Am1 | |
402 | adding b |
|
405 | adding b | |
403 | $ hg up br |
|
406 | $ hg up br | |
404 | 1 files updated, 0 files merged, 1 files removed, 0 files unresolved |
|
407 | 1 files updated, 0 files merged, 1 files removed, 0 files unresolved | |
405 | $ hg merge tip |
|
408 | $ hg merge tip | |
406 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
409 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
407 | (branch merge, don't forget to commit) |
|
410 | (branch merge, don't forget to commit) | |
408 | $ hg ci -m1 |
|
411 | $ hg ci -m1 | |
409 | $ hg up 2 |
|
412 | $ hg up 2 | |
410 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
413 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
411 | $ echo c > c |
|
414 | $ echo c > c | |
412 | $ hg ci -Am1 |
|
415 | $ hg ci -Am1 | |
413 | adding c |
|
416 | adding c | |
414 | $ hg up 3 |
|
417 | $ hg up 3 | |
415 | 1 files updated, 0 files merged, 1 files removed, 0 files unresolved |
|
418 | 1 files updated, 0 files merged, 1 files removed, 0 files unresolved | |
416 | $ hg merge 4 |
|
419 | $ hg merge 4 | |
417 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
420 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
418 | (branch merge, don't forget to commit) |
|
421 | (branch merge, don't forget to commit) | |
419 | $ hg ci -m1 |
|
422 | $ hg ci -m1 | |
420 |
|
423 | |||
421 | # main repo layout: |
|
424 | # main repo layout: | |
422 | # |
|
425 | # | |
423 | # * <-- try to merge default into br again |
|
426 | # * <-- try to merge default into br again | |
424 | # .`| |
|
427 | # .`| | |
425 | # . o 5 br --> substate = 5 |
|
428 | # . o 5 br --> substate = 5 | |
426 | # . | |
|
429 | # . | | |
427 | # o | 4 default --> substate = 4 |
|
430 | # o | 4 default --> substate = 4 | |
428 | # | | |
|
431 | # | | | |
429 | # | o 3 br --> substate = 2 |
|
432 | # | o 3 br --> substate = 2 | |
430 | # |/| |
|
433 | # |/| | |
431 | # o | 2 default --> substate = 2 |
|
434 | # o | 2 default --> substate = 2 | |
432 | # | | |
|
435 | # | | | |
433 | # | o 1 br --> substate = 3 |
|
436 | # | o 1 br --> substate = 3 | |
434 | # |/ |
|
437 | # |/ | |
435 | # o 0 default --> substate = 2 |
|
438 | # o 0 default --> substate = 2 | |
436 |
|
439 | |||
437 | $ cd .. |
|
440 | $ cd .. | |
438 | $ echo 's = s' > .hgsub |
|
441 | $ echo 's = s' > .hgsub | |
439 | $ hg -R s up 2 |
|
442 | $ hg -R s up 2 | |
440 | 1 files updated, 0 files merged, 1 files removed, 0 files unresolved |
|
443 | 1 files updated, 0 files merged, 1 files removed, 0 files unresolved | |
441 | $ hg ci -Am1 |
|
444 | $ hg ci -Am1 | |
442 | adding .hgsub |
|
445 | adding .hgsub | |
443 | committing subrepository s |
|
446 | committing subrepository s | |
444 | $ hg branch br |
|
447 | $ hg branch br | |
445 | marked working directory as branch br |
|
448 | marked working directory as branch br | |
446 | $ echo b > b |
|
449 | $ echo b > b | |
447 | $ hg -R s up 3 |
|
450 | $ hg -R s up 3 | |
448 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
451 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
449 | $ hg ci -Am1 |
|
452 | $ hg ci -Am1 | |
450 | adding b |
|
453 | adding b | |
451 | committing subrepository s |
|
454 | committing subrepository s | |
452 | $ hg up default |
|
455 | $ hg up default | |
453 | 1 files updated, 0 files merged, 1 files removed, 0 files unresolved |
|
456 | 1 files updated, 0 files merged, 1 files removed, 0 files unresolved | |
454 | $ echo c > c |
|
457 | $ echo c > c | |
455 | $ hg ci -Am1 |
|
458 | $ hg ci -Am1 | |
456 | adding c |
|
459 | adding c | |
457 | $ hg up 1 |
|
460 | $ hg up 1 | |
458 | 2 files updated, 0 files merged, 1 files removed, 0 files unresolved |
|
461 | 2 files updated, 0 files merged, 1 files removed, 0 files unresolved | |
459 | $ hg merge 2 |
|
462 | $ hg merge 2 | |
460 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
463 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
461 | (branch merge, don't forget to commit) |
|
464 | (branch merge, don't forget to commit) | |
462 | $ hg ci -m1 |
|
465 | $ hg ci -m1 | |
463 | $ hg up 2 |
|
466 | $ hg up 2 | |
464 | 1 files updated, 0 files merged, 1 files removed, 0 files unresolved |
|
467 | 1 files updated, 0 files merged, 1 files removed, 0 files unresolved | |
465 | $ hg -R s up 4 |
|
468 | $ hg -R s up 4 | |
466 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
469 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
467 | $ echo d > d |
|
470 | $ echo d > d | |
468 | $ hg ci -Am1 |
|
471 | $ hg ci -Am1 | |
469 | adding d |
|
472 | adding d | |
470 | committing subrepository s |
|
473 | committing subrepository s | |
471 | $ hg up 3 |
|
474 | $ hg up 3 | |
472 | 2 files updated, 0 files merged, 1 files removed, 0 files unresolved |
|
475 | 2 files updated, 0 files merged, 1 files removed, 0 files unresolved | |
473 | $ hg -R s up 5 |
|
476 | $ hg -R s up 5 | |
474 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
477 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
475 | $ echo e > e |
|
478 | $ echo e > e | |
476 | $ hg ci -Am1 |
|
479 | $ hg ci -Am1 | |
477 | adding e |
|
480 | adding e | |
478 | committing subrepository s |
|
481 | committing subrepository s | |
479 |
|
482 | |||
480 | $ hg up 5 |
|
483 | $ hg up 5 | |
481 | 0 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
484 | 0 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
482 | $ hg merge 4 # try to merge default into br again |
|
485 | $ hg merge 4 # try to merge default into br again | |
483 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
486 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
484 | (branch merge, don't forget to commit) |
|
487 | (branch merge, don't forget to commit) | |
485 | $ cd .. |
|
488 | $ cd .. | |
486 |
|
489 | |||
487 | test subrepo delete from .hgsubstate |
|
490 | test subrepo delete from .hgsubstate | |
488 |
|
491 | |||
489 | $ hg init testdelete |
|
492 | $ hg init testdelete | |
490 | $ mkdir testdelete/nested testdelete/nested2 |
|
493 | $ mkdir testdelete/nested testdelete/nested2 | |
491 | $ hg init testdelete/nested |
|
494 | $ hg init testdelete/nested | |
492 | $ hg init testdelete/nested2 |
|
495 | $ hg init testdelete/nested2 | |
493 | $ echo test > testdelete/nested/foo |
|
496 | $ echo test > testdelete/nested/foo | |
494 | $ echo test > testdelete/nested2/foo |
|
497 | $ echo test > testdelete/nested2/foo | |
495 | $ hg -R testdelete/nested add |
|
498 | $ hg -R testdelete/nested add | |
496 | adding testdelete/nested/foo |
|
499 | adding testdelete/nested/foo | |
497 | $ hg -R testdelete/nested2 add |
|
500 | $ hg -R testdelete/nested2 add | |
498 | adding testdelete/nested2/foo |
|
501 | adding testdelete/nested2/foo | |
499 | $ hg -R testdelete/nested ci -m test |
|
502 | $ hg -R testdelete/nested ci -m test | |
500 | $ hg -R testdelete/nested2 ci -m test |
|
503 | $ hg -R testdelete/nested2 ci -m test | |
501 | $ echo nested = nested > testdelete/.hgsub |
|
504 | $ echo nested = nested > testdelete/.hgsub | |
502 | $ echo nested2 = nested2 >> testdelete/.hgsub |
|
505 | $ echo nested2 = nested2 >> testdelete/.hgsub | |
503 | $ hg -R testdelete add |
|
506 | $ hg -R testdelete add | |
504 | adding testdelete/.hgsub |
|
507 | adding testdelete/.hgsub | |
505 | $ hg -R testdelete ci -m "nested 1 & 2 added" |
|
508 | $ hg -R testdelete ci -m "nested 1 & 2 added" | |
506 | committing subrepository nested |
|
509 | committing subrepository nested | |
507 | committing subrepository nested2 |
|
510 | committing subrepository nested2 | |
508 | $ echo nested = nested > testdelete/.hgsub |
|
511 | $ echo nested = nested > testdelete/.hgsub | |
509 | $ hg -R testdelete ci -m "nested 2 deleted" |
|
512 | $ hg -R testdelete ci -m "nested 2 deleted" | |
510 | $ cat testdelete/.hgsubstate |
|
513 | $ cat testdelete/.hgsubstate | |
511 | bdf5c9a3103743d900b12ae0db3ffdcfd7b0d878 nested |
|
514 | bdf5c9a3103743d900b12ae0db3ffdcfd7b0d878 nested | |
512 | $ hg -R testdelete remove testdelete/.hgsub |
|
515 | $ hg -R testdelete remove testdelete/.hgsub | |
513 | $ hg -R testdelete ci -m ".hgsub deleted" |
|
516 | $ hg -R testdelete ci -m ".hgsub deleted" | |
514 | $ cat testdelete/.hgsubstate |
|
517 | $ cat testdelete/.hgsubstate | |
515 |
|
518 | |||
516 | test repository cloning |
|
519 | test repository cloning | |
517 |
|
520 | |||
518 | $ mkdir mercurial mercurial2 |
|
521 | $ mkdir mercurial mercurial2 | |
519 | $ hg init nested_absolute |
|
522 | $ hg init nested_absolute | |
520 | $ echo test > nested_absolute/foo |
|
523 | $ echo test > nested_absolute/foo | |
521 | $ hg -R nested_absolute add |
|
524 | $ hg -R nested_absolute add | |
522 | adding nested_absolute/foo |
|
525 | adding nested_absolute/foo | |
523 | $ hg -R nested_absolute ci -mtest |
|
526 | $ hg -R nested_absolute ci -mtest | |
524 | $ cd mercurial |
|
527 | $ cd mercurial | |
525 | $ hg init nested_relative |
|
528 | $ hg init nested_relative | |
526 | $ echo test2 > nested_relative/foo2 |
|
529 | $ echo test2 > nested_relative/foo2 | |
527 | $ hg -R nested_relative add |
|
530 | $ hg -R nested_relative add | |
528 | adding nested_relative/foo2 |
|
531 | adding nested_relative/foo2 | |
529 | $ hg -R nested_relative ci -mtest2 |
|
532 | $ hg -R nested_relative ci -mtest2 | |
530 | $ hg init main |
|
533 | $ hg init main | |
531 | $ echo "nested_relative = ../nested_relative" > main/.hgsub |
|
534 | $ echo "nested_relative = ../nested_relative" > main/.hgsub | |
532 | $ echo "nested_absolute = `pwd`/nested_absolute" >> main/.hgsub |
|
535 | $ echo "nested_absolute = `pwd`/nested_absolute" >> main/.hgsub | |
533 | $ hg -R main add |
|
536 | $ hg -R main add | |
534 | adding main/.hgsub |
|
537 | adding main/.hgsub | |
535 | $ hg -R main ci -m "add subrepos" |
|
538 | $ hg -R main ci -m "add subrepos" | |
536 | committing subrepository nested_absolute |
|
539 | committing subrepository nested_absolute | |
537 | committing subrepository nested_relative |
|
540 | committing subrepository nested_relative | |
538 | $ cd .. |
|
541 | $ cd .. | |
539 | $ hg clone mercurial/main mercurial2/main |
|
542 | $ hg clone mercurial/main mercurial2/main | |
540 | updating to branch default |
|
543 | updating to branch default | |
541 | 2 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
544 | 2 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
542 | $ cat mercurial2/main/nested_absolute/.hg/hgrc \ |
|
545 | $ cat mercurial2/main/nested_absolute/.hg/hgrc \ | |
543 | > mercurial2/main/nested_relative/.hg/hgrc |
|
546 | > mercurial2/main/nested_relative/.hg/hgrc | |
544 | [paths] |
|
547 | [paths] | |
545 | default = $TESTTMP/sub/mercurial/nested_absolute |
|
548 | default = $TESTTMP/sub/mercurial/nested_absolute | |
546 | [paths] |
|
549 | [paths] | |
547 | default = $TESTTMP/sub/mercurial/nested_relative |
|
550 | default = $TESTTMP/sub/mercurial/nested_relative | |
548 | $ rm -rf mercurial mercurial2 |
|
551 | $ rm -rf mercurial mercurial2 | |
549 |
|
552 | |||
550 | Issue1977: multirepo push should fail if subrepo push fails |
|
553 | Issue1977: multirepo push should fail if subrepo push fails | |
551 |
|
554 | |||
552 | $ hg init repo |
|
555 | $ hg init repo | |
553 | $ hg init repo/s |
|
556 | $ hg init repo/s | |
554 | $ echo a > repo/s/a |
|
557 | $ echo a > repo/s/a | |
555 | $ hg -R repo/s ci -Am0 |
|
558 | $ hg -R repo/s ci -Am0 | |
556 | adding a |
|
559 | adding a | |
557 | $ echo s = s > repo/.hgsub |
|
560 | $ echo s = s > repo/.hgsub | |
558 | $ hg -R repo ci -Am1 |
|
561 | $ hg -R repo ci -Am1 | |
559 | adding .hgsub |
|
562 | adding .hgsub | |
560 | committing subrepository s |
|
563 | committing subrepository s | |
561 | $ hg clone repo repo2 |
|
564 | $ hg clone repo repo2 | |
562 | updating to branch default |
|
565 | updating to branch default | |
563 | pulling subrepo s from $TESTTMP/sub/repo/s |
|
566 | pulling subrepo s from $TESTTMP/sub/repo/s | |
564 | requesting all changes |
|
567 | requesting all changes | |
565 | adding changesets |
|
568 | adding changesets | |
566 | adding manifests |
|
569 | adding manifests | |
567 | adding file changes |
|
570 | adding file changes | |
568 | added 1 changesets with 1 changes to 1 files |
|
571 | added 1 changesets with 1 changes to 1 files | |
569 | 2 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
572 | 2 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
570 | $ hg -q -R repo2 pull -u |
|
573 | $ hg -q -R repo2 pull -u | |
571 | $ echo 1 > repo2/s/a |
|
574 | $ echo 1 > repo2/s/a | |
572 | $ hg -R repo2/s ci -m2 |
|
575 | $ hg -R repo2/s ci -m2 | |
573 | $ hg -q -R repo2/s push |
|
576 | $ hg -q -R repo2/s push | |
574 | $ hg -R repo2/s up -C 0 |
|
577 | $ hg -R repo2/s up -C 0 | |
575 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
578 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
576 | $ echo 2 > repo2/s/a |
|
579 | $ echo 2 > repo2/s/a | |
577 | $ hg -R repo2/s ci -m3 |
|
580 | $ hg -R repo2/s ci -m3 | |
578 | created new head |
|
581 | created new head | |
579 | $ hg -R repo2 ci -m3 |
|
582 | $ hg -R repo2 ci -m3 | |
580 | committing subrepository s |
|
583 | committing subrepository s | |
581 | $ hg -q -R repo2 push |
|
584 | $ hg -q -R repo2 push | |
582 | abort: push creates new remote heads on branch 'default'! |
|
585 | abort: push creates new remote heads on branch 'default'! | |
583 | (did you forget to merge? use push -f to force) |
|
586 | (did you forget to merge? use push -f to force) | |
584 | [255] |
|
587 | [255] | |
585 | $ hg -R repo update |
|
588 | $ hg -R repo update | |
586 | 0 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
589 | 0 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
587 | $ rm -rf repo2 repo |
|
590 | $ rm -rf repo2 repo | |
588 |
|
591 | |||
589 |
|
592 | |||
590 | Issue1852 subrepos with relative paths always push/pull relative to default |
|
593 | Issue1852 subrepos with relative paths always push/pull relative to default | |
591 |
|
594 | |||
592 | Prepare a repo with subrepo |
|
595 | Prepare a repo with subrepo | |
593 |
|
596 | |||
594 | $ hg init issue1852a |
|
597 | $ hg init issue1852a | |
595 | $ cd issue1852a |
|
598 | $ cd issue1852a | |
596 | $ hg init sub/repo |
|
599 | $ hg init sub/repo | |
597 | $ echo test > sub/repo/foo |
|
600 | $ echo test > sub/repo/foo | |
598 | $ hg -R sub/repo add sub/repo/foo |
|
601 | $ hg -R sub/repo add sub/repo/foo | |
599 | $ echo sub/repo = sub/repo > .hgsub |
|
602 | $ echo sub/repo = sub/repo > .hgsub | |
600 | $ hg add .hgsub |
|
603 | $ hg add .hgsub | |
601 | $ hg ci -mtest |
|
604 | $ hg ci -mtest | |
602 | committing subrepository sub/repo |
|
605 | committing subrepository sub/repo | |
603 | $ echo test >> sub/repo/foo |
|
606 | $ echo test >> sub/repo/foo | |
604 | $ hg ci -mtest |
|
607 | $ hg ci -mtest | |
605 | committing subrepository sub/repo |
|
608 | committing subrepository sub/repo | |
606 | $ cd .. |
|
609 | $ cd .. | |
607 |
|
610 | |||
608 | Create repo without default path, pull top repo, and see what happens on update |
|
611 | Create repo without default path, pull top repo, and see what happens on update | |
609 |
|
612 | |||
610 | $ hg init issue1852b |
|
613 | $ hg init issue1852b | |
611 | $ hg -R issue1852b pull issue1852a |
|
614 | $ hg -R issue1852b pull issue1852a | |
612 | pulling from issue1852a |
|
615 | pulling from issue1852a | |
613 | requesting all changes |
|
616 | requesting all changes | |
614 | adding changesets |
|
617 | adding changesets | |
615 | adding manifests |
|
618 | adding manifests | |
616 | adding file changes |
|
619 | adding file changes | |
617 | added 2 changesets with 3 changes to 2 files |
|
620 | added 2 changesets with 3 changes to 2 files | |
618 | (run 'hg update' to get a working copy) |
|
621 | (run 'hg update' to get a working copy) | |
619 | $ hg -R issue1852b update |
|
622 | $ hg -R issue1852b update | |
620 | abort: default path for subrepository sub/repo not found |
|
623 | abort: default path for subrepository sub/repo not found | |
621 | [255] |
|
624 | [255] | |
622 |
|
625 | |||
623 | Pull -u now doesn't help |
|
626 | Pull -u now doesn't help | |
624 |
|
627 | |||
625 | $ hg -R issue1852b pull -u issue1852a |
|
628 | $ hg -R issue1852b pull -u issue1852a | |
626 | pulling from issue1852a |
|
629 | pulling from issue1852a | |
627 | searching for changes |
|
630 | searching for changes | |
628 | no changes found |
|
631 | no changes found | |
629 |
|
632 | |||
630 | Try the same, but with pull -u |
|
633 | Try the same, but with pull -u | |
631 |
|
634 | |||
632 | $ hg init issue1852c |
|
635 | $ hg init issue1852c | |
633 | $ hg -R issue1852c pull -r0 -u issue1852a |
|
636 | $ hg -R issue1852c pull -r0 -u issue1852a | |
634 | pulling from issue1852a |
|
637 | pulling from issue1852a | |
635 | adding changesets |
|
638 | adding changesets | |
636 | adding manifests |
|
639 | adding manifests | |
637 | adding file changes |
|
640 | adding file changes | |
638 | added 1 changesets with 2 changes to 2 files |
|
641 | added 1 changesets with 2 changes to 2 files | |
639 | pulling subrepo sub/repo from issue1852a/sub/repo |
|
642 | pulling subrepo sub/repo from issue1852a/sub/repo | |
640 | requesting all changes |
|
643 | requesting all changes | |
641 | adding changesets |
|
644 | adding changesets | |
642 | adding manifests |
|
645 | adding manifests | |
643 | adding file changes |
|
646 | adding file changes | |
644 | added 2 changesets with 2 changes to 1 files |
|
647 | added 2 changesets with 2 changes to 1 files | |
645 | 2 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
648 | 2 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
646 |
|
649 | |||
647 | Try to push from the other side |
|
650 | Try to push from the other side | |
648 |
|
651 | |||
649 | $ hg -R issue1852a push `pwd`/issue1852c |
|
652 | $ hg -R issue1852a push `pwd`/issue1852c | |
650 | pushing to $TESTTMP/sub/issue1852c |
|
653 | pushing to $TESTTMP/sub/issue1852c | |
651 | pushing subrepo sub/repo to $TESTTMP/sub/issue1852c/sub/repo |
|
654 | pushing subrepo sub/repo to $TESTTMP/sub/issue1852c/sub/repo | |
652 | searching for changes |
|
655 | searching for changes | |
653 | no changes found |
|
656 | no changes found | |
654 | searching for changes |
|
657 | searching for changes | |
655 | adding changesets |
|
658 | adding changesets | |
656 | adding manifests |
|
659 | adding manifests | |
657 | adding file changes |
|
660 | adding file changes | |
658 | added 1 changesets with 1 changes to 1 files |
|
661 | added 1 changesets with 1 changes to 1 files | |
659 |
|
662 | |||
660 | Check status of files when none of them belong to the first |
|
663 | Check status of files when none of them belong to the first | |
661 | subrepository: |
|
664 | subrepository: | |
662 |
|
665 | |||
663 | $ hg init subrepo-status |
|
666 | $ hg init subrepo-status | |
664 | $ cd subrepo-status |
|
667 | $ cd subrepo-status | |
665 | $ hg init subrepo-1 |
|
668 | $ hg init subrepo-1 | |
666 | $ hg init subrepo-2 |
|
669 | $ hg init subrepo-2 | |
667 | $ cd subrepo-2 |
|
670 | $ cd subrepo-2 | |
668 | $ touch file |
|
671 | $ touch file | |
669 | $ hg add file |
|
672 | $ hg add file | |
670 | $ cd .. |
|
673 | $ cd .. | |
671 | $ echo subrepo-1 = subrepo-1 > .hgsub |
|
674 | $ echo subrepo-1 = subrepo-1 > .hgsub | |
672 | $ echo subrepo-2 = subrepo-2 >> .hgsub |
|
675 | $ echo subrepo-2 = subrepo-2 >> .hgsub | |
673 | $ hg add .hgsub |
|
676 | $ hg add .hgsub | |
674 | $ hg ci -m 'Added subrepos' |
|
677 | $ hg ci -m 'Added subrepos' | |
675 | committing subrepository subrepo-1 |
|
678 | committing subrepository subrepo-1 | |
676 | committing subrepository subrepo-2 |
|
679 | committing subrepository subrepo-2 | |
677 | $ hg st subrepo-2/file |
|
680 | $ hg st subrepo-2/file | |
678 |
|
681 | |||
679 | Check hg update --clean |
|
682 | Check hg update --clean | |
680 | $ cd $TESTTMP/sub/t |
|
683 | $ cd $TESTTMP/sub/t | |
681 | $ rm -r t/t.orig |
|
684 | $ rm -r t/t.orig | |
682 | $ hg status -S --all |
|
685 | $ hg status -S --all | |
683 | C .hgsub |
|
686 | C .hgsub | |
684 | C .hgsubstate |
|
687 | C .hgsubstate | |
685 | C a |
|
688 | C a | |
686 | C s/.hgsub |
|
689 | C s/.hgsub | |
687 | C s/.hgsubstate |
|
690 | C s/.hgsubstate | |
688 | C s/a |
|
691 | C s/a | |
689 | C s/ss/a |
|
692 | C s/ss/a | |
690 | C t/t |
|
693 | C t/t | |
691 | $ echo c1 > s/a |
|
694 | $ echo c1 > s/a | |
692 | $ cd s |
|
695 | $ cd s | |
693 | $ echo c1 > b |
|
696 | $ echo c1 > b | |
694 | $ echo c1 > c |
|
697 | $ echo c1 > c | |
695 | $ hg add b |
|
698 | $ hg add b | |
696 | $ cd .. |
|
699 | $ cd .. | |
697 | $ hg status -S |
|
700 | $ hg status -S | |
698 | M s/a |
|
701 | M s/a | |
699 | A s/b |
|
702 | A s/b | |
700 | ? s/c |
|
703 | ? s/c | |
701 | $ hg update -C |
|
704 | $ hg update -C | |
702 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved |
|
705 | 1 files updated, 0 files merged, 0 files removed, 0 files unresolved | |
703 | $ hg status -S |
|
706 | $ hg status -S | |
704 | ? s/b |
|
707 | ? s/b | |
705 | ? s/c |
|
708 | ? s/c |